\documentclass[lettersize,journal]{IEEEtran}
\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{amsmath,amsfonts}
\usepackage{graphicx}
\usepackage[colorlinks,linkcolor=red,anchorcolor=blue,citecolor=green]{hyperref}
\usepackage[caption=false,font=normalsize,labelfont=sf,textfont=sf]{subfig}
\usepackage{cite}
\IEEEoverridecommandlockouts \title{\LARGE \bf
Adaptive Motion Planning for Multi-fingered Functional Grasp via Force Feedback
}

\author{Dongying Tian$^{1,2}$, Xiangbo Lin$^{1}$ and Yi Sun$^{1}$\thanks{This work was supported by the National Natural Science Foundation of China [grant numbers 61873046, U1708263].\emph{(Corresponding author: Yi Sun)}}\thanks{$^{1}$Dongying Tian, Xiangbo Lin and Yi Sun are with
the School of Information and Communication Engineering,
Dalian University of Technology, Dalian, 116024, China. (e-mail:
tiandongying@sia.cn, linxbo@dlut.edu.cn, lslwf@dlut.edu.cn.)}
\thanks{$^{2}$Dongying Tian is also with Shenyang Institute of Automation, Chinese Academy
of Sciences, Shenyang 110016, China.}
}

\begin{document}

\maketitle
\thispagestyle{empty}
\pagestyle{empty}

\begin{abstract}
Enabling multi-fingered robots to grasp and manipulate objects with human-like dexterity is especially challenging because of the dynamic, continuous hand-object interactions involved. Closed-loop feedback control is essential for dexterous hands to dynamically fine-tune hand poses when performing precise functional grasps. This work proposes an adaptive motion planning method based on deep reinforcement learning that adjusts grasping poses according to real-time joint torque feedback from pre-grasp to goal grasp. We find that the joint torques of the dexterous hand can sense object positions through contacts and collisions, enabling real-time adjustment of grasps and generating different grasping trajectories for objects in different positions. In our experiments, the performance gap with and without force feedback reveals the important role of force feedback in adaptive manipulation. Our force-feedback-based approach preliminarily exhibits human-like flexibility, adaptability, and precision.
\end{abstract}
\begin{IEEEkeywords}
Multifingered Hands, Force feedback, Motion planning, Grasping.
\end{IEEEkeywords}
\section{INTRODUCTION}

\IEEEPARstart{E}{nabling} multi-fingered robots to grasp and manipulate objects with human-like dexterity is a challenging task that typically involves two steps: grasp synthesis\cite{newbury2023deep} and motion planning. Previous studies have primarily focused on synthesizing static grasps for specific objects \cite{grady2021contactopt,jiang2021hand,zhu2021toward}. However, specifying only the static grasp configuration is insufficient, as grasp and manipulation involve continuous hand-object interaction. Errors in the hand or object pose can result in collisions, complicating motion planning and trajectory optimization. Multi-fingered robot hands face even greater difficulty as the possible interaction modes grow exponentially with the number of hand links and object contact points. This is particularly true for functional grasps, such as those required for dexterous tool use, which require accurate touching of specific functional parts, such as the nozzle of a watering can or the button of an electric drill. Closed-loop feedback control based on sensory observations is therefore essential to dynamically update and adapt the grasping process to disturbances and errors.

Recently, deep reinforcement learning has emerged as an approach to control complex dynamical systems, especially in high-dimensional dexterous manipulation \cite{she2022learning,christen2022d}. These approaches often rely on human demonstrations \cite{rajeswaran2017learning} or assume precise knowledge of the object's pose throughout the grasp and manipulation process \cite{christen2022d}. However, partial occlusion of the object due to its own shape or the manipulator can cause errors in object shape and pose estimation, leading to failed grasps. In such situations, force feedback with dynamic responses to disturbances is essential to compensate for the object's shape and pose mismatch. Previous studies \cite{merzic2019leveraging,wu2019mat,koenig2022role} have shown that learning grasping policies under tactile and force feedback can adapt to environmental uncertainty, improving the grasp success rate, particularly for low-DOF grippers. However, this ability is still challenging to transfer to high-DOF multi-fingered robot hands. Only recently did the work of \cite{liang2021multifingered} train a five-fingered hand to learn robust motions and stable grasps using force feedback without visual sensing in a pick-up task.

Our work aims to go beyond the simple pick-and-place task and achieve functional grasp for multi-fingered hands. Functional grasp involves task-oriented grasping on specific functional regions of objects, enabling the completion of manipulations that are both stable and functional. For instance, the index finger presses the shutter of a camera to take photos. To differentiate our task from stable grasping of objects such as pick-and-place, we refer to this type of grasp as functional grasp. In this work, we formulate functional grasp as a dynamic motion planning process, from pre-grasp to goal grasp, highlighting the challenges in contact-rich functional grasp, which requires high precision and robustness against uncertainty. During this process, the object can hardly remain stationary due to errors in the hand or object pose, leading to unsynchronized finger contacts and to translation and rotation of the object in 3D space. Given the vision limitations due to occlusion by the object or manipulator, we study the effectiveness of force feedback control in handling uncertainty by disabling visual sensing and designing a force-aware path planning method that adapts the manipulation to the object. This is a more challenging task as there are neither full demonstration trajectories nor visual sensing during grasping.

The contributions of this paper can be summarized as follows:
\begin{itemize}
\item We propose an adaptive motion planning method for multi-fingered functional grasp via joint torque feedback, which is constantly updated and dynamically adapted to object uncertainty in the absence of visual feedback.
\item We build a reinforcement learning model in which joint torque feedback is considered part of the state in the Markov Decision Process, which enables the robot to learn grasping skills through trial and error under pose uncertainty.
\item We conduct functional grasp experiments in simulation and observe a significant effect of force feedback in generating smooth trajectories for functional grasps.
\end{itemize}

\section{Related Work}

\subsection{Analytical Motion Planning}

Efficient search and optimization algorithms have been developed to solve motion planning problems \cite{lozano2014constraint}. However, dexterous hand motion planning is still challenging due to the high-dimensional freedom of motion and the complexity of making and breaking contacts between hands and objects \cite{orthey2021sparse}. Orthey et al. improved planning performance on high-dimensional problems by using multilevel abstractions to simplify state spaces \cite{orthey2020multilevel} and afterwards generalized sparse roadmaps to multilevel abstractions \cite{orthey2021sparse}. The CMGMP algorithm \cite{cheng2022contact} utilizes automatically enumerated contact modes of environment-object contacts to guide the tree expansions during the search, generating hybrid motion plans including both continuous state transitions and discrete contact mode switches. The TrajectoTree method \cite{chen2021trajectotree} plans trajectories for dexterous manipulation tasks involving contact switching using contact-implicit trajectory optimization (CITO) \cite{posa2014direct} augmented with a high-level discrete contact sequence planner. Another proposed method \cite{pang2023global} enables efficient global motion planning for highly contact-rich and high-dimensional systems. However, analytical methods usually rest on restrictive assumptions, and the models are complex and difficult to reproduce.

\subsection{Learning Based Motion Planning}

Currently, there have been a few studies that utilize techniques such as CVAE \cite{ye2023learning}, trajectory imitation learning \cite{chen2022dextransfer}, or auto-regressive network architectures \cite{taheri2022goal} to generate grasping paths. However, the majority of research in this area relies on deep reinforcement learning, which has demonstrated outstanding performance in sequential decision-making problems \cite{patel2022learning,dasari2023learning,jain2019learning,mandikal2022dexvip,xu2023unidexgrasp,christen2022d}. To facilitate exploration and reduce sampling complexity, it is common to incorporate prior knowledge such as expert demonstrations \cite{patel2022learning,jain2019learning}, contact maps \cite{mandikal2022dexvip}, or grasping poses \cite{dasari2023learning,xu2023unidexgrasp}. For example, learning efficiency can be enhanced by using a small amount of teaching data \cite{jain2019learning}, or by directly acquiring operational experience from human hand manipulation videos \cite{patel2022learning}. Moreover, a binary affordance map has been employed \cite{mandikal2022dexvip} to guide the agent towards functional grasp regions on the object. Dasari et al. \cite{dasari2023learning} found that the pre-grasp finger pose is crucial for successful behavior learning. Our research, similar to \cite{xu2023unidexgrasp,christen2022d}, utilizes goal grasp poses. However, unlike previous approaches, we specifically focus on functional grasping. In order to accomplish subsequent operational tasks, the hand configuration is typically more complex and requires precise execution, rather than solely picking up the object.

For the hand-object interaction phase we are studying, not only is there initial position uncertainty in the object, but the object can also be perturbed during the operation. Some studies assume that the object's pose is completely known throughout the entire grasping process, which does not align with real-world scenarios \cite{merzic2019leveraging,liu2023dexrepnet,christen2022d,vulin2021improved,dasari2023learning,patel2022learning}. \cite{xu2023unidexgrasp} addressed this issue by employing a teacher-student distillation approach, where the reinforcement learning of operational strategies is initially performed under the assumption of known object states and later imitated when the object state is unknown. In practice, vision is advantageous in guiding dexterous hands to approach and grasp objects. However, once the hand and object come into contact, precise object position and the interaction forces between the hand and object are difficult to obtain through vision due to occlusion \cite{hu2022physical}. As a result, purely vision-guided grasping often leads to imprecise and jarring operations \cite{liu2023dexrepnet,qin2023dexpoint}.

Based on human operational experience, real-time force feedback is indispensable for guiding smooth and dexterous manipulation actions. Current research has focused on exploring the control methods and effects of force feedback on two-finger \cite{vulin2021improved} and three-finger grippers \cite{merzic2019leveraging,wu2019mat,koenig2022role}. These studies emphasize the potential of incorporating tactile feedback in robotic grasping, but it remains highly challenging to extend these techniques to five-finger dexterous hands. Currently, most research in this area has focused on fingertip tactile feedback \cite{sundaralingam2019robust,kumar2019contextual,matak2022planning,liang2021multifingered}. In \cite{sundaralingam2019robust}, fingertip tactile information was mapped to force signals for force feedback control. \cite{kumar2019contextual} utilized fingertip tactile feedback to compensate for the reduction in geometric information caused by coarse bounding boxes and uncertainties in pose estimation. \cite{matak2022planning} used fingertip tactile feedback to adapt to online estimates of the object's surface, correcting errors in the initial plan. Liang et al. \cite{liang2021multifingered} proposed a fusion of binary contact information from the fingertip, torque sensors, and robot proprioception (joint positions), which shows promise for achieving more robust and stable grasping. Additionally, \cite{jain2019learning} demonstrated that touch sensors capable of sensing on-off contact events enable faster learning and better asymptotic performance in tasks with high degrees of occlusion.

However, there exists a significant disparity between the perception capabilities of tactile sensors and human touch \cite{li2020review}. When tactile sensors are placed on the finger surface, limitations of the mechanical structure usually restrict them to low-precision tactile perception. Additionally, frequent hand-object interactions can lead to costly damage to tactile sensing devices. In contrast, the robot's body typically offers joint torque sensing information, which dynamically changes in response to dexterous hand motion, contact, and collision, providing valuable insights for manipulation. Though several studies have utilized joint torque information \cite{xu2023unidexgrasp,chen2022towards,liang2021multifingered}, it remains unclear whether dexterous hands can acquire control skills that exploit joint torque feedback, and what role joint torque sensing plays during such manipulations. In our research, we rely solely on joint torque feedback to design training tasks and rewards, guiding the dexterous hand to learn from this feedback and thereby revealing its potential value.

\section{PROBLEM FORMULATION OF FUNCTIONAL GRASP PLANNING}

\begin{figure*}[t]
\centering
\includegraphics[width=6in]{method1}
\caption{In the scenario where there is uncertainty in the initial position of an object and the pose of the object is unknown during the grasping process, an agent uses hand perception to obtain information about the hand-object interaction state, and makes action decisions accordingly. The rewards obtained from the dexterous hand executing actions will guide the update of the critic-net and further update the actor-net.}
\label{fig_3}
\end{figure*}
This work aims to plan a motion trajectory for a grasp that is not only stable but also functional. It focuses on the final stage of path planning, which involves rich contact between a multi-fingered hand and an object and requires high precision and robustness against uncertainty. Given a pre-grasp and a goal functional grasp, the motion trajectory of this path starts from the pre-grasp and ends at the goal functional grasp. During this process, it is difficult to obtain the hand-object contact state from visual observation due to occlusion, so the joint torques at the driving joints of the hand are taken as the force feedback information in this work.

The force feedback information, which dynamically updates due to contacts and collisions, can be used to sense the hand-object interaction state. If the multi-fingered hand automatically identifies and touches the functional part of the object while the object still rests steadily within the placement tolerance $(<0.01m)$, the motion trajectory of the functional grasp is considered successful. We further add a disturbance $\epsilon$ $(<0.02m)$ to the initial horizontal position of the object to verify the adaptability to uncertainty achieved by utilizing the force feedback information.

We denote a pre-grasp hand pose as $G_0=(T_{0},R_{0},J_{0})$ and a goal grasp as $G_g=(T_{g},R_{g},J_{g})$, relative to the object at $({T^{obj}},{R^{obj}})$, where $J$ stands for the joint angles of the hand, and $T$ and $R$ stand for the position and orientation of the 6D pose. Our goal is to plan the motion from $G_0$ to $G_g$, guided by the force feedback of the hand joints. The movements of the dexterous hand should be gentle to avoid moving the object.
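
For illustration, the grasp poses and the disturbed initial object position above can be organized as in the following minimal sketch (names, array shapes, and the uniform disturbance model are illustrative assumptions, not our exact implementation):

\begin{verbatim}
import numpy as np
from dataclasses import dataclass

@dataclass
class HandPose:
    T: np.ndarray   # position of the 6D pose, shape (3,)
    R: np.ndarray   # orientation of the 6D pose, shape (3,)
    J: np.ndarray   # joint angles of the hand, shape (24,)

def disturbed_object_position(T_obj, eps_max=0.02):
    """Add a horizontal disturbance (|eps| < 0.02 m) to the object."""
    eps = np.random.uniform(-eps_max, eps_max, size=2)
    T_tilde = np.array(T_obj, dtype=float)
    T_tilde[:2] += eps   # perturb the horizontal (x, y) position only
    return T_tilde
\end{verbatim}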

\section{Method}
\subsection{Force-Feedback-Based Motion Planning via Reinforcement Learning}

The task of motion planning from pre-grasp to a goal functional grasp is challenging when relying solely on visual observations due to occlusion. Any errors in the hand or object positioning can result in knocking over or even destroying the object. To minimize such risks, we utilize force sensing in this work, which provides essential information about the local object geometry, contact forces, and grasp stability. Our motion planning strategy is based on joint torque feedback from each driving joint of the hand, which directly translates force cues into adaptive finger motion predictions. As the hand makes and breaks contacts with the object, the set of joint torques applied to the object changes dynamically and quickly. We optimize the motion trajectory based on the observed joint torques, current hand pose, contact points on the object, and contact forces in order to predict the next desired hand pose and joint angles that will keep the object stable during grasping. As optimization progresses, the grasp becomes increasingly stable and plausible towards the goal functional grasp. Force feedback plays a crucial role in smoothly fitting the multi-fingered hand to the object surface to achieve a compliant grasp.

We train a deep reinforcement learning model using the joint torques, hand pose, and joint angles of a multi-fingered hand as the observations and hand configuration changes as actions. An overview of the optimization trajectory for the Shadow hand is presented in Figure 1. To achieve a functional grasp of a given object at the end of the planning, we define a grasp reward function that reflects the grasp quality. Additionally, to minimize object movement when the hand touches it, we design another reward function that encourages the agent to move the object as little as possible. We employ the Soft Actor-Critic (SAC) algorithm \cite{haarnoja2018soft,laskin2020curl}, an off-policy reinforcement learning technique, to learn adaptive action policies through trial and error.
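
As an illustration of the agent-environment interface, one control step can be sketched as follows (a minimal sketch around a generic SAC agent; all names are illustrative assumptions rather than our exact code):

\begin{verbatim}
# One control step of the force-feedback grasping policy (sketch).
# Observations: joint torques, joint angles, hand pose, and progress h.
# Actions: changes of the hand configuration (finger/wrist joints and
# the 6-DoF hand base), executed by position-mode controllers.
def control_step(env, agent):
    obs = env.get_observation()
    action = agent.select_action(obs)
    next_obs, reward, done, info = env.step(action)
    agent.replay_buffer.add(obs, action, reward, next_obs, done)
    agent.update()            # off-policy SAC gradient step
    return next_obs, reward, done
\end{verbatim}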

\subsection{Action and State Space}

\begin{table}[t]
\caption{Observations and their dimensions}
\centering
\begin{tabular}{c|c}
\hline
Observation & Dimension\\
\hline
Current joint torques of hand $F$ & 24 \\
Current joint angles of hand $J$ & 24 \\
Current 6D pose of hand & 6 \\
Joint motion controller $h$ & 1 \\
Current driving joints of hand base & 6 \\
Previous 6D pose of hand & 6 \\
Previous joint motion controller & 1 \\
Goal 6D pose of hand $T_{g}, R_{g}$ & 6 \\
Goal palm position & 3 \\
\hline
\end{tabular}
\end{table}

The action vector contains 24 joints controlling the fingers and wrist, as well as 6 joints controlling the hand base translation and rotation. All of these joints are controlled in position mode. The objective of the action control is to generate an optimized hand trajectory, which allows the hand to approach an object from a pre-grasp position to a goal functional grasp. We assume that the pre-grasp position, which can be obtained using a computer vision system and a grasp planner, is slightly more than 2 centimeters away from the surface of the object to avoid contact caused by initial positioning deviations. The goal functional grasp is obtained either from recent static grasp synthesis methods \cite{zhu2021toward} or from a grasp dataset \cite{wang2023dexgraspnet}. An example of the pre-grasp and goal grasp of a camera is shown in Figure 1.

The task of motion planning from pre-grasp to a goal functional grasp is not a trivial one. Unlike the pick-and-place task that simply involves closing the fingers, achieving a goal functional grasp, such as for dexterous tool use, requires precise positioning of the hand on the functional parts of an object. This task is further complicated by the occlusion of the object during hand-object interaction, making it difficult to determine the exact object position. In this work, we rely on the goal functional grasp and force feedback information to guide the hand motion. The reinforcement learning policy is designed such that the desired grasp pose drives the hand to achieve the desired pose on the object, while the joint torque feedback helps to locate the object, perceive its geometry, and provide compliant contact forces.

The entire multi-fingered hand is controlled by a 6 DoF hand base and one actuator per joint. The hand base globally adjusts the orientation and translation of the whole hand so that it reaches a feasible approach direction and distance to the object, while the per-joint actuators locally control the joint angles to drive multiple fingers into contact with the corresponding functional parts of the object. Since only a small gap remains between the fingers and the object from the pre-grasp to the goal grasp, we employ a locally linear trajectory for each joint. This trajectory takes the pre-grasp joint angle $J_0$, goal joint angle $J_g$, and parameter $h$ as inputs and outputs the current joint angle as follows:

\begin{equation}
J=h(J_{g}-J_{0})+J_{0},\quad h \in [0,1]
\end{equation}

Here, $J_{g}-J_{0}$ denotes the range of joint motion, and $h \in [0,1]$ is a continuously updated parameter that progressively drives each joint from the initial pre-grasp joint angle to the final goal joint angle. When $h=0$, the joint is at the pre-grasp angle, and when $h=1$, it is at the final goal angle. We use the same value of $h$ at each joint to synchronously drive multiple fingers towards the goal position.
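
As an illustration, the locally linear trajectory above can be implemented per joint as a simple interpolation (a sketch; array shapes are assumptions):

\begin{verbatim}
import numpy as np

def interpolate_joints(J0, Jg, h):
    """Locally linear joint trajectory: J = h*(Jg - J0) + J0, h in [0, 1].

    J0, Jg: pre-grasp and goal joint angles (e.g., shape (24,)).
    The same scalar h drives all joints synchronously.
    """
    h = float(np.clip(h, 0.0, 1.0))
    return h * (np.asarray(Jg) - np.asarray(J0)) + np.asarray(J0)
\end{verbatim}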

The state vector consists of joint angles $J$, joint torques $F$ on each of the 24 driving joints, and the hand's 6D pose $(T, R)$, as detailed in Table 1. To monitor the execution progress, the parameter $h$ that controls joint motion is also designed as an observation variable, which is defined as:

\begin{equation}
h=1-0.5\max \limits_{i} \left| j_i -j_i^g \right|,\quad j_i \in J,\ j_i^g \in J_{g}
\end{equation}

Here, $j_i$ and $j_i^g$ denote the current and goal joint angles of the $i$-th joint, respectively. Across all joints, if the maximum absolute difference (normalized to $[0,1]$) between the current joint angles $J$ and the goal joint angles $J_g$ approaches 0, then $h$ approaches 1, indicating that all fingers have reached the goal hand pose. It is worth noting that we do not include any information about the object pose, geometry, or mass in the state vector.
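
A sketch of computing the progress variable $h$ defined above and assembling the state vector of Table 1 (24+24+6+1+6+6+1+6+3 = 77 entries) is given below; variable names are illustrative:

\begin{verbatim}
import numpy as np

def progress(J, Jg):
    """h = 1 - 0.5 * max_i |j_i - j_i^g|, angles normalized to [0, 1]."""
    return 1.0 - 0.5 * float(np.max(np.abs(np.asarray(J) - np.asarray(Jg))))

def build_observation(F, J, pose, h, base_joints,
                      prev_pose, prev_h, goal_pose, goal_palm_pos):
    # Concatenation order follows Table 1 (77 entries in total).
    return np.concatenate([F, J, pose, [h], base_joints,
                           prev_pose, [prev_h], goal_pose, goal_palm_pos])
\end{verbatim}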

\subsection{Reward}

A carefully designed reward function is essential to guide the agent in learning the desired behavior. In our manipulation tasks, due to the initial uncertainty of the object's position, dexterous hands need to sense the real position of the object through touch. During this process, contact and collision between the hand and the object can cause further displacement of the object. Dexterous grasping requires reducing the movement of the object, allowing the hand to adjust to the object's position. Therefore, the ideal grasping result should be based on the relative position between the hand and the object in the goal grasp, achieving accurate functional grasping at the final position of the object. In order to achieve a functional grasp, we have designed a reward function $r_a$ that guides the robot's joint angles towards the goal pose. Specifically, the observation variable $h$ indicates how close the joint angles are to the goal angles, and the closer they are, the greater the reward obtained.

\begin{equation}
\label{reward_b}
r_a=-\left|1-h\right|
\end{equation}

\begin{figure}[t]
\centering
\subfloat{\includegraphics[width=1.3in]{method401}\label{fig_first_case}}
\hfil
\subfloat{\includegraphics[width=1.3in]{method402}\label{fig_second_case}}

\caption{Terminal conditions: The hand moves in the incorrect orientation (Left). The object's displacement surpasses the predetermined limit (Right).}
\label{fig_5}
\end{figure}
We further design the reward function $r_b$ to guide the palm to the goal grasping position. Due to the multi-point contact constraints between the hand and the object, we only use the position of a single point on the palm to guide its movement. We obtain the current position of the palm relative to the object from the simulation system, denoted as $\textbf{p}$. The reward function is used to guide the palm to minimize the deviation between its current position $\textbf{p}$ and the goal position $\textbf{p}_{g}$. It should be noted that the variable $\textbf{p}_{g}$ here denotes the desired position of the palm relative to the current object position, as opposed to the goal palm position provided in the state, which is defined relative to ${T^{obj}}$. This approach has shown promising results in simulation experiments.

\begin{equation}
r_b=-\Vert \textbf{p}-{\textbf{p}_{g}} \Vert_2
\end{equation}

Moreover, we employ $r_c$ to incentivize the agent to minimize object displacement during operation, which is calculated as the negative distance between the object positions before and after the operation:

\begin{equation}
r_c=-\Vert {{T^{obj}_{c}}}-{\widetilde T^{obj}}\Vert_2
\end{equation}
\begin{equation}
{\widetilde T^{obj}}={T^{obj}}+\epsilon
\end{equation}

Here, ${T^{obj}_{c}}$ denotes the current object position, and ${\widetilde T^{obj}}$ represents the initial position of the object with added uncertainty.

Finally, we use weight coefficients to scale the reward terms above so that each approximately lies in the range $[-1,0]$. The total reward is defined as:

\begin{equation}
r=\omega+\omega_a r_a+\omega_b r_b+\omega_c r_c
\end{equation}
where $\omega=1.5$, $\omega_a=1$, $\omega_b=20$, and $\omega_c=100$.
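
For clarity, the reward terms and their weighting can be sketched as follows (illustrative names; distances are in meters):

\begin{verbatim}
import numpy as np

def total_reward(h, p, p_g, T_obj_c, T_obj_tilde,
                 w=1.5, w_a=1.0, w_b=20.0, w_c=100.0):
    """r = w + w_a*r_a + w_b*r_b + w_c*r_c."""
    r_a = -abs(1.0 - h)                           # joint-angle progress
    r_b = -float(np.linalg.norm(                  # palm position error
        np.asarray(p) - np.asarray(p_g)))
    r_c = -float(np.linalg.norm(                  # object displacement
        np.asarray(T_obj_c) - np.asarray(T_obj_tilde)))
    return w + w_a * r_a + w_b * r_b + w_c * r_c, (r_a, r_b, r_c)
\end{verbatim}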

\subsection{Terminal Conditions}

It is crucial to encourage the agent to focus on important regions of the state space and avoid unnecessary exploration. We define successful task completion based on three conditions: the object's movement is less than $0.01m$ ($\left|r_c\right|<0.01$), the final joint angles of the hand are within 95$\%$ of the goal angles ($\left|r_a\right|<0.05$), and the deviation of the palm position is less than $0.01m$ ($\left|r_b\right|<0.01$). Once these conditions are met, the agent receives an additional reward of $1000$. Moreover, a reward of $1.5\times(200-\mathrm{step})$ is designed to incentivize the agent to complete the task in as few steps as possible, where the maximum number of steps allowed to execute a task is 200. The values of $r_b$ and $r_c$ are continuously monitored in real time. If the object's movement exceeds $0.01m$ or the palm position deviation surpasses $0.1m$, the grasp fails immediately, as shown in Figure 2. In addition, if the total reward falls below $-50$, indicating inefficient exploration, the task will be aborted promptly.
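
These success and failure checks can be summarized in the following sketch (thresholds and the step bonus follow the text; names are illustrative):

\begin{verbatim}
def check_termination(r_a, r_b, r_c, cumulative_reward, step,
                      max_steps=200):
    """Terminal-condition sketch: returns (done, extra_reward)."""
    # Success: small object movement, joints near goal, palm near goal.
    if abs(r_c) < 0.01 and abs(r_a) < 0.05 and abs(r_b) < 0.01:
        return True, 1000.0 + 1.5 * (max_steps - step)
    # Failure: object moved too far or the palm deviated too much.
    if abs(r_c) > 0.01 or abs(r_b) > 0.1:
        return True, 0.0
    # Abort: inefficient exploration or step limit reached.
    if cumulative_reward < -50.0 or step >= max_steps:
        return True, 0.0
    return False, 0.0
\end{verbatim}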

\section{Experiments}

\subsection{Simulation Experiment Settings}

We performed experiments using the MuJoCo physics simulator \cite{todorov2012mujoco} and the ADROIT dexterous hand \cite{kumar2013fast}, which is a 24-DOF anthropomorphic platform equipped with six driving joints on the hand base, allowing the hand to move freely in space. As our study focused on the adaptive actions guided by force feedback, factors such as tendon elasticity and weak driving forces could hinder the perception of force feedback. Therefore, we made modifications to the platform, including removing hand tendons, increasing the range of joint forces, and building a position controller to enhance controllability.

Four tasks were designed, including grasping the nozzle of a watering can, preparing to take pictures with a camera, using an electric drill, and grasping the handle of a cup. These tasks require accurately matching the pose of the hand with that of the object, which is challenging due to errors in the poses of the object and hand. The pre-grasp hand pose and the goal grasp hand pose of the four objects were labeled according to \cite{zhu2021toward}, with the weight of the camera set to 1.0kg and the others set to 0.5kg.
\begin{figure}[t]
\centering
\subfloat{\includegraphics[width=1.7in]{nn25}\label{fig_first_case}}
\hfil
\subfloat{\includegraphics[width=1.7in]{nn26}\label{fig_second_case}}
\caption{Comparison of the success rates with and without force feedback under two different conditions: (Left) $\epsilon=0.005$ and (Right) $\epsilon=0.02$.}
\label{fig_6}
\end{figure}
\begin{figure}[t]
\centering
\subfloat{\includegraphics[width=1.7in]{nn21}\label{fig_first_case}}
\hfil
\subfloat{\includegraphics[width=1.7in]{nn24}\label{fig_second_case}}

\caption{Comparison of the success rates with and without force feedback for heavy objects.}
\label{fig_7}
\end{figure}

During training, the simulation system was reset when the termination condition was reached, and the total number of learning steps was 1,000,000. The first 1000 steps adopted a random policy, and then a batch of data was randomly selected from the buffer to update the policy in each step. We tested the policy learned by the agent every 2000 steps and retained the five best policies during the entire learning process. Finally, the best action policy was selected through 100 random tests. This learning process took approximately 4 hours on an RTX 2080 Ti.
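
A compact sketch of this training and selection schedule is given below (illustrative pseudocode around a generic SAC agent; the evaluation, snapshot, and buffer interfaces are assumptions):

\begin{verbatim}
def evaluate(env, policy, episodes=10):
    """Success rate of a policy over random test episodes."""
    successes = 0
    for _ in range(episodes):
        obs, done, info = env.reset(), False, {}
        while not done:
            obs, _, done, info = env.step(policy.select_action(obs))
        successes += int(info.get("success", False))
    return successes / episodes

def train_and_select(env, agent, total_steps=1_000_000,
                     warmup=1_000, eval_every=2_000):
    best = []                              # keep the five best policies
    obs = env.reset()
    for step in range(total_steps):
        action = (env.sample_random_action() if step < warmup
                  else agent.select_action(obs))
        next_obs, reward, done, _ = env.step(action)
        agent.buffer.add(obs, action, reward, next_obs, done)
        if step >= warmup:
            agent.update_from_batch()      # one random batch per step
        obs = env.reset() if done else next_obs
        if (step + 1) % eval_every == 0:
            best.append((evaluate(env, agent), agent.snapshot()))
            best = sorted(best, key=lambda x: x[0], reverse=True)[:5]
    # Pick the best of the retained policies via 100 random tests.
    return max(best, key=lambda x: evaluate(env, x[1], episodes=100))[1]
\end{verbatim}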

\subsection{Contribution of Force Feedback}
\begin{figure*}
\centering
\includegraphics[width=7.0in]{experiments05}
\caption{We conducted one hundred randomized tests, with the initial positions of the object drawn in the horizontal (x, y) direction, as shown in the figure. Successful grasps are indicated by green circles, while failed grasps are indicated by red triangles.}
\label{fig_8}
\end{figure*}

During the dexterous hand learning process, we observed that the hand gradually understood the task of approaching and grasping the object after several trial and error cycles. However, it took a considerable amount of time for the hand to learn to perceive collisions and adjust its grasping pose to avoid moving the object. The larger the range of initial position changes, the more challenging the learning process became. Nevertheless, it was fascinating to witness the hand's efforts to find the optimal grasping policy that could adapt to initial displacement of the object.

We conducted a comparison under two different initial positions of an object, with errors of $\epsilon=0.005m$ and $\epsilon=0.02m$, which are lower and higher than the threshold value of $r_c$ $(0.01m)$, respectively. When $\epsilon=0.005m$, the initial position error is smaller than the maximum allowable displacement of the objects during operation. In this case, several fixed grasping paths can enable the successful completion of the task, regardless of force feedback. However, when $\epsilon=0.02m$, there are no fixed paths that can achieve a high success rate, and very different grasp paths may be generated for varying initial positions. We conducted a comparative experiment by masking the joint torques in the state information to verify the contribution of force feedback. We learned the policy 5 to 30 times under different conditions and collected the success rate of the best policy into a chart. From Figure 3, it is evident that a higher success rate can be obtained when the objects are grasped with force feedback, which is consistent for most objects in most cases.
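
The ablation simply masks the joint-torque entries of the observation, for example as in the sketch below (assuming $F$ occupies the first 24 entries of the state vector, following the ordering of Table 1):

\begin{verbatim}
import numpy as np

def mask_force_feedback(obs, torque_dim=24):
    """Zero the joint torques so the policy cannot use force feedback."""
    obs = np.array(obs, dtype=float)
    obs[:torque_dim] = 0.0   # F entries, per the Table 1 ordering
    return obs
\end{verbatim}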

Due to inertia, the dexterous hand tends to maintain its original motion state. Only when the contact force is large enough to overcome inertia will the joint torques change, allowing perception of the contact collision. Lighter objects provide smaller contact resistance that is difficult to perceive. To validate the role of joint torques in dexterous manipulation, we subsequently increased the weight of the object by a factor of five in follow-up experiments.

As shown in Figure 4, when $\epsilon=0.005m$, we were able to learn a high-scoring grasp policy for the power drill and mug even without force feedback, but failed to do so for the camera and spray bottle due to the low probability of learning a control strategy. With force feedback, the success rate for every object was higher than 70$\%$. Therefore, this experiment demonstrates that force feedback plays a helpful role in the learning process. When the object's initial position has a random deviation of $0.02m$, the grasp success rate for most objects reaches 90$\%$ with force feedback, which is much higher than the success rate without force feedback $(\leq 35\%)$. This result highlights the particular importance of force feedback when the initial position deviation of the object is relatively large.

Additionally, we further verified the ability of force feedback to adapt to changes in the object's position. One hundred random tests were conducted for each object on the learned grasping policy, and the results are shown in Figure 5. For each test, the initial X and Y coordinates of the object were randomly selected within a range of $\pm 2$ cm and plotted on the chart. A successful grasp was marked with a green circle, while a failed grasp was marked with a red triangle. Through multiple experiments, the distribution of successful grasps can be observed. When force feedback is present, there are more successful grasps across the entire distribution space compared to the case without force feedback. In the absence of force feedback, the successful grasps are concentrated in a small area, indicating that a robust grasp is only obtained for a relatively small range of disturbance. This experiment further confirms that force feedback improves the adaptability of grasping.

\subsection{Adaptive Functional Grasp}
\begin{figure*}
\centering
\includegraphics[height=1.8in]{compare7}
\caption{We observed the grasping process at three fixed positions and compared the behavior with and without force feedback. In the three scenes A, B, and C, the initial positions of the objects are different, but they all move right along the X-axis. The curves show the real-time X-coordinate values of the palm and object during the movement, which are used to observe the movement process. With force feedback, an adaptive trajectory is generated, leading to smaller object movement, as marked in the result picture.}
\label{fig_9}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=5.4in]{exp7}
\caption{For objects located in various initial positions, our approach achieved accurate functional grasping through different grasping paths. The figure illustrates three functional grasping processes for each object.}
\label{fig_10}
\end{figure*}
Taking the spray bottle shown in Figure 6 as an example, we selected three different initial positions (-0.017, 0.01), (-0.007, 0.01), and (0.015, 0.01) to create scenes A, B, and C, respectively, for further observation of the grasping action. The position of the whole palm was represented by a single point, and we plotted the change of the palm position along the X-axis during the grasping process to observe the difference between the movement of the hand with and without force feedback (solid line). Additionally, we recorded the change of the real-time X-axis coordinate of the object's center to express object movement (dashed line). The experimental results demonstrate that in the absence of force feedback, the dexterous hand performs similar grasping actions for scenes A and B, resulting in a large degree of object movement, while the grasp fails for scene C. The difference in palm trajectories between scenes A and B demonstrates a certain degree of adaptability, which can be attributed to the real-time hand pose feedback available to the agent during the hand-object interaction. In contrast, when force feedback is present, the hand can adjust its action under the guidance of force feedback. Therefore, for the three grasps at different initial positions of the object, different adaptive grasping trajectories are formed, resulting in relatively small object movement.

Figure 7 illustrates the movement of the dexterous hand as it performs functional grasping of four distinct objects, all guided by force feedback. The figure highlights the adaptability of our method in generating diverse grasping trajectories and grasp types tailored to different initial object positions for each item. These diverse paths are continually adjusted in real time as the hand interacts and collides with the objects, aiming to minimize object displacement and achieve precise grasps. We also submitted four videos to demonstrate the grasping effect with and without force feedback. It can be observed that the generated grasps closely resemble human grasps in dexterity, flexibility, and accuracy.

\section{Conclusion and Future Work}

We have introduced a novel adaptive motion planning method for multi-fingered functional grasping utilizing force feedback and analyzed its performance on four distinct objects. The established reinforcement learning model is capable of learning policies that adjust actions in real time based on joint torque feedback, enabling adaptation to disturbances in initial object poses. The achieved dexterous grasps demonstrate promising human-like dexterity for adaptive robotic manipulation. However, our method still has limitations, as dexterous hands seek hand-object interaction policies through trial and error, which is sensitive to initial random exploration. In addition, the objects we manipulate are relatively heavy compared to objects in real-world environments. We will continue to improve manipulation dexterity by incorporating more perceptual information in the future.

\bibliographystyle{IEEEtran}
\bibliography{reference}

\end{document}