\documentclass[lettersize,journal]{IEEEtran}

\IEEEoverridecommandlockouts

\usepackage{graphics}
\usepackage{epsfig}
\usepackage{times}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{hyperref}
\usepackage{multirow}
\usepackage{caption,setspace}
\usepackage{booktabs}
\usepackage{arydshln}
\usepackage{tablefootnote}
\usepackage{csquotes}
\usepackage{footnote}
\makesavenoteenv{tabular}
\makesavenoteenv{table}
\usepackage[]{threeparttable}
\usepackage{breqn}
\usepackage{subcaption}
\usepackage{algorithm, algpseudocode}
\usepackage{float}
\usepackage[dvipsnames]{xcolor}

\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\newtheorem{definition}{Definition}
\newtheorem{remark}{Remark}
\newtheorem{lemma}{Lemma}
\newtheorem{theorem}{Theorem}
\newtheorem{assumption}{Assumption}

\newcommand{\integral}[2]{\int\limits^{#1}_{#2}}
\newcommand{\longline}{\noalign{\hrule height 1.5pt}}

\makeatletter
\def\adl@drawiv#1#2#3{\hskip.5\tabcolsep
  \xleaders#3{#2.5\@tempdimb #1{1}#2.5\@tempdimb}#2\z@ plus1fil minus1fil\relax
  \hskip.5\tabcolsep}
\newcommand{\cdashlinelr}[1]{\noalign{\vskip\aboverulesep
    \global\let\@dashdrawstore\adl@draw
    \global\let\adl@draw\adl@drawiv}
  \cdashline{#1}
  \noalign{\global\let\adl@draw\@dashdrawstore
    \vskip\belowrulesep}}
\makeatother

\title{\LARGE \bf
Fast Ergodic Search With Kernel Functions
}

\author{Muchen Sun, Ayush Gaggar, Peter Trautman, and Todd Murphey\thanks{Muchen Sun, Ayush Gaggar and Todd Murphey are with the Department of Mechanical Engineering, Northwestern University, Evanston, IL 60208, USA.
Email: {\tt\small muchen@u.northwestern.edu}}
\thanks{Peter Trautman is with Honda Research Institute, San Jose, CA 95134, USA}
}

\begin{document}
\allowdisplaybreaks

\bstctlcite{IEEEexample:BSTcontrol}

\maketitle
\thispagestyle{empty}

\begin{abstract}
Ergodic search enables optimal exploration of an information distribution while guaranteeing the asymptotic coverage of the search space. However, current methods typically have exponential computation complexity in the search space dimension and are restricted to Euclidean space. We introduce a computationally efficient ergodic search method. Our contributions are two-fold. First, we develop a kernel-based ergodic metric and generalize it from Euclidean space to Lie groups. We formally prove that the proposed metric is consistent with the standard ergodic metric while guaranteeing linear complexity in the search space dimension. Second, we derive the first-order optimality condition of the kernel ergodic metric for nonlinear systems, which enables efficient trajectory optimization. Comprehensive numerical benchmarks show that the proposed method is at least two orders of magnitude faster than the state-of-the-art algorithm. Finally, we demonstrate the proposed algorithm with a peg-in-hole insertion task. We formulate the problem as a coverage task in the space of SE(3) and use a 30-second-long human demonstration as the prior distribution for ergodic coverage. Ergodicity guarantees the asymptotic solution of the peg-in-hole problem so long as the solution resides within the prior information distribution, which is seen in the 100\% success rate.
\end{abstract}

\IEEEpeerreviewmaketitle

\section{Introduction}
\label{sec:introduction}

Robots often need to search an environment driven by a distribution of information of interest.
Examples include search-and-rescue based on human-annotated maps or aerial images~\cite{murphy_human-robot_2004}\cite{shah_multidrone_2020}, object tracking under sensory or motion uncertainty~\cite{abraham_decentralized_2018}\cite{shetty_ergodic_2022}, and data collection in active learning~\cite{abraham_active_2019}\cite{prabhakar_mechanical_2022}. The success of such tasks depends on both the richness of the information representation and the effectiveness of the search algorithm. While advances in machine perception and sensor design have substantially improved the quality of information representation, generating effective search strategies for the given information remains an open challenge.

Motivated by this challenge, ergodicity---an information-theoretic coverage metric---was proposed to optimize search decisions~\cite{mathew_metrics_2011}. Originating in statistical mechanics~\cite{petersen_ergodic_1989}, and more recently the study of fluid mixing~\cite{mathew_multiscale_2005}, the ergodic metric measures the time-averaged behavior of a dynamical system with respect to a spatial distribution---a dynamical system is ergodic with respect to a spatial distribution if the system visits any region of the space for an amount of time proportional to the integrated value of the distribution over the region. Optimizing the ergodic metric guides the robot to cover the whole search space asymptotically while investing more time in areas with higher information values. Recent work has also shown that such a search strategy closely mimics the search behaviors observed across mammal and insect species as a proportional betting strategy for information~\cite{chen_tuning_2020}.

Despite the theoretical advantages and tight connections to biological systems, current ergodic search methods are not suitable for all robotic tasks.
The standard ergodic metric~\cite{mathew_metrics_2011} has an exponential computation complexity in the search space dimension~\cite{shetty_ergodic_2022}\cite{sun_scale-invariant_2022}, in practice limiting its applications to spaces with fewer than three dimensions. Moreover, common robotic tasks, in particular vision or manipulation-related tasks, often require operation in non-Euclidean spaces, such as the space of rotations or rigid-body transformations. However, the standard ergodic metric is restricted to Euclidean space.

In this article, we propose an alternative formulation of ergodic search for Euclidean space and Lie groups with significantly improved computational efficiency. Our formulation is based on the L2-distance between the target information distribution and the spatial empirical distribution of the trajectory. We re-derive the ergodic metric and show that ergodicity can be computed as the summation of the integrated likelihood of the trajectory within the spatial distribution and the uniformity of the trajectory measured with a kernel function. We name this formulation the \emph{kernel ergodic metric} and show that it is asymptotically consistent with the standard ergodic metric but has a linear computation complexity in the search space dimension instead of an exponential one. We derive the metric for both Euclidean space and Lie groups. Moreover, we derive the first-order optimality condition of the proposed metric for nonlinear dynamical systems using Pontryagin's maximum principle. This result allows us to iteratively apply the standard linear quadratic regulator (LQR) technique for efficient ergodic trajectory optimization. We further generalize the control derivations to Lie groups.

We compare the computational efficiency of the proposed algorithm with the state-of-the-art fast ergodic search method~\cite{shetty_ergodic_2022} through a comprehensive benchmark.
The proposed method is at least two orders of magnitude faster in reaching the same level of ergodicity across 2D to 6D spaces and with both first-order and second-order system dynamics. We further demonstrate the proposed algorithm on a peg-in-hole insertion task with a 7-degrees-of-freedom robot arm. We formulate the problem as an ergodic coverage task in the space of SE(3), where the robot needs to simultaneously explore its end-effector's position and orientation, using a 30-second-long human demonstration as the prior distribution for ergodic coverage. We verify that the asymptotic coverage property of ergodic search leads to the task's $100\%$ success rate.

The rest of the paper is organized as follows: Section~\ref{sec:related_work} discusses related work on ergodic search. Section~\ref{sec:problem_formulation} formulates the ergodic search problem and introduces the necessary notation. Section~\ref{sec:ergodic_metric} derives the proposed ergodic metric and provides a theoretical analysis of its formal properties. Section~\ref{sec:ergodic_control} introduces the theory and algorithm for controlling a nonlinear dynamical system to optimize the proposed metric. Section~\ref{sec:lie_groups} generalizes the previous derivations from Euclidean space to Lie groups. Section~\ref{sec:evaulation} includes the numerical evaluation and hardware verification of the proposed ergodic search algorithm, followed by the conclusion and further discussion in Section~\ref{sec:conclusion}.
\begin{table*}[h]
  \centering
  \captionsetup{justification=centering}
  \caption{Properties of different ergodic search methods.}
  \label{table:property_comparison}
  \setlength{\tabcolsep}{10.5pt}
  \begin{tabular}{cccccc}
  \toprule
  \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Methods\end{tabular}} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Asymptotic\\Consistency\end{tabular}} &\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Real-Time\\Computation\end{tabular}} &\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Long\\Planning\\Horizon\end{tabular}} &\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Lie Group\\Generalization\end{tabular}} &\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Complexity\\w.r.t.\\Space Dimension\end{tabular}} \\
  & & & & & \\
  & & & & & \\
  \midrule
  Mathew et al. \cite{mathew_metrics_2011} & \checkmark & \checkmark & & & Exponential \\
  \midrule
  Miller et al. \cite{miller_trajectory_2013-1} & \checkmark & & \checkmark & & Exponential \\
  \midrule
  Miller et al. \cite{miller_trajectory_2013} & \checkmark & & \checkmark & \checkmark & Exponential \\
  \midrule
  Abraham et al. \cite{abraham_ergodic_2021} & & \checkmark & \checkmark & & Polynomial to Exponential$^*$ \\
  \midrule
  Shetty et al. \cite{shetty_ergodic_2022} & \checkmark & \checkmark & & \checkmark & Superlinear \\
  \midrule
  \textbf{Ours} & \checkmark & \checkmark & \checkmark & \checkmark & Linear \\
  \bottomrule
  \end{tabular} \\
  \vspace{+2pt}
  \footnotesize{$^*$ The method proposed in Abraham et al. \cite{abraham_ergodic_2021} uses Monte-Carlo (MC) integration and has a linear complexity in the number of samples.
However, to guarantee a consistent MC integration estimate, the number of samples has a growth rate between polynomial and exponential in the dimension~\cite{tang_note_2023}.}\\
\end{table*}

\section{Related Work:\\Ergodic Theory and Ergodic Search}
\label{sec:related_work}

Ergodic theory studies the connection between the time-averaged and space-averaged behaviors of a dynamical system. Originating in statistical mechanics, it has since grown into a full branch of mathematics with deep connections to other branches, such as information theory, measure theory, and functional analysis. We refer readers to \cite{walters_introduction_2000} for a more comprehensive review of ergodic theory in general. For decision-making, ergodic theory provides formal principles to reason over decisions based on the time-averaged and space-averaged behaviors of the environment or of the agent itself. The application of ergodic theory to robotic search tasks was first introduced in~\cite{mathew_metrics_2011}. In this seminal work, the formal definition of ergodicity in the context of a search task is given as the difference between the time-averaged spatial statistics of the agent's trajectory and the target information distribution to search in. A quantitative measure of this difference is also introduced under the name \emph{spectral multi-scale coverage} (SMC) metric, along with a closed-form model predictive controller with an infinitesimally small planning horizon for both first-order and second-order linear dynamical systems. We refer to the SMC metric of~\cite{mathew_metrics_2011} as the \emph{standard ergodic metric} in the rest of the paper, since it has been widely used in a majority of ergodic search applications.
Ergodic search has since been applied to generate informative search behaviors in robotic applications, including multi-modal target localization~\cite{mavrommati_real-time_2018}, object detection~\cite{abraham_active_2019}, imitation learning~\cite{kalinowska_ergodic_2021}, robotic assembly~\cite{ehlers_imitating_2019}\cite{shetty_ergodic_2022}, and automated data collection for generative models~\cite{prabhakar_mechanical_2022}. The metric has also been applied to non-search robotic applications, such as point cloud registration~\cite{sun_scale-invariant_2022}. Furthermore, ergodic search has been extended to better satisfy other requirements of common robotic tasks, such as safety-critical search~\cite{lerch_safety-critical_2023}, multi-objective search~\cite{ren_pareto-optimal_2023}, and time-optimal search~\cite{dong_time_2023}.

There are several limitations of the original ergodic search framework from~\cite{mathew_metrics_2011}: (1) the controller is limited to an infinitesimally small planning horizon and thus often requires an impractically long exploration period to generate good coverage; (2) it is costly to scale the standard ergodic metric to higher-dimensional spaces; (3) it is non-trivial to generalize the metric to non-Euclidean spaces. Previous works have designed controllers to optimize the trajectory over a longer horizon. A trajectory optimization method was introduced in \cite{miller_trajectory_2013}, which optimizes the standard ergodic metric iteratively for a nonlinear system by solving a linear-quadratic regulator (LQR) problem in each iteration. A model predictive control method based on hybrid systems theory was introduced in \cite{mavrommati_real-time_2018}, which was later extended to support decentralized multi-agent ergodic search in~\cite{abraham_decentralized_2018}. However, since these methods optimize the standard ergodic metric, they are still limited by the computational cost of evaluating the metric itself.
In~\cite{abraham_ergodic_2021}, an approximate ergodic search framework was proposed. The empirical spatial distribution of the robot trajectory is approximated as a Gaussian-mixture model, and the standard ergodic metric is replaced with the Kullback-Leibler (KL) divergence between the Gaussian-mixture distribution and the target information distribution, estimated using Monte-Carlo (MC) integration. While this framework has a linear time complexity in the number of samples used for MC integration, to guarantee a consistent estimate of the KL divergence, the number of samples has a growth rate varying between polynomial and exponential in the search space dimension~\cite{tang_note_2023}. A new computation scheme was introduced in \cite{shetty_ergodic_2022} to accelerate the evaluation of the standard ergodic metric using the technique of tensor train decomposition. This framework is demonstrated on an ergodic search task in a 6-dimensional space. However, it is limited to an infinitesimally small planning horizon, and even though the tensor train technique significantly improves the scalability of the standard ergodic metric, the computational cost remains high for planning with longer horizons. As for extending ergodic search to non-Euclidean spaces, an extension to the special Euclidean group SE(2) was introduced in \cite{miller_trajectory_2013} by defining the Fourier basis functions on SE(2). However, defining Fourier basis functions for other Lie groups is non-trivial, and the method has the same computation complexity as in Euclidean space. The tensor train framework from~\cite{shetty_ergodic_2022} can also be generalized to Lie groups, but the generalization applies to the controller instead of the metric; thus, it is limited to an infinitesimally small planning horizon.
Our proposed ergodic search framework is built on top of a scalable ergodic metric that is asymptotically consistent with the standard ergodic metric, alongside a rigorous generalization to Lie groups. A comparison of the properties of different ergodic search methods is shown in Table~\ref{table:property_comparison}.

\section{Problem Formulation}
\label{sec:problem_formulation}

\subsection{Notation and Definition}
We denote the state of the robot as $s\in\mathcal{S}$, where $\mathcal{S}$ is a bounded set within an $n$-dimensional Euclidean space. Later in the paper, we will extend the state of the robot to Lie groups. We assume the robot's motion is governed by the following dynamics:
\begin{align}
    \dot{s}(t) = f(s(t), u(t)), \label{eq:robot_dynamics}
\end{align} where $u(t)\in\mathcal{U}\subset\mathbb{R}^m$ is the control signal. The dynamics function $f(\cdot,\cdot)$ is differentiable with respect to both $s(t)$ and $u(t)$. We denote a probability density function defined over the bounded state space $\mathcal{S}$ as $P(x):\mathcal{S}\mapsto\mathbb{R}_0^+$, which must satisfy:
\begin{align}
    \int_{\mathcal{S}} P(x)dx=1 \text{ and } P(x)\geq 0\quad\forall x\in\mathcal{S}. \label{eq:distr_constraints}
\end{align} We define a trajectory $s(t):[0,T]\mapsto\mathcal{S}$ as a continuous mapping from time to a state in the bounded state space.

\begin{definition}[Trajectory empirical distribution]
Given a trajectory $s(t):[0, T]\mapsto\mathcal{S}$ of the robot, we define the empirical distribution of the trajectory as:
\begin{align}
    C_{s}(x) = \frac{1}{T} \int_{0}^{T} \delta_{s(t)}(x) dt, \label{eq:trajectory_spatial_statistics}
\end{align} where $\delta_{s(t)}(x)$ is a Dirac delta function.
\end{definition}
\begin{remark}
The trajectory empirical distribution $C_{s}(x)$ is a valid probability density function since it satisfies (\ref{eq:distr_constraints}).
\end{remark}
\begin{remark}
    The trajectory empirical distribution is often also referred to as the time-averaged statistics of the trajectory~\cite{mathew_metrics_2011}\cite{miller_ergodic_2016}, since it represents the likelihood of each region being visited by the robot.
\end{remark}
\begin{remark}
    The trajectory empirical distribution transforms the trajectory, as a function of time, into a spatial distribution. This transformation is necessary to evaluate ergodicity.
\end{remark}

\subsection{Ergodicity}
The original definition of ergodicity~\cite{mathew_metrics_2011} states that a dynamical system is ergodic with respect to a distribution if and only if \emph{the system visits any region of the space for an amount of time proportional to the integrated value of the distribution over the region}. Given this definition, the mathematical definition of an ergodic system is given below.
\begin{definition}[Ergodic system]
    Given a spatial distribution $P(x)$, a dynamical system, with its trajectory denoted as $s(t)$, is ergodic with respect to $P(x)$ if and only if the following equation holds~\cite{mathew_metrics_2011}:
    \begin{align}
        P(x) = \lim_{T\rightarrow\infty} C_{s}(x), \forall x\in\mathcal{S}. \label{eq:theoretical_ergodicity}
    \end{align}
\end{definition}

\begin{remark}
    In other words, an ergodic system's trajectory empirical distribution, in the limit of an infinite time horizon, is exactly the same as the target distribution.
\end{remark}
\begin{lemma}[Asymptotic coverage]
    If a spatial distribution has a non-zero density at every point of the state space, an ergodic system, while systematically spending more time over regions with higher information density and less time over regions with lower information density, will eventually visit any possible state in the state space as time approaches infinity.
\end{lemma}

However, the above definition of an ergodic system (\ref{eq:theoretical_ergodicity}) is infeasible to verify in practice for two reasons: (1) it only evaluates systems with infinitely long time horizons; (2) the trajectory empirical distribution (\ref{eq:trajectory_spatial_statistics}) requires the evaluation of the Dirac delta function, which can only be evaluated in theory because of its infinite magnitude. For a robotic task, it is more important to quantify how ``ergodic'' a finite-horizon trajectory is given a spatial distribution. By optimizing such a quantifiable metric as the objective of an optimal control or trajectory optimization problem, we can enable a robot to optimally explore a spatial distribution within a limited amount of time, while knowing that the asymptotic coverage property of an ergodic system will eventually guide the robot to cover the entire space. This motivates the derivation of a practical \emph{ergodic metric}.

\subsection{Standard ergodic metric}

Motivated by the need for a practical measure of ergodicity for robotic search tasks, the standard ergodic metric was proposed in~\cite{mathew_metrics_2011}. The metric is formulated as a Sobolev norm through the Fourier transformation of the target distribution and the trajectory empirical distribution. We now briefly introduce the formula of the standard ergodic metric.

\begin{definition}[Fourier basis function]
    The standard ergodic metric assumes the robot operates in an $n$-dimensional rectangular Euclidean space, denoted as $\mathcal{S}=[0, L_1]\times\cdots\times[0, L_n]$.
The Fourier basis function $f_{\mathbf{k}}(x) : \mathcal{S} \mapsto \mathbb{R}$ is defined as:
    \begin{align}
        f_{\mathbf{k}}(x) = \frac{1}{h_{\mathbf{k}}}\prod_{i=1}^{n} \cos\left( \frac{k_i\pi}{L_i} x_i \right), \label{eq:fourier_basis}
    \end{align} where
    \begin{align}
        & x = [x_1, x_2, \cdots, x_n] \in \mathcal{S} \nonumber \\
        & \mathbf{k} = [k_1, \cdots, k_n] \in \mathcal{K} \subset \mathbb{N}^n \nonumber \\
        & \mathcal{K} = \{0, 1, \cdots, K_1\}\times\cdots\times \{0, 1, \cdots, K_n\} \nonumber
    \end{align} and $h_{\mathbf{k}}$ is the normalization term such that the function space norm of each basis function is $1$.
\end{definition}
\begin{remark}
    Following~\cite{mathew_multiscale_2005}, the Fourier basis functions (\ref{eq:fourier_basis}) form a set of orthonormal basis functions. This means both the target spatial distribution and the trajectory empirical distribution can be represented as weighted sums of the Fourier basis functions.
\end{remark}

\begin{definition}[Standard ergodic metric]
    Given an $n$-dimensional spatial distribution $P(x)$ and a dynamical system with a trajectory $s(t)$ over a finite time horizon $[0, T]$, the standard ergodic metric, denoted as $\mathcal{E}$, is defined as:
    \begin{align}
        \mathcal{E}(P(x), s(t)) & = \sum_{\mathbf{k}\in\mathcal{K}} \Lambda_{\mathbf{k}} \left( p_{\mathbf{k}} - c_{\mathbf{k}} \right)^2, \label{eq:standard_metric}
    \end{align} where $\{p_{\mathbf{k}}\}_{\mathcal{K}}$ and $\{c_{\mathbf{k}}\}_{\mathcal{K}}$ are the sequences of Fourier decomposition coefficients of the target distribution and the trajectory empirical distribution, respectively:
    \begin{align}
        p_{\mathbf{k}} & = \int_{\mathcal{S}} P(x) f_{\mathbf{k}} (x) dx \label{eq:target_coef} \\
        c_{\mathbf{k}} & = \int_{\mathcal{S}} C_s(x) f_{\mathbf{k}} (x) dx = \frac{1}{T} \int_0^T f_{\mathbf{k}}(s(t)) dt \label{eq:traj_coef}
    \end{align} (The full derivation of (\ref{eq:traj_coef}) can be found
in~\cite{mathew_metrics_2011}.) The sequence $\{\Lambda_{\mathbf{k}}\}$ is a convergent real sequence that bounds (\ref{eq:standard_metric}) from above, and is defined as:
    \begin{align}
        \Lambda_{\mathbf{k}} = (1 + \Vert\mathbf{k}\Vert)^{-\frac{n+1}{2}} .\label{eq:fourier_lambda}
    \end{align}
\end{definition}

In practice, by choosing a finite number of Fourier basis functions, the standard ergodic metric (\ref{eq:standard_metric}) can approximate ergodicity for a system with a finite time horizon. More Fourier basis functions lead to a better approximation but also require more computation.

\begin{theorem} \label{theorem:fourier_metric_convergence}
    With the time horizon $T$ and the number of Fourier basis functions approaching infinity, a dynamical system is globally optimal under the standard ergodic metric (\ref{eq:standard_metric}) if and only if the system satisfies the original definition of an ergodic system in (\ref{eq:theoretical_ergodicity}).
\end{theorem}
\begin{proof}
    See~\cite{mathew_metrics_2011}.
\end{proof}

\begin{lemma}
    The standard ergodic metric (\ref{eq:standard_metric}) is a distance metric given by the Sobolev space norm of negative index $-\frac{n+1}{2}$:
    \begin{align}
        \mathcal{E}(P(x), s(t)) = \Vert P(x) - C_s(x) \Vert^2_{H^{-\frac{n+1}{2}}}. \label{eq:sobolev_norm}
    \end{align}
\end{lemma}
\begin{proof}
    See~\cite{mathew_metrics_2011}.
\end{proof}

\begin{remark}[Limitations of the standard ergodic metric]
    The computational bottleneck of the standard ergodic metric (\ref{eq:standard_metric}) is the number of Fourier basis functions. Past studies have revealed that the number of basis functions sufficient for practical applications grows \emph{exponentially} with the search space dimension~\cite{sun_scale-invariant_2022}\cite{shetty_ergodic_2022}, creating a significant challenge for applying the standard ergodic metric in higher-dimensional spaces.
In principle, the Fourier basis functions can also be defined in non-Euclidean spaces such as Lie groups. However, deriving the Fourier basis functions in these spaces is non-trivial, limiting the generalization of the standard ergodic metric.
\end{remark}

\section{Kernel-Based Ergodic Metric}
\label{sec:ergodic_metric}

Inspired by the equivalence between the standard ergodic metric (\ref{eq:standard_metric}) and the Sobolev-norm distance (\ref{eq:sobolev_norm}) between the target spatial distribution and the trajectory empirical distribution, we derive the kernel ergodic metric from an L2-norm distance formula for ergodicity. The Sobolev-norm and L2-norm formulas are asymptotically consistent, as both converge to the definition of an ergodic system in (\ref{eq:theoretical_ergodicity}), but the L2-norm formula leads to a more computationally efficient metric for ergodicity.

\subsection{Deriving the kernel-based ergodic metric}

\begin{figure*}[htbp]
    \centering
    \includegraphics[width=\textwidth]{figs/metric_element_comparison.pdf}
    \caption{Trajectories when optimizing the individual and combined elements of the kernel ergodic metric (\ref{eq:proposed_metric}).
(a) When only optimizing the information maximization term (\ref{eq:first_term}), the trajectory goes straight to a local maximum of the target distribution; (b) When only optimizing the kernel correlation term (\ref{eq:second_term}), the trajectory exhibits a uniform coverage pattern; (c) The kernel metric is the summation of the two elements; optimizing it leads to a trajectory that optimally covers the target distribution.}
    \label{fig:metric_elements_comparison}
\end{figure*}

We start by deriving the L2-distance between the target spatial distribution and the trajectory empirical distribution:
\begin{align}
    & \int_{\mathcal{S}} \left[ P(x) - C_s(x) \right]^2 dx \label{eq:l2_dist} \\
    & = \int_{\mathcal{S}} \left[ P(x)^2 - 2 P(x) C_s(x) + C_s(x)^2 \right] dx \\
    & = \int_{\mathcal{S}} P(x)^2 dx - 2 \int_{\mathcal{S}} P(x) C_s(x) dx + \int_{\mathcal{S}} C_s(x)^2 dx. \label{eq:l2_dist_expansion}
\end{align} We can drop the first term in (\ref{eq:l2_dist_expansion})---the squared norm of the target distribution $\Vert P(x) \Vert^{2} = \int_{\mathcal{S}} P(x)^2 dx$---since it does not depend on $s(t)$ and is a constant for a given target spatial distribution:
\begin{align}
    & \argmin_{s(t)} \quad \Vert P(x) \Vert^2 - 2 \int_{\mathcal{S}} P(x) C_s(x) dx + \int_{\mathcal{S}} C_s(x)^2 dx \nonumber \\
    & = \argmin_{s(t)} \quad {-2} \int_{\mathcal{S}} P(x) C_s(x) dx + \int_{\mathcal{S}} C_s(x)^2 dx. \label{eq:droped_obj}
\end{align} We now substitute the definition of the trajectory empirical distribution (\ref{eq:trajectory_spatial_statistics}) into (\ref{eq:droped_obj}). For the first term in (\ref{eq:droped_obj}), we have:
\begin{align}
    \int_{\mathcal{S}} P(x) C_s(x) dx & = \int_{\mathcal{S}} P(x) \left[ \frac{1}{T} \int_{0}^{T} \delta_{s(t)}(x) dt \right] dx \\
    & = \frac{1}{T} \int_0^T \left[ \int_{\mathcal{S}} P(x) \delta_{s(t)}(x) dx \right] dt \\
    & = \frac{1}{T} \int_0^T P(s(t)) dt.
\label{eq:first_term}
\end{align} The last two equalities above follow from the sifting property of the Dirac delta function. For the second term in (\ref{eq:droped_obj}), we have:
\begin{align}
    & \int_{\mathcal{S}} C_s(x)^2 dx = \int_{\mathcal{S}} \left[ \frac{1}{T} \int_{0}^{T} \delta_{s(t)}(x) dt \right]^2 dx \\
    & = \frac{1}{T^2} \int_{0}^{T} \int_{0}^{T} \left[\int_{\mathcal{S}} \delta_{s(t_1)}(x) \delta_{s(t_2)}(x) dx\right] dt_1 dt_2 \\
    & = \frac{1}{T^2} \int_{0}^{T} \int_{0}^{T} \phi(s(t_1), s(t_2)) dt_1 dt_2, \label{eq:second_term}
\end{align} where $\phi(x_1, x_2)$ is a delta kernel defined as:
\begin{align}
    \phi(x_1, x_2) = \begin{cases} \infty, & \text{if } x_1=x_2 \\ 0, & \text{otherwise}\end{cases}. \label{eq:delta_kernel}
\end{align}

Substituting (\ref{eq:first_term}) and (\ref{eq:second_term}) back into (\ref{eq:droped_obj}), we have the alternative ergodic metric:
\begin{align}
    \mathcal{E}_{\phi}(s(t)) = & - \frac{2}{T} \int_0^T P(s(t)) dt \nonumber \\
    & + \frac{1}{T^2} \int_{0}^{T} \int_{0}^{T} \phi(s(t_1), s(t_2)) dt_1 dt_2. \label{eq:proposed_metric}
\end{align}

In practice, the kernel $\phi(\cdot,\cdot)$ can be approximated in various forms, and our results hold as long as the kernel function can asymptotically converge to a delta kernel. For example, our formula can be specified with the squared exponential kernel function, the rational quadratic kernel function, the $\gamma$-exponential kernel function, or the Matérn class of kernel functions~\cite{rasmussen_gaussian_2006}. For the rest of the paper, we will formulate it as a Gaussian (squared exponential) kernel:
\begin{align}
    \phi(x_1, x_2; \Sigma) = \frac{\exp\left( -\frac{1}{2}(x_1-x_2)^\top\Sigma^{-1}(x_1-x_2) \right)}{\sqrt{(2\pi)^n\det(\Sigma)}},
    \label{eq:gaussian_kernel}
\end{align} where $\Sigma$ is an $n$-by-$n$ covariance matrix of the kernel. We name this new metric (\ref{eq:proposed_metric}) the \emph{kernel ergodic metric}.
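For a discrete-time trajectory, the two time integrals in the metric reduce to sums. The sketch below is an illustrative numerical evaluation of this discretization with a Gaussian kernel, not the authors' implementation; the function names and the example target density are our own.

```python
import numpy as np

def kernel_ergodic_metric(traj, pdf, Sigma):
    """Discretized kernel ergodic metric of a trajectory `traj` (N x n array)
    with respect to a target density `pdf`, dropping the constant ||P||^2 term:
        E = -(2/N) * sum_t P(s_t) + (1/N^2) * sum_{t1,t2} phi(s_t1, s_t2)."""
    N, n = traj.shape
    Sigma_inv = np.linalg.inv(Sigma)
    norm_const = 1.0 / np.sqrt((2.0 * np.pi) ** n * np.linalg.det(Sigma))
    # Information-maximization term: integrated likelihood along the trajectory.
    info_term = -2.0 / N * sum(pdf(x) for x in traj)
    # Uniformity term: average pairwise Gaussian-kernel correlation (vectorized).
    diffs = traj[:, None, :] - traj[None, :, :]                  # (N, N, n)
    quad = np.einsum("ijk,kl,ijl->ij", diffs, Sigma_inv, diffs)  # squared Mahalanobis distances
    corr_term = norm_const * np.mean(np.exp(-0.5 * quad))
    return info_term + corr_term

# Example: points drawn from the target should score better (lower) than the
# same point set shifted away from the target's support.
rng = np.random.default_rng(0)
target_pdf = lambda x: np.exp(-0.5 * x @ x) / (2.0 * np.pi)  # standard 2D Gaussian
on_target = rng.normal(size=(50, 2))
off_target = on_target + 5.0
```

Note that the uniformity term is unchanged by the shift (the kernel depends only on point differences), so the comparison isolates the information-maximization term, matching the decomposition above.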
\begin{theorem}
    With the time horizon $T$ approaching infinity and the kernel approaching a delta kernel (\ref{eq:delta_kernel}), a dynamical system is globally optimal under the kernel ergodic metric (\ref{eq:proposed_metric}) if and only if the system satisfies the original definition of an ergodic system in (\ref{eq:theoretical_ergodicity}).
\end{theorem}
\begin{proof}
    With the time horizon $T$ approaching infinity and the kernel approaching a delta kernel (\ref{eq:delta_kernel}), the kernel ergodic metric (\ref{eq:proposed_metric}) converges to the L2-distance formula (\ref{eq:l2_dist}). Since the L2-distance (\ref{eq:l2_dist}) is a convex metric with a global minimum of $0$, a dynamical system that is globally optimal under the kernel ergodic metric (\ref{eq:proposed_metric}) must satisfy:
    \begin{align}
        P(x) = C_s(x), \forall x\in\mathcal{S},
    \end{align} which is equivalent to the definition of an ergodic system in (\ref{eq:theoretical_ergodicity}).
\end{proof}

\begin{remark}
    Note that the kernel ergodic metric (\ref{eq:proposed_metric}) no longer requires the search space to be rectangular, as the standard ergodic metric does. Instead, it only requires the search space to be bounded.
\end{remark}

\begin{figure*}[htbp]
    \centering
    \includegraphics[width=\textwidth]{figs/kernel_parameter_obj.pdf}
    \caption{(a) Samples from a target distribution. (b) The kernel parameter selection objective function (\ref{eq:optimal_kernel}) with the given samples. In this case, the kernel parameter is the value of the diagonal entry of the covariance. (c) A sub-optimal kernel parameter leads to an ``over-uniform'' coverage behavior. (d) The optimal kernel parameter generates an ergodic trajectory that allocates the time it spends in each region to be proportional to the integrated probability density of the region.
(e) Another sub-optimal kernel parameter leads to an ``over-concentrated'' coverage behavior.}
+    \label{fig:kernel_parameter_obj}
+\end{figure*}
+
+\subsection{Intuition behind kernel ergodic metric}
+
+The kernel ergodic metric (\ref{eq:proposed_metric}) is the sum of two terms, $-\frac{2}{T} \int_0^T P(s(t)) dt$ and $\frac{1}{T^2} \int_{0}^{T} \int_{0}^{T} \phi(s(t_1), s(t_2)) dt_1 dt_2$. Here, we show that minimizing the kernel ergodic metric, which simultaneously minimizes both terms, represents \emph{a balance between information maximization and uniform coverage}.
+
+For the first term, since $P(x)$ is the probability density function of the target distribution, minimizing $-\frac{2}{T} \int_0^T P(s(t)) dt$ drives the system toward the states of maximum likelihood under the spatial distribution. For the second term, the following lemma shows that minimizing it drives the system to cover the search space uniformly.
+
+\begin{lemma} \label{lemma:minimal_norm_uniformality}
+A trajectory $s(t)$ that minimizes $\frac{1}{T^2} \int_{0}^{T} \int_{0}^{T} \phi(s(t_1), s(t_2)) dt_1 dt_2$ uniformly covers the search space $\mathcal{S}$.
+\end{lemma}
+\begin{proof}
+See appendix.
+\end{proof}
+
+Based on the above results, the kernel ergodic metric combines the negated integrated likelihood of the trajectory under the spatial distribution with the non-uniformity of the trajectory over the search space, as measured by the kernel function. In other words, an ergodic system exhibits both information-maximization and uniform-coverage behavior. Figure~\ref{fig:metric_elements_comparison} demonstrates the effects of optimizing each term separately and of optimizing the combined kernel ergodic metric. Note that Figure~\ref{fig:metric_elements_comparison}(b) shows that Lemma~\ref{lemma:minimal_norm_uniformality} holds in practice with a finite time horizon and a Gaussian kernel.
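As a small numerical illustration of Lemma~\ref{lemma:minimal_norm_uniformality} (the code and parameter values are our own hypothetical choices, not the paper's implementation), descending the discrete pairwise kernel sum makes clustered points spread out over the search interval:

```python
import numpy as np

def pairwise_kernel_sum(pts, inv_len2):
    """Average pairwise Gaussian kernel correlation of 1-D points
    (the discrete analogue of the metric's second term)."""
    d = pts[:, None] - pts[None, :]
    return np.exp(-0.5 * d ** 2 * inv_len2).sum() / len(pts) ** 2

def spread_by_descent(pts, inv_len2, steps=200, lr=0.05):
    """Gradient descent on the pairwise kernel sum; points stay in [0, 1]."""
    for _ in range(steps):
        d = pts[:, None] - pts[None, :]
        K = np.exp(-0.5 * d ** 2 * inv_len2)
        # d/dx_i of (1/N^2) sum_{j,k} K(x_j, x_k); the factor 2 comes
        # from the symmetry of the double sum.
        grad = 2.0 * (-inv_len2 * d * K).sum(axis=1) / len(pts) ** 2
        pts = np.clip(pts - lr * grad, 0.0, 1.0)
    return pts

# Points initially clustered near 0.5 repel each other under the descent,
# lowering the pairwise kernel sum and increasing their spread.
pts0 = np.linspace(0.45, 0.55, 20)
pts1 = spread_by_descent(pts0, inv_len2=1.0 / 0.1 ** 2)
```

The pairwise kernel sum acts as a repulsive potential: any two points closer than the kernel length scale push each other apart, which is the mechanism behind the uniform-coverage behavior in the lemma.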
+
+\subsection{Finding the optimal kernel parameter}
+
+Since the delta kernel is approximated by a Gaussian kernel (\ref{eq:gaussian_kernel}) in practice, the parameter of the Gaussian kernel---the covariance matrix---plays a critical role. Here, we discuss the principle of choosing the optimal kernel parameter. Our principle is based on the convergence property of the empirical distribution, shown in the following lemma.
+
+\begin{lemma}\label{lemma:large_number}
+    Assume $\bar{s}=\{s_t\}$ is the trajectory of a discrete dynamical system with a total number of time steps $N$, where each state $s_t$ is an i.i.d. sample from the target distribution $P(x)$. The following equation holds:
+    \begin{align}
+        \lim_{N\rightarrow\infty}C_{\bar{s}}(x) = \lim_{N\rightarrow\infty}\left[\frac{1}{N}\sum_{t=1}^{N} \delta_{s_t}(x)\right] = P(x), \forall x\in\mathcal{S}.
+    \end{align}
+\end{lemma}
+
+With Lemma~\ref{lemma:large_number}, we have:
+\begin{align}
+    \lim_{N\rightarrow\infty} \left\Vert \frac{d}{d\bar{s}} \left[ \int_{\mathcal{S}}(C_{\bar{s}}(x)-P(x))^2 dx \right] \right\Vert = 0.
+\end{align}
+The L2 distance here can be replaced by the kernel ergodic metric, but instead of a continuous time integral, we use a discrete Monte Carlo (MC) integral formula:
+\begin{align}
+    & \lim_{N\rightarrow\infty} \left\Vert \frac{d}{d\bar{s}} J(\bar{s}) \right\Vert = 0 \label{eq:zero_gradient_limit} \\
+    J(\bar{s}) = -\frac{1}{N} & \sum_{t=1}^{N} P(s_t) + \frac{1}{N^2} \sum_{t_1=1}^{N} \sum_{t_2=1}^{N} \phi(s_{t_1}, s_{t_2}).
+\end{align}
+The result in (\ref{eq:zero_gradient_limit}) suggests that a set of i.i.d. samples from the target distribution will be close to an optimum under the kernel ergodic metric. We can thus choose the optimal kernel parameter by minimizing the norm of the derivative of the kernel ergodic metric with respect to the samples.
We now formally define this objective function for automatic kernel parameter selection:
+\begin{definition}[Kernel parameter selection objective]
+    Given a set of samples $\bar{s}=\{s_i\}$ from the target distribution, and denoting a kernel with parameter $\theta$ as $\phi(\cdot,\cdot;\theta)$, the objective function for automatic kernel parameter selection is:
+\begin{align}
+    J(\bar{s}; \theta) & = -\frac{1}{N} \sum_{t=1}^{N} P(s_t) + \frac{1}{N^2} \sum_{t_1=1}^{N} \sum_{t_2=1}^{N} \phi(s_{t_1}, s_{t_2}; \theta). \label{eq:optimal_kernel}
+    \end{align} The optimal kernel parameter can be found by minimizing this objective.
+\end{definition}
+
+\begin{remark}
+    With a Gaussian kernel, as well as any other kernel function that is differentiable with respect to its parameter, the kernel parameter selection objective is differentiable and can be optimized with standard iterative optimization methods.
+\end{remark}
+
+Figure~\ref{fig:kernel_parameter_obj} shows an example objective function for kernel parameter selection, as well as how different kernel parameters influence the resulting ergodic trajectory. From Figure~\ref{fig:kernel_parameter_obj}, we can also see that the kernel parameter is an adjustable parameter that allows a practitioner to generate coverage trajectories balancing uniform coverage against maximum-likelihood-seeking behavior. Thus, a kernel parameter could be sub-optimal under the parameter selection objective yet still generate valuable trajectories, depending on the specific requirements of a task.
+
+\section{Optimal Control For Kernel Ergodic Metric} \label{sec:ergodic_control}
+
+In this section, we discuss how to optimize the kernel ergodic metric (\ref{eq:proposed_metric}) when the trajectory is governed by a nonlinear dynamical system.
We show that, even though the kernel ergodic metric (\ref{eq:proposed_metric}) has a double time-integral structure that is uncommon among standard optimal control formulations, we can still solve the optimization problem through standard optimal control techniques, such as the linear quadratic regulator (LQR).
+
+\subsection{First-Order Optimality Condition}
+
+We start by formulating the optimal control problem for the kernel ergodic metric (\ref{eq:proposed_metric}) and deriving the first-order optimality condition of the problem.
+
+\begin{definition}[Kernel ergodic control]
+    Given a target distribution $P(x)$ and system dynamics $\dot{s}(t) = f(s(t), u(t))$, kernel ergodic control is defined as the following optimization problem:
+    \begin{align}
+        u^*(t) & = \argmin_{u(t)} J(u(t)) \\
+J(u(t)) & = \mathcal{E}_{\phi}(s(t)) + \int_0^T l(s(t), u(t)) dt + m(s(T)) \label{eq:ergodic_control_obj} \\
+        \text{s.t. } & s(t) = s_0 + \int_{0}^{t} f(s(\tau), u(\tau)) d\tau,
+    \end{align} where $l(\cdot,\cdot)$ and $m(\cdot)$ are the additional run-time cost and terminal cost of the task, respectively.
+\end{definition}
+
+Our derivation of the first-order optimality condition for the above optimal control problem is based on the following lemma.
+
+\begin{lemma} \label{lemma:perturbation_linear_dynamics}
+    Given a dynamical system $\dot{s}(t) = f(s(t), u(t))\in\mathbb{R}^n$ with initial state $s(0)=s_0$, denote by $v(t)$ a perturbation on $u(t)$ and by $z(t)$ the resulting perturbation on $s(t)$. Then $z(t)$ and $v(t)$ are governed by the linear dynamics, with $z(0)=\mathbf{0}\in\mathbb{R}^n$:
+    \begin{align}
+        \dot{z}(t) & = A(t) z(t) + B(t) v(t) \\
+        A(t) & = D_1 f(s(t), u(t)), \quad B(t) = D_2 f(s(t), u(t)).
+    \end{align}
+\end{lemma}
+
+We now introduce the first-order optimality condition for the optimal control problem.
+
+\begin{theorem} \label{theorem:first_order_optmality}
+    The first-order necessary condition for $u(t)$ to minimize the objective (\ref{eq:ergodic_control_obj}) is:
+    \begin{align}
+        \rho(t)^\top B(t) + b(t)^\top = 0, \quad \forall t\in[0, T] \label{eq:first_order_optimality}
+    \end{align} where
+    \begin{align}
+        B(t) = \frac{d}{du} f(s(t), u(t)), \quad b(t) = \frac{d}{du} l(s(t), u(t)) \label{eq:bt_formula}
+    \end{align} and
+    \begin{align}
+        \dot{\rho}(t) = & -A(t)^\top \rho(t) - a(t) \\
+        \rho(T) = & Dm(s(T)) \\
+        A(t) = & \frac{d}{ds} f(s(t), u(t)) \\
+        a(t) = & -\frac{2}{T} DP(s(t)) + \frac{d}{ds} l(s(t), u(t)) \nonumber \\
+        & + \frac{2}{T^2} \int_{0}^{T} D_1\phi(s(t), s(\tau)) d\tau, \label{eq:state_linearization_objective}
+    \end{align} where $D_1\phi(\cdot,\cdot)$ denotes the derivative of $\phi$ with respect to its first argument.
+\end{theorem}
+\begin{proof}
+See appendix.
+\end{proof}
+
+While we leave the full proof of Theorem~\ref{theorem:first_order_optmality} to the appendix, we highlight an intermediate result of the proof---the Gateaux derivative of the objective (\ref{eq:ergodic_control_obj}) with respect to the control $u(t)$---as it is important for the derivation of the iterative optimal control algorithm.
+
+\begin{lemma} \label{lemma:descent_direction}
+    Given a nominal control $u(t)$ and the resulting state trajectory $s(t)$, the Gateaux derivative of the kernel ergodic control objective (\ref{eq:ergodic_control_obj}) is the derivative of the objective in the direction of a perturbation $v(t)$ on the control, which leads to a perturbation $z(t)$ on the state. It can be computed as:
+    \begin{align}
+        DJ(u(t))\cdot v(t) = \int_0^T a(t)^\top z(t) + b(t)^\top v(t) dt + \gamma^\top z(T), \label{eq:obj_directional_derivative}
+    \end{align} where
+    \begin{align}
+        \gamma^\top = Dm(s(T))
+    \end{align} and $a(t)$ and $b(t)$ are defined in (\ref{eq:state_linearization_objective}) and (\ref{eq:bt_formula}), respectively.
+\end{lemma}
+\begin{proof}
+    See proof of Theorem~\ref{theorem:first_order_optmality} in the appendix.
+\end{proof}
+
+\subsection{Iterative optimal control}
+
+While the first-order optimality condition (\ref{eq:first_order_optimality}) is not sufficient to \emph{directly} generate the optimal control under the kernel ergodic metric, it enables an iterative algorithm that solves the problem efficiently.
+
+Equation (\ref{eq:obj_directional_derivative}) in Lemma~\ref{lemma:descent_direction} gives us the closed-form expression for the descent direction of the kernel ergodic control objective, and Lemma~\ref{lemma:perturbation_linear_dynamics} reveals that the perturbation is governed by linear dynamics. Thus, instead of directly solving the optimization problem, we propose to iteratively optimize the descent direction with a quadratic cost, which has a \emph{known closed-form solution} through the standard linear quadratic regulator (LQR) technique. The sub-objective for finding the optimal descent direction $J_\zeta(v(t))$ is defined as:
+\begin{align}
+& J_\zeta(v(t)) \nonumber \\
+    & = DJ(u(t))\cdot v(t) + \int_0^T z(t)^\top Q(t) z(t) + v(t)^\top R(t) v(t) dt \\
+    & = \int_0^T a(t)^\top z(t) + b(t)^\top v(t) dt + \gamma^\top z(T) \nonumber \\
+    & \quad \quad + \int_0^T z(t)^\top Q(t) z(t) + v(t)^\top R(t) v(t) dt. \label{eq:iterative_sub_objective}
+\end{align} After finding the optimal descent direction in each iteration, we update the current control $u(t)$ by choosing a step size for the descent direction, which can be fixed through tuning or chosen adaptively through a line search. We describe the whole iterative optimal control process in Algorithm~\ref{algo:traj_opt}.
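For intuition about the outer loop, here is a deliberately simplified, hypothetical sketch for a one-dimensional single integrator ($s_{t+1} = s_t + \Delta t\, u_t$), replacing the LQR subproblem with a finite-difference gradient and a backtracking line search; all function names and parameter values are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def objective(U, s0, dt, pdf, inv_len2):
    """Discretized kernel ergodic objective for a 1-D single integrator."""
    s = s0 + dt * np.cumsum(U)                       # rollout s_{t+1} = s_t + dt*u_t
    term1 = -2.0 * pdf(s).mean()                     # likelihood term
    d = s[:, None] - s[None, :]
    term2 = np.exp(-0.5 * d ** 2 * inv_len2).mean()  # pairwise kernel term
    return term1 + term2

def descend(U, s0, dt, pdf, inv_len2, iters=30, h=1e-5):
    """Gradient descent on the controls with a backtracking line search."""
    for _ in range(iters):
        J0 = objective(U, s0, dt, pdf, inv_len2)
        grad = np.zeros_like(U)
        for i in range(len(U)):                      # finite-difference gradient
            Up = U.copy(); Up[i] += h
            grad[i] = (objective(Up, s0, dt, pdf, inv_len2) - J0) / h
        eta = 1.0                                    # backtracking line search
        while (objective(U - eta * grad, s0, dt, pdf, inv_len2) > J0
               and eta > 1e-8):
            eta *= 0.5
        U = U - eta * grad
    return U

# Target: a narrow Gaussian density centered at 0.7; nominal control 0.5.
pdf = lambda s: np.exp(-0.5 * (s - 0.7) ** 2 / 0.05 ** 2) / (0.05 * np.sqrt(2 * np.pi))
U0 = np.full(20, 0.5)
U1 = descend(U0, s0=0.2, dt=0.05, pdf=pdf, inv_len2=1.0 / 0.1 ** 2)
```

The real algorithm replaces the finite-difference gradient with the closed-form descent direction from the LQR subproblem (\ref{eq:iterative_sub_objective}), which is what makes it scale; this sketch only mirrors the simulate--descend--step structure of the outer loop.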
+
+\begin{algorithm}
+    \caption{Kernel-ergodic trajectory optimization}
+    \label{algo:traj_opt}
+    \begin{algorithmic}[1]
+    \Procedure{TrajOpt}{$s_0$, $\bar{u}(t)$}
+        \State $k \gets 0$ \Comment{$k$ is the iteration index.}
+        \State $u_k(t) \gets \bar{u}(t)$
+        \While{termination criterion not met}
+            \State Simulate $s_k(t)$ given $s_0$ and $u_k(t)$
+            \State $v_k(t) \gets \argmin_{v(t)} J_\zeta(v(t))$ \Comment{See Eq(\ref{eq:iterative_sub_objective})}
+            \State Find step size $\eta$ \Comment{Fixed or apply line search}
+            \State $u_{k+1}(t) \gets u_k(t) + \eta\cdot v_k(t)$
+            \State $k \gets k+1$
+        \EndWhile
+        \State \textbf{return} $u_k(t)$
+    \EndProcedure
+    \end{algorithmic}
+\end{algorithm}
+
+\subsection{Accelerating optimization}
+
+We further introduce two approaches to accelerate the computation in Algorithm~\ref{algo:traj_opt}. The first approach is a bootstrap method that provides an initial control $\bar{u}(t)$ closer to the optimum. The second approach parallelizes the computation of the descent direction to speed up each iteration of the algorithm.
+
+\subsubsection{Bootstrap} \label{subsec:bootstrap}
+The idea of the bootstrap approach is to guide the initial control to roughly cover the target distribution. To achieve this, we formulate a trajectory tracking problem for the initial control to track an \emph{ordered} set of samples from the target distribution. The order of the samples is determined by a rapid approximation of a traveling salesman problem (TSP). Note that neither the TSP solution nor the trajectory tracking needs to be accurate. For example, the TSP solution is approximated through the nearest-neighbor approach~\cite{rosenkrantz_analysis_1977}, which has at most quadratic complexity, and the trajectory tracking problem can be solved with any iterative optimization method for just a few iterations, without a strict convergence criterion.
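The nearest-neighbor ordering step can be sketched as follows (function and variable names are our own illustrative choices):

```python
import numpy as np

def nearest_neighbor_order(samples, start=0):
    """Greedy nearest-neighbor approximation of a TSP tour over the
    samples; O(N^2) in the number of samples."""
    unvisited = set(range(len(samples)))
    order = [start]
    unvisited.remove(start)
    while unvisited:
        last = samples[order[-1]]
        # Visit the closest not-yet-visited sample next.
        nxt = min(unvisited, key=lambda i: float(np.sum((samples[i] - last) ** 2)))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# The ordered samples then serve as waypoints for the tracking problem
# that bootstraps the initial control.
samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.1, 0.0], [0.9, 0.0]])
tour = nearest_neighbor_order(samples)
```

Even this crude ordering avoids the long back-and-forth hops an arbitrary sample order would produce, which is all the bootstrap needs since the subsequent trajectory optimization refines the result anyway.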
+
+\subsubsection{Parallelization}
+
+The time integral term in the descent direction formula (\ref{eq:state_linearization_objective}):
+\begin{align}
+    \int_{0}^{T} D_1\phi(s(t), s(\tau)) d\tau
+\end{align} is the sum of the derivatives of the kernel correlation between the state at time $t$ and the whole trajectory. Since the evaluation of the kernel correlation at each time step is independent of the others, this integral can be computed in parallel. Similarly, the computation of the kernel ergodic control objective (\ref{eq:ergodic_control_obj}) can also be parallelized during the line search in each iteration. \section{Kernel Ergodic Control on Lie groups} \label{sec:lie_groups}
+
+So far, the derivation of the kernel ergodic metric and the trajectory optimization method assumes the robot state evolves in a Euclidean space. One of the advantages of the kernel ergodic metric is that it generalizes to other Riemannian manifolds, in particular Lie groups.
+
+\subsection{Preliminaries}
+
+A Lie group is a smooth manifold; thus, the neighborhood of any element on the Lie group locally resembles a linear space. Unlike general manifolds, however, a Lie group is equipped with a composition operation satisfying the four group axioms: closure, identity, inverse, and associativity. Therefore, a Lie group can model non-Euclidean entities, such as rotations or rigid body transformations, while allowing analytical and numerical techniques from Euclidean space to be applied. In particular, we are interested in the special orthogonal group SO(3) and the special Euclidean group SE(3), which are used extensively to model 3D rotation and 3D rigid body transformation (simultaneous rotation and translation), respectively.
+
+\begin{definition}[SO(3) group]
+    The special orthogonal group SO(3) is a matrix manifold, in which each element is a 3-by-3 matrix satisfying the following property:
+    \begin{align}
+        g^\top g = g g^\top = I \text{ and } \det(g) = 1, \quad \forall g\in SO(3) \subset \mathbb{R}^{3\times 3}, \nonumber
+    \end{align} where $I$ is the 3-by-3 identity matrix. The composition operator for SO(3) is the standard matrix multiplication.
+\end{definition}
+
+\begin{definition}[SE(3) group]
+    The special Euclidean group SE(3) is a matrix manifold. Each element of SE(3) is a 4-by-4 matrix that, when used as a transformation between two points in Euclidean space, preserves both the Euclidean distance between the points and their handedness. Each element has the following structure:
+    \begin{align}
+        g = \begin{bmatrix}
+            R & \mathbf{t} \\
+            \mathbf{0} & 1
+        \end{bmatrix}, R \in SO(3), \mathbf{t}, \mathbf{0}\in\mathbb{R}^3.
+    \end{align} The composition operation in SE(3) is again the standard matrix multiplication, and it has the following structure:
+    \begin{align}
+        & g_1 \circ g_2 = \begin{bmatrix}
+            R_1 R_2 & R_1 \mathbf{t}_2 + \mathbf{t}_1 \\
+            \mathbf{0} & 1
+        \end{bmatrix} \\
+        g_1 & = \begin{bmatrix}
+            R_1 & \mathbf{t}_1 \\
+            \mathbf{0} & 1
+        \end{bmatrix}, \quad g_2 = \begin{bmatrix}
+            R_2 & \mathbf{t}_2 \\
+            \mathbf{0} & 1
+        \end{bmatrix}.
+    \end{align}
+\end{definition}
+
+The smooth manifold property of the Lie group means that at every element of SO(3) and SE(3), we can locally define a linear matrix space. We call this space the \emph{tangent space} of the group.
+
+\begin{definition}[Tangent space]
+    For an element $g$ on a manifold $\mathcal{M}$, its tangent space $\mathcal{T}_g\mathcal{M}$ is a linear space consisting of all possible tangent vectors that pass through $g$.
+\end{definition}
+\begin{remark}
+    Each element in the tangent space $\mathcal{T}_g\mathcal{M}$ can be considered as the time derivative of a temporal trajectory on the manifold $g(t)$ that passes through $g$ at time $t$. Given the definition of a Lie group, the time derivative of such a trajectory is a vector.
+\end{remark}
+
+\begin{definition}[Lie algebra]
+    For a Lie group $\mathcal{G}$, the tangent space at its identity element $\mathcal{I}$ is the Lie algebra of the group, denoted as $\mathfrak{g} = \mathcal{T}_{\mathcal{I}}\mathcal{G}$.
+\end{definition}
+
+Despite being a linear space, the tangent space of a Lie group, including the Lie algebra, can still have non-trivial structure. For example, the Lie algebra of the SO(3) group is the linear space of skew-symmetric matrices. However, elements of the Lie algebra can be expressed as a vector over a set of \emph{generators}, which are the derivatives of the tangent element in each direction. This key insight allows us to represent Lie algebra elements in the standard Euclidean vector space. We can transform elements between the Lie algebra and the standard Euclidean space through two isomorphisms---the \emph{hat} and \emph{vee} operators---defined below. More details regarding the properties of the tangent space and the Lie algebra can be found in~\cite{sola_micro_2021}\cite{boumal_introduction_2023}.
+
+\begin{figure*}[htbp]
+    \centering
+    \includegraphics[width=\textwidth]{figs/lie_group_intro_5.pdf}
+    \caption{Illustration of key concepts in the Lie group ergodic search formula. (a) The exponential map maps a Lie algebra element $\tau\in\mathfrak{g}$ to a Lie group element $g\in\mathcal{G}$; the logarithm map is the inverse of the exponential map; the adjoint transformation maps an element from one tangent space (the Lie algebra in this case) to an element in another tangent space $\mathcal{T}_{g^\prime}\mathcal{G}$.
(b) The difference between two Lie group elements $g_1^{-1}g_2$ is mapped to the Lie algebra $\log(g_1^{-1}g_2)$ through the logarithm map, which allows us to use the Euclidean norm to define the quadratic function on the Lie group. (c) The Lie group Gaussian distribution is defined in the tangent space of the mean $\bar{g}$. The probability density function is evaluated as a zero-mean Euclidean Gaussian distribution $\mathcal{N}(\mathbf{0},\Sigma)$ over the Lie group sample $g$ in the tangent space $\log(\bar{g}^{-1}g)$. (d) Dynamics is defined through the left-trivialization in the Lie algebra $\lambda:\mathcal{G}\times\mathbb{R}^n\times\mathbb{R}_0^+\mapsto\mathfrak{g}$, which is mapped back to propagate the Lie group system state through the exponential map $\exp(\Delta t{\cdot}\lambda(t))$. The dynamics is defined as continuous, but the Lie group trajectory is integrated piecewise.}
+    \label{fig:lie_group_intro}
+\end{figure*}
+
+\begin{definition}[Hat]
+    The hat operator $\hat{(\cdot)}$ is an isomorphism from an $n$-dimensional Euclidean vector space to the Lie algebra with $n$ degrees of freedom:
+    \begin{align}
+        \hat{(\cdot)} : \mathbb{R}^n \mapsto \mathfrak{g}; \quad \hat{\nu} = \sum_{i=1}^{n} \nu_i E_i \in \mathfrak{g}, \quad \nu\in\mathbb{R}^n,
+    \end{align} where $E_i$ is the $i$-th generator of the Lie algebra.
+\end{definition}
+
+\begin{definition}[Vee]
+    The vee operator $^\vee{(\cdot)} : \mathfrak{g} \mapsto \mathbb{R}^n$ is the inverse mapping of the hat operator.
+\end{definition}
+
+For the SO(3) group, the hat operator is defined as:
+\begin{align}
+    \hat{\omega} & = \begin{bmatrix}
+        0 & -\omega_3 & \omega_2 \\
+        \omega_3 & 0 & -\omega_1 \\
+        -\omega_2 & \omega_1 & 0
+    \end{bmatrix}, \omega \in \mathbb{R}^3.
+\end{align} For the SE(3) group, the hat operator is defined as:
+\begin{align}
+    \hat{\tau} = \begin{bmatrix}
+        \hat{\omega} & \nu \\
+        \mathbf{0} & 0
+    \end{bmatrix} \in \mathbb{R}^{4\times 4}, \quad \tau=\begin{bmatrix}
+        \omega\\ \nu
+    \end{bmatrix} \in \mathbb{R}^6, \omega,\nu\in\mathbb{R}^3.
+\end{align}
+
+\begin{definition}[Exponential map]
+    The exponential map, denoted as $\text{exp}:\mathfrak{g}\mapsto\mathcal{G}$, maps an element from the Lie algebra to the Lie group.
+\end{definition}
+
+\begin{definition}[Logarithm map]
+    The logarithm map, denoted as $\log:\mathcal{G}\mapsto\mathfrak{g}$, maps an element from the Lie group to the Lie algebra.
+\end{definition}
+
+The exponential and logarithm maps for the SO(3) and SE(3) groups can be computed in practice through specific, case-by-case formulas. For example, the exponential map for the SO(3) group can be computed using Rodrigues' rotation formula. More details regarding the formulas for the exponential and logarithm maps can be found in~\cite{lynch_modern_2017}.
+
+\begin{definition}[Adjoint]
+    The adjoint of a Lie group element $g$, denoted as $Ad_g:\mathfrak{g}\mapsto\mathfrak{g}$, transforms a vector in one tangent space to another. Given two tangent spaces, $\mathcal{T}_{g_1}\mathcal{G}$ and $\mathcal{T}_{g_2}\mathcal{G}$, from two elements of the Lie group $\mathcal{G}$, the adjoint enables the following transformation:
+    \begin{align}
+        v_1 = Ad_{g_1^{-1}g_2} (v_2).
+    \end{align}
+\end{definition}
+
+Since the adjoint is a linear transformation, it can be represented as a matrix, denoted as $[Ad_g]$. The adjoint matrix of an SO(3) element is the element itself; the adjoint matrix of an SE(3) element is:
+\begin{align}
+    [Ad_g] = \begin{bmatrix}
+        R & \hat{\mathbf{t}} R \\
+        \mathbf{0} & R
+    \end{bmatrix} \in \mathbb{R}^{6\times 6}, \quad g = \begin{bmatrix}
+        R & \mathbf{t} \\
+        \mathbf{0} & 1
+    \end{bmatrix}.
+\end{align} Visual illustrations of the exponential map, logarithm map, and adjoint are shown in Figure~\ref{fig:lie_group_intro}(a).
+
+\subsection{Kernel on Lie groups}
+
+The definition of a Gaussian kernel is built on the notion of ``distance''---a quadratic function of the ``difference''---between the two inputs. While the definition of distance in Euclidean space is trivial, its counterpart on Lie groups has different definitions and properties. Thus, to define a kernel on a Lie group, we start by defining quadratic functions on Lie groups~\cite{fan_online_2016}.
+
+\begin{definition}[Quadratic function]
+Given two elements $g_1, g_2$ on the Lie group $\mathcal{G}$, we can define the quadratic function as:
+\begin{align}
+    C(g_1, g_2) = \frac{1}{2} \Vert \log(g_2^{-1} g_1) \Vert_{M}^2, \label{eq:lie_quadratic}
+\end{align} where $M$ is the weight matrix and $\log$ denotes the Lie group logarithm.
+\end{definition}
+
+A visual illustration of the quadratic function on Lie groups is shown in Figure~\ref{fig:lie_group_intro}(b). Since the quadratic function is defined on top of the Lie algebra, it has numerical properties similar to regular Euclidean space quadratic functions, such as symmetry.
+
+The derivatives of the quadratic function, following the derivation in~\cite{fan_online_2016}, are as follows:
+\begin{align}
+    D_1 C(g_1, g_2) & = \text{d} \exp^{-1}\left(-\log(g_2^{-1}g_1)\right)^\top M \log(g_2^{-1}g_1) \\
+    D_2 C(g_1, g_2) & = -[\mathit{Ad}_{g_1^{-1}g_2}]^\top D_1 C(g_1, g_2),
+\end{align} where $\text{d}\exp^{-1}$ denotes the trivialized tangent inverse of the exponential map; its specializations on SO(3) and SE(3) are derived in~\cite{fan_online_2016}.
+
+Given (\ref{eq:lie_quadratic}), we now define the squared exponential kernel on Lie groups.
+
+\begin{definition}
+    The squared exponential kernel on Lie groups is defined as:
+    \begin{align}
+        \Phi(g_1, g_2; \alpha, M) = \alpha \cdot \exp\left( -\frac{1}{2} \Vert \log(g_2^{-1} g_1) \Vert_{M}^2 \right).
+    \end{align}
+\end{definition}
+
+\subsection{Probability distribution on Lie groups}
+
+Probability distributions in Euclidean space need to be generalized to Lie groups case by case; thus, we primarily focus on generalizing Gaussian and Gaussian-mixture distributions to the Lie group as the target distribution. The results here also apply to other probability distributions, such as the Cauchy distribution and the Laplace distribution.
+
+Our formula follows the commonly used \emph{concentrated Gaussian} formula~\cite{yunfeng_wang_error_2006}\cite{wang_nonparametric_2008}\cite{chirikjian_gaussian_2014}, which has been widely used for probabilistic state estimation on Lie groups~\cite{chauchat_invariant_2018}\cite{mangelson_characterizing_2020}\cite{ hartley_contact-aided_2020}.
+
+\begin{definition}[Gaussian distribution]
+    Given a Lie group mean $\bar{g}\in\mathcal{G}$ and a covariance matrix $\Sigma$ whose dimension matches the degrees of freedom of the Lie group (and thus the dimension of a tangent space on the group), we can define a Gaussian distribution, denoted as $\mathcal{N}_{\mathcal{G}}(\bar{g}, \Sigma)$, with the following probability density function:
+    \begin{align}
+        \mathcal{N}_{\mathcal{G}}(g\vert \bar{g}, \Sigma) & = \mathcal{N}(\log(\bar{g}^{-1} \circ g) \vert \mathbf{0}, \Sigma),
+    \end{align} where $\mathcal{N}(\mathbf{0}, \Sigma)$ is a zero-mean Euclidean Gaussian distribution in the tangent space of the mean $\mathcal{T}_{\bar{g}}\mathcal{G}$.
+\end{definition}
+
+Given the above definition, in order to generate a sample $g{\sim}\mathcal{N}_{\mathcal{G}}(\bar{g}, \Sigma)$ from the distribution, we first generate a perturbation from the distribution in the tangent space, $\epsilon{\sim}\mathcal{N}(\mathbf{0}, \Sigma)$, which then perturbs the Lie group mean to generate the sample:
+\begin{align}
+    g = \bar{g} \circ \exp(\epsilon) & \sim \mathcal{N}_{\mathcal{G}}(\bar{g}, \Sigma), \label{eq:perturb_sample_gen} \\
+    \epsilon & \sim \mathcal{N}(\mathbf{0}, \Sigma).
+\end{align} Following this relation, we can verify that the Lie group Gaussian distribution and the tangent space Gaussian distribution share the same covariance matrix:
+\begin{align}
+    \Sigma & = \mathbb{E}\left[ \epsilon\epsilon^\top \right] \\
+    & = \mathbb{E}\left[ \log(\bar{g}^{-1}\circ g) \log(\bar{g}^{-1}\circ g)^\top \right].
+\end{align}
+
+Since the optimal control formula requires the derivative of the target probability density function with respect to the state, we now give the full expression of the probability density function and derive its derivative:
+\begin{align}
+    P(g) & = \mathcal{N}_{\mathcal{G}}(g\vert \bar{g}, \Sigma) \nonumber \\
+    & = \mathcal{N}(\log(\bar{g}^{-1}\circ g) \vert \mathbf{0}, \Sigma) \nonumber \\
+    & = \eta \cdot \exp\left( -\frac{1}{2}\log\left( \bar{g}^{-1}g\right)^\top\Sigma^{-1}\log\left( \bar{g}^{-1}g\right) \right),
+\end{align} where $\eta$ is the normalization term defined as:
+\begin{align}
+    \eta = \frac{1}{\sqrt{(2\pi)^n\det(\Sigma)}}.
+\end{align} The derivative of $P(g)$ is:
+\begin{align}
+    D P(g) = -P(g) \cdot \left( \frac{d}{dg} \log\left(\bar{g}^{-1}g\right) \right)^\top \Sigma^{-1}\log\left(\bar{g}^{-1}g\right),
+\end{align} where the derivative $\frac{d}{dg} \log\left(\bar{g}^{-1}g\right)$ can be further expanded as:
+\begin{align}
+    \frac{d}{dg} \log\left(\bar{g}^{-1}g\right) & = \text{d}\exp^{-1}\left(-\log\left(\bar{g}^{-1}g\right)\right) \cdot \frac{d}{dg}\left(\bar{g}^{-1}g \right) \\
+    & = \text{d}\exp^{-1}\left(-\log\left(\bar{g}^{-1}g\right)\right),
+\end{align} where $\text{d}\exp$ and $\text{d}\exp^{-1}$ denote the trivialized tangent of the exponential map and its inverse, respectively; their specializations on SO(3) and SE(3) are derived in~\cite{fan_online_2016}.
+
+\begin{remark}
+    Our formula of the concentrated Gaussian distribution on Lie groups perturbs the Lie group mean on the right side (\ref{eq:perturb_sample_gen}). Another formula perturbs the mean on the left side. The Lie group derivation of the kernel ergodic metric holds for both formulas. As discussed in~\cite{mangelson_characterizing_2020}, the primary difference between the two is the frame in which the perturbation is applied.
+\end{remark}
+
+\begin{figure*}[htbp]
+    \centering
+    \includegraphics[width=0.99\textwidth]{figs/iterative_benchmark_plot.pdf}
+    \caption{Comparison of the scalability of different methods. The proposed method exhibits linear complexity across 2- to 6-dimensional spaces, while the Fourier metric-based methods, even when accelerated by the tensor train decomposition, exhibit close-to-exponential complexity.}
+    \label{fig:scalability_plot}
+\end{figure*}
+
+\subsection{Dynamics on Lie groups}
+
+Given a trajectory evolving on the Lie group $g(t):[0,T]\mapsto\mathcal{G}$, we define its dynamics in terms of a control vector field:
+\begin{align}
+    \dot{g}(t) = f(g(t), u(t), t) \in \mathfrak{g}.
+\end{align} In order to linearize the dynamics as required by the trajectory optimization algorithm in (\ref{eq:obj_directional_derivative}), we follow the derivation in~\cite{saccon_optimal_2013} to model the dynamics through the \emph{left trivialization} of the control vector field:
+\begin{align}
+    \lambda(g(t), u(t), t) = g(t)^{-1} f(g(t), u(t), t) \in \mathfrak{g},
+\end{align} which allows us to write down the dynamics instead as:
+\begin{align}
+    \dot{g}(t) = g(t) \lambda(g(t), u(t), t).
+\end{align}
+
+Denoting a perturbation on the control $u(t)$ as $v(t)$ and the resulting tangent space perturbation on the Lie group state as $z(t)\in\mathfrak{g}$,~\cite{saccon_optimal_2013} shows that $z(t)$ exhibits linear dynamics similar to its Euclidean space counterpart in Lemma~\ref{lemma:perturbation_linear_dynamics}:
+\begin{align}
+    \dot{z}(t) & = A(t) z(t) + B(t) v(t) \\
+    A(t) & = D_1 \lambda(g(t), u(t), t) - [ad_{\lambda(g(t), u(t), t)}] \\
+    B(t) & = D_2 \lambda(g(t), u(t), t).
+\end{align} Since the linearization of the dynamics is in the tangent space, we can apply Algorithm~\ref{algo:traj_opt} to optimize the control for kernel ergodic control on Lie groups.
+
+ \section{Evaluation} \label{sec:evaulation}
+
+\subsection{Overview}
+
+We first evaluate the numerical efficiency of our algorithm against existing ergodic search algorithms through a simulated benchmark. We then demonstrate our algorithm, specifically the Lie group SE(3) variant, with a peg-in-hole insertion task.
+
+\subsection{Numerical Benchmark}
+
+\begin{figure*}[htbp]
+    \includegraphics[width=0.99\textwidth]{figs/sim_traj_6d.pdf}
+    \caption{Example ergodic trajectory generated by the proposed algorithm in 6-dimensional space, with second-order system dynamics.
The trajectory is overlaid on the marginalized target distribution.}
+    \label{fig:sim_traj_6d}
+\end{figure*}
+
+\begin{table*}[htbp]
+    \centering
+    \captionsetup{justification=centering}
+    \caption{Average time required for \textbf{\textcolor{blue}{iterative}} methods to reach the same ergodic metric value\\(\textbf{first-order} system dynamics).}
+    \setlength{\tabcolsep}{20.0pt}
+    \begin{tabular}{cccccc}
+    \toprule
+    \midrule
+    \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Task\\Dim.\end{tabular}} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}System\\Dim.\end{tabular}} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Average Target\\Ergodic Metric Value\end{tabular}} & \multicolumn{3}{c}{Average Elapsed Time (second)} \\
+    \cmidrule(lr){4-6}
+    & & & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textsf{Ours}\\(Iterative)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textsf{TT}\\(\textcolor{blue}{Iterative})\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textsf{SMC}\\(\textcolor{blue}{Iterative})\end{tabular}} \\
+& & & & & \\
+    \midrule
+    \midrule
+    2 & 2 & $1.77{\times}10^{-3}$ & $\bf 1.77{\times}10^{-2}$ & $3.22{\times}10^{0}$ & $9.39{\times}10^{-1}$ \\
+    \midrule
+    3 & 3 & $2.24{\times}10^{-3}$ & $\bf 1.95{\times}10^{-2}$ & $3.32{\times}10^{0}$ & $6.45{\times}10^{0}$ \\
+    \midrule
+    4 & 4 & $1.86{\times}10^{-3}$ & $\bf 1.92{\times}10^{-2}$ & $6.88{\times}10^{0}$ & $8.84{\times}10^{1}$ \\
+    \midrule
+    5 & 5 & $1.20{\times}10^{-3}$ & $\bf 2.27{\times}10^{-2}$ & $3.36{\times}10^{1}$ & N/A \\
+    \midrule
+    6 & 6 & $8.47{\times}10^{-4}$ & $\bf 2.31{\times}10^{-2}$ & $7.04{\times}10^{1}$ & N/A \\
+    \midrule
+    \bottomrule
+    \end{tabular}
+    \label{table:first_order_benchmark_iterative}
+
+    \vspace{+1em}
+
+    \centering
+    \captionsetup{justification=centering}
+    \caption{Average time required for \textbf{\textcolor{blue}{iterative}} methods to reach the same ergodic metric value\\(\textbf{second-order} system dynamics).}
+    
\setlength{\tabcolsep}{20.0pt} + \begin{tabular}{cccccc} + \toprule + \midrule + \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Task\\Dim.\end{tabular}} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}System\\Dim.\end{tabular}} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Average Target\\Ergodic Metric Value\end{tabular}} & \multicolumn{3}{c}{Average Elapsed Time (second)} \\ + \cmidrule(lr){4-6} + & & & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textsf{Ours}\\(Iterative)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textsf{TT}\\(\textcolor{blue}{Iterative})\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textsf{SMC}\\(\textcolor{blue}{Iterative})\end{tabular}} \\ +& & & & & \\ + \midrule + \midrule + 2 & 4 & $3.06{\times}10^{-3}$ & $\bf 1.85{\times}10^{-2}$ & $3.88{\times}10^{0}$ & $1.42{\times}10^{0}$ \\ + \midrule + 3 & 6 & $3.35{\times}10^{-3}$ & $\bf 2.37{\times}10^{-2}$ & $3.79{\times}10^{0}$ & $8.61{\times}10^{0}$ \\ + \midrule + 4 & 8 & $2.12{\times}10^{-3}$ & $\bf 3.94{\times}10^{-2}$ & $7.86{\times}10^{0}$ & $1.04{\times}10^{2}$ \\ + \midrule + 5 & 10 & $2.19{\times}10^{-3}$ & $\bf 5.66{\times}10^{-2}$ & $1.46{\times}10^{1}$ & N/A \\ + \midrule + 6 & 12 & $1.13{\times}10^{-3}$ & $\bf 6.28{\times}10^{-2}$ & $3.90{\times}10^{1}$ & N/A \\ + \midrule + \bottomrule + \end{tabular} + \label{table:second_order_benchmark_iterative} +\end{table*} + +\begin{table*}[htbp] + \centering + \captionsetup{justification=centering} + \caption{Benchmark results of the proposed method and \textbf{\textcolor{Green}{greedy}} baselines\\(\textbf{first-order} system dynamics).} + \setlength{\tabcolsep}{18.0pt} + \begin{tabular}{cccccc} + \toprule + \midrule + \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Task\\Dim.\end{tabular}} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}System\\Dim.\end{tabular}} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Metrics (Average)\end{tabular}} & \multicolumn{3}{c}{Results (Average)} \\ + \cmidrule(lr){4-6} + & & & 
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textsf{Ours}\\(Iterative)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textsf{TT}\\\textit{(\textcolor{Green}{Greedy})}\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textsf{SMC}\\\textit{(\textcolor{Green}{Greedy})}\end{tabular}} \\ + & & & & & \\ + \midrule + \midrule + \multirow{2}{*}{2} & \multirow{2}{*}{2} & Ergodic Metric & $1.77{\times}10^{-3}$ & $\bf 1.70{\times}10^{-3}$ & $2.25{\times}10^{-3}$ \\ + \cmidrule(lr){3-6} +& & Elapsed Time (second) & $\bf 1.77{\times}10^{-2}$ & $1.39{\times}10^{-1}$ & $1.93{\times}10^{-2}$ \\ + \midrule + \midrule + \multirow{2}{*}{3} & \multirow{2}{*}{3} & Ergodic Metric & $\bf 2.24{\times}10^{-3}$ & $4.24{\times}10^{-3}$ & $6.55{\times}10^{-3}$ \\ + \cmidrule(lr){3-6} +& & Elapsed Time (second) & $\bf 1.95{\times}10^{-2}$ & $4.50{\times}10^{-1}$ & $1.01{\times}10^{-1}$ \\ + \midrule + \midrule + \multirow{2}{*}{4} & \multirow{2}{*}{4} & Ergodic Metric & $\bf 1.86{\times}10^{-3}$ & $3.69{\times}10^{-3}$ & $3.26{\times}10^{-3}$ \\ + \cmidrule(lr){3-6} +& & Elapsed Time (second) & $\bf 1.92{\times}10^{-2}$ & $1.18{\times}10^{0}$ & $4.76{\times}10^{0}$ \\ + \midrule + \midrule + \multirow{2}{*}{5} & \multirow{2}{*}{5} & Ergodic Metric & $\bf 1.20{\times}10^{-3}$ & $4.32{\times}10^{-3}$ & N/A \\ + \cmidrule(lr){3-6} +& & Elapsed Time (second) & $\bf 2.27{\times}10^{-2}$ & $4.03{\times}10^{0}$ & N/A \\ + \midrule + \midrule + \multirow{2}{*}{6} & \multirow{2}{*}{6} & Ergodic Metric & $\bf 8.47{\times}10^{-4}$ & $2.80{\times}10^{-3}$ & N/A \\ + \cmidrule(lr){3-6} +& & Elapsed Time (second) & $\bf 2.31{\times}10^{-2}$ & $9.53{\times}10^{0}$ & N/A \\ + \midrule + \bottomrule + \end{tabular} + \label{table:first_order_benchmark_greedy} + + \vspace{+1em} + + \centering + \captionsetup{justification=centering} + \caption{Benchmark results of the proposed method and \textbf{\textcolor{Green}{greedy}} baselines\\(\textbf{second-order} system dynamics).} + 
\setlength{\tabcolsep}{18.0pt} + \begin{tabular}{cccccc} + \toprule + \midrule + \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Task\\Dim.\end{tabular}} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}System\\Dim.\end{tabular}} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Metrics (Average)\end{tabular}} & \multicolumn{3}{c}{Results (Average)} \\ + \cmidrule(lr){4-6} + & & & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textsf{Ours}\\(Iterative)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textsf{TT}\\\textit{(\textcolor{Green}{Greedy})}\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textsf{SMC}\\\textit{(\textcolor{Green}{Greedy})}\end{tabular}} \\ + & & & & & \\ + \midrule + \midrule + \multirow{2}{*}{2} & \multirow{2}{*}{4} & Ergodic Metric & $\bf 3.06{\times}10^{-3}$ & $1.42{\times}10^{-2}$ & $1.45{\times}10^{-2}$ \\ + \cmidrule(lr){3-6} +& & Elapsed Time (second) & $\bf 1.85{\times}10^{-2}$ & $1.12{\times}10^{-1}$ & $2.37{\times}10^{-2}$ \\ + \midrule + \midrule + \multirow{2}{*}{3} & \multirow{2}{*}{6} & Ergodic Metric & $\bf 1.35{\times}10^{-3}$ & $1.45{\times}10^{-2}$ & $1.52{\times}10^{-2}$ \\ + \cmidrule(lr){3-6} +& & Elapsed Time (second) & $\bf 2.37{\times}10^{-2}$ & $3.07{\times}10^{-1}$ & $1.08{\times}10^{-1}$ \\ + \midrule + \midrule + \multirow{2}{*}{4} & \multirow{2}{*}{8} & Ergodic Metric & $\bf 2.12{\times}10^{-3}$ & $1.70{\times}10^{-2}$ & $1.78{\times}10^{-2}$ \\ + \cmidrule(lr){3-6} +& & Elapsed Time (second) & $\bf 3.94{\times}10^{-2}$ & $1.10{\times}10^{0}$ & $5.28{\times}10^{0}$ \\ + \midrule + \midrule + \multirow{2}{*}{5} & \multirow{2}{*}{10} & Ergodic Metric & $\bf 2.19{\times}10^{-3}$ & $1.94{\times}10^{-2}$ & N/A \\ + \cmidrule(lr){3-6} +& & Elapsed Time (second) & $\bf 5.66{\times}10^{-2}$ & $2.40{\times}10^{0}$ & N/A \\ + \midrule + \midrule + \multirow{2}{*}{6} & \multirow{2}{*}{12} & Ergodic Metric & $\bf 1.13{\times}10^{-3}$ & $1.97{\times}10^{-2}$ & N/A \\ + \cmidrule(lr){3-6} +& & Elapsed Time (second) & $\bf 
6.28{\times}10^{-2}$ & $6.03{\times}10^{0}$ & N/A \\
    \midrule
    \bottomrule
    \end{tabular}
    \label{table:second_order_benchmark_greedy}
\end{table*}

\noindent\textbf{[Rationale of baseline selection] } We compare the proposed algorithm against four baseline methods, all of which optimize the standard ergodic metric:
\begin{itemize}
    \item \textsf{SMC}(Greedy): The original ergodic search algorithm proposed in~\cite{mathew_metrics_2011}, which optimizes the standard ergodic metric. It is essentially a greedy receding-horizon planning algorithm with an infinitesimally small planning horizon.
    \item \textsf{SMC}(Iterative): An iterative trajectory optimization algorithm proposed in~\cite{miller_trajectory_2013} that optimizes the standard ergodic metric. It follows a derivation similar to Algorithm~\ref{algo:traj_opt}, as it iteratively solves an LQR problem.
    \item \textsf{TT}(Greedy): An algorithm that shares the same formulation as \textsf{SMC}(Greedy) but accelerates the computation of the standard ergodic metric through tensor-train decomposition. Proposed in~\cite{shetty_ergodic_2022}, this algorithm is the state-of-the-art fast ergodic search algorithm with a greedy receding-horizon planning formulation.
    \item \textsf{TT}(Iterative): We accelerate the computation of the iterative trajectory optimization algorithm for the standard ergodic metric---\textsf{SMC}(Iterative)---through the same tensor-train decomposition technique used in~\cite{shetty_ergodic_2022}. This method is the state-of-the-art trajectory optimization method for ergodic search.
\end{itemize}
We choose \textsf{SMC}(Greedy) since it is the original algorithm for ergodic search and one of the most commonly used algorithms in robotic applications. For the same reason, we choose \textsf{TT}(Greedy), as it further accelerates the computation of \textsf{SMC}(Greedy), thus serving as the state-of-the-art fast ergodic search baseline.
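For concreteness, the greedy feedback law underlying \textsf{SMC}(Greedy) can be sketched in a few lines. The following is a minimal 2D Python sketch under our own illustrative assumptions (unnormalized cosine basis, Sobolev weights $(1+\Vert k\Vert^2)^{-3/2}$, and a saturated control magnitude $u_{\max}$); it is not the benchmarked implementation:

```python
import numpy as np

def smc_greedy_2d(target_coeffs, K=8, steps=200, dt=0.1, u_max=0.1, x0=(0.2, 0.3)):
    """Greedy SMC-style ergodic control on [0, 1]^2 with first-order dynamics.

    target_coeffs[k1, k2]: cosine-basis coefficients of the target
    distribution (assumed precomputed; normalization is illustrative).
    """
    ks = np.arange(K)
    # Sobolev-type weights Lambda_k = (1 + ||k||^2)^(-3/2) for a 2D domain.
    k1, k2 = np.meshgrid(ks, ks, indexing="ij")
    lam = (1.0 + k1**2 + k2**2) ** (-1.5)

    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    ck = np.zeros((K, K))  # running (unnormalized) trajectory coefficients
    for t in range(1, steps + 1):
        fx = np.cos(np.pi * ks * x[0])            # per-axis basis values
        fy = np.cos(np.pi * ks * x[1])
        dfx = -np.pi * ks * np.sin(np.pi * ks * x[0])
        dfy = -np.pi * ks * np.sin(np.pi * ks * x[1])
        ck += np.outer(fx, fy) * dt               # accumulate int f_k(s(t)) dt
        # Weighted mismatch between trajectory and target coefficients.
        w = lam * (ck / (t * dt) - target_coeffs)
        b = np.array([np.sum(w * np.outer(dfx, fy)),
                      np.sum(w * np.outer(fx, dfy))])
        # Saturated descent direction, mirroring the infinitesimal-horizon law.
        u = -u_max * b / (np.linalg.norm(b) + 1e-12)
        x = np.clip(x + u * dt, 0.0, 1.0)
        traj.append(x.copy())
    return np.array(traj)

# Uniform target: only the (0, 0) cosine coefficient is nonzero.
xi = np.zeros((8, 8)); xi[0, 0] = 1.0
traj = smc_greedy_2d(xi)
```

At every step the sketch updates the running trajectory coefficients, evaluates the gradient of the weighted coefficient mismatch at the current state, and applies a saturated descent control, which mirrors the infinitesimally-small-horizon interpretation above.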
We choose \textsf{SMC}(Iterative), as well as \textsf{TT}(Iterative), since both are conceptually similar to our proposed algorithm, using the same iterative optimization scheme as Algorithm~\ref{algo:traj_opt}. Iterative methods generate better ergodic trajectories with the same number of time steps, since they optimize over the whole trajectory, while greedy methods only myopically optimize one time step at a time. However, for the same reason, iterative methods are in general less computationally efficient. We do not include~\cite{abraham_ergodic_2021} for comparison because it does not generalize to Lie groups. The computation of the standard ergodic metric in the \textsf{SMC} methods is implemented in Python with vectorization. We use the implementation from~\cite{shetty_ergodic_2022} for the tensor-train-accelerated standard ergodic metric, which is built on the Tensor-Train Toolbox~\cite{oseledets_tensor-train_2024}, with key tensor-train operations implemented in Fortran with multi-thread CPU parallelization. We implement our algorithm in C++ with OpenMP~\cite{dagum_openmp_1998} for multi-thread CPU parallelization. All methods are tested on a server with an AMD 5995WX CPU. No GPU acceleration is used in the experiments. We will release the implementation of our algorithm.

\noindent\textbf{[Experiment design] } We test each of the four baseline methods and the proposed kernel ergodic search method across 2-dimensional to 6-dimensional spaces, which cover the majority of the state spaces used in robotics. Each search space is a hypercube in which each dimension is bounded by $[0, 1]$ meters. For each number of dimensions, we randomize 100 test trials, with each trial consisting of a randomized three-mode Gaussian-mixture distribution (with full covariance) as the target distribution.
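A minimal Python sketch of this per-trial randomization is given below; the SPD covariance sampler (a uniformly drawn spectrum in $[0.01, 0.02]$ under a random rotation) and the equal mixture weights are our assumptions, standing in for the exact routine of~\cite{shetty_ergodic_2022}:

```python
import numpy as np

def random_spd(dim, rng, lo=0.01, hi=0.02):
    """Random SPD covariance: a random rotation of a diagonal spectrum in [lo, hi].

    The random orthogonal factor comes from the QR decomposition of a
    Gaussian matrix; this sampler is an assumption, not the paper's routine.
    """
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    eig = rng.uniform(lo, hi, size=dim)
    return q @ np.diag(eig) @ q.T

def random_gmm_target(dim, n_modes=3, rng=None):
    """Three-mode GMM target: uniform means in [0, 1]^dim, full SPD covariances."""
    rng = rng if rng is not None else np.random.default_rng(0)
    means = rng.uniform(0.0, 1.0, size=(n_modes, dim))
    covs = np.stack([random_spd(dim, rng) for _ in range(n_modes)])
    weights = np.full(n_modes, 1.0 / n_modes)  # equal weights (assumed)
    return means, covs, weights

means, covs, weights = random_gmm_target(dim=4)
```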
The means of each Gaussian-mixture distribution are sampled uniformly within the search space, and the covariance matrices are sampled uniformly from the space of positive definite matrices using the approach from~\cite{shetty_ergodic_2022}, with the diagonal entries varying from $0.01$ to $0.02$. In each trial, all the algorithms explore the same target distribution from the same initial position; all the iterative methods (including ours) start with the same initial trajectory, generated by the proposed bootstrap approach (see Section~\ref{subsec:bootstrap}), and run for the same number of iterations. We test all the algorithms with both first-order and second-order point-mass dynamical systems. All methods have a time horizon of 200 time steps with a time step interval of $0.1$ seconds.

\noindent\textbf{[Metric selection] } The benchmark takes two measurements: (1) the \emph{standard ergodic metric} of the trajectory generated by each algorithm and (2) the computation time efficiency of each algorithm. We choose the standard ergodic metric because it is common to all existing ergodic search methods and is the optimization objective of all four baselines. While our proposed method optimizes the kernel ergodic metric, we have shown that the kernel metric is asymptotically consistent with the standard ergodic metric, making the latter a suitable evaluation metric for our method as well. For the greedy baselines, we measure the elapsed time of the single-shot trajectory generation process and the standard ergodic metric of the final trajectory. For the iterative baselines and our algorithm, we compute the standard ergodic metric of our proposed method at convergence and measure the time each method takes to reach the same level of ergodicity.
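For reference, the trajectory-dependent terms of the kernel ergodic metric can be evaluated in discrete time at a cost linear in the state dimension, which underlies the scalability observed in the benchmark. The following minimal Python sketch is ours (isotropic Gaussian kernel, Gaussian-mixture target, and the standard Gaussian convolution identity for $P(x) = \int \phi(x, y) p(y) dy$); it is not the benchmarked C++ implementation:

```python
import numpy as np

def _gauss(x, mean, cov):
    """Multivariate normal density evaluated at each row of x."""
    dim = x.shape[1]
    d = x - mean
    expo = -0.5 * np.einsum("ni,ij,nj->n", d, np.linalg.inv(cov), d)
    return np.exp(expo) / np.sqrt((2 * np.pi) ** dim * np.linalg.det(cov))

def kernel_ergodic_metric(traj, means, covs, weights, kernel_var=0.01):
    """Discrete-time analogue of
        -2/T int P(s(t)) dt + 1/T^2 int int phi(s(t1), s(t2)) dt1 dt2
    with a Gaussian kernel phi and a Gaussian-mixture target p."""
    T, dim = traj.shape
    cov_k = kernel_var * np.eye(dim)
    # P(x) = int phi(x, y) p(y) dy in closed form: each mixture mode's
    # covariance is inflated by the kernel covariance.
    P = sum(w * _gauss(traj, m, c + cov_k)
            for w, m, c in zip(weights, means, covs))
    # Pairwise kernel evaluations phi(s_t1, s_t2); each evaluation is
    # linear in dim, so the total cost is O(T^2 * dim).
    sq = np.sum((traj[:, None, :] - traj[None, :, :]) ** 2, axis=-1)
    phi = np.exp(-0.5 * sq / kernel_var) / np.sqrt((2 * np.pi * kernel_var) ** dim)
    return -2.0 / T * P.sum() + phi.sum() / T**2

# A trajectory concentrated on the target mass scores lower (better) than
# the same trajectory translated far away from the target.
rng = np.random.default_rng(0)
means = np.array([[0.5, 0.5]])
covs = np.array([0.01 * np.eye(2)])
weights = np.array([1.0])
traj_near = rng.normal(0.5, 0.1, size=(200, 2))
traj_far = traj_near + 10.0
```

Because the pairwise kernel term is translation-invariant, only the $P$ term distinguishes the two trajectories in this example.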
We measure the computation time efficiency of the iterative methods this way because the primary constraint for all iterative methods is not the quality of the ergodic trajectory (all iterative methods eventually converge to at least a local optimum of the ergodic metric) but the ability to generate a sufficiently ergodic trajectory within limited time.

\noindent\textbf{[Results] } Table~\ref{table:first_order_benchmark_iterative} and Table~\ref{table:second_order_benchmark_iterative} show the average time required for each iterative method (including ours) to reach the same standard ergodic metric value, from 2D to 6D space and across first-order and second-order system dynamics. The proposed method is at least two orders of magnitude faster than the baselines, particularly when the search space dimension is higher than three and with second-order system dynamics. We evaluate the \textsf{SMC}(Iterative) baseline only up to 4-dimensional space, as the memory consumption of computing the standard ergodic metric beyond 4D exceeds the computer's limit, leading to prohibitively long computation times (we recorded memory consumption exceeding 120 GB and an elapsed time of more than 6 minutes for a single iteration in a 5D space; the excessive computational resource consumption of the standard ergodic metric in high-dimensional spaces is discussed in~\cite{shetty_ergodic_2022}). Figure~\ref{fig:scalability_plot} further shows the superior scalability of the proposed method: it exhibits linear time complexity in the search space dimension, while the \textsf{SMC}(Iterative) method exhibits exponential time complexity and the \textsf{TT}(Iterative) method exhibits super-linear time complexity and a much slower speed with the same computational resources. Table~\ref{table:first_order_benchmark_greedy} and Table~\ref{table:second_order_benchmark_greedy} show the comparison between the proposed method and the non-iterative greedy baseline methods.
Despite the improved computational efficiency of the non-iterative baselines, the proposed method is still at least two orders of magnitude faster and generates trajectories with better ergodicity. Lastly, Figure~\ref{fig:sim_traj_6d} shows an example ergodic trajectory generated by our method in a 6-dimensional space with second-order system dynamics.

\subsection{Ergodic Coverage for Peg-in-Hole Insertion in SE(3)}

\noindent\textbf{[Motivation] } Given the complexity of robotic manipulation tasks, using human demonstrations for robots to acquire manipulation skills is becoming increasingly popular~\cite{ravichandar_recent_2020}, in particular for robotic insertion tasks, which are critical for applications including autonomous assembly~\cite{wu_prim-lafd_2023} and household robots~\cite{zhang_vision-based_2023}. Most approaches for acquiring insertion skills from demonstrations are learning-based, where the goal is to learn a control policy~\cite{wen_you_2022} or a task objective from the demonstrations~\cite{englert_learning_2018}. One common strategy for learning insertion skills from demonstrations is to learn motion primitives, such as dynamic movement primitives (DMPs), from the demonstrations as control policies, which can dramatically reduce the search space for learning~\cite{saveriano_dynamic_2023}. Furthermore, to address the potential mismatch between the demonstration and the task (e.g., the location of insertion during task execution may differ from the demonstration), the learned policies are often explicitly augmented with local exploration policies, for example, through hand-crafted exploratory motion primitives~\cite{wu_prim-lafd_2023}, programmed compliance control with force-torque feedback~\cite{jha_imitation_2022}, and residual correction policies~\cite{davchev_residual_2022}.
Another common strategy is to use human demonstrations to bootstrap reinforcement learning (RL) training in simulation~\cite{luo_robust_2021}\cite{ahn_robotic_2023}\cite{guo_reinforcement_2023}, where the demonstrations address the sparsity of the reward function and thus accelerate the convergence of the policy. Instead of using learning-from-demonstration methods, our motivation is to provide an alternative, learning-free framework for obtaining manipulation skills from human demonstrations. We formulate the peg-in-hole insertion task as a coverage problem, where the robot must find the successful insertion configuration using the human demonstration as the prior distribution. We show that combining this search-based problem formulation with ergodic coverage leads to reliable insertion performance while avoiding the typical limitations of learning-from-demonstration methods, such as limited demonstration data, limited sensor measurements, and the need for additional offline training. Nevertheless, each new task attempt could be incorporated into a learning workflow.


\begin{figure}[t]
    \centering
    \begin{subfigure}[b]{0.24\textwidth}
        \includegraphics[width=\textwidth]{figs/sawyer_setup_1.jpeg}
\end{subfigure}
    \hfill
    \begin{subfigure}[b]{0.24\textwidth}
        \includegraphics[width=\textwidth]{figs/sawyer_setup_3.jpg}
\end{subfigure}
    \caption{Setup of the hardware experiment.}
    \label{fig:sawyer_setup}

    \vspace{+1em}

    \centering
    \includegraphics[width=0.49\textwidth]{figs/sawyer_system_diagram.pdf}
    \caption{System diagram for acquiring insertion skills from human demonstration.}
    \label{fig:sawyer_system_diagram}
    \vspace{-1em}
\end{figure}



\noindent\textbf{[Task design] } In this task, the robot needs to find successful insertion configurations for cubes with three different geometries from a common shape-sorting toy (see Figure~\ref{fig:sawyer_setup}).
For each object of interest, a 30-second-long kinesthetic teaching demonstration is conducted, with a human moving the end-effector around the hole of the corresponding shape. The end-effector poses are recorded at 10 Hz in SE(3), providing a 300-time-step trajectory recording as the only data available for the robot to acquire the insertion skill. During task execution, the robot needs to generate insertion motions from a randomly initialized position within the same number of time steps as the demonstration (300 time steps). Furthermore, to demonstrate the method's robustness to the quality of the demonstration, the demonstrations in this test do not contain successful insertions but only configurations around the corresponding hole, similar to what someone attempting the task might do even if they were unsuccessful. Such insufficient demonstrations make it necessary for the robot to adapt beyond repeating the exact human demonstration provided. Some approaches attempt this adaptation through learning, whereas here adaptation is formulated as state-space coverage ``near'' the demonstrated distribution of states.

\begin{figure*}[htbp]
    \centering
    \includegraphics[width=0.99\textwidth]{figs/sawyer_traj_overlap_1.pdf}
    \caption{Trajectories from one of the hardware test trials. All trajectories are represented in the space of SE(3) and converted to the coordinate system of 3D Euclidean space $\{x, y, z\}$ for position and Euler angles $\{\alpha, \beta, \gamma\}$ for orientation. The orientations included in the human demonstration lie at the boundary of the principal interval $[-\pi, \pi]$, thus exhibiting large discontinuities in the Euler angle coordinates.
The proposed algorithm's capability to directly reason over the Lie group SE(3) inherently overcomes this issue and successfully generates continuous trajectories to cover the distribution of the human demonstration.}
    \label{fig:sawyer_traj_overlap}
\end{figure*}

\noindent\textbf{[Implementation details] } We use a Sawyer robot arm for the experiment. Our approach is to generate an ergodic coverage trajectory using the human demonstration as the target distribution, assuming the successful insertion configuration resides within the distribution that governs the human demonstration. The target distribution is modeled as a Gaussian-mixture model (GMM) on the Lie group, fit to the human demonstration using the expectation-maximization (EM) algorithm. Since the demonstration does not include successful insertions, the target GMM distribution has a height (z-axis value) above the box's surface; thus, we lower the z-axis value of the GMM means by $2$~cm. After the ergodic search trajectory is generated for the given target GMM distribution, the robot tracks the trajectory through waypoint-based tracking with online position feedback. We enable the force compliance feature on the robot arm~\cite{noauthor_arm_nodate}, which ensures safety for both the robot and the object during execution. No other sensor feedback, such as visual or tactile sensing, is used. A system overview is shown in the diagram in Figure~\ref{fig:sawyer_system_diagram}. Note that the waypoint-based control moves the end-effector at a slower speed than the human demonstration; thus, although the executed search trajectory has the same number of time steps as the demonstration, it takes the robot more real-world time to execute the trajectory.
An end-effector insertion configuration is considered successful when both of the following criteria are met: (1) the end-effector's height, measured through the z-axis value, is near the exact height of the box surface, and (2) the force feedback from the end-effector detects no force along the z-axis. Meeting the first criterion means the end-effector reaches the necessary height for a possible insertion. Meeting the second criterion means the cube passes freely through the hole; it rules out the false-positive scenario where the end-effector forces part of the cube into the hole while the cube and hole are not aligned.

\noindent\textbf{[Results] } We compare our method with a baseline method that repeats the demonstration. We test three objects in total, with rhombus, trapezoid, and ellipse shapes. A total of 20 trials are conducted for each object; in each trial, we generate a new demonstration shared by both our method and the baseline. Both methods also share the maximum number of time steps allowed for the insertion, which is the same as the demonstration. We measure the success rate of each method and report the average number of time steps required to find a successful insertion. Table~\ref{table:insertion_success_rate} shows the success rate for the insertion task across the three objects within a limited search time of 300 time steps. The proposed ergodic search method has a success rate of at least $80\%$ across all three objects, while the baseline method has a success rate of at most $10\%$. The baseline method's success rate is nonzero only because of the motion noise introduced by the force compliance feature during trajectory tracking. Table~\ref{table:insertion_time_required} shows the average number of time steps required for successful insertion, where we can see the proposed method finds a successful insertion strategy in SE(3) in significantly fewer time steps than the length of the demonstration.
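The two-part success criterion described above amounts to a simple predicate on the end-effector state; in the following Python sketch, the tolerance values are illustrative assumptions, not the thresholds used on the robot:

```python
def insertion_succeeded(ee_height, box_surface_height, z_force,
                        height_tol=0.005, force_tol=0.5):
    """Success predicate for a candidate insertion configuration.

    (1) the end-effector height (z-axis value) is near the box surface
        height, and
    (2) no reaction force is measured along the z-axis (the cube passes
        freely through the hole).
    Tolerances (meters, Newtons) are illustrative assumptions.
    """
    reaches_surface = abs(ee_height - box_surface_height) <= height_tol
    no_contact_force = abs(z_force) <= force_tol
    return reaches_surface and no_contact_force

# Correct height and no z-axis force: success.
assert insertion_succeeded(0.102, 0.100, z_force=0.1)
# Correct height but the cube presses on the surface: rejected (false positive).
assert not insertion_succeeded(0.102, 0.100, z_force=4.0)
# End-effector never reaches the surface height: rejected.
assert not insertion_succeeded(0.150, 0.100, z_force=0.0)
```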
Figure~\ref{fig:sawyer_traj_overlap} further shows the end-effector trajectory from the human demonstration and the resulting ergodic search trajectory, as well as how the SE(3) reasoning capability of the proposed algorithm overcomes the Euler angle discontinuity in the human demonstration. + +\begin{table}[htbp] + \centering + \captionsetup{justification=centering} + \caption{Success rate of hardware insertion test\\(limited search time).} + \setlength{\tabcolsep}{7.0pt} + \begin{tabular}{cccc} + \toprule + \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Strategy}\end{tabular}} & \multicolumn{3}{c}{\textbf{Success Rate} (20 trials per object)} \\ + \cmidrule(lr){2-4} + & Rhombus & Trapezoid & Ellipse \\ + \midrule + \textbf{Ours} & $80\%$ (16/20) & $80\%$ (16/20) & $90\%$ (18/20) \\ + \midrule + Naive & $10\%$ (2/20) & $10\%$ (2/20) & $10\%$ (2/20) \\ + \bottomrule + \end{tabular} + \label{table:insertion_success_rate} + + \vspace{+1em} + + \centering + \captionsetup{justification=centering} + \caption{Average time steps for successful insertion (limited search time).} + \setlength{\tabcolsep}{5.0pt} + \begin{tabular}{cccc} + \toprule + \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Strategy}\end{tabular}} & \multicolumn{3}{c}{\textbf{Average Time Steps} (20 trials per object)} \\ + \cmidrule(lr){2-4} + & Rhombus & Trapezoid & Ellipse \\ + \midrule + \textbf{Ours} & $106.81{\pm}78.60$ & $128.44{\pm}73.81$ & $103.33{\pm}61.25$ \\ + \midrule + Naive & $187.50{\pm}27.50$ & $58.50{\pm}57.50$ & $31.50{\pm}30.50$ \\ + \bottomrule + \end{tabular} + \label{table:insertion_time_required} +\end{table} + +\begin{figure}[htbp] + \centering + \includegraphics[width=0.49\textwidth]{figs/box_and_whisker_insertion_timesteps.pdf} + \caption{Time steps required for $100\%$ successful insertion with the proposed algorithm.} + \label{fig:insertion_time_step_plot} + \vspace{-1em} +\end{figure} + +\noindent\textbf{[Asymptotic coverage] } We further demonstrate the asymptotic 
coverage property of ergodic search, with which the robot is guaranteed to find a successful insertion strategy given enough time, so long as the successful insertion configuration resides in the target distribution. Instead of limiting the search time to 300 time steps, we conduct 10 additional trials on each object (30 in total) with unlimited search time. Our method finds a successful insertion strategy in all 30 trials ($100\%$ success rate). We report the time steps needed for the $100\%$ success rate in Figure~\ref{fig:insertion_time_step_plot}.

\section{Conclusion and Discussion} \label{sec:conclusion}

This work introduces a new ergodic search method with significantly improved computational efficiency and generalizability across Euclidean space and Lie groups. Our first contribution is the kernel ergodic metric, which is asymptotically consistent with the standard ergodic metric but scales better to higher-dimensional spaces. Our second contribution is an efficient optimal control method. Combining the kernel ergodic metric with the proposed optimal control method generates ergodic trajectories at least two orders of magnitude faster than the state-of-the-art method.

We demonstrate the proposed ergodic search method through a peg-in-hole insertion task. We formulate the task as an ergodic coverage problem using a 30-second-long human demonstration as the target distribution. We demonstrate that the asymptotic coverage property of ergodic search leads to a $100\%$ success rate in this task, so long as the successful insertion configuration resides within the target distribution. Our framework serves as an alternative to learning-from-demonstration methods.



Since our formulation is based on kernel functions, it can be flexibly extended with other kernel functions for different tasks.
One potential extension is to use non-stationary attentive kernels~\cite{chen_ak_2022}, which have been shown to be more effective in information-gathering tasks than the squared exponential kernel used in this work. The trajectory-optimization-based formulation also means the proposed framework could be integrated into reinforcement learning (RL) with techniques such as guided policy search~\cite{levine_guided_2013}. The proposed framework itself can be further improved; for example, the evaluation of the proposed metric can be accelerated by exploiting the spatial sparsity of kernel function evaluations along the trajectory.

\section*{Acknowledgments}
The authors would like to acknowledge Allison Pinosky, Davin Landry, and Sylvia Tan for their contributions to the hardware experiment. This material is supported by the Honda Research Institute Grant HRI-001479 and the National Science Foundation Grant CNS-2237576. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the aforementioned institutions.

\bibliographystyle{IEEEtran}
\bibliography{references}

\appendix

\noindent\textbf{Proof for Lemma~\ref{lemma:minimal_norm_uniformality}}
\begin{proof}
From (\ref{eq:second_term}) we have:
\begin{align}
    & \lim_{T\rightarrow\infty}\left[\frac{1}{T^2} \int_{0}^{T} \int_{0}^{T} \phi(s(t_1), s(t_2)) dt_1 dt_2\right] = \int_{\mathcal{S}} C_s(x)^2 dx
\end{align}
Thus, we need only prove that a probability density function $p(x)$ defined over the bounded search space $\mathcal{S}$ is a uniform distribution when it minimizes $\int_{\mathcal{S}} p(x)^2 dx$. This can be formulated as the following functional optimization problem:
\begin{align}
    p^*(x) & = \argmin_{p(x)} \int_{\mathcal{S}} p(x)^2 dx \\
    & \text{s.t. 
} \int_{\mathcal{S}} p(x) dx = 1
\end{align}
To solve this, we first form the Lagrangian:
\begin{align}
    \mathcal{L}(p, \lambda) = \int_{\mathcal{S}} p(x)^2 dx - \lambda \cdot \int_{\mathcal{S}} p(x) dx
\end{align}
The necessary condition for $p^*(x)$ to be an extremum is (Theorem 1, Page 43~\cite{gelfand_calculus_2000}):
\begin{align}
    \frac{\partial\mathcal{L}}{\partial p}(p^*, \lambda) & = 2 p^*(x) - \lambda = 0
\end{align} which gives us $p^*(x) = \frac{\lambda}{2}$. Substituting this back into the equality constraint, we have:
\begin{align}
    \int_{\mathcal{S}} p^*(x) dx & = \frac{\lambda}{2} \int_{\mathcal{S}} 1\cdot dx = \frac{\lambda}{2} \vert\mathcal{S}\vert = 1
\end{align}
Therefore $\lambda = \frac{2}{\vert\mathcal{S}\vert}$, and we have:
\begin{align}
    p^*(x) = \frac{\lambda}{2} = \frac{1}{\vert\mathcal{S}\vert}
\end{align} which is the probability density function of a uniform distribution.

To show that the extremum $p^*(x)$ is a minimum rather than a maximum, it suffices to exhibit a distribution with a larger norm than $p^*(x)$. To do so, we define a distribution $p^\prime(x)$ that takes the value $\frac{1}{2\vert\mathcal{S}\vert}$ on one half of the search space $\mathcal{S}$ and the value $\frac{3}{2\vert\mathcal{S}\vert}$ on the other half. It is easy to show that $\Vert p^\prime(x)\Vert>\Vert p^*(x)\Vert$; thus $p^*(x)$ is the global minimum, which completes the proof.
\end{proof}


\noindent\textbf{Proof for Theorem~\ref{theorem:first_order_optmality}}

\begin{proof}
    Denote by $v(t)$ a perturbation on the control $u(t)$ and by $z(t)$ the resulting perturbation on the system state $s(t)$. Taking the Gateaux derivative of the objective:
    \begin{align}
    DJ&(u(t))\cdot v(t) \\
    = & \lim_{\epsilon\rightarrow 0} \frac{d}{d\epsilon} J(u + \epsilon\cdot v) \nonumber \\
    = & \lim_{\epsilon\rightarrow 0} \Bigg[ - \frac{2}{T} \int_0^T \frac{d}{d\epsilon} P(s(t) + \epsilon z(t)) dt \nonumber \\
    & + \frac{1}{T^2} {\int_{0}^{T}} {\int_{0}^{T}} \frac{d}{d\epsilon} \phi(s(t_1){+}\epsilon z(t_1), s(t_2){+}\epsilon z(t_2)) dt_1 dt_2 \nonumber \\
    & + \int_0^T \frac{d}{d\epsilon} l(s(t){+}\epsilon z(t), u(t){+}\epsilon v(t)) dt \nonumber \\
    & + \frac{d}{d\epsilon} m\Big( s(T){+}\epsilon z(T) \Big) \Bigg] \\
    = & - \frac{2}{T} \int_0^T DP(s(t)) z(t) dt \nonumber \\
    & + \frac{1}{T^2} {\int_{0}^{T}} {\int_{0}^{T}} \Big( D_1\phi(s(t_1), s(t_2))z(t_1) \nonumber \\
    & \quad \quad \quad \quad \quad \quad \quad + D_2\phi(s(t_1), s(t_2))z(t_2) \Big) dt_1 dt_2 \nonumber \\
    & + \int_0^T \Big( D_1 l(s(t), u(t))z(t) + D_2 l(s(t), u(t))v(t) \Big) dt \nonumber \\
    & + Dm(s(T)) z(T) \label{eq:ctrl_directional_derivative_1}
    \end{align} Since a Gaussian kernel is stationary and symmetric, it has the following property:
    \begin{align}
    D_1\phi(x_1, x_2) = D_2\phi(x_2, x_1)
    \end{align}
    Based on this property, we can simplify the double time integral above:
    \begin{align}
    & {\int_{0}^{T}} {\int_{0}^{T}} \Big( D_1\phi(s(t_1), s(t_2))z(t_1) \nonumber \\
    & \quad \quad \quad \quad + D_2\phi(s(t_1), s(t_2))z(t_2) \Big) dt_1 dt_2 \nonumber \\
    & = {\int_{0}^{T}} {\int_{0}^{T}} \Big( D_1\phi(s(t_1), s(t_2))z(t_1) \nonumber \\
    & \quad \quad \quad \quad \quad \quad + D_1\phi(s(t_2), s(t_1))z(t_2) \Big) dt_1 dt_2 \nonumber \\
    & = {\int_{0}^{T}} {\int_{0}^{T}} D_1\phi(s(t_1), s(t_2))z(t_1) dt_1 dt_2 \nonumber \\
    & \quad + 
{\int_{0}^{T}} {\int_{0}^{T}} D_1\phi(s(t_2), s(t_1))z(t_2) dt_1 dt_2 \nonumber \\
& = 2 {\int_{0}^{T}} {\int_{0}^{T}} D_1\phi(s(t_1), s(t_2))z(t_1) dt_1 dt_2 \nonumber \\
    & = {\int_{0}^{T}} \left(2 {\int_{0}^{T}} D_1\phi(s(t), s(\tau)) d\tau \right) z(t) dt \label{eq:linearized_kernel_integral}
    \end{align}
    Substituting (\ref{eq:linearized_kernel_integral}) back into (\ref{eq:ctrl_directional_derivative_1}), we have:
    \begin{align}
    DJ&(u(t))\cdot v(t) \\
    = & \int_0^T -\frac{2}{T} DP(s(t)) z(t) dt \nonumber \\
    & + {\int_{0}^{T}} \left(\frac{2}{T^2} {\int_{0}^{T}} D_1\phi(s(t), s(\tau)) d\tau \right) z(t) dt \nonumber \\
    & + \int_0^T D_1 l(s(t), u(t))z(t) dt \nonumber \\
    & + \int_0^T D_2 l(s(t), u(t))v(t) dt \nonumber \\
    & + Dm(s(T)) z(T) \nonumber \\
    = & \int_0^T \Bigg[ -\frac{2}{T} DP(s(t)) + \left(\frac{2}{T^2} {\int_{0}^{T}} D_1\phi(s(t), s(\tau)) d\tau \right) \nonumber \\
    & \quad \quad \quad + D_1 l(s(t), u(t)) \Bigg] z(t) + D_2 l(s(t), u(t))v(t) dt \nonumber \\
    & \quad + Dm(s(T)) z(T) \\
    = & \int_0^T a(t)^\top z(t) + b(t)^\top v(t) dt + \gamma^\top z(T)
\end{align}
    Based on Lemma~\ref{lemma:perturbation_linear_dynamics}, the perturbations $z(t)$ and $v(t)$ are governed by linear dynamics; thus, we can use the state transition matrix $\Phi(t,\tau)$\footnote{Note that the notation for the state transition matrix differs from that of the kernel function.} to expand equation (\ref{eq:obj_directional_derivative}):
    \begin{align}
    & DJ(u(t))\cdot v(t) \nonumber \\
    & = \int_0^T a(t)^\top z(t) + b(t)^\top v(t) dt + \gamma^\top z(T) \nonumber \\
    & = \int_0^T a(t)^\top \left(\int_0^t \Phi(t,\tau) B(\tau) v(\tau) d\tau \right) + b(t)^\top v(t) dt \nonumber \\
    & \quad \quad + \gamma^\top \left( \int_0^T \Phi(T,\tau) B(\tau) v(\tau) d\tau \right) \\
    & = \int_0^T \int_0^t a(t)^\top \Phi(t,\tau) B(\tau) v(\tau) d\tau dt + \int_0^T b(t)^\top v(t) dt \nonumber \\
    & \quad \quad + \gamma^\top \left( \int_0^T \Phi(T,\tau) B(\tau) v(\tau) d\tau \right) 
\\
    & = \int_0^T \int_\tau^T a(t)^\top \Phi(t,\tau) B(\tau) v(\tau) dt d\tau + \int_0^T b(t)^\top v(t) dt \nonumber \\
    & \quad \quad + \gamma^\top \left( \int_0^T \Phi(T,\tau) B(\tau) v(\tau) d\tau \right) \\
    & = \int_0^T \left( \int_\tau^T a(t)^\top \Phi(t,\tau) dt \right) B(\tau) v(\tau) d\tau \nonumber \\
    & \quad \quad + \int_0^T b(t)^\top v(t) dt + \int_0^T \left( \gamma^\top \Phi(T,\tau) \right) B(\tau) v(\tau) d\tau \\
    & = \int_0^T \left( \underbrace{\int_\tau^T a(t)^\top \Phi(t,\tau) dt + \gamma^\top \Phi(T,\tau)}_{\rho(\tau)^\top, \text{with } \rho(T) = \gamma} \right) B(\tau) v(\tau) d\tau \nonumber \\
    & \quad \quad + \int_0^T b(t)^\top v(t) dt \\
    & = \int_0^T \left( \rho(t)^\top B(t) + b(t)^\top \right) v(t) dt
    \end{align} For $u(t)$ to be a minimum of $J(u(t))$, we must have $DJ(u(t))\cdot v(t) = 0$ for any perturbation $v(t)$; thus, the first-order optimality condition is:
    \begin{align}
    \rho(t)^\top B(t) + b(t)^\top = 0, \quad \forall t\in[0, T]
    \end{align} which completes the proof.
\end{proof}

\end{document}