|
\documentclass{article} |
|
\usepackage[final]{nips_2017} |
|
|
|
\usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{hyperref} \usepackage{url} \usepackage{booktabs} \usepackage{amsfonts} \usepackage{nicefrac} \usepackage{microtype} \usepackage{comment} |
|
\usepackage{times} |
|
\usepackage{epsfig} |
|
\usepackage{graphicx} |
|
\usepackage{amsmath} |
|
\usepackage{amssymb} |
|
\setcitestyle{square} |
|
\usepackage{capt-of} |
|
|
\usepackage{subfigure} |
|
\usepackage{natbib} |
|
\setcitestyle{numbers} |
|
\usepackage{color} |
|
|
\newcommand\TODO[1]{\textcolor{red}{#1}} |
|
\newcommand\todo[1]{\textcolor{red}{#1}} |
|
\newcommand\kosta[1]{\textcolor{blue}{#1}} |
|
\newcommand\ersin[1]{\textcolor{green}{(Ersin: #1)}} |
|
|
|
|
|
|
|
|
|
|
|
|
|
\newcommand{\model}{\text{ImGAN}} |
|
\newcommand{\zz}{z} |
|
\newcommand{\D}{D} |
|
\newcommand{\ZZ}{Z} |
|
\newcommand{\B}{B} |
|
\newcommand{\R}{R} |
|
\newcommand{\xxpred}{\mathbf{X}} |
|
\newcommand{\N}{n} |
|
\newcommand{\xxgt}{\tilde{\mathbf{X}}} |
|
\newcommand{\mdots}{..} |
|
\newcommand{\CI}{\mathcal{C}^I} |
|
\newcommand{\vv}{\mathbf{v}} |
|
\newcommand{\CM}{\mathcal{C}^M} |
|
\newcommand{\SI}{\mathcal{S}^I} |
|
\newcommand{\SM}{\mathcal{S}^M} |
|
\newcommand{\OF}{\mathcal{OF}} |
|
\newcommand{\coord}{\mathbf{X}} |
|
\newcommand{\eg}{e.g.} |
|
|
|
|
|
\DeclareMathOperator{\E}{\mathbb{E}} |
|
|
|
\title{Self-supervised Learning of Motion Capture} |
|
|
|
\def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}} |
|
|
|
\author{ |
|
Hsiao-Yu Fish Tung \textsuperscript{1}, |
|
Hsiao-Wei Tung \textsuperscript{2}, |
|
Ersin Yumer \textsuperscript{3}, |
|
Katerina Fragkiadaki \textsuperscript{1}\\ |
|
\textsuperscript{1} Carnegie Mellon University, Machine Learning Department\\ |
|
\textsuperscript{2} University of Pittsburgh, |
|
Department of Electrical and Computer Engineering\\ |
|
\textsuperscript{3} Adobe Research\\ |
|
\texttt{\{htung, katef\}@cs.cmu.edu, hst11@pitt.edu, yumer@adobe.com} \\
|
} |
|
|
|
|
|
|
|
\begin{document} |
|
|
|
\maketitle |
|
|
|
|
|
|
|
|
|
\begin{abstract} |
|
|
|
Current state-of-the-art solutions for motion capture from a single camera are optimization driven: |
|
they optimize the parameters of a 3D human model so that its re-projection matches measurements in the video (e.g. person segmentation, optical flow, keypoint detections etc.). |
|
Optimization models are susceptible to local minima. This has been the bottleneck that forced the use of clean, green-screen-like backgrounds at capture time, manual initialization, or switching to multiple cameras as the input source. In this work, we propose a learning-based motion capture model for single-camera input. Instead of optimizing mesh and skeleton parameters directly, our model optimizes neural network weights that predict 3D shape and skeleton configurations given a monocular RGB video. Our model is trained using a combination of strong supervision from synthetic data and self-supervision from differentiable rendering of (a) skeletal keypoints, (b) dense 3D mesh motion, and (c) human-background segmentation, in an end-to-end framework. Empirically, we show that our model combines the best of both worlds of supervised learning and test-time optimization: supervised learning initializes the model parameters in the right regime, ensuring good pose and surface initialization at test time without manual effort. Self-supervision by back-propagating through differentiable rendering allows (unsupervised) adaptation of the model to the test data, and offers a much tighter fit than a pretrained fixed model.
|
We show that the proposed model improves with experience and converges to low-error solutions where previous optimization methods fail. |
|
|
|
\end{abstract} |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\vspace{-4mm} |
|
|
|
\section{Introduction} |
|
\vspace{-1mm} |
|
|
|
|
|
|
|
|
|
|
|
Detailed understanding of the human body and its motion from ``in the wild'' monocular setups would open the path to applications such as automated gym and dance instruction, rehabilitation guidance, patient monitoring, and safer human-robot interaction. It would also impact the movie industry, where character motion capture (MOCAP) and retargeting still require tedious manual effort from artists to achieve the desired accuracy, or the use of expensive multi-camera setups and green-screen backgrounds.
|
|
|
Most current motion capture systems are optimization driven and cannot benefit from experience. Monocular motion capture systems optimize the parameters of a 3D human model to match measurements in the video (\eg, person segmentation, optical flow). Background clutter and optimization difficulties significantly impact tracking performance, leading prior work to use green-screen-like backdrops~\cite{ballan2008marker} and careful initializations. Additionally, these methods cannot leverage the data generated by the laborious manual processes involved in motion capture to improve over time. This means that each time a video needs to be processed, the optimization and manual effort must be repeated from scratch.
|
|
|
|
|
\begin{figure}[t!] |
|
\centering |
|
\includegraphics[width=1.0\linewidth]{Intro2.pdf} |
|
\centering |
|
\caption{\textbf{Self-supervised learning of motion capture}. Given a video sequence and a set of 2D body joint heatmaps, our network predicts the body parameters for the SMPL 3D human mesh model~\cite{SMPL:2015}. Neural network weights are pretrained using synthetic data and finetuned using self-supervised losses driven by differentiable keypoint, segmentation, and motion re-projection errors against detected 2D keypoints, 2D segmentation and 2D optical flow, respectively. By finetuning its parameters at test time through self-supervised losses, the proposed model achieves a significantly higher level of 3D reconstruction accuracy than purely supervised or purely optimization-based models, which either do not adapt at test time or cannot benefit from training data, respectively.
|
} |
|
\vspace*{-4mm} |
|
\label{fig:intro} |
|
\end{figure} |
|
|
|
|
|
|
|
We propose a neural network model for motion capture in monocular videos that learns to map an image sequence to a sequence of corresponding 3D meshes. The success of deep learning models lies in their supervision from large-scale annotated datasets \cite{imagenet}. However, detailed 3D mesh annotations are tedious and time consuming to obtain; thus, large-scale annotation of 3D human shapes in realistic video input is currently unavailable.
|
Our work bypasses the lack of 3D mesh annotations in real videos by combining strong supervision from large-scale synthetic data of rendered human models with \textit{self-supervision} from 3D-to-2D differentiable rendering of 3D keypoints, motion and segmentation, matched against the corresponding detected 2D quantities in real monocular videos. Our self-supervision leverages recent advances in 2D body joint detection \cite{wei2016cpm, openpose}, 2D figure-ground segmentation \cite{h36m_pami}, and 2D optical flow \cite{flownet2}, each learnt using strong supervision from real or synthetic datasets, such as MPII \cite{andriluka14cvpr}, COCO \cite{DBLP:journals/corr/LinMBHPRDZ14}, and Flying Chairs \cite{flownet}, respectively. Indeed, annotating 2D body joints is much easier than annotating 3D joints or 3D meshes, while optical flow has been shown to generalize well from synthetic to real data. We show how state-of-the-art models of 2D joints, optical flow and 2D human segmentation can be used to infer dense 3D human structure in videos in the wild, which would otherwise be hard to annotate manually.
|
In contrast to previous optimization based motion capture works~\cite{Bro06k,DBLP:journals/corr/BogoKLG0B16}, we use differentiable warping and differentiable camera projection for optical flow and segmentation losses, which allows our model to be trained end-to-end with standard back-propagation. |
|
|
|
|
|
|
|
|
We use SMPL~\cite{SMPL:2015} as our dense human 3D mesh model. It consists of a fixed number of vertices and triangles with fixed topology, where the global pose is controlled by relative angles between body parts $\theta$, and the local shape is controlled by mesh surface parameters $\beta$. Given the pose and surface parameters, a dense mesh can be generated in an analytical (differentiable) form, which could then be globally rotated and translated to a desired location. The task of our model is to reverse-engineer the rendering process and predict the parameters of the SMPL model ($\theta$ and $\beta$), as well as the focal length, 3D rotations and 3D translations in each input frame, provided an image crop around a detected person. |
|
|
|
|
|
Given 3D mesh predictions in two consecutive frames, we differentiably project the 3D motion vectors of the mesh vertices and match them against estimated 2D optical flow vectors (Figure~\ref{fig:intro}). Differentiable motion rendering and matching requires vertex visibility estimation, which we perform using ray casting integrated with our neural model for faster execution.
Similarly, in each frame, 3D keypoints are projected and their distances to the corresponding detected 2D keypoints are penalized. Last but not least, differentiable segmentation matching using Chamfer distances penalizes under- and over-coverage of the projected vertices against the 2D segmentation of the human foreground. Note that these re-projection errors are, by design, on shape only rather than texture, since our predicted 3D meshes are textureless.
|
|
|
|
|
|
|
We provide quantitative and qualitative results on 3D dense human shape tracking in SURREAL \cite{varol17} and H3.6M \cite{h36m_pami} datasets. We compare against the corresponding optimization versions, where mesh parameters are directly optimized by minimizing our self-supervised losses, as well as against supervised models that do not use self-supervision at test time. Optimization baselines easily get stuck in local minima, and are very sensitive to initialization. In contrast, our learning-based MOCAP model relies on supervised pretraining (on synthetic data) to provide reasonable pose initialization at test time. Further, self-supervised adaptation achieves lower 3D reconstruction error than the pretrained, non-adapted model. |
|
Last, our ablation highlights the complementarity of the three proposed self-supervised losses. |
|
|
|
|
|
|
|
|
|
\section{Related Work} |
|
\paragraph{3D Motion capture} |
|
3D motion capture using multiple cameras (four or more) |
|
is a well studied problem where impressive results are achieved with existing methods~\cite{gall2009motion}. However, motion capture from a single monocular camera is still an open problem even for skeleton-only capture/tracking. Since ambiguities and occlusions can be severe in monocular motion capture, most approaches rely on prior models of pose and motion. Earlier works considered linear motion models~\cite{fleet2001robust,choo2001people}. Non-linear priors such as Gaussian process dynamical models~\cite{urtasun2006gaussian}, as well as twin Gaussian processes~\cite{bo2010twin} have also been proposed, and shown to outperform their linear counterparts. |
|
Recently, Bogo et al.~\cite{DBLP:journals/corr/BogoKLG0B16} presented a static image pose and 3D dense shape prediction model which works in two stages: first, a 3D human skeleton is predicted from the image, and then a parametric 3D shape is fit to the predicted skeleton using an optimization procedure, during which the skeleton remains unchanged. Instead, our work couples 3D skeleton and 3D mesh estimation in an end-to-end differentiable framework, via test-time adaptation. |
|
|
|
\paragraph{3D human pose estimation} |
|
Earlier work on 3D pose estimation considered optimization methods and hard-coded anthropomorphic constraints (e.g., limb symmetry) to fight ambiguity during 2D-to-3D lifting~\cite{ramakrishna2012reconstructing}. Many recent works learn to regress to 3D human pose directly given an RGB image~\cite{DBLP:journals/corr/PavlakosZDD16} using deep neural networks and large supervised training sets~\cite{h36m_pami}. Many have explored 2D body pose as an intermediate representation~\cite{DBLP:journals/corr/ChenR16a,interpreter}, or as an auxiliary task in a multi-task setting~\cite{DBLP:journals/corr/TomeRA17,interpreter,yan2016perspective}, where the abundance of labelled 2D pose training examples helps feature learning and complements limited 3D human pose supervision, which requires a Vicon system and is thus restricted to lab-instrumented environments. Rogez and Schmid~\cite{DBLP:journals/corr/RogezS16} obtain large-scale synthetic RGB-to-3D-pose annotations by rendering synthetic 3D human models against realistic backgrounds, a dataset also used in this work.
|
|
|
|
|
\paragraph{Deep geometry learning} |
|
Our differentiable renderer follows recent works that integrate deep learning and geometric inference \cite{DBLP:journals/corr/TungHSF17}. |
|
Differentiable warping~\cite{stn,DBLP:journals/corr/PatrauceanHC15} and backpropable camera projection \cite{yan2016perspective,interpreter} have been used to learn 3D camera motion \cite{tinghuisfm} and joint 3D camera and 3D object motion \cite{sfmnet} in an end-to-end self-supervised fashion, minimizing a photometric loss. |
|
Garg et al.~\cite{garg2016unsupervised} learn a monocular depth predictor, supervised by photometric error, given a stereo image pair with known baseline as input.
The work of \cite{handa2016gvnn} contributed a deep learning library with many geometric operations, including a backpropable camera projection layer similar to those used by Yan et al.~\cite{yan2016perspective} and Wu et al.~\cite{interpreter}, as well as to Garg et al.'s depth CNN~\cite{garg2016unsupervised}.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\section{Learning Motion Capture} |
|
The architecture of our network is shown in Figure \ref{fig:intro}. We use SMPL as the parametrized model of 3D human shape, introduced by Loper et al.~\cite{SMPL:2015}. SMPL consists of parameters that control the yaw, pitch and roll of the body joints, and parameters that control the deformation of the body skin surface. Let $\theta$, $\beta$ denote the joint angle and surface deformation parameters, respectively. Given these parameters, a fixed number ($\N=6890$) of 3D mesh vertex coordinates is obtained using the following analytical expression, where $\mathbf{X_i}\in\mathbb{R}^3$ stands for the 3D coordinates of the $i$th vertex in the mesh:
|
\begin{equation} |
|
\coord_i = \bar{\coord}_i + \sum_{m}\beta_m\mathbf{s}_{m,i} + \sum_{n}(T_n(\theta)-T_n(\theta^{*}))\mathbf{p}_{n,i} |
|
\label{eq:smpl23d} |
|
\end{equation} |
|
where $\bar{\coord}_i\in\mathbb{R}^3$ is the nominal rest position of vertex $i$, $\beta_m$ is the blend coefficient for the skin surface blendshapes, $\mathbf{s}_{m,i}\in\mathbb{R}^3$ is the element corresponding to the $i$th vertex of the $m$th skin surface blendshape, $\mathbf{p}_{n,i}\in\mathbb{R}^3$ is the element corresponding to the $i$th vertex of the $n$th skeletal pose blendshape, $T_n(\theta)$ is a function that maps the $n$th pose blendshape to a vector of concatenated part relative rotation matrices, and $T_n(\theta^{*})$ is the same for the rest pose $\theta^{*}$. Note that the expression in Eq.~\ref{eq:smpl23d} is differentiable.
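
For concreteness, the following NumPy sketch evaluates Eq.~\ref{eq:smpl23d}; the array names and shapes (\texttt{rest\_verts}, \texttt{shape\_dirs}, \texttt{pose\_dirs}) are illustrative conventions of this sketch, not those of the SMPL reference implementation.
\begin{verbatim}
import numpy as np

def smpl_vertices(beta, pose_feat, rest_verts, shape_dirs, pose_dirs):
    """Evaluate the SMPL vertex expression above: rest vertices plus
    shape and pose blendshape offsets.

    beta       : (M,)      surface coefficients beta_m
    pose_feat  : (P,)      flattened features T_n(theta) - T_n(theta*)
    rest_verts : (V, 3)    nominal rest positions
    shape_dirs : (M, V, 3) skin surface blendshapes s_{m,i}
    pose_dirs  : (P, V, 3) skeletal pose blendshapes p_{n,i}
    """
    shape_offset = np.einsum('m,mvc->vc', beta, shape_dirs)
    pose_offset = np.einsum('p,pvc->vc', pose_feat, pose_dirs)
    return rest_verts + shape_offset + pose_offset   # (V, 3) vertex coordinates
\end{verbatim}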
|
|
|
Our model, given an image crop centered around a person detection, predicts the parameters $\beta$ and $\theta$ of the SMPL 3D human mesh. Since annotations of 3D meshes are very tedious and time consuming to obtain, our model uses supervision from a large dataset of synthetic monocular videos, and self-supervision with a number of losses that rely on differentiable rendering of 3D keypoints, segmentation and vertex motion, and matching against their 2D equivalents. We detail the supervision of our model below.
|
|
|
|
|
|
|
\paragraph{Paired supervision from synthetic data} |
|
We use the synthetic Surreal dataset \cite{varol17}, which contains monocular videos of human characters performing activities against 2D image backgrounds. The synthetic human characters have been generated using the SMPL model and animated using the Human3.6M dataset \cite{h36m_pami}. Texture is generated by directly coloring the mesh vertices, without actual 3D cloth simulation. Since values for $\beta$ and $\theta$ are directly available in this dataset, we use them to pretrain the $\theta$ and $\beta$ branches of our network using a standard supervised regression loss.
|
|
|
|
|
\begin{figure}[t!] |
|
\centering |
|
\includegraphics[width=1.0\linewidth]{loss.pdf} |
|
\centering |
|
\caption{\textbf{Differentiable rendering} of body joints (left), segmentation (middle) and mesh vertex motion (right). |
|
} |
|
\label{fig:model} |
|
\end{figure} |
|
|
|
|
|
\subsection{Self-supervision through differentiable rendering} |
|
Self-supervision in our model is based on 3D-to-2D rendering and consistency checks against 2D estimates of keypoints, segmentation and optical flow. Self-supervision can be used at both train and test time, for adapting our model's weights to the statistics of the test set. |
|
|
|
\paragraph{Keypoint re-projection error} |
|
|
|
Given a static image, predictions of the 3D body joints of the depicted person should match, when projected, the corresponding 2D keypoint detections. Such a keypoint re-projection error has already been used in numerous previous works \cite{interpreter,yan2016perspective}. Our model predicts a dense 3D mesh instead of a skeleton. We leverage the linear mapping that relates our 3D mesh vertices to 3D body joints:
|
\begin{equation} |
|
{\mathbf{X}_{kpt}}^\intercal = \mathbf{A} \cdot \mathbf{X}^\intercal |
|
\end{equation} |
|
Let $\mathbf{X} \in \mathbb{R}^{4 \times \N}$ denote the 3D coordinates of the mesh vertices in homogeneous coordinates (with a small abuse of notation since it is clear from the context), where $\N$ is the number of vertices. For estimating the 3D-to-2D projection, our model further predicts the focal length, the rotation of the camera, and the translation of the 3D mesh off the center of the image, in case the root node of the 3D mesh is not exactly placed at the center of the image crop. We do not predict translation in the $z$ direction (perpendicular to the image plane), as the predicted focal length accounts for the scaling of the person figure. For rotation, we predict Euler rotation angles $\alpha,\beta,\gamma$ (note the overloaded use of $\beta$, distinct from the surface parameters) so that the 3D rotation of the camera reads $\R=\R^x(\alpha)\R^y(\beta)\R^z(\gamma)$, where $\R^x(\theta)$ denotes rotation around the x-axis by angle $\theta$, here in homogeneous coordinates. The re-projection equation for the $k$th keypoint then reads:
|
\begin{eqnarray} |
|
x^k_{kpt}=&P \cdot \left( \R \cdot \mathbf{X}^k_{kpt} + T \right) \label{eq:projection} |
|
\end{eqnarray} |
|
where $P=\mathrm{diag}(\begin{bmatrix}f &f &1 &0\end{bmatrix})$ is the predicted camera projection matrix and $T = \begin{bmatrix} T_x &T_y & 0& 0 \end{bmatrix}^T$ handles small perturbations in object centering.
|
Keypoint reprojection error then reads: |
|
\begin{eqnarray} |
|
\mathcal{L}^\text{kpt} = \| x_{kpt} - \tilde{x}_{kpt} \|_2^2, |
|
\end{eqnarray} |
|
where $\tilde{x}_{kpt}$ are the ground-truth or detected 2D keypoints.
|
Since 3D mesh vertices are related to $\beta, \theta$ predictions using Eq. \ref{eq:smpl23d}, re-projection error minimization updates the neural parameters for $\beta,\theta$ estimation. |
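
As a concrete illustration, the NumPy sketch below builds the camera rotation from the predicted Euler angles, applies Eq.~\ref{eq:projection}, and evaluates $\mathcal{L}^\text{kpt}$. The perspective division by depth is our reading of the homogeneous projection in Eq.~\ref{eq:projection}, and the function names are illustrative; the actual model implements these operations in TensorFlow so that gradients reach the $\theta,\beta$ branches.
\begin{verbatim}
import numpy as np

def euler_to_rotation(alpha, beta_r, gamma):
    """R = R^x(alpha) R^y(beta) R^z(gamma)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta_r), np.sin(beta_r)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def project(X, f, R, t_xy):
    """Project 3D points X (K, 3): rotate, shift in the image plane, apply focal length."""
    Xc = X @ R.T + np.array([t_xy[0], t_xy[1], 0.0])
    return f * Xc[:, :2] / Xc[:, 2:3]        # (K, 2) pixel coordinates

def keypoint_loss(X_kpt, x_detected, f, R, t_xy):
    """Squared L2 distance between projected 3D joints and detected 2D joints."""
    return np.sum((project(X_kpt, f, R, t_xy) - x_detected) ** 2)
\end{verbatim}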
|
|
|
|
|
|
|
|
|
|
|
|
|
\paragraph{Motion re-projection error} |
|
|
|
Given a pair of frames, 3D mesh vertex displacements from one frame to the next should match, when projected, corresponding 2D optical flow vectors computed from the corresponding RGB frames. All Structure-from-Motion (SfM) methods exploit such a motion re-projection error in one way or another: the estimated 3D pointcloud, when projected over time, should match 2D optical flow vectors in \cite{DBLP:journals/corr/AlldieckKM17}, or multiframe 2D point trajectories in \cite{Tomasi:1992:SMI:144398.144403}. Though previous SfM models use the motion re-projection error to optimize 3D coordinates and camera parameters directly \cite{DBLP:journals/corr/AlldieckKM17}, here we use it instead to optimize the neural network parameters that predict such quantities.
|
|
|
|
|
Motion re-projection error estimation requires knowing the visibility of the mesh vertices in each frame. We implement visibility inference through ray casting for each example and training iteration in TensorFlow and integrate it with our neural network model, which accelerates execution by roughly ten times compared to interfacing with ray casting in OpenGL. Vertex visibility inference \textit{does not need to be differentiable}: it is used only to mask the motion re-projection loss for invisible vertices.
Since we are only interested in visibility rather than full rendering functionality, ray casting boils down to detecting the first mesh facet that intersects the straight line from the image-projected position of a facet's center to its 3D point. If the intersected facet is the same as the one from which the ray is cast, we mark that facet as visible, since there is no occluder between it and the image plane.
We provide more details on the ray casting procedure in the experiments section.
Vertices that belong to a visible facet are treated as visible. Let $\vv^i \in \{0,1 \}, i=1 \cdots \N$ denote the visibilities of the mesh vertices.
|
|
|
Given two consecutive frames $I_1,I_2$, let $\beta_1,\theta_1,\R_1,T_1,\beta_2,\theta_2,\R_2,T_2$ denote corresponding predictions from our model. |
|
We obtain corresponding 3D pointclouds, $\mathbf{X}^i_1= \begin{bmatrix} X^i_1\\Y^i_1\\Z^i_1 \end{bmatrix}, i = 1 \cdots \N,$ and $ \mathbf{X}^i_2=\begin{bmatrix} X^i_2\\Y^i_2\\Z^i_2 \end{bmatrix},i=1 \cdots \N$ using Eq. \ref{eq:smpl23d}. The 3D mesh vertices are mapped to corresponding pixel coordinates $(x^i_1,y^i_1), i=1 \cdots \N,(x^i_2,y^i_2), i=1 \cdots \N$, using the camera projection equation (Eq. \ref{eq:projection}). Thus the predicted 2D body flow resulting from the 3D motion of the corresponding meshes is $(u^i,v^i)=(x^i_2-x^i_1,y^i_2-y^i_1), i=1 \cdots \N$. |
|
|
|
|
|
|
|
Let $\OF=(\tilde{u},\tilde{v})$ denote the 2D optical flow field estimated with an optical flow method, such as the state-of-the-art deep neural flow of \cite{flownet2}. |
|
Let $\OF(x^i_1,y^i_1)$ denote the optical flow at a potentially subpixel location $x^i_1,y^i_1$, obtained from the pixel centered optical flow field $\OF$ through differentiable bilinear interpolation (differentiable warping) \cite{stn}. |
|
Then, the motion re-projection error reads: |
|
|
|
|
|
|
|
\begin{eqnarray*} |
|
\mathcal{L}^\text{motion} = \frac{1}{\mathbf{1}^T\mathbf{v}} \sum_i^\N \vv^i \left( \| u^i(x^i_1, y^i_1) - \tilde{u}(x^i_1, y^i_1) \|_1 + \|v^i(x^i_1, y^i_1) - \tilde{v}(x^i_1, y^i_1)\|_1 \right) \nonumber |
|
\end{eqnarray*} |
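
A minimal NumPy sketch of this loss follows; \texttt{bilinear\_sample} stands in for the differentiable warping of \cite{stn}, and the function names and per-example form are illustrative of the batched TensorFlow implementation.
\begin{verbatim}
import numpy as np

def bilinear_sample(flow, x, y):
    """Sample an (H, W, 2) flow field at subpixel locations (x, y) by bilinear interpolation."""
    H, W, _ = flow.shape
    x0f, y0f = np.floor(x), np.floor(y)
    wx, wy = (x - x0f)[:, None], (y - y0f)[:, None]
    x0 = np.clip(x0f.astype(int), 0, W - 1); x1 = np.clip(x0 + 1, 0, W - 1)
    y0 = np.clip(y0f.astype(int), 0, H - 1); y1 = np.clip(y0 + 1, 0, H - 1)
    top = (1 - wx) * flow[y0, x0] + wx * flow[y0, x1]
    bot = (1 - wx) * flow[y1, x0] + wx * flow[y1, x1]
    return (1 - wy) * top + wy * bot                     # (N, 2)

def motion_loss(x1, y1, x2, y2, flow, vis):
    """L1 error between projected vertex motion and estimated flow, masked by visibility."""
    pred_flow = np.stack([x2 - x1, y2 - y1], axis=1)     # (u^i, v^i) from the projected meshes
    target_flow = bilinear_sample(flow, x1, y1)          # optical flow at projected locations
    per_vertex = np.abs(pred_flow - target_flow).sum(axis=1)
    return (vis * per_vertex).sum() / max(vis.sum(), 1.0)
\end{verbatim}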
|
|
|
|
|
\paragraph{Segmentation re-projection error} |
|
Given a static image, the predicted 3D mesh for the depicted person should match, when projected, the corresponding 2D figure-ground segmentation mask. Numerous 3D shape reconstruction methods have used such a segmentation consistency constraint \cite{DBLP:conf/cvpr/VicenteCAB14,DBLP:journals/corr/AlldieckKM17,Balan:CVPR:2007}, but again in an optimization rather than a learning framework.
|
|
|
Let $\SI \in \{0,1\}^{w \times h}$ denote the 2D figure-ground binary image segmentation, supplied by ground-truth, background subtraction or predicted by a figure-ground neural network segmenter \cite{DBLP:journals/corr/HeGDG17}. |
|
Our segmentation re-projection loss measures how well the projected mesh mask fits the image segmentation $\SI$ by penalizing non-overlapping pixels by their shortest distance to the other segmentation. For this purpose, a Chamfer distance map $\CI$ for the image segmentation $\SI$ and a Chamfer distance map $\CM$ for the model-projected segmentation $\SM= \{ x_{2d} \}$ are calculated. The loss then reads:
|
|
|
|
|
|
|
|
|
|
|
\begin{equation} |
|
\mathcal{L}^\text{seg} = \SM \otimes \CI + \SI \otimes \CM, \nonumber |
|
\end{equation} |
|
where $\otimes$ denotes pointwise multiplication. |
|
Both terms are necessary to prevent under- or over-coverage of the model segmentation with respect to the image segmentation.
For the loss to be differentiable, we cannot use the distance transform for efficient computation of the Chamfer maps. Rather, we compute them by brute force, calculating the shortest distance of each projected vertex to the image segmentation and vice versa. Let $x_{2d}^i, i\in 1 \cdots \N$ denote the set of model-projected vertex pixel coordinates and $x_{seg}^p, p\in 1 \cdots m$ denote the set of pixel-centered coordinates that belong to the foreground of the 2D segmentation map $\SI$:
|
|
|
|
|
\begin{equation} |
|
\mathcal{L}^\text{seg-proj} = \underbrace{\sum_{i=1}^\N \min_{p} \| x_{2d}^i -x_{seg}^p \|_2^2}_{\text{prevent over-coverage}} + \underbrace{\sum_{p=1}^m \min_{i} \|x_{seg}^p-x_{2d}^i\|_2^2}_{\text{prevent under-coverage}}.
|
\end{equation} |
|
|
|
|
|
The first term ensures the model-projected segmentation is covered by the image segmentation, while the second term ensures that the model-projected segmentation covers the image segmentation well.
To lower the memory requirements, we use half the input image resolution.
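
A brute-force NumPy sketch of this two-sided loss is given below; it assumes the projected vertex and foreground pixel coordinates are provided as arrays, and it materializes the full pairwise distance matrix, which is exactly the memory cost that motivates the reduced resolution.
\begin{verbatim}
import numpy as np

def segmentation_loss(x2d, x_seg):
    """Two-sided segmentation re-projection loss.

    x2d   : (N, 2) projected mesh vertex pixel coordinates
    x_seg : (M, 2) foreground pixel coordinates of the 2D segmentation S^I
    """
    # (N, M) matrix of squared distances between projected vertices and foreground pixels
    d2 = np.sum((x2d[:, None, :] - x_seg[None, :, :]) ** 2, axis=-1)
    over_coverage = d2.min(axis=1).sum()    # vertices that project outside the person mask
    under_coverage = d2.min(axis=0).sum()   # mask pixels left uncovered by the projected mesh
    return over_coverage + under_coverage
\end{verbatim}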
|
|
|
|
|
|
|
|
|
|
|
\section{Experiments} |
|
|
|
|
|
We test our method on two datasets: Surreal~\cite{varol17} and H3.6M~\cite{h36m_pami}. Surreal is currently the largest synthetic dataset of people in motion. It contains short monocular video clips depicting human characters performing daily activities. Ground-truth 3D human meshes are readily available. We split the dataset into train and test video sequences. Human3.6M (H3.6M) is the largest real video dataset with annotated 3D human skeletons. It contains videos of actors performing activities and provides annotations of body joint locations in 2D and 3D at every frame, recorded through a Vicon system. It does not, however, provide dense 3D ground-truth.
|
|
|
|
|
|
|
Our model is first trained using supervised skeleton and surface parameters on the training set of the Surreal dataset. Then, it is self-supervised using differentiable rendering and re-projection error minimization on two test sets, one in Surreal and one in H3.6M. For self-supervision, we use ground-truth 2D keypoints and segmentations in both datasets. The segmentation masks in Surreal are very accurate, while in H3.6M they are obtained using background subtraction and can be quite inaccurate, as shown in Figure \ref{fig:qualresults}. Our model refines such initially inaccurate segmentation masks. The 2D optical flow for dense motion matching is obtained using FlowNet2.0 \cite{flownet2} in both datasets.
We do not use any 3D ground-truth supervision in H3.6M, as our goal is to demonstrate successful domain transfer of our model from Surreal to H3.6M. We measure the quality of the predicted 3D skeletons in both datasets, and the quality of the predicted dense 3D meshes in Surreal only, since dense ground-truth is available only there.
|
|
|
\paragraph{Evaluation metrics} |
|
Given predicted 3D body joint locations of $K=32$ keypoints $\xxpred_{kpt}^k, k=1 \cdots K$ and corresponding ground-truth 3D joint locations $\xxgt_{kpt}^k, k=1 \cdots K$, we define the \textbf{per-joint error} of each example as $\frac{1}{K}\sum_{k=1}^K \| \xxpred_{kpt}^k- \xxgt_{kpt}^k \|_2$, similar to previous works \cite{DBLP:journals/corr/ZhouZPLDD17}. We also define the \textbf{reconstruction error} of each example as the 3D per-joint error up to a 3D translation $T$ (3D rotation should still be predicted correctly): $\min_{T} \frac{1}{K}\sum_{k=1}^K \| (\xxpred_{kpt}^k+T)- \xxgt_{kpt}^k \|_2$. Finally, we define the \textbf{surface error} of each example to be the per-joint error computed over all vertices of the 3D mesh: $\frac{1}{\N}\sum_{i=1}^\N \| \xxpred^i-\xxgt^i \|_2$.
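
For clarity, a NumPy sketch of the three metrics follows. As a simple approximation, the translation in the reconstruction error is removed by aligning centroids; exactly minimizing the sum of Euclidean distances over $T$ would require an iterative solver.
\begin{verbatim}
import numpy as np

def per_joint_error(pred, gt):
    """Mean Euclidean distance over K joints; pred, gt of shape (K, 3), in mm."""
    return np.linalg.norm(pred - gt, axis=1).mean()

def reconstruction_error(pred, gt):
    """Per-joint error after removing a global 3D translation (centroid alignment)."""
    t = (gt - pred).mean(axis=0)
    return np.linalg.norm(pred + t - gt, axis=1).mean()

def surface_error(pred_verts, gt_verts):
    """Per-joint error computed over all mesh vertices; shapes (N, 3)."""
    return np.linalg.norm(pred_verts - gt_verts, axis=1).mean()
\end{verbatim}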
|
|
|
|
|
|
|
We compare our learning based model against two baselines: |
|
(1) \textit{Pretrained}, a model that uses only supervised training from synthetic data, without self-supervised adaptation. This baseline is similar to the recent work of \cite{DBLP:journals/corr/ChenWLSTLCC16}. |
|
(2) \textit{Direct optimization}, a model that uses our differentiable self-supervised losses, but instead of optimizing neural network weights, optimizes directly over body mesh parameters ($\theta,\beta$), rotation ($R$), translation ($T$), and focal length $f$. We use standard gradient descent as our optimization method. |
|
We experiment with varying amounts of supervision at the initialization of our optimization baseline: random initialization, using ground-truth 3D translation, using ground-truth rotation, and using ground-truth theta angles (in which case only the surface parameters remain to be estimated).
|
|
|
|
|
Tables \ref{tab:results1} and \ref{tab:results2} show the results of our model and baselines for the different evaluation metrics. |
|
The learning-based self-supervised model outperforms both the pretrained model, which does not exploit adaptation through differentiable rendering and consistency checks, and the direct optimization baselines, which are sensitive to initialization mistakes.
|
|
|
\paragraph{Ablation} In Figure \ref{fig:curves} we show the 3D keypoint reconstruction error after self-supervised finetuning using different combinations of self-supervised losses. A model self-supervised by the keypoint re-projection error ($\mathcal{L}^\text{kpt}$) alone does worse than a model using both the keypoint and segmentation re-projection errors ($\mathcal{L}^\text{kpt}$+$\mathcal{L}^\text{seg}$). A model trained using all three proposed losses (keypoint, segmentation and dense motion re-projection error, $\mathcal{L}^\text{kpt}$+$\mathcal{L}^\text{seg}$+$\mathcal{L}^\text{motion}$) outperforms the above two. This shows the complementarity and importance of all the proposed losses.
|
|
|
|
|
|
|
\begin{table}[h] |
|
\centering |
|
\begin{tabular}{|c| c| c| c |} |
|
\hline |
|
& \textbf{surface error} (mm) & \textbf{per-joint error} (mm) & \textbf{recon. error} (mm) \\ |
|
|
|
\hline |
|
Optimization & 346.5 & 532.8 & 1320.1\\ \hline |
|
Optimization + $\tilde{R}$ & 301.1 & 222.0 & 294.9 \\ \hline |
|
Optimization + $\tilde{R}$ + $\tilde{T}$ & 272.8 & 206.6 & 205.5 \\ \hline |
|
Pretrained & 119.4 & 101.6 & 351.3 \\ \hline |
|
Pretrained+Self-Sup & {\bf 74.5} & \bf{64.4} & \bf{203.9} \\ \hline |
|
\end{tabular} |
|
\vspace{1mm} |
|
\caption{\textbf{3D mesh prediction results in Surreal~\cite{varol17}.} The proposed model (pretrained+self-supervised) outperforms both optimization-based alternatives and pretrained supervised-regression models that do not adapt to the test data. We use $\tilde{\cdot}$ to denote ground-truth information provided at the initialization of our optimization-based baseline.}
|
\label{tab:results1} |
|
\end{table} |
|
|
|
\begin{table}[h] |
|
\begin{minipage}[b]{0.55\linewidth} |
|
\centering |
|
\begin{tabular}{|c| c| c |} |
|
\hline |
|
& \textbf{per-joint error}& \textbf{recon. error} \\ |
|
& (mm) & (mm) \\ |
|
\hline |
|
Optimization & 562.4 & 883.1\\ \hline |
|
Pretrained & 125.6 & 303.5 \\ \hline |
|
Pretrained+Self-Sup & \bf{98.4} & \bf{145.8} \\ \hline |
|
\end{tabular} |
|
\vspace{1mm} |
|
\caption{\textbf{3D skeleton prediction results on H3.6M~\cite{h36m_pami}.} The proposed model (pretrained+self-supervised) outperforms both an optimization-based baseline and a pretrained model. Self-supervised learning through differentiable rendering allows our model to adapt effectively across domains (Surreal to H3.6M), while the fixed pretrained baseline cannot. Dense 3D surface ground-truth is not available in H3.6M, so surface error cannot be measured.}
|
\label{tab:results2} |
|
\end{minipage} \hfill |
|
\begin{minipage}[b]{0.4\linewidth} |
|
\centering |
|
\includegraphics[width=35mm]{curves.pdf} |
|
\captionof{figure}{\textbf{3D reconstruction error during purely unsupervised finetuning} under different self-supervised losses (Lk $\equiv\mathcal{L}^\text{kpt}$: keypoint re-projection error; LS $\equiv\mathcal{L}^\text{seg}$: segmentation re-projection error; LM $\equiv\mathcal{L}^\text{motion}$: dense motion re-projection error). All losses contribute to 3D error reduction.}
|
\label{fig:curves} |
|
\end{minipage} |
|
\end{table} |
|
|
|
\begin{figure}[b!] |
|
|
|
\centering |
|
\includegraphics[width=1.0\linewidth]{mesh_result3}\centering |
|
\caption{\textbf{Qualitative results of 3D mesh prediction}. The top four rows show predictions on Surreal and the bottom four on H3.6M. Our model handles bad segmentation input masks in H3.6M thanks to supervision from multiple rendering-based losses. A byproduct of our 3D mesh model is improved 2D person segmentation (column 6).
|
} |
|
\label{fig:qualresults} |
|
\end{figure} |
|
|
|
\paragraph{Discussion} |
|
We have shown that a combination of supervised pretraining and unsupervised adaptation is beneficial for accurate 3D mesh prediction. |
|
Learning-based self-supervision combines the best of both worlds of supervised learning and test-time optimization: supervised learning initializes the model parameters in the right regime, ensuring good pose initialization at test time without manual effort. Self-supervision through differentiable rendering allows adaptation of the model to the test data, and thus a much tighter fit than a pretrained model with ``frozen'' weights at test time. Note that overfitting in this sense is desirable: we want our predicted 3D mesh to fit the test set as tightly as possible, improving tracking accuracy with minimal human intervention.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\paragraph{Implementation details} |
|
Our model architecture consists of 5 convolutional blocks. Each block contains two convolutional layers with filter sizes $5\times 5$ (stride 2) and $3\times 3$ (stride 1), each followed by batch normalization and a leaky ReLU activation. The first block contains $64$ channels, and we double the number of channels after each block. On top of these blocks, we add 3 fully connected layers and shrink the size of the final layer to match our desired outputs. The input image to our model is $128 \times 128$. The model is trained with a gradient descent optimizer with learning rate $0.0001$ and is implemented in TensorFlow v1.1.0 \cite{tensorflow2015-whitepaper}.
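
The sketch below reproduces this encoder in the modern \texttt{tf.keras} API rather than the TensorFlow~1.1 graph code used for the paper; the fully connected layer widths, their activations, and the packing of the output vector into $\theta$, $\beta$, Euler angles, translation and focal length are assumptions of the sketch.
\begin{verbatim}
import tensorflow as tf

def build_encoder(num_outputs, fc_width=1024):
    """Five blocks of (5x5 stride-2 conv, 3x3 stride-1 conv), each conv followed by
    batch norm and leaky ReLU; channels start at 64 and double per block; then 3 FC layers."""
    layers, channels = [], 64
    for _ in range(5):
        for size, stride in [(5, 2), (3, 1)]:
            layers += [tf.keras.layers.Conv2D(channels, size, strides=stride, padding='same'),
                       tf.keras.layers.BatchNormalization(),
                       tf.keras.layers.LeakyReLU()]
        channels *= 2
    layers += [tf.keras.layers.Flatten(),
               tf.keras.layers.Dense(fc_width, activation='relu'),
               tf.keras.layers.Dense(fc_width, activation='relu'),
               tf.keras.layers.Dense(num_outputs)]
    return tf.keras.Sequential(layers)

# Hypothetical output packing: 72 theta + 10 beta + 3 Euler angles + 2 translation + 1 focal length.
model = build_encoder(num_outputs=88)
model.build(input_shape=(None, 128, 128, 3))
\end{verbatim}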
|
|
|
|
|
\textbf{Chamfer distance:} We obtain the Chamfer distance map $\CI$ for an input image frame $I$ using a distance transform seeded with the image figure-ground segmentation mask $\SI$. This assigns to every pixel in $\CI$ the minimum distance to a pixel on the mask foreground. Next, we describe the differentiable computation of $\CM$ used in our method.
|
Let ${\it P} = \{x_{2d}\}$ denote a set of pixel coordinates for the mesh's visible projected points. For each pixel location $p$, we compute the minimum distance between that pixel location and any pixel coordinate in ${\it P}$ and obtain a distance map $D \in \mathbb{R}^{w \times h}$. Next, we threshold the distance map $D$ to get the Chamfer distance map ${\CM}$ and segmentation mask ${\SM}$ where, for each pixel position $p$: |
|
\begin{align} |
|
&\CM(p) = \max(0.5, D(p))\\
&\SM(p) = \min(0.5, D(p)) + \delta(D(p) < 0.5) \cdot 0.5,
|
\end{align} |
|
and $\delta(\cdot)$ is an indicator function. |
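
The NumPy/SciPy sketch below mirrors this computation; the grid construction and the dense per-pixel distance matrix are illustrative and make explicit the memory cost that motivates computing the segmentation loss at half resolution.
\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_map_image(seg_mask):
    """C^I: distance of every pixel to the nearest foreground pixel of S^I
    (this map does not need to be differentiable)."""
    return distance_transform_edt(seg_mask == 0)

def chamfer_map_model(x2d, height, width):
    """C^M and S^M from projected visible vertex coordinates, by thresholding the distance map D."""
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2).astype(float)     # (H*W, 1, 2)
    D = np.linalg.norm(grid - x2d[None, :, :], axis=-1).min(axis=1)        # min distance per pixel
    D = D.reshape(height, width)
    CM = np.maximum(0.5, D)
    SM = np.minimum(0.5, D) + (D < 0.5) * 0.5
    return CM, SM
\end{verbatim}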
|
|
|
\textbf{Ray casting:} We implemented a standard raycasting algorithm in TensorFlow to accelerate its computation. |
|
Let $r = (x, d)$ denote a cast ray, where $x$ is the point from which the ray is cast and $d$ is a normalized vector giving the shooting direction. In our case, all rays are cast from the center of the camera. For ease of explanation, we set $x$ to $(0,0,0)$.
A facet $f = (v_0, v_1, v_2)$ is determined as ``hit'' if it satisfies the following three conditions: (1) the facet is not parallel to the cast ray, (2) the facet is not behind the ray, and (3) the ray passes through the triangular region formed by the three edges of the facet.
Given a facet $f = (v_0, v_1, v_2)$, where $v_i$ denotes the $i$th vertex of the facet, the first condition is satisfied if the magnitude of the inner product between the ray direction $d$ and the surface normal of the facet $f$ is larger than some threshold $\epsilon$. Here we set $\epsilon$ to $10^{-8}$. The second condition is satisfied if the inner product between the ray direction $d$ and the surface normal $N$, defined as the normalized cross product between $v_1 - v_0$ and $v_2 - v_0$, has the same sign as the inner product between $v_0$ and $N$. Finally, the last condition can be split into three sub-problems: for each edge of the facet, determine whether the intersection point falls on the facet side of that edge. First, we find the intersection point $p$ of the ray and the plane spanned by the facet using the following equation:
|
\begin{align} |
|
p = x + d \cdot \frac{\langle N, v_0\rangle}{\langle N, d\rangle},
|
\end{align} |
|
where $\langle\cdot,\cdot\rangle$ denotes the inner product.
Given an edge formed by vertices $v_i$ and $v_j$, the intersection point $p$ is determined to fall on the facet side of the edge if the cross product between the edge $v_i - v_j$ and the vector $p - v_j$ has the same sign as (i.e., a positive inner product with) the surface normal $N$. We examine this condition on all three edges. If all the above conditions are satisfied, the facet is determined to be hit by the ray.
Among the hit facets, we choose the one with the minimum distance to the origin as the visible facet seen along the ray direction.
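
A per-ray NumPy sketch of the three tests is given below; the actual implementation batches all rays and facets as TensorFlow tensor operations, so the explicit Python loop here is purely illustrative.
\begin{verbatim}
import numpy as np

EPS = 1e-8   # threshold for the parallel-ray test

def ray_hits_facet(d, v0, v1, v2):
    """Ray from the origin along unit direction d vs. facet (v0, v1, v2); returns (hit, distance)."""
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)                      # facet surface normal N
    nd = np.dot(n, d)
    if abs(nd) < EPS:                              # (1) facet parallel to the ray
        return False, np.inf
    if np.sign(nd) != np.sign(np.dot(n, v0)):      # (2) facet behind the camera
        return False, np.inf
    p = d * (np.dot(n, v0) / nd)                   # intersection with the facet's plane
    for a, b in [(v0, v1), (v1, v2), (v2, v0)]:    # (3) p must lie on the facet side of each edge
        if np.dot(np.cross(b - a, p - a), n) < 0:
            return False, np.inf
    return True, np.linalg.norm(p)

def nearest_hit(d, facets):
    """Index of the closest facet hit by ray direction d, or -1 if none is hit."""
    best, best_i = np.inf, -1
    for i, (v0, v1, v2) in enumerate(facets):
        hit, dist = ray_hits_facet(d, v0, v1, v2)
        if hit and dist < best:
            best, best_i = dist, i
    return best_i
\end{verbatim}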
|
|
|
|
|
|
|
|
|
|
|
|
|
\section{Conclusion} |
|
|
|
We have presented a learning based model for dense human 3D body tracking supervised by synthetic data and self-supervised by differentiable rendering of mesh motion, keypoints, and segmentation, and matching to their 2D equivalent quantities. We show that our model improves by using unlabelled video data, which is very valuable for motion capture where dense 3D ground-truth is hard to annotate. |
|
A clear direction for future work is iterative additive feedback \cite{IEF2015human} on the mesh parameters, for achieving higher 3D reconstruction accuracy and for learning a residual free-form deformation on top of the parametric SMPL model, again in a self-supervised manner.
Extensions of our model beyond human 3D shape would allow neural agents to learn 3D structure from experience, as humans do, supervised solely by video motion.
|
|
|
|
|
|
|
|
|
|
|
\clearpage
|
{\small |
|
\bibliographystyle{ieee} |
|
\bibliography{egbib} |
|
} |
|
|
|
\end{document} |