diff --git "a/20240921/2206.06420v5.json" "b/20240921/2206.06420v5.json" new file mode 100644--- /dev/null +++ "b/20240921/2206.06420v5.json" @@ -0,0 +1,751 @@ +{ + "title": "GraphMLP: A Graph MLP-Like Architecture for 3D Human Pose Estimation", + "abstract": "Modern multi-layer perceptron (MLP) models have shown competitive results in learning visual representations without self-attention. However, existing MLP models are not good at capturing local details and lack prior knowledge of human body configurations, which limits their modeling power for skeletal representation learning. To address these issues, we propose a simple yet effective graph-reinforced MLP-Like architecture, named GraphMLP, that combines MLPs and graph convolutional networks (GCNs) in a global-local-graphical unified architecture for 3D human pose estimation. GraphMLP incorporates the graph structure of human bodies into an MLP model to meet the domain-specific demand of the 3D human pose, while allowing for both local and global spatial interactions. Furthermore, we propose to flexibly and efficiently extend the GraphMLP to the video domain and show that complex temporal dynamics can be effectively modeled in a simple way with negligible computational cost gains in the sequence length. To the best of our knowledge, this is the first MLP-Like architecture for 3D human pose estimation in a single frame and a video sequence. Extensive experiments show that the proposed GraphMLP achieves state-of-the-art performance on two datasets, i.e., Human3.6M and MPI-INF-3DHP. Code and models are available at https://github.com/Vegetebird/GraphMLP.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "3D human pose estimation from images is important in numerous applications, such as action recognition, motion capture, and augmented/virtual reality.\nMost existing works solve this task by using a 2D-to-3D pose lifting method, which takes graph-structured 2D joint coordinates detected by a 2D keypoint detector as input [34 ###reference_b34###, 50 ###reference_b50###, 17 ###reference_b17###, 15 ###reference_b15###].\nThis is an inherently ambiguous problem since multiple valid 3D joint locations may correspond to the same 2D projection in the image space.\nHowever, it is practically solvable since 3D poses often lie on low-dimensional manifolds, which can provide important structural priors to mitigate the depth ambiguity [48 ###reference_b48###, 9 ###reference_b9###].\n###figure_1### ###figure_2### Early works attempted to employ fully connected networks (FCN) [34 ###reference_b34###] to lift the 2D joints into 3D space.\nHowever, the dense connection of FCN often results in overfitting and poor performance [53 ###reference_b53###].\nTo address this problem, recent works consider that the skeleton of a human body can be naturally represented as the graph structure and utilize graph convolutional networks (GCNs) for this task [56 ###reference_b56###, 9 ###reference_b9###, 31 ###reference_b31###].\nAlthough GCN-based methods are effective at aggregating neighboring nodes to extract local features, they often suffer from limited receptive fields to obtain stronger representation power [57 ###reference_b57###], as these methods usually rely on learning relationships between human body joints utilizing first-hop neighbors.\nHowever, global information between distant body joints is crucial for understanding the overall body posture and movement patterns [55 
###reference_b55###, 57 ###reference_b57###]. For instance, the position of the hand relative to the foot can indicate whether the person is standing, sitting, or lying down.\nOne way to capture global information is by stacking multiple GCN layers, which expands the model\u2019s receptive fields to cover the entire kinematic chain of a human.\nHowever, this leads to the over-smoothing issue where valuable information is lost in deeper layers [21 ###reference_b21###].\nRecently, modern multi-layer perceptron (MLP) models (in particular MLP-Mixer [45 ###reference_b45###]) with global receptive fields provide new architectural designs in vision [27 ###reference_b27###, 29 ###reference_b29###, 41 ###reference_b41###, 6 ###reference_b6###].\nThe MLP model stacks only fully-connected layers without self-attention and consists of two types of blocks.\nThe spatial MLP aggregates global information among tokens, and the channel MLP focuses on extracting features for each token.\nBy stacking these two MLP blocks, it can be simply built with less inductive bias and achieves impressive performance in learning visual representations.\nThis motivates us to explore an MLP-Like architecture for skeleton-based representation learning.\nHowever, there remain two critical challenges in adapting the MLP models from vision to skeleton:\n(i) Despite their successes in vision, existing MLPs are less effective in modeling graph-structured data due to their simple connections among all nodes.\nDifferent from RGB images represented by highly dense pixels, skeleton inputs are inherently sparse and graph-structured data (see Fig. 2 ###reference_### (left)).\nWithout incorporating the prior knowledge of human body configurations, the model is prone to learning spurious dependencies, which can lead to physically implausible poses [3 ###reference_b3###, 53 ###reference_b53###].\n(ii)\nWhile such models are capable of capturing global interactions between distant body joints (such as the hand and foot) via their spatial MLPs, they may not be good at capturing local interactions due to the lack of careful designs for modeling relationships between adjacent joints (such as the head and neck).\nHowever, local information is also essential for 3D human pose estimation, as it can help the model to understand fine-grained movement details [51 ###reference_b51###, 28 ###reference_b28###].\nFor example, the movement of the head relative to the neck can indicate changes in gaze direction or subtle changes in posture.\nTo overcome both limitations, we present GraphMLP, a new graph-reinforced MLP-Like architecture for 3D human pose estimation, as depicted in Fig. 
2 ###reference_###.\nOur GraphMLP is conceptually simple yet effective: it builds a strong collaboration between modern MLPs and GCNs to construct a global-local-graphical unified architecture for learning better skeletal representations.\nSpecifically, GraphMLP mainly contains a stack of novel GraphMLP layers.\nEach layer consists of two Graph-MLP blocks where the spatial graph MLP (SG-MLP) and the channel graph MLP (CG-MLP) blocks are built by injecting GCNs into the spatial MLP and channel MLP, respectively.\nBy combining MLPs and GCNs in a unified architecture, our GraphMLP is able to obtain the prior knowledge of human configurations encoded by the graph\u2019s connectivity and capture both local and global spatial interactions among body joints, hence yielding better performance.\nFurthermore, we extend our GraphMLP from a single frame to a video sequence.\nMost existing video-based methods [39 ###reference_b39###, 59 ###reference_b59###, 24 ###reference_b24###] typically model temporal information by treating each frame as a token or treating the time axis as a separated dimension.\nHowever, these methods often suffer from redundant computations that make little contribution to the final performance because nearby poses are similar [23 ###reference_b23###].\nAdditionally, they are too computationally expensive to process long videos (e.g., 243 frames), thereby limiting their practical utility in real-world scenarios.\nTo tackle these issues, we propose to utilize a simple and efficient video representation of pose sequences.\nThis representation captures complex temporal dynamics by mixing temporal information in the feature channels and treating each joint as a token, offering negligible computational cost gains in the sequence length.\nIt is also a unified and flexible representation that can accommodate arbitrary-length sequences (i.e., a single frame and variable-length videos).\nThe proposed GraphMLP is evaluated on two challenging datasets, Human3.6M [16 ###reference_b16###] and MPI-INF-3DHP [36 ###reference_b36###].\nExtensive experiments show the effectiveness and generalization ability of our approach, which advances the state-of-the-art performance for estimating 3D human poses from a single image.\nIts performance surpasses MGCN [61 ###reference_b61###] by 1.4 in mean per joint position error (MPJPE) on the Human3.6M dataset.\nBesides, as shown in Fig. 
1 ###reference_###, it brings clear 6.6 and 6.9 improvements in MPJPE for the MLP model [45 ###reference_b45###] and the GCN model [3 ###reference_b3###] on the MPI-INF-3DHP dataset, respectively.\nSurprisingly, compared to video pose Transformer (e.g., PoseFormer [59 ###reference_b59###]), even with fewer computational costs, our MLP-Like architecture achieves better performance.\nOverall, our main contributions are summarized as follows:\nWe present, to the best of our knowledge, the first MLP-Like architecture called GraphMLP for 3D human pose estimation.\nIt combines the advantages of modern MLPs and GCNs, including globality, locality, and connectivity.\nThe novel SG-MLP and CG-MLP blocks are proposed to encode the graph structure of human bodies within MLPs to obtain domain-specific knowledge about the human body while enabling the model to capture both local and global interactions.\nA simple and efficient video representation is proposed to extend our GraphMLP to the video domain flexibly.\nThis representation enables the model to effectively process arbitrary-length sequences with negligible computational cost gains.\nExtensive experiments demonstrate the effectiveness and generalization ability of the proposed GraphMLP and show new state-of-the-art results on two challenging datasets, i.e., Human3.6M [16 ###reference_b16###] and MPI-INF-3DHP [36 ###reference_b36###]." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "3D Human Pose Estimation.\nThere are mainly two categories to estimate 3D human poses.\nThe first category of methods directly regresses 3D human joints from RGB images [22 ###reference_b22###, 38 ###reference_b38###, 40 ###reference_b40###, 12 ###reference_b12###].\nThe second category is the 2D-to-3D pose lifting method [34 ###reference_b34###, 39 ###reference_b39###, 26 ###reference_b26###], which employs an off-the-shelf 2D pose detection as the front end and designs a 2D-to-3D lifting network using detected 2D poses as input.\nThis lifting method can achieve state-of-the-art performance and has become the mainstream method due to its efficiency and effectiveness.\nFor example, FCN [34 ###reference_b34###] shows that 3D poses can be regressed simply and effectively from 2D keypoints with fully-connected networks.\nTCN [39 ###reference_b39###] extends the FCN to video by utilizing temporal convolutional networks to exploit temporal information from 2D pose sequences.\nLiu et al. 
[32 ###reference_b32###] incorporate the attention mechanism into TCN to enhance the modeling of long-range temporal relationships across frames.\nSRNet [52 ###reference_b52###] proposes a split-and-recombine network that splits the human body joints into multiple local groups and recombines them with a low-dimensional global context.\nPoSynDA [30 ###reference_b30###] uses domain adaptation through multi-hypothesis pose synthesis for 3D human pose estimation.\nSince the physical skeleton topology can form a graph structure, recent progress has focused on employing graph convolutional networks (GCNs) to address the 2D-to-3D lifting problem.\nLCN [9 ###reference_b9###] introduces a locally connected network to improve\nthe representation capability of GCN.\nSemGCN [56 ###reference_b56###] allows the model to learn the semantic relationships among the human joints.\nMGCN [61 ###reference_b61###] improves SemGCN by introducing a weight modulation and an affinity modulation.\nTransformers in Vision.\nRecently, Transformer-based methods achieve excellent results on various computer vision tasks, such as image classification [10 ###reference_b10###, 33 ###reference_b33###, 49 ###reference_b49###], object detection [4 ###reference_b4###, 60 ###reference_b60###, 58 ###reference_b58###], and pose estimation [59 ###reference_b59###, 5 ###reference_b5###, 25 ###reference_b25###].\nThe seminal work of ViT [10 ###reference_b10###] divides an image into patches and uses a pure Transformer encoder to extract visual features.\nPoseFormer [59 ###reference_b59###] utilizes a pure Transformer-based architecture to model spatial and temporal relationships from videos.\nStrided Transformer [23 ###reference_b23###] incorporates strided convolutions into Transformers to aggregate information from local contexts for video-based 3D human pose estimation.\nHDFormer [5 ###reference_b5###] proposes a High-order Directed Transformer to utilize high-order information on a directed skeleton graph based on Transformer.\nRTPCA [20 ###reference_b20###] introduces a temporal pyramidal compression-and-amplification design for enhancing temporal modeling in 3D human pose estimation.\nMesh Graphormer [28 ###reference_b28###] combines GCNs and attention layers in a serial order to capture local and global dependencies for human mesh reconstruction.\nUnlike [28 ###reference_b28###], we mainly investigate how to combine more efficient architectures (i.e., modern MLPs) and GCNs to construct a stronger architecture for 3D human pose estimation and adopt a parallel manner to make it possible to model local and global information at the same time.\nMoreover, different from previous video-based methods [59 ###reference_b59###, 23 ###reference_b23###, 24 ###reference_b24###] that treat each frame as a token for temporal modeling, we mix features across all frames and maintain each joint as a token, which makes the network to be economical and easy to train.\nMLPs in Vision.\nModern MLP models are proposed to reduce the inductive bias and computational cost by replacing the complex self-attentions of Transformers with spatial-wise linear layers [27 ###reference_b27###, 29 ###reference_b29###, 42 ###reference_b42###].\nMLP-Mixer [45 ###reference_b45###] firstly proposes an MLP-Like model, which is a simple network architecture containing only pure MLP layers.\nCompared with FCN [34 ###reference_b34###] (i.e., conventional MLPs), this architecture introduces some modern designs, e.g., layer normalization (LN) [2 ###reference_b2###], GELU 
[14 ###reference_b14###], mixing spatial information.\nMoreover, ResMLP [46 ###reference_b46###] proposes a purely MLP architecture with the Affine transformation.\nCycleMLP [6 ###reference_b6###] proposes a cycle fully-connected layer to aggregate spatial context information and deal with variable\ninput image scales.\nHowever, these modern MLP models have not yet been applied to 3D human pose estimation.\nInspired by their successes in vision, we first attempt to explore how MLP-Like architectures can be used for 3D human pose estimation in non-Euclidean skeleton data.\nThe difference between our approach and the existing MLPs is that we introduce the inductive bias of the physical skeleton topology by combining MLP models with GCNs, providing more physically plausible and accurate estimations.\nWe further investigate applying MLP-Like architecture in video and design an efficient video representation, which is seldom studied." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Proposed GraphMLP", + "text": "Fig. 2 ###reference_### illustrates the overall architecture of the proposed GraphMLP.\nOur approach takes 2D joint locations estimated by an off-the-shelf 2D pose detector as input and outputs predicted 3D poses , where is the number of joints.\nThe proposed GraphMLP architecture consists of a skeleton embedding module, a stack of identical GraphMLP layers, and a prediction head module.\nThe core operation of GraphMLP architecture is the GraphMLP layer, each of which has two parts: a spatial graph MLP (SG-MLP) block and a channel graph MLP (CG-MLP) block.\nOur GraphMLP has a similar architecture to the original MLP-Mixer [45 ###reference_b45###], but we incorporate graph convolutional networks (GCNs) into the model to meet the domain-specific requirement of the 3D human pose estimation and learn the local and global interactions of human body joints.\n###figure_3###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Preliminary", + "text": "In this subsection, we briefly introduce the preliminaries in GCNs and modern MLPs." + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.1 Graph Convolutional Networks", + "text": "Let denotes a graph where is a set of nodes and is the adjacency matrix encoding the edges between the nodes.\nGiven a layer feature with dimensions, a generic GCN [18 ###reference_b18###] layer, which is used to aggregate features from neighboring nodes, can be formulated as:\nwhere is the learnable weight matrix and is the adjacency matrix with added self-connections.\n is the identity matrix, is a diagonal matrix, and .\nThe design principle of the GCN block is similar to [3 ###reference_b3###], but we adopt a simplified single-frame version that contains only one graph layer.\nFor the GCN blocks of our GraphMLP, the nodes in the graph denote the joint locations of the human body in Fig. 3 ###reference_### (a), and the adjacency matrix represents the bone connections between every two joints for node information passing in Fig. 3 ###reference_### (b), e.g., the rectangle of 1st row and 2nd column denotes the connection between joint 0 and joint 1.\nIn addition, the different types of bone connections use different kernel weights following [3 ###reference_b3###]." 
+ }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "3.1.2 Modern MLPs", + "text": "MLP-Mixer is the first modern MLP model proposed in [45 ###reference_b45###].\nIt is a simple and attention-free architecture that mainly consists of a spatial MLP and a channel MLP, as illustrated in Fig. 4 ###reference_### (a).\nThe spatial MLP aims to transpose the spatial axis and channel axis of tokens to mix spatial information. Then the channel MLP processes tokens in the channel dimension to mix channel information.\nLet be an input feature, an MLP-Mixer layer can be calculated as:\nwhere is layer normalization (LN) [2 ###reference_b2###] and is the matrix transposition.\nBoth spatial and channel MLPs contain two linear layers and a GELU [14 ###reference_b14###] non-linearity in between.\nWe adopt this MLP model as our baseline, which is similar to [45 ###reference_b45###], but we transpose the tokens before LN in the spatial MLP (i.e., normalize tokens along the spatial dimension).\nHowever, such a simple MLP model neglects to extract fine-grained local details and lacks prior knowledge about the human configurations, which are perhaps the bottlenecks restricting the representation ability of MLP-Like architectures for learning skeletal representations." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Network Architecture", + "text": "In this work, we present a novel GraphMLP built upon the MLP-Mixer described in [45 ###reference_b45###] to overcome the aforementioned limitations of existing MLP models.\nBelow we elaborate on each module used in GraphMLP and provide its detailed implementations." + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Skeleton Embedding", + "text": "Raw input data are mapped to latent space via the skeleton embedding module.\nGiven the input 2D pose with body joints, we treat each joint as an input token.\nThese tokens are projected to the high-dimension token feature by a linear layer, where is the hidden size.\n###figure_4###" + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2 GraphMLP Layer", + "text": "The existing MLP simply connects all nodes but does not take advantage of graph structures, making it less effective in handling graph-structured data.\nTo tackle this issue, we introduce the GraphMLP layer that unifies globality, locality, and connectivity in a single layer.\nCompared with the original MLP-Mixer in Fig. 4 ###reference_### (a), the main difference of our GraphMLP layer (Fig. 4 ###reference_### (b)) is that we utilize GCNs for local feature communication.\nThis modification retains the domain-specific knowledge of human body configurations, which induces an inductive bias enabling the GraphMLP to perform very well in skeletal representation learning.\nSpecifically, our GraphMLP layer is composed of an SG-MLP and a CG-MLP.\nThe SG-MLP and CG-MLP are built by injecting GCNs into the spatial MLP and channel MLP (mentioned in Sec. 
3.1 ###reference_###), respectively.\nTo be more specific, the SG-MLP contains a spatial MLP block and a GCN block.\nThese blocks process token features in parallel, where the spatial MLP extracts features among tokens with a global receptive field, and the GCN block focuses on aggregating local information between neighboring joints.\nThe CG-MLP has a similar architecture to SG-MLP but replaces the spatial MLP with the channel MLP and has no matrix transposition.\nBased on the above description, the MLP layers in Eq. (2 ###reference_###) and Eq. (3 ###reference_###) are modified to process tokens as:\nwhere denotes the GCN block, is the index of GraphMLP layers.\nHere and are the output features of the SG-MLP and the CG-MLP for block , respectively." + }, + { + "section_id": "3.2.3", + "parent_section_id": "3.2", + "section_name": "3.2.3 Prediction Head", + "text": "Different from [10 ###reference_b10###, 45 ###reference_b45###] that use a classifier head to do classification, our prediction head performs regression with a linear layer.\nIt is applied on the extracted features of the last GraphMLP layer to predict the final 3D pose ." + }, + { + "section_id": "3.2.4", + "parent_section_id": "3.2", + "section_name": "3.2.4 Loss Function", + "text": "To train our GraphMLP, we apply an -norm loss to calculate the difference between prediction and ground truth.\nThe model is trained in an end-to-end fashion, and the -norm loss is defined as follows:\nwhere and are the predicted and ground truth 3D coordinates of joint , respectively.\n###figure_5###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Extension in the Video Domain", + "text": "To extend our GraphMLP for capturing temporal information, we introduce a simple video representation that changes the skeleton embedding module in the original architecture.\nSpecifically, given a 2D pose sequence with frames and joints, we first concatenate features between the coordinates of - axis and all frames for each joint into , and then fed it into a linear layer to map the tokens and get .\nSubsequently, each joint is treated as an input token and fed into the GraphMLP layers and prediction head module to output the 3D pose of the center frame .\nThese processes of GraphMLP in the video domain are illustrated in Figure 5 ###reference_###.\nThis is also a unified and flexible representation strategy that can process arbitrary-length sequences, e.g., a single frame with , and variable-length videos with .\nInterestingly, we find that using a single linear layer to encode temporal information can achieve competitive performance without explicitly temporal modeling by treating each frame as a token.\nMore importantly, this representation is efficient since the amount of is small (e.g., 17 joints).\nThe increased computational costs from single frame to video sequence are only in one linear layer that weights are with linear computational complexity to sequence length, which can be neglected.\nMeanwhile, the computational overhead and the number of parameters in the GraphMLP layers and the prediction head module are the same for different input sequences." 
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "In this section, we first introduce experimental settings and implementation details for evaluation.\nThen, we compare the proposed GraphMLP with state-of-the-art methods.\nWe also conduct detailed ablation studies on the importance of designs in our proposed approach." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Datasets and Evaluation Metrics", + "text": "Human3.6M. Human3.6M [16 ###reference_b16###] is the largest benchmark for 3D human pose estimation.\nIt contains 3.6 million video frames captured by a motion capture system in an indoor environment, where 11 professional actors perform 15 actions such as greeting, phoning, and sitting.\nFollowing previous works [34 ###reference_b34###, 51 ###reference_b51###, 53 ###reference_b53###], our model is trained on five subjects (S1, S5, S6, S7, S8) and tested on two subjects (S9, S11).\nWe report our performance using two evaluation metrics.\nOne is the mean per joint position error (MPJPE), referred to as Protocol #1, which calculates the mean Euclidean distance in millimeters between the predicted and the ground truth joint coordinates.\nThe other is the PA-MPJPE that measures the MPJPE after Procrustes analysis (PA) [11 ###reference_b11###] and is referred to as Protocol #2.\nMPI-INF-3DHP.\nMPI-INF-3DHP [36 ###reference_b36###] is a large-scale 3D human pose dataset containing both indoor and outdoor scenes.\nIts test set consists of three different scenes: studio with green screen (GS), studio without green screen (noGS), and outdoor scene (Outdoor).\nFollowing previous works [9 ###reference_b9###, 53 ###reference_b53###, 61 ###reference_b61###], we use the Percentage of Correct Keypoint (PCK) with the threshold of 150 and Area Under Curve (AUC) for a range of PCK thresholds as evaluation metrics.\nTo verify the generalization ability of our approach, we directly apply the model trained on Human3.6M to the test set of this dataset." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Implementation Details", + "text": "In our implementation, the proposed GraphMLP is stacked by GraphMLP layers with hidden size , the MLP dimension of CG-MLP, i.e., , and the MLP dimension of SG-MLP, i.e., .\nThe whole framework is trained in an end-to-end fashion from scratch on a single NVIDIA RTX 2080 Ti GPU.\nThe learning rate starts from 0.001 with a decay factor of 0.95 utilized in each epoch and 0.5 utilized per 5 epochs.\nWe follow the generic data augmentation (horizontal flip augmentation) in [39 ###reference_b39###, 3 ###reference_b3###, 61 ###reference_b61###].\nFollowing [51 ###reference_b51###, 61 ###reference_b61###, 53 ###reference_b53###], we use 2D joints detected by cascaded pyramid network (CPN) [8 ###reference_b8###] for Human3.6M and 2D joints provided by the dataset for MPI-INF-3DHP." 
+ }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Comparison with State-of-the-Art Methods", + "text": "Comparison with Single-frame Methods.\nTable 1 ###reference_### reports the performance comparison between our GraphMLP and previous state-of-the-art methods that take a single frame as input on Human3.6M.\nIt can be seen that our approach reaches 49.2 in MPJPE and 38.6 in PA-MPJPE, which outperforms previous methods.\nNote that some works [3 ###reference_b3###, 61 ###reference_b61###, 13 ###reference_b13###] adopt a pose refinement module to boost the performance further.\nCompared with them, GraphMLP achieves lower MPJPE (48.0 ), surpassing MGCN [61 ###reference_b61###] by a large margin of 1.4 error reduction (relative 3% improvements).\nDue to the uncertainty of 2D detections, we also report results using ground truth 2D keypoints as input to explore the upper bound of the proposed approach.\nAs shown in Table 2 ###reference_###, our GraphMLP obtains substantially better performance when given precise 2D joint information and attains state-of-the-art performance, which indicates its effectiveness.\nTable 3 ###reference_### further compares our GraphMLP against previous state-of-the-art single-frame methods on cross-dataset scenarios.\nWe only train our model on the Human3.6M dataset and test it on the MPI-INF-3DHP dataset.\nThe results show that our approach obtains the best results in all scenes and all metrics, consistently surpassing other methods.\nThis verifies the strong generalization ability of our approach to unseen scenarios.\nComparison with Video-based Methods.\nAs shown in Table 4 ###reference_###, our method achieves outstanding performance against video-based methods in both CPN and GT inputs.\nThe proposed method surpasses our baseline model (i.e., MLP-Mixer [45 ###reference_b45###]) by a large margin of 4.6 (10% improvements) with CPN inputs and 4.9 (14% improvements) with GT inputs.\nThese results further demonstrate the effectiveness of our GraphMLP, which combines MLPs and GCNs to learn better skeleton representations.\nCompared with the most related work, Poseformer [59 ###reference_b59###], a self-attention-based architecture, our GraphMLP, such a self-attention free architecture, improves the results from 31.3 to 30.3 with GT inputs (3% improvements).\nBesides, our simple video representation, which uses a single linear layer to encode temporal information instead of being designed explicitly for temporal enhancement like previous works [39 ###reference_b39###, 7 ###reference_b7###, 59 ###reference_b59###], is also capable of performing well.\nThis indicates that our method can alleviate the issue of video redundancy by compressing the video information into a single vector, leading to impressive results.\nTable 5 ###reference_### further reports the comparison of parameters, FLOPs, and MPJPE with PoseFormer [59 ###reference_b59###], MixSTE [54 ###reference_b54###], STCFormer [44 ###reference_b44###], and MotionAGFormer [35 ###reference_b35###] in different input frames.\nSurprisingly, compared with PoseFormer, our method requires only 22% FLOPs (356M vs. 1625M) while achieving better performance.\nAlthough MixSTE, STCFormer, and MotionAGFormer achieve better performance with the 243-frame model, they respectively require (356M vs. 277248M), (356M vs. 156215M), and (356M vs. 
155634M) more FLOPs than ours, which leads to a time-consuming training process and difficulties in deployment.\nNote that our method offers negligible FLOPs gains in the sequence length, which allows it easy to deploy in real-time applications and has great potential for better results with longer sequences.\nThese results demonstrate that our GraphMLP in video reaches competitive performance with fewer computational costs and can serve as a strong baseline for video-based 3D human pose estimation." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "The large-scale ablation studies with 2D detected inputs on the Human3.6M dataset are conducted to investigate the effectiveness of our model (using the single-frame model).\nModel Configurations.\nWe start our ablation studies by exploring the GraphMLP on different hyper-parameters.\nThe results are shown in Table 6 ###reference_###.\nIt can be observed that using the expanding ratio of 2 (, ) works better than the ratio value of 4 which is common in vision Transformers and MLPs.\nIncreasing or reducing the number of GraphMLP layers hurts performance while using performs best.\nThe hidden size is important to determine the modeling ability.\nWhile increasing the from 128 to 512 (keeping the same MLP ratios), the MPJPE decreases from 50.2 to 49.2 .\nMeanwhile, the number of parameters increases from 0.60M to 9.49M.\nThe performance saturates when surpasses 512.\nTherefore, the optimal hyper-parameters for our model are , , , and , which are different from the original setting of MLP-Mixer [45 ###reference_b45###].\nThe reason for this may be the gap between vision and skeleton data, where the Human3.6M dataset is not diverse enough to train a large GraphMLP model.\n###table_1### Input 2D Detectors.\nA high-performance 2D detector is vital in achieving accurate 3D pose estimation.\nTable 7 ###reference_### reports the performance of our model with ground truth 2D joints, and detected 2D joints from Stack Hourglass (SH) [37 ###reference_b37###], Detectron [39 ###reference_b39###], CPN [8 ###reference_b8###], and HRNet [43 ###reference_b43###].\nWe can observe that our approach can produce more accurate results with a better 2D detector and is effective on different 2D estimators.\n###figure_6### Network Architectures.\nTo clearly demonstrate the advantage of the proposed GraphMLP, we compare our approach with various baseline architectures.\nThe \u2018Graph-Mixer\u2019 is constructed by replacing the linear layers in MLP-Mixer with GCN layers, and the \u2018Transformer-GCN\u2019 is built by replacing the spatial MLPs in GraphMLP with multi-head self-attention blocks and adding a position embedding module.\nTo ensure a consistent and fair evaluation, the parameters of these architectures (e.g., the number of layers, hidden size) remain the same.\nAs shown in Table 8 ###reference_###, our proposed approach consistently surpasses all other architectures.\nFor example, our approach can improve the GCN-based model [3 ###reference_b3###] from 51.3 to 49.2 , resulting in relative 4.1% improvements.\nIt\u2019s worth noting that the MLP-Like model (\u2018MLP-Mixer\u2019, \u2018GraphMLP\u2019) outperforms the Transformer-based model (\u2018Transformer\u2019, \u2018Transformer-GCN\u2019).\nThis may be because a small number of tokens (e.g., 17 joints) are less effective for self-attention in learning long-range dependencies.\nOverall, these results confirm that GraphMLP can serve as a new and strong 
baseline for 3D human pose estimation.\nThe implementation details of these network architectures can be found in the supplementary.\nNetwork Design Options.\nOur approach allows the model to learn strong structural priors of human joints by injecting GCNs into the baseline model (i.e., MLP-Mixer [45 ###reference_b45###]).\nWe also explore the different combinations of MLPs and GCNs to find an optimal architecture.\nAs shown in Table 9 ###reference_###, the location matters of GCN are studied by five different designs:\n(i) The GCN is placed before spatial MLP.\n(ii) The GCN is placed after spatial MLP.\n(iii) The GCN is placed after channel MLP.\n(iv) The GCN and MLP are in parallel but using a spatial GCN block (process tokens in spatial dimension) in SG-MLP.\n(v) The GCN and MLP are in parallel.\nThe results show that all these designs can help the model produce more accurate 3D poses, and using GCN and MLP in parallel achieves the best estimation accuracy.\nThe illustrations of these design options can be found in the supplementary.\nModel Components.\nWe also investigate the effectiveness of each component in our design.\nIn Table 10 ###reference_###, the first row corresponds to the baseline model (i.e., MLP-Mixer [45 ###reference_b45###]) that does not use any GCNs in the model.\nThe MPJPE is 52.0 .\nThe rest of the rows show the results of replacing its spatial MLP with SG-MLP or channel MLP with CG-MLP by adding a GCN block into the baseline.\nIt can be found that using our SG-MLP or CG-MLP can improve performance (50.6 and 50.5 respectively).\nWhen enabling both SG-MLP and CG-MLP, GraphMLP improves the performance over the baseline by a clear margin of 2.8 (relatively 5.4% improvements), indicating that the proposed components are mutually reinforced to produce more accurate 3D poses.\nThese results validate the effectiveness of our motivation: combining modern MLPs and GCNs in a unified architecture for better 3D human pose estimation.\n###figure_7###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Qualitative Results", + "text": "Fig. 6 ###reference_### presents the qualitative comparison among the proposed GraphMLP, MLP-Mixer [45 ###reference_b45###], and GCN [3 ###reference_b3###] on the Human3.6M dataset.\nFurthermore, Fig. 7 ###reference_### presents the qualitative comparison among our GraphMLP and three other methods, namely MLP-Mixer [45 ###reference_b45###], GCN [3 ###reference_b3###], and MGCN [61 ###reference_b61###], on more challenging in-the-wild images.\nNote that these actions from in-the-wild images are rare or absent from the training set of Human3.6M.\nGraphMLP, benefiting from its globality, locality, and connectivity, performs better and is able to predict accurate and plausible 3D poses.\nIt indicates both the effectiveness and generalization ability of our approach.\nHowever, there are still some failure cases, where our approach fails to produce accurate 3D human poses due to large 2D detection error, half body, rare poses, and heavy occlusion, as shown in Fig. 8 ###reference_###.\nMore qualitative results can be found in the supplementary." 
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion and Future Works", + "text": "In this paper, we propose a new graph-reinforced MLP-Like architecture, termed GraphMLP, which represents the first use of modern MLP dedicated to 3D human pose estimation.\nOur GraphMLP inherits the advantages of both MLPs and GCNs, making it a global-local-graphical unified architecture without self-attention.\nExtensive experiments show that the proposed GraphMLP achieves state-of-the-art performance and can serve as a simple yet effective modern baseline for 3D human pose estimation in both single frame and video sequence.\nWe also show that complex temporal dynamics can be effectively modeled in a simple way without explicitly temporal modeling, and the proposed GraphMLP in video reaches competitive results with fewer computational costs.\nAlthough the proposed approach has shown promising results, its performance is still limited by the quality of available datasets.\nHigh-quality 3D annotations rely on motion capture systems and are challenging and expensive to obtain.\nThe most commonly used dataset for 3D human pose estimation, Human3.6M, lacks sufficient variations in terms of human poses, environments, and activities, which leads to models that do not generalize well to real-world scenarios.\nDomain adaptation and synthetic data are potential directions to improve the generalization ability of our approach.\nMoreover, since the MLPs and GCNs in our GraphMLP are straightforward, we look forward to incorporating more powerful MLPs or GCNs to further improve performance.\n###figure_8###" + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Multi-Layer Perceptrons", + "text": "In Sec. 3.1 ###reference_### of our main manuscript, we give a brief description of the MLP-Mixer layer [45 ###reference_b45###] which is defined as below:\nIf considering more details about the spatial and channel MLPs, Eq. (7 ###reference_###) and Eq. (8 ###reference_###) can be further defined as:\nwhere is the GELU activation function [14 ###reference_b14###].\n and are the weights of two linear layers in the .\n and are the weights of two linear layers in the .\nFor GraphMLP in video, LN is additionally applied after every fully-connected layer in , Eq. (9 ###reference_###) is modified to:" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Additional Implementation Details", + "text": "In Sec. C ###reference_### of our main manuscript, we conduct extensive ablation studies on different network architectures and model variants of our GraphMLP.\nHere, we provide implementation details of these models.\nNetwork Architectures.\nIn Table 8 ###reference_### of our main paper, we report the results of various network architectures.\nHere, we provide illustrations in Fig. 
9 ###reference_### and implementation details as follows:\nFCN: FCN [34 ###reference_b34###] is a conventional MLP whose building block contains a linear layer, followed by batch normalization, dropout, and a ReLU activation.\nWe follow their original implementation and use their code [1 ###reference_b1###] to report the performance.\nGCN: We remove the spatial MLP and the channel MLP in our SG-MLP and CG-MLP, respectively.\nTransformer: We build it by using a standard Transformer encoder which is the same as ViT [10 ###reference_b10###].\nMLP-Mixer: It is our baseline model that has the same architecture as [45 ###reference_b45###]. We build it by replacing multi-head attention blocks with spatial MLPs and removing the position embedding module in the Transformer.\nMesh Graphormer: Mesh Graphormer [28 ###reference_b28###] is the most relevant study to our approach that focuses on combining self-attentions and GCNs in a Transformer model for mesh reconstruction.\nInstead, our GraphMLP focuses on combining modern MLPs and GCNs to construct a stronger architecture for 3D human pose estimation.\nWe follow their design to construct a model by adding a GCN block after the multi-head attention in the Transformer.\nGraph-Mixer: We replace linear layers in MLP-Mixer with GCN layers.\nTransformer-GCN: We replace spatial MLPs in GraphMLP with multi-head self-attention blocks and add a position embedding module before Transformer-GCN layers.\nGraphMLP: It is our proposed approach. Please refer to Fig. 2 ###reference_### of our main paper.\nNetwork Design Options.\nIn Fig. 10 ###reference_###, we graphically illustrate five different design options of the GraphMLP layer as mentioned in Table 9 ###reference_### of our main paper.\nFig. 10 ###reference_### (e) shows that we adopt the design of GCN and MLP in parallel but use a spatial GCN block in SG-MLP.\nThe spatial GCN block processes tokens in the spatial dimension, which can be calculated as:\nModel Components.\nIn Table 10 ###reference_### of our main paper, we investigate the effectiveness of each component in our design.\nHere, we provide the illustrations of these model variants in Fig. 11 ###reference_###." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Additional Ablation Studies", + "text": "Transposition Design Options.\nAs mentioned in Sec. 3.1 ###reference_### of our main manuscript, we transpose the tokens before LN in the spatial MLP, and therefore the LN normalizes tokens along the spatial dimension.\nHere, we investigate the influence of transposition design options in Table 11 ###reference_###.\nThe \u2018Transposition Before LN\u2019 can be formulated as:\nThe \u2018Transposition After LN\u2019 can be written as:\nFrom Table 11 ###reference_###, the results show that performing transposition before LN brings more benefits in both MLP-Mixer and our GraphMLP models.\nNote that it is different from the original implementation of MLP-Mixer, which uses transposition after LN.\n###figure_9###" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Additional Qualitative Results", + "text": "Fig. 
12 ###reference_### shows qualitative results of the proposed GraphMLP on Human3.6M and MPI-INF-3DHP datasets.\nThe Human3.6M is an indoor dataset (top three rows), and the test set of MPI-INF-3DHP contains three different scenes: studio with green screen (GS, fourth row), studio without green screen (noGS, fifth row), and outdoor scene (Outdoor, sixth row).\nMoreover, Fig. 13 ###reference_### shows qualitative results on challenging in-the-wild images.\nWe can observe that our approach is able to predict reliable and plausible 3D poses in these challenging cases.\n###figure_10### ###figure_11### ###figure_12### ###figure_13###" + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: \nQuantitative comparison with state-of-the-art single-frame methods on Human3.6M under Protocol #1 and Protocol #2.\nDetected 2D keypoints are used as input.\n - adopts the same refinement module as\u00a0[3, 61, 13].\n
\n
Protocol #1 | Dir. | Disc | Eat | Greet | Phone | Photo | Pose | Purch. | Sit | SitD. | Smoke | Wait | WalkD. | Walk | WalkT. | Avg.
FCN [34] ICCV'17 | 51.8 | 56.2 | 58.1 | 59.0 | 69.5 | 78.4 | 55.2 | 58.1 | 74.0 | 94.6 | 62.3 | 59.1 | 65.1 | 49.5 | 52.4 | 62.9
TCN [39] CVPR'19 | 47.1 | 50.6 | 49.0 | 51.8 | 53.6 | 61.4 | 49.4 | 47.4 | 59.3 | 67.4 | 52.4 | 49.5 | 55.3 | 39.5 | 42.7 | 51.8
ST-GCN [3] ICCV'19 | 46.5 | 48.8 | 47.6 | 50.9 | 52.9 | 61.3 | 48.3 | 45.8 | 59.2 | 64.4 | 51.2 | 48.4 | 53.5 | 39.2 | 41.2 | 50.6
SRNet [52] ECCV'20 | 44.5 | 48.2 | 47.1 | 47.8 | 51.2 | 56.8 | 50.1 | 45.6 | 59.9 | 66.4 | 52.1 | 45.3 | 54.2 | 39.1 | 40.3 | 49.9
GraphSH [51] CVPR'21 | 45.2 | 49.9 | 47.5 | 50.9 | 54.9 | 66.1 | 48.5 | 46.3 | 59.7 | 71.5 | 51.4 | 48.6 | 53.9 | 39.9 | 44.1 | 51.9
MGCN [61] ICCV'21 | 45.4 | 49.2 | 45.7 | 49.4 | 50.4 | 58.2 | 47.9 | 46.0 | 57.5 | 63.0 | 49.7 | 46.6 | 52.2 | 38.9 | 40.8 | 49.4
GraFormer [57] CVPR'22 | 45.2 | 50.8 | 48.0 | 50.0 | 54.9 | 65.0 | 48.2 | 47.1 | 60.2 | 70.0 | 51.6 | 48.7 | 54.1 | 39.7 | 43.1 | 51.8
UGRN [19] AAAI'23 | 47.9 | 50.0 | 47.1 | 51.3 | 51.2 | 59.5 | 48.7 | 46.9 | 56.0 | 61.9 | 51.1 | 48.9 | 54.3 | 40.0 | 42.9 | 50.5
RS-Net [13] TIP'23 | 44.7 | 48.4 | 44.8 | 49.7 | 49.6 | 58.2 | 47.4 | 44.8 | 55.2 | 59.7 | 49.3 | 46.4 | 51.4 | 38.6 | 40.6 | 48.6
GraphMLP (Ours) | 45.4 | 50.2 | 45.8 | 49.2 | 51.6 | 57.9 | 47.3 | 44.9 | 56.9 | 61.0 | 49.5 | 46.9 | 53.2 | 37.8 | 39.9 | 49.2
GraphMLP (Ours) | 43.7 | 49.3 | 45.5 | 47.9 | 50.5 | 56.0 | 46.3 | 44.1 | 55.9 | 59.0 | 48.4 | 45.7 | 51.2 | 37.1 | 39.1 | 48.0
Protocol #2 | Dir. | Disc | Eat | Greet | Phone | Photo | Pose | Purch. | Sit | SitD. | Smoke | Wait | WalkD. | Walk | WalkT. | Avg.
FCN [34] ICCV'17 | 39.5 | 43.2 | 46.4 | 47.0 | 51.0 | 56.0 | 41.4 | 40.6 | 56.5 | 69.4 | 49.2 | 45.0 | 49.5 | 38.0 | 43.1 | 47.7
TCN [39] CVPR'19 | 36.0 | 38.7 | 38.0 | 41.7 | 40.1 | 45.9 | 37.1 | 35.4 | 46.8 | 53.4 | 41.4 | 36.9 | 43.1 | 30.3 | 34.8 | 40.0
ST-GCN [3] ICCV'19 | 36.8 | 38.7 | 38.2 | 41.7 | 40.7 | 46.8 | 37.9 | 35.6 | 47.6 | 51.7 | 41.3 | 36.8 | 42.7 | 31.0 | 34.7 | 40.2
SRNet [52] ECCV'20 | 35.8 | 39.2 | 36.6 | 36.9 | 39.8 | 45.1 | 38.4 | 36.9 | 47.7 | 54.4 | 38.6 | 36.3 | 39.4 | 30.3 | 35.4 | 39.4
MGCN [61] ICCV'21 | 35.7 | 38.6 | 36.3 | 40.5 | 39.2 | 44.5 | 37.0 | 35.4 | 46.4 | 51.2 | 40.5 | 35.6 | 41.7 | 30.7 | 33.9 | 39.1
SGNN [53] ICCV'21 | 33.9 | 37.2 | 36.8 | 38.1 | 38.7 | 43.5 | 37.8 | 35.0 | 47.2 | 53.8 | 40.7 | 38.3 | 41.8 | 30.1 | 31.4 | 39.0
RS-Net [13] TIP'23 | 35.5 | 38.3 | 36.1 | 40.5 | 39.2 | 44.8 | 37.1 | 34.9 | 45.0 | 49.1 | 40.2 | 35.4 | 41.5 | 31.0 | 34.3 | 38.9
GraphMLP (Ours) | 35.0 | 38.4 | 36.6 | 39.7 | 40.1 | 43.9 | 35.9 | 34.1 | 45.9 | 48.6 | 40.0 | 35.3 | 41.6 | 30.0 | 33.3 | 38.6
GraphMLP (Ours) | 35.1 | 38.2 | 36.5 | 39.8 | 39.8 | 43.5 | 35.7 | 34.0 | 45.6 | 47.6 | 39.8 | 35.1 | 41.1 | 30.0 | 33.4 | 38.4
\n
\n
", + "capture": "Table 1: \nQuantitative comparison with state-of-the-art single-frame methods on Human3.6M under Protocol #1 and Protocol #2.\nDetected 2D keypoints are used as input.\n - adopts the same refinement module as\u00a0[3, 61, 13].\n" + }, + "2": { + "table_html": "
\n
Table 2: \nQuantitative comparison with state-of-the-art single-frame methods on Human3.6M under Protocol #1.\nGround truth 2D keypoints are used as input.\n
\n
Protocol #1 | Dir. | Disc | Eat | Greet | Phone | Photo | Pose | Purch. | Sit | SitD. | Smoke | Wait | WalkD. | Walk | WalkT. | Avg.
FCN [34] ICCV'17 | 37.7 | 44.4 | 40.3 | 42.1 | 48.2 | 54.9 | 44.4 | 42.1 | 54.6 | 58.0 | 45.1 | 46.4 | 47.6 | 36.4 | 40.4 | 45.5
SemGCN [56] CVPR'19 | 37.8 | 49.4 | 37.6 | 40.9 | 45.1 | 41.4 | 40.1 | 48.3 | 50.1 | 42.2 | 53.5 | 44.3 | 40.5 | 47.3 | 39.0 | 43.8
ST-GCN [3] ICCV'19 | 33.4 | 39.0 | 33.8 | 37.0 | 38.1 | 47.3 | 39.5 | 37.3 | 43.2 | 46.2 | 37.7 | 38.0 | 38.6 | 30.4 | 32.1 | 38.1
SRNet [52] ECCV'20 | 35.9 | 36.7 | 29.3 | 34.5 | 36.0 | 42.8 | 37.7 | 31.7 | 40.1 | 44.3 | 35.8 | 37.2 | 36.2 | 33.7 | 34.0 | 36.4
GraphSH [51] CVPR'21 | 35.8 | 38.1 | 31.0 | 35.3 | 35.8 | 43.2 | 37.3 | 31.7 | 38.4 | 45.5 | 35.4 | 36.7 | 36.8 | 27.9 | 30.7 | 35.8
GraFormer [57] CVPR'22 | 32.0 | 38.0 | 30.4 | 34.4 | 34.7 | 43.3 | 35.2 | 31.4 | 38.0 | 46.2 | 34.2 | 35.7 | 36.1 | 27.4 | 30.6 | 35.2
PHGANet [55] IJCV'23 | 32.4 | 36.5 | 30.1 | 33.3 | 36.3 | 43.5 | 36.1 | 30.5 | 37.5 | 45.3 | 33.8 | 35.1 | 35.3 | 27.5 | 30.2 | 34.9
GraphMLP (Ours) | 32.2 | 38.2 | 29.3 | 33.4 | 33.5 | 38.1 | 38.2 | 31.7 | 37.3 | 38.5 | 34.2 | 36.1 | 35.5 | 28.0 | 29.3 | 34.2
\n
\n
", + "capture": "Table 2: \nQuantitative comparison with state-of-the-art single-frame methods on Human3.6M under Protocol #1.\nGround truth 2D keypoints are used as input.\n" + }, + "3": { + "table_html": "
\n
Table 3: \nPerformance comparison with state-of-the-art single-frame methods on MPI-INF-3DHP.\n
Method | GS | noGS | Outdoor | All PCK | All AUC
FCN [34] ICCV'17 | 49.8 | 42.5 | 31.2 | 42.5 | 17.0
LCN [9] ICCV'19 | 74.8 | 70.8 | 77.3 | 74.0 | 36.7
SGNN [53] ICCV'21 | - | - | 84.6 | 82.1 | 46.2
MGCN [61] ICCV'21 | 86.4 | 86.0 | 85.7 | 86.1 | 53.7
GraFormer [57] CVPR'22 | 80.1 | 77.9 | 74.1 | 79.0 | 43.8
UGRN [19] AAAI'23 | 86.2 | 84.7 | 81.9 | 84.1 | 53.7
RS-Net [13] TIP'23 | - | - | - | 85.6 | 53.2
GraphMLP (Ours) | 87.3 | 87.1 | 86.3 | 87.0 | 54.3
\n
", + "capture": "Table 3: \nPerformance comparison with state-of-the-art single-frame methods on MPI-INF-3DHP.\n" + }, + "4": { + "table_html": "
\n
Table 4: \nQuantitative comparisons with video-based methods on Human3.6M under MPJPE.\nCPN and GT denote the inputs of 2D poses detected by CPN and ground truth 2D poses, respectively.\n
Method | CPN | GT
ST-GCN [3] | 48.8 | 37.2
TCN [39] | 46.8 | 37.8
SRNet [52] | 44.8 | 32.0
PoseFormer [59] | 44.3 | 31.3
Anatomy3D [7] | 44.1 | 32.3
Baseline | 48.4 | 35.2
GraphMLP (Ours) | 43.8 | 30.3
\n
", + "capture": "Table 4: \nQuantitative comparisons with video-based methods on Human3.6M under MPJPE.\nCPN and GT denote the inputs of 2D poses detected by CPN and ground truth 2D poses, respectively.\n" + }, + "5": { + "table_html": "
\n
Table 5: \nComparison of parameters, FLOPs, and MPJPE with PoseFormer\u00a0[59], MixSTE\u00a0[54], STCFormer\u00a0[44], and MotionAGFormer\u00a0[35] in different input frames on Human3.6M.\nFrame per second (FPS) was computed on a single GeForce RTX 3090 GPU.\n
Model | Frames | Param (M) | FLOPs (M) | MPJPE (mm)
PoseFormer [59] | 1 | 9.56 | 20 | 51.3
PoseFormer [59] | 27 | 9.57 | 541 | 47.0
PoseFormer [59] | 81 | 9.60 | 1625 | 44.3
PoseFormer [59] | 243 | 9.69 | 4874 | -
MixSTE [54] | 1 | 33.66 | 1141 | 51.1
MixSTE [54] | 27 | 33.67 | 30805 | 45.1
MixSTE [54] | 81 | 33.70 | 92416 | 42.4
MixSTE [54] | 243 | 33.78 | 277248 | 40.9
STCFormer [44] | 1 | - | - | -
STCFormer [44] | 27 | 4.75 | 4347 | 44.1
STCFormer [44] | 81 | 4.75 | 13041 | 42.0
STCFormer [44] | 243 | 18.93 | 156215 | 40.5
MotionAGFormer [35] | 1 | - | - | -
MotionAGFormer [35] | 27 | 2.24 | 2023 | 45.1
MotionAGFormer [35] | 81 | 4.82 | 13036 | 42.5
MotionAGFormer [35] | 243 | 19.01 | 155634 | 38.4
GraphMLP (Ours) | 1 | 9.49 | 348 | 49.2
GraphMLP (Ours) | 27 | 9.51 | 349 | 45.5
GraphMLP (Ours) | 81 | 9.57 | 351 | 44.5
GraphMLP (Ours) | 243 | 9.73 | 356 | 43.8
\n
", + "capture": "Table 5: \nComparison of parameters, FLOPs, and MPJPE with PoseFormer\u00a0[59], MixSTE\u00a0[54], STCFormer\u00a0[44], and MotionAGFormer\u00a0[35] in different input frames on Human3.6M.\nFrame per second (FPS) was computed on a single GeForce RTX 3090 GPU.\n" + }, + "6": { + "table_html": "
\n
Table 6: \nAblation study on various configurations of our approach.\n is the number of GraphMLP layers, is the hidden size, is the MLP dimension of CG-MLP, and is the MLP dimension of SG-MLP.\n
Layers | Hidden size | CG-MLP dim | SG-MLP dim | Params (M) | MPJPE (mm)
3 | 512 | 1024 | 256 | 9.49 | 49.2
3 | 512 | 1024 | 128 | 9.47 | 49.7
3 | 512 | 2048 | 128 | 12.62 | 49.7
3 | 512 | 2048 | 256 | 12.64 | 49.6
2 | 512 | 1024 | 256 | 6.33 | 50.2
4 | 512 | 1024 | 256 | 12.65 | 50.0
5 | 512 | 1024 | 256 | 15.81 | 50.3
6 | 512 | 1024 | 256 | 18.97 | 50.4
3 | 128 | 256 | 64 | 0.60 | 50.2
3 | 256 | 512 | 128 | 2.38 | 50.1
3 | 384 | 768 | 192 | 5.35 | 49.7
3 | 768 | 1536 | 384 | 21.31 | 49.4
\n
", + "capture": "Table 6: \nAblation study on various configurations of our approach.\n is the number of GraphMLP layers, is the hidden size, is the MLP dimension of CG-MLP, and is the MLP dimension of SG-MLP.\n" + }, + "7": { + "table_html": "
\n
Table 7: \nAblation study on different 2D detectors.\n
Method | 2D Mean Error | MPJPE (mm)
SH [37] | 9.03 | 56.9
Detectron [39] | 7.77 | 54.5
CPN [8] | 6.67 | 49.2
2D Ground Truth | 0 | 34.2
\n
", + "capture": "Table 7: \nAblation study on different 2D detectors.\n" + }, + "8": { + "table_html": "
\n
Table 8: \nAblation study on different network architectures.\n
Method | MPJPE (mm)
FCN [34] | 53.5
GCN [3] | 51.3
Transformer [47] | 52.7
MLP-Mixer [45] | 52.0
Mesh Graphormer [28] | 51.3
Graph-Mixer | 50.4
Transformer-GCN | 51.5
GraphMLP | 49.2
\n
", + "capture": "Table 8: \nAblation study on different network architectures.\n" + }, + "9": { + "table_html": "
\n
Table 9: \nAblation study on different design options of combining MLPs and GCNs.\n\u2217 means using spatial GCN block in SG-MLP.\n
Method | MPJPE (mm)
Baseline | 52.0
GCN Before Spatial MLP | 51.6
GCN After Spatial MLP | 50.6
GCN After Channel MLP | 50.9
GCN and MLP in Parallel\u2217 | 50.2
GCN and MLP in Parallel | 49.2
\n
", + "capture": "Table 9: \nAblation study on different design options of combining MLPs and GCNs.\n\u2217 means using spatial GCN block in SG-MLP.\n" + }, + "10": { + "table_html": "
\n
Table 10: \nAblation study on different components in GraphMLP.\n
Method | SG-MLP | CG-MLP | MPJPE (mm) ↓
Baseline | ✗ | ✗ | 52.0
GraphMLP (SG-MLP) | ✓ | ✗ | 50.6
GraphMLP (CG-MLP) | ✗ | ✓ | 50.5
GraphMLP | ✓ | ✓ | 49.2
\n
", + "capture": "Table 10: \nAblation study on different components in GraphMLP.\n" + }, + "11": { + "table_html": "
\n
Table 11: Ablation study on different design options of transposition.\n
Method | MPJPE (mm) ↓
MLP-Mixer, Transposition After LN | 52.6
MLP-Mixer, Transposition Before LN | 52.0
GraphMLP, Transposition After LN | 49.7
GraphMLP, Transposition Before LN | 49.2
\n
", + "capture": "Table 11: Ablation study on different design options of transposition.\n" + } + }, + "image_paths": { + "1": { + "figure_path": "2206.06420v5_figure_1.png", + "caption": "Figure 1: Performance comparison with MLP-Mixer [45] and GCN [3] on Human3.6M (a) and MPI-INF-3DHP (b) datasets.\nThe proposed GraphMLP absorbs the advantages of modern MLPs and GCNs to effectively learn skeletal representations, consistently outperforming each of them.\nThe evaluation metric is MPJPE (the lower the better).", + "url": "http://arxiv.org/html/2206.06420v5/x1.png" + }, + "2": { + "figure_path": "2206.06420v5_figure_2.png", + "caption": "Figure 2: \nOverview of the proposed GraphMLP architecture.\nThe left illustrates the skeletal structure of the human body.\nThe 2D joint inputs detected by a 2D pose estimator are sparse and graph-structured data.\nGraphMLP treats each 2D keypoint as an input token, linearly embeds each of them through the skeleton embedding, feeds the embedded tokens to GraphMLP layers, and finally performs regression on resulting features to predict the 3D pose via the prediction head.\nEach GraphMLP layer contains one spatial graph MLP (SG-MLP) and one channel graph MLP (CG-MLP).\nFor easy illustration, we show the architecture using a single image as input.", + "url": "http://arxiv.org/html/2206.06420v5/x2.png" + }, + "3": { + "figure_path": "2206.06420v5_figure_3.png", + "caption": "Figure 3: \n(a) The human skeleton graph in physical and symmetrical connections.\n(b) The adjacency matrix used in the GCN blocks of GraphMLP.\nDifferent colors denote the different types of bone connections.", + "url": "http://arxiv.org/html/2206.06420v5/x3.png" + }, + "4": { + "figure_path": "2206.06420v5_figure_4.png", + "caption": "Figure 4: \nComparison of MLP Layers.\n(a) MLP-Mixer Layer [45].\n(b) Our GraphMLP Layer.\nCompared with MLP-Mixer, our GraphMLP incorporates graph structural priors into the MLP model via GCN blocks.\nThe MLPs and GCNs are in a paralleled design to model both local and global interactions.", + "url": "http://arxiv.org/html/2206.06420v5/x4.png" + }, + "5": { + "figure_path": "2206.06420v5_figure_5.png", + "caption": "Figure 5: \nIllustration of the process of GraphMLP in the video domain.", + "url": "http://arxiv.org/html/2206.06420v5/x5.png" + }, + "6": { + "figure_path": "2206.06420v5_figure_6.png", + "caption": "Figure 6: \nQualitative comparison with MLP-Mixer [45] and GCN [3] for reconstructing 3D human poses on Human3.6M dataset.\nRed arrows highlight wrong estimations.", + "url": "http://arxiv.org/html/2206.06420v5/extracted/5869550/figure/dataset.jpg" + }, + "7": { + "figure_path": "2206.06420v5_figure_7.png", + "caption": "Figure 7: \nQualitative comparison with MLP-Mixer [45], GCN [3], and MGCN [61] for reconstructing 3D human poses on challenging in-the-wild images.\nRed arrows highlight wrong estimations.", + "url": "http://arxiv.org/html/2206.06420v5/extracted/5869550/figure/wild.jpg" + }, + "8": { + "figure_path": "2206.06420v5_figure_8.png", + "caption": "Figure 8: \nChallenging scenarios where GraphMLP fails to produce accurate 3D human poses.", + "url": "http://arxiv.org/html/2206.06420v5/extracted/5869550/figure/fail.jpg" + }, + "9": { + "figure_path": "2206.06420v5_figure_9.png", + "caption": "Figure 9: \nDifferent network architectures.", + "url": "http://arxiv.org/html/2206.06420v5/x6.png" + }, + "10": { + "figure_path": "2206.06420v5_figure_10.png", + "caption": "Figure 10: \nDifferent design options of the GraphMLP layer.", + "url": 
"http://arxiv.org/html/2206.06420v5/x7.png" + }, + "11": { + "figure_path": "2206.06420v5_figure_11.png", + "caption": "Figure 11: \nThe model components we have studied for building our proposed GraphMLP.", + "url": "http://arxiv.org/html/2206.06420v5/x8.png" + }, + "12": { + "figure_path": "2206.06420v5_figure_12.png", + "caption": "Figure 12: Visualization results of our approach for reconstructing 3D human poses on Human3.6M dataset (top three rows) and MPI-INF-3DHP dataset (bottom three rows).", + "url": "http://arxiv.org/html/2206.06420v5/extracted/5869550/figure/supp_dataset.jpg" + }, + "13": { + "figure_path": "2206.06420v5_figure_13.png", + "caption": "Figure 13: Visualization results of the proposed GraphMLP for reconstructing 3D human poses on challenging in-the-wild images.", + "url": "http://arxiv.org/html/2206.06420v5/extracted/5869550/figure/supp_wild.jpg" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "https://github.com/weigq/3d_pose_baseline_pytorch/.", + "author": "Code for A Simple Yet Effective Baseline for 3D Human Pose Estimation.", + "venue": null, + "url": null + } + }, + { + "2": { + "title": "Layer normalization.", + "author": "Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton.", + "venue": "arXiv preprint arXiv:1607.06450, 2016.", + "url": null + } + }, + { + "3": { + "title": "Exploiting spatial-temporal relationships for 3D pose estimation\nvia graph convolutional networks.", + "author": "Yujun Cai, Liuhao Ge, Jun Liu, Jianfei Cai, Tat-Jen Cham, Junsong Yuan, and\nNadia Magnenat Thalmann.", + "venue": "In ICCV, pages 2272\u20132281, 2019.", + "url": null + } + }, + { + "4": { + "title": "End-to-end object detection with transformers.", + "author": "Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander\nKirillov, and Sergey Zagoruyko.", + "venue": "In ECCV, pages 213\u2013229, 2020.", + "url": null + } + }, + { + "5": { + "title": "HDFormer: High-order directed transformer for 3D human pose\nestimation.", + "author": "Hanyuan Chen, Jun-Yan He, Wangmeng Xiang, Zhi-Qi Cheng, Wei Liu, Hanbing Liu,\nBin Luo, Yifeng Geng, and Xuansong Xie.", + "venue": "In IJCAI, pages 581\u2013589, 2023.", + "url": null + } + }, + { + "6": { + "title": "CycleMLP: A MLP-like architecture for dense prediction.", + "author": "Shoufa Chen, Enze Xie, Chongjian GE, Runjian Chen, Ding Liang, and Ping Luo.", + "venue": "In ICLR, 2022.", + "url": null + } + }, + { + "7": { + "title": "Anatomy-aware 3D human pose estimation with bone-based pose\ndecomposition.", + "author": "Tianlang Chen, Chen Fang, Xiaohui Shen, Yiheng Zhu, Zhili Chen, and Jiebo Luo.", + "venue": "IEEE TCSVT, 32(1):198\u2013209, 2021.", + "url": null + } + }, + { + "8": { + "title": "Cascaded pyramid network for multi-person pose estimation.", + "author": "Yilun Chen, Zhicheng Wang, Yuxiang Peng, Zhiqiang Zhang, Gang Yu, and Jian Sun.", + "venue": "In CVPR, pages 7103\u20137112, 2018.", + "url": null + } + }, + { + "9": { + "title": "Optimizing network structure for 3D human pose estimation.", + "author": "Hai Ci, Chunyu Wang, Xiaoxuan Ma, and Yizhou Wang.", + "venue": "In ICCV, pages 2262\u20132271, 2019.", + "url": null + } + }, + { + "10": { + "title": "An image is worth 16x16 words: Transformers for image recognition at\nscale.", + "author": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn,\nXiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg\nHeigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby.", + "venue": "In 
ICLR, 2021.", + "url": null + } + }, + { + "11": { + "title": "Generalized procrustes analysis.", + "author": "John C Gower.", + "venue": "Psychometrika, 40(1):33\u201351, 1975.", + "url": null + } + }, + { + "12": { + "title": "Single image based 3D human pose estimation via uncertainty\nlearning.", + "author": "Chuchu Han, Xin Yu, Changxin Gao, Nong Sang, and Yi Yang.", + "venue": "PR, 132:108934, 2022.", + "url": null + } + }, + { + "13": { + "title": "Regular splitting graph network for 3D human pose estimation.", + "author": "Md Tanvir Hassan and A Ben Hamza.", + "venue": "IEEE TIP, 2023.", + "url": null + } + }, + { + "14": { + "title": "Gaussian error linear units (GELUs).", + "author": "Dan Hendrycks and Kevin Gimpel.", + "venue": "arXiv preprint arXiv:1606.08415, 2016.", + "url": null + } + }, + { + "15": { + "title": "Weakly-supervised 3D human pose estimation with cross-view\nU-shaped graph convolutional network.", + "author": "Guoliang Hua, Hong Liu, Wenhao Li, Qian Zhang, Runwei Ding, and Xin Xu.", + "venue": "IEEE TMM, 25:1832\u20131843, 2022.", + "url": null + } + }, + { + "16": { + "title": "Human3.6M: Large scale datasets and predictive methods for 3D\nhuman sensing in natural environments.", + "author": "Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu.", + "venue": "IEEE TPAMI, 36(7):1325\u20131339, 2013.", + "url": null + } + }, + { + "17": { + "title": "MHCanonnet: Multi-hypothesis canonical lifting network for\nself-supervised 3D human pose estimation in the wild video.", + "author": "Hyun-Woo Kim, Gun-Hee Lee, Woo-Jeoung Nam, Kyung-Min Jin, Tae-Kyung Kang,\nGeon-Jun Yang, and Seong-Whan Lee.", + "venue": "PR, page 109908, 2023.", + "url": null + } + }, + { + "18": { + "title": "Semi-supervised classification with graph convolutional networks.", + "author": "Thomas N Kipf and Max Welling.", + "venue": "arXiv preprint arXiv:1609.02907, 2016.", + "url": null + } + }, + { + "19": { + "title": "Pose-oriented transformer with uncertainty-guided refinement for\n2D-to-3D human pose estimation.", + "author": "Han Li, Bowen Shi, Wenrui Dai, Hongwei Zheng, Botao Wang, Yu Sun, Min Guo,\nChenglin Li, Junni Zou, and Hongkai Xiong.", + "venue": "In AAAI, 2023a.", + "url": null + } + }, + { + "20": { + "title": "Refined temporal pyramidal compression-and-amplification transformer\nfor 3D human pose estimation.", + "author": "Hanbing Li, Wangmeng Xiang, Jun-Yan He, Zhi-Qi Cheng, Bin Luo, Yifeng Geng, and\nXuansong Xie.", + "venue": "arXiv preprint arXiv:2309.01365, 2023b.", + "url": null + } + }, + { + "21": { + "title": "Deeper insights into graph convolutional networks for semi-supervised\nlearning.", + "author": "Qimai Li, Zhichao Han, and Xiao-Ming Wu.", + "venue": "In AAAI, 2018.", + "url": null + } + }, + { + "22": { + "title": "3D human pose estimation from monocular images with deep\nconvolutional neural network.", + "author": "Sijin Li and Antoni B Chan.", + "venue": "In ACCV, pages 332\u2013347, 2014.", + "url": null + } + }, + { + "23": { + "title": "Exploiting temporal contexts with strided transformer for 3D human\npose estimation.", + "author": "Wenhao Li, Hong Liu, Runwei Ding, Mengyuan Liu, Pichao Wang, and Wenming Yang.", + "venue": "IEEE TMM, 25:1282\u20131293, 2022a.", + "url": null + } + }, + { + "24": { + "title": "MHFormer: Multi-hypothesis transformer for 3D human pose\nestimation.", + "author": "Wenhao Li, Hong Liu, Hao Tang, Pichao Wang, and Luc Van Gool.", + "venue": "In CVPR, pages 13147\u201313156, 2022b.", + "url": null + } + }, + { + 
"25": { + "title": "Multi-hypothesis representation learning for transformer-based 3D\nhuman pose estimation.", + "author": "Wenhao Li, Hong Liu, Hao Tang, and Pichao Wang.", + "venue": "PR, 141:109631, 2023c.", + "url": null + } + }, + { + "26": { + "title": "Hourglass tokenizer for efficient transformer-based 3D human pose\nestimation.", + "author": "Wenhao Li, Mengyuan Liu, Hong Liu, Pichao Wang, Jialun Cai, and Nicu Sebe.", + "venue": "In CVPR, pages 604\u2013613, 2024.", + "url": null + } + }, + { + "27": { + "title": "AS-MLP: An axial shifted MLP architecture for vision.", + "author": "Dongze Lian, Zehao Yu, Xing Sun, and Shenghua Gao.", + "venue": "In ICLR, 2022.", + "url": null + } + }, + { + "28": { + "title": "Mesh Graphormer.", + "author": "Kevin Lin, Lijuan Wang, and Zicheng Liu.", + "venue": "In ICCV, pages 12939\u201312948, 2021.", + "url": null + } + }, + { + "29": { + "title": "Pay attention to MLPs.", + "author": "Hanxiao Liu, Zihang Dai, David So, and Quoc V Le.", + "venue": "NeurIPS, 34:9204\u20139215, 2021a.", + "url": null + } + }, + { + "30": { + "title": "PoSynDA: Multi-hypothesis pose synthesis domain adaptation for\nrobust 3D human pose estimation.", + "author": "Hanbing Liu, Jun-Yan He, Zhi-Qi Cheng, Wangmeng Xiang, Qize Yang, Wenhao Chai,\nGaoang Wang, Xu Bao, Bin Luo, Yifeng Geng, et al.", + "venue": "In ACM MM, pages 5542\u20135551, 2023.", + "url": null + } + }, + { + "31": { + "title": "A comprehensive study of weight sharing in graph networks for 3D\nhuman pose estimation.", + "author": "Kenkun Liu, Rongqi Ding, Zhiming Zou, Le Wang, and Wei Tang.", + "venue": "In ECCV, pages 318\u2013334, 2020a.", + "url": null + } + }, + { + "32": { + "title": "Attention mechanism exploits temporal contexts: Real-time 3D human\npose reconstruction.", + "author": "Ruixu Liu, Ju Shen, He Wang, Chen Chen, Sen-ching Cheung, and Vijayan Asari.", + "venue": "In CVPR, pages 5064\u20135073, 2020b.", + "url": null + } + }, + { + "33": { + "title": "Swin Transformer: Hierarchical vision transformer using shifted\nwindows.", + "author": "Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and\nBaining Guo.", + "venue": "In ICCV, pages 10012\u201310022, 2021b.", + "url": null + } + }, + { + "34": { + "title": "A simple yet effective baseline for 3D human pose estimation.", + "author": "Julieta Martinez, Rayat Hossain, Javier Romero, and James J Little.", + "venue": "In ICCV, pages 2640\u20132649, 2017.", + "url": null + } + }, + { + "35": { + "title": "MotionAGFormer: Enhancing 3D human pose estimation with a\ntransformer-gcnformer network.", + "author": "Soroush Mehraban, Vida Adeli, and Babak Taati.", + "venue": "In WACV, pages 6920\u20136930, 2024.", + "url": null + } + }, + { + "36": { + "title": "Monocular 3D human pose estimation in the wild using improved CNN\nsupervision.", + "author": "Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko,\nWeipeng Xu, and Christian Theobalt.", + "venue": "In 3DV, pages 506\u2013516, 2017.", + "url": null + } + }, + { + "37": { + "title": "Stacked hourglass networks for human pose estimation.", + "author": "Alejandro Newell, Kaiyu Yang, and Jia Deng.", + "venue": "In ECCV, pages 483\u2013499, 2016.", + "url": null + } + }, + { + "38": { + "title": "Coarse-to-fine volumetric prediction for single-image 3D human\npose.", + "author": "Georgios Pavlakos, Xiaowei Zhou, Konstantinos G Derpanis, and Kostas\nDaniilidis.", + "venue": "In CVPR, pages 7025\u20137034, 2017.", + "url": null + } + }, + { + "39": 
{ + "title": "3D human pose estimation in video with temporal convolutions and\nsemi-supervised training.", + "author": "Dario Pavllo, Christoph Feichtenhofer, David Grangier, and Michael Auli.", + "venue": "In CVPR, pages 7753\u20137762, 2019.", + "url": null + } + }, + { + "40": { + "title": "Weakly-supervised pre-training for 3D human pose estimation via\nperspective knowledge.", + "author": "Zhongwei Qiu, Kai Qiu, Jianlong Fu, and Dongmei Fu.", + "venue": "PR, 139:109497, 2023.", + "url": null + } + }, + { + "41": { + "title": "Cascaded cross MLP-Mixer GANs for cross-view image translation.", + "author": "Bin Ren, Hao Tang, and Nicu Sebe.", + "venue": "In BMVC, 2021.", + "url": null + } + }, + { + "42": { + "title": "Polyp-Mixer: An efficient context-aware MLP-based paradigm for\npolyp segmentation.", + "author": "Jing-Hui Shi, Qing Zhang, Yu-Hao Tang, and Zhong-Qun Zhang.", + "venue": "IEEE TCSVT, 33(1):30\u201342, 2022.", + "url": null + } + }, + { + "43": { + "title": "Deep high-resolution representation learning for human pose\nestimation.", + "author": "Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang.", + "venue": "In CVPR, pages 5693\u20135703, 2019.", + "url": null + } + }, + { + "44": { + "title": "3D human pose estimation with spatio-temporal criss-cross\nattention.", + "author": "Zhenhua Tang, Zhaofan Qiu, Yanbin Hao, Richang Hong, and Ting Yao.", + "venue": "In CVPR, pages 4790\u20134799, 2023.", + "url": null + } + }, + { + "45": { + "title": "MLP-Mixer: An all-MLP architecture for vision.", + "author": "Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua\nZhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers,\nJakob Uszkoreit, et al.", + "venue": "In NeurIPS, pages 24261\u201324272, 2021.", + "url": null + } + }, + { + "46": { + "title": "ResMLP: Feedforward networks for image classification with\ndata-efficient training.", + "author": "Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin\nEl-Nouby, Edouard Grave, Gautier Izacard, Armand Joulin, Gabriel Synnaeve,\nJakob Verbeek, et al.", + "venue": "arXiv preprint arXiv:2105.03404, 2021.", + "url": null + } + }, + { + "47": { + "title": "Attention is all you need.", + "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,\nAidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin.", + "venue": "In NeurIPS, pages 5998\u20136008, 2017.", + "url": null + } + }, + { + "48": { + "title": "Robust estimation of 3D human poses from a single image.", + "author": "Chunyu Wang, Yizhou Wang, Zhouchen Lin, Alan L Yuille, and Wen Gao.", + "venue": "In CVPR, pages 2361\u20132368, 2014.", + "url": null + } + }, + { + "49": { + "title": "AA-trans: Core attention aggregating transformer with information\nentropy selector for fine-grained visual classification.", + "author": "Qi Wang, JianJun Wang, Hongyu Deng, Xue Wu, Yazhou Wang, and Gefei Hao.", + "venue": "PR, 140:109547, 2023.", + "url": null + } + }, + { + "50": { + "title": "View invariant 3D human pose estimation.", + "author": "Guoqiang Wei, Cuiling Lan, Wenjun Zeng, and Zhibo Chen.", + "venue": "IEEE TCSVT, 30(12):4601\u20134610, 2019.", + "url": null + } + }, + { + "51": { + "title": "Graph stacked hourglass networks for 3D human pose estimation.", + "author": "Tianhan Xu and Wataru Takano.", + "venue": "In CVPR, pages 16105\u201316114, 2021.", + "url": null + } + }, + { + "52": { + "title": "SRNet: Improving generalization in 3D human pose estimation with\na split-and-recombine approach.", + 
"author": "Ailing Zeng, Xiao Sun, Fuyang Huang, Minhao Liu, Qiang Xu, and Stephen Lin.", + "venue": "In ECCV, pages 507\u2013523, 2020.", + "url": null + } + }, + { + "53": { + "title": "Learning skeletal graph neural networks for hard 3D pose\nestimation.", + "author": "Ailing Zeng, Xiao Sun, Lei Yang, Nanxuan Zhao, Minhao Liu, and Qiang Xu.", + "venue": "In ICCV, pages 11436\u201311445, 2021.", + "url": null + } + }, + { + "54": { + "title": "MixSTE: Seq2seq mixed spatio-temporal encoder for 3D human pose\nestimation in video.", + "author": "Jinlu Zhang, Zhigang Tu, Jianyu Yang, Yujin Chen, and Junsong Yuan.", + "venue": "In CVPR, pages 13232\u201313242, 2022.", + "url": null + } + }, + { + "55": { + "title": "Learning enriched hop-aware correlation for robust 3D human pose\nestimation.", + "author": "Shengping Zhang, Chenyang Wang, Liqiang Nie, Hongxun Yao, Qingming Huang, and\nQi Tian.", + "venue": "IJCV, 131(6):1566\u20131583, 2023.", + "url": null + } + }, + { + "56": { + "title": "Semantic graph convolutional networks for 3D human pose regression.", + "author": "Long Zhao, Xi Peng, Yu Tian, Mubbasir Kapadia, and Dimitris N Metaxas.", + "venue": "In CVPR, pages 3425\u20133435, 2019.", + "url": null + } + }, + { + "57": { + "title": "GraFormer: Graph-oriented transformer for 3D pose estimation.", + "author": "Weixi Zhao, Weiqiang Wang, and Yunjie Tian.", + "venue": "In CVPR, pages 20438\u201320447, 2022a.", + "url": null + } + }, + { + "58": { + "title": "Tracking objects as pixel-wise distributions.", + "author": "Zelin Zhao, Ze Wu, Yueqing Zhuang, Boxun Li, and Jiaya Jia.", + "venue": "In ECCV, 2022b.", + "url": null + } + }, + { + "59": { + "title": "3D human pose estimation with spatial and temporal transformers.", + "author": "Ce Zheng, Sijie Zhu, Matias Mendieta, Taojiannan Yang, Chen Chen, and Zhengming\nDing.", + "venue": "In ICCV, pages 11656\u201311665, 2021.", + "url": null + } + }, + { + "60": { + "title": "Deformable DETR: Deformable transformers for end-to-end object\ndetection.", + "author": "Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai.", + "venue": "In ICLR, 2021.", + "url": null + } + }, + { + "61": { + "title": "Modulated graph convolutional network for 3D human pose estimation.", + "author": "Zhiming Zou and Wei Tang.", + "venue": "In ICCV, pages 11477\u201311487, 2021.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2206.06420v5" +} \ No newline at end of file