---
title: Multi HMR
emoji: 👬
colorFrom: pink
colorTo: purple
sdk: gradio
sdk_version: 4.44.1
app_file: app.py
pinned: false
---

Multi-HMR: Regressing Whole-Body Human Meshes for Multiple Persons in a Single Shot

Fabien Baradel*, Matthieu Armando, Salma Galaaoui, Romain Brégier,
Philippe Weinzaepfel, Grégory Rogez, Thomas Lucas*

* equal contribution

arXiv · Blogpost · Demo · Hugging Face Spaces

Multi-HMR is a simple yet effective single-shot model for multi-person and expressive human mesh recovery. It takes as input a single RGB image and efficiently performs 3D reconstruction of multiple humans in camera space.

Installation

First, you need to clone the repo.
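A minimal sketch (the URL below assumes the official repository; substitute your own fork if needed):

# clone the repository and enter it (official repo URL assumed)
git clone https://github.com/naver/multi-hmr.git
cd multi-hmr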

We recommend using a virtual environment to run Multi-HMR. Run the following lines to create the environment with venv:

python3.9 -m venv .multihmr
source .multihmr/bin/activate
pip install -r requirements.txt

Alternatively, you can create a conda environment:

conda env create -f conda.yaml
conda activate multihmr

The installation has been tested with CUDA 11.7.
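To sanity-check that the environment sees your GPU, you can query PyTorch directly (this assumes PyTorch was installed via requirements.txt):

# print the installed torch version and whether CUDA is available
python3.9 -c "import torch; print(torch.__version__, torch.cuda.is_available())"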

Checkpoints will automatically be downloaded to $HOME/models/multiHMR the first time you run the demo code.

Besides these files, you also need to download the SMPL-X model; the neutral model is required to run the demo code. Please go to the corresponding website and register to get access to the downloads section. Download the model and place SMPLX_NEUTRAL.npz in ./models/smplx/.
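For example, assuming the model file was downloaded to your Downloads folder:

# create the expected directory and move the neutral SMPL-X model into place
mkdir -p ./models/smplx
mv ~/Downloads/SMPLX_NEUTRAL.npz ./models/smplx/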

Run Multi-HMR on images

The following command will run Multi-HMR on all images in the specified --img_folder and save renderings of the reconstructions in --out_folder. The --model_name flag specifies the model to use, the --extra_views flag additionally renders side and bird's-eye (BEV) views of the reconstructed scene, and --save_mesh saves the meshes to a '.npy' file.

python3.9 demo.py \
    --img_folder example_data \
    --out_folder demo_out \
    --extra_views 1 \
    --model_name multiHMR_896_L_synth

Pre-trained models

We provide multiple pre-trained checkpoints; their associated features are listed below. Once downloaded, place them in $HOME/models/multiHMR.

| modelname            | training data | backbone | resolution | runtime (ms) |
|----------------------|---------------|----------|------------|--------------|
| multiHMR_896_L_synth | BEDLAM+AGORA  | ViT-L    | 896x896    | 126          |

Runtime is measured on a V100-32GB GPU.
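If you prefer to download a checkpoint manually rather than rely on the automatic download, a sketch of placing it (the '.pt' filename is an assumption; use whatever filename the download provides):

# create the checkpoint directory and move the downloaded weights into it
mkdir -p $HOME/models/multiHMR
mv multiHMR_896_L_synth.pt $HOME/models/multiHMR/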

License

The code is distributed under the CC BY-NC-SA 4.0 License.
See Multi-HMR LICENSE, Checkpoint LICENSE and Example Data LICENSE for more information.

Citing

If you find this code useful for your research, please have a look at the associated paper (arxiv.org/abs/2402.14654) and consider citing it:

@inproceedings{multi-hmr2024,
    title={Multi-HMR: Single-Shot Multi-Person Expressive Human Mesh Recovery},
    author={Baradel*, Fabien and 
            Armando, Matthieu and 
            Galaaoui, Salma and 
            Br{\'e}gier, Romain and 
            Weinzaepfel, Philippe and 
            Rogez, Gr{\'e}gory and
            Lucas*, Thomas
            },
    booktitle={ECCV},
    year={2024}
}