
ECON: Explicit Clothed humans Obtained from Normals

Yuliang Xiu, Jinlong Yang, Xu Cao, Dimitrios Tzionas, Michael J. Black

arXiv 2022



ECON is designed for "Human digitization from a color image": it combines the best properties of implicit and explicit representations to infer high-fidelity 3D clothed humans from in-the-wild images, even with loose clothing or in challenging poses. ECON also supports batch reconstruction from multi-person photos.

News :triangular_flag_on_post:

  • [2022/12/09] Demo is available.

TODO

  • Blender add-on for FBX export
  • Full RGB texture generation

Table of Contents
  1. Instructions
  2. Demo
  3. Tricks
  4. Citation

Instructions

Demo

# For single-person image-based reconstruction
python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results

# For multi-person image-based reconstruction (see configs/econ.yaml)
python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results -multi

# To generate the demo video of reconstruction results
python -m apps.multi_render -n {filename}

Tricks

Some adjustable parameters in configs/econ.yaml (a sample snippet follows this list):

  • use_ifnet: True
    • True: use IF-Nets+ for mesh completion ($\text{ECON}_\text{IF}$, better quality)
    • False: use SMPL-X for mesh completion ($\text{ECON}_\text{EX}$, faster)
  • use_smpl: ["hand", "face"]
    • []: use neither the hands nor the face from SMPL-X
    • ["hand"]: only use the visible hands from SMPL-X
    • ["hand", "face"]: use both visible hands and face from SMPL-X
  • thickness: 2cm
    • increase this value if xx_full.obj looks flat
  • hps_type: pixie
    • "pixie": more accurate for face and hands
    • "pymafx": more robust for challenging poses
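
For orientation, below is a minimal sketch of how these options might appear in configs/econ.yaml. The keys and example values come straight from the list above; the flat layout and the inline comments are assumptions, so treat the shipped config file as authoritative.

# Sketch of the adjustable options in configs/econ.yaml
# (keys taken from the list above; layout and defaults are assumptions)
use_ifnet: True               # True: IF-Nets+ completion (ECON_IF, better quality)
                              # False: SMPL-X completion (ECON_EX, faster)
use_smpl: ["hand", "face"]    # [] | ["hand"] | ["hand", "face"]
thickness: 2cm                # increase if xx_full.obj looks flat
hps_type: "pixie"             # "pixie" (accurate face/hands) or "pymafx" (robust poses)

As the list notes, use_ifnet trades speed for quality: keep it True for the best surfaces, or set it to False when reconstruction time matters more.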

More Qualitative Results

  • OOD Poses: ECON results on challenging poses
  • OOD Clothes: ECON results on loose clothes
  • SHHQ: ECON results on the SHHQ dataset
  • Crowd: ECON results on a multi-person image


Citation

@misc{xiu2022econ,
    title={ECON: Explicit Clothed humans Obtained from Normals},
    author={Xiu, Yuliang and Yang, Jinlong and Cao, Xu and Tzionas, Dimitrios and Black, Michael J.},
    year={2022},
    publisher={arXiv},
    primaryClass={cs.CV}
}

Acknowledgments

We thank Lea Hering and Radek Daněček for proofreading, Yao Feng, Haven Feng, and Weiyang Liu for their feedback and discussions, and Tsvetelina Alexiadis for her help with the AMT perceptual study.

Our work also benefits from many great open-source resources.

Some images used in the qualitative examples come from pinterest.com.

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 860768 (CLIPE Project).



License

This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.

Disclosure

MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH.

Contact

For technical questions, please contact yuliang.xiu@tue.mpg.de

For commercial licensing, please contact ps-licensing@tue.mpg.de