---
language: en
license: cc-by-nc-4.0
pipeline_tag: depth-estimation
tags:
  - sapiens
---

Depth-Sapiens-2B-Bfloat16

Model Details

Sapiens is a family of vision transformers pretrained on 300 million human images at 1024 x 1024 resolution. When finetuned for human-centric vision tasks, the pretrained models generalize well to in-the-wild data, even when labeled data is scarce or entirely synthetic. Sapiens-2B natively supports 1K high-resolution inference.

  • Developed by: Meta
  • Model type: Vision Transformer
  • License: Creative Commons Attribution-NonCommercial 4.0
  • Task: depth
  • Format: bfloat16
  • File: sapiens_2b_render_people_epoch_25_bfloat16.pt2

Model Card

  • Image Size: 1024 x 768 (H x W)
  • Num Parameters: 2.163 B
  • FLOPs: 8.709 TFLOPs
  • Patch Size: 16 x 16
  • Embedding Dimensions: 1920
  • Num Layers: 48
  • Num Heads: 32
  • Feedforward Channels: 7680
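
The figures above are internally consistent and can be sanity-checked with back-of-the-envelope arithmetic. The sketch below recomputes the patch-token count and a rough transformer parameter count from the listed dimensions (attention and MLP weight matrices only; patch embedding, biases, norms, and the task head are ignored, which is why it slightly undershoots the 2.163 B total):

```python
# Rough consistency check of the spec table above (weights of the
# transformer blocks only; embeddings, biases, and norms ignored).
embed_dim = 1920   # Embedding Dimensions
ffn_dim = 7680     # Feedforward Channels (= 4 * embed_dim)
layers = 48        # Num Layers
patch = 16         # Patch Size
h, w = 1024, 768   # Image Size (H x W)

tokens = (h // patch) * (w // patch)   # 64 * 48 = 3072 patch tokens
attn = 4 * embed_dim ** 2              # q, k, v, and output projections
mlp = 2 * embed_dim * ffn_dim          # two feedforward linear layers
total = layers * (attn + mlp)          # ~2.12e9, close to the listed 2.163 B

print(f"{tokens} tokens, ~{total / 1e9:.2f} B block parameters")
```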

More Resources

Uses

The Sapiens-2B depth model can be used to estimate relative depth from human images.
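
A minimal sketch of the surrounding pipeline is shown below. The normalization statistics, the loading API for the `.pt2` checkpoint, and the output handling are assumptions, not confirmed by this card; consult the official Sapiens repository for the exact inference code:

```python
# Hypothetical pre/post-processing sketch for Depth-Sapiens-2B.
# Normalization stats and the checkpoint-loading call are assumptions.
import torch
import torch.nn.functional as F

# Common ImageNet-style per-channel stats (an assumption for this model).
MEAN = torch.tensor([123.675, 116.28, 103.53]).view(3, 1, 1)
STD = torch.tensor([58.395, 57.12, 57.375]).view(3, 1, 1)

def preprocess(image: torch.Tensor) -> torch.Tensor:
    """Resize an HWC uint8 RGB image to the model's 1024 x 768 input and normalize."""
    x = image.permute(2, 0, 1).float()           # HWC -> CHW
    x = (x - MEAN) / STD
    x = F.interpolate(x.unsqueeze(0), size=(1024, 768),
                      mode="bilinear", align_corners=False)
    return x                                      # 1 x 3 x 1024 x 768

def normalize_depth(depth: torch.Tensor) -> torch.Tensor:
    """Map a relative depth map to [0, 1] for visualization."""
    d_min, d_max = depth.min(), depth.max()
    return (depth - d_min) / (d_max - d_min + 1e-8)

# Loading and running the checkpoint (API is an assumption, shown for shape only):
# model = torch.jit.load("sapiens_2b_render_people_epoch_25_bfloat16.pt2").cuda()
# with torch.no_grad():
#     depth = model(preprocess(img).to(torch.bfloat16).cuda())
```

Because the model predicts relative (not metric) depth, the min-max normalization above is the usual final step before rendering the map as an image.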