---
library_name: pytorch
---


DepthAnythingV2 is a lightweight depth estimation model that predicts accurate per-pixel depth maps from a single RGB image, optimized for efficiency and robust performance across a wide range of scenes.

Original paper: DepthAnything V2
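As a minimal sketch of single-image depth prediction, the example below uses the Hugging Face `transformers` depth-estimation pipeline with the public checkpoint `depth-anything/Depth-Anything-V2-Small-hf`. Both the library and the checkpoint name are assumptions for illustration and are not the device-specific models linked in this card.

```python
# Minimal sketch: single RGB image -> per-pixel depth map.
# Assumes the public Hugging Face checkpoint
# "depth-anything/Depth-Anything-V2-Small-hf" (illustrative, not the
# device-specific binaries listed under Model Configuration below).
from PIL import Image
from transformers import pipeline

depth_estimator = pipeline(
    task="depth-estimation",
    model="depth-anything/Depth-Anything-V2-Small-hf",
)

image = Image.open("example.jpg")   # any RGB input image
result = depth_estimator(image)

depth_map = result["depth"]         # PIL image with relative per-pixel depth
depth_map.save("example_depth.png")
```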

DepthAnythingV2-Small

This model uses the DepthAnythingV2-Small variant, which balances model size and inference speed while maintaining strong depth estimation accuracy. It is well suited for applications such as AR/VR, robotics, scene reconstruction, and real-time 3D perception on edge devices.
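For applications like robotics or real-time 3D perception that need the raw depth values rather than a rendered image, the sketch below runs a direct forward pass and converts the prediction into a compact 8-bit depth map. It again assumes the `transformers` PyTorch API and the hypothetical use of the public `depth-anything/Depth-Anything-V2-Small-hf` checkpoint; adapt the model loading step to your own deployment.

```python
# Sketch: raw depth tensor from a forward pass, resized to the input
# resolution and normalized to 8-bit for downstream use.
# The checkpoint name is an assumption for illustration.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForDepthEstimation

checkpoint = "depth-anything/Depth-Anything-V2-Small-hf"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForDepthEstimation.from_pretrained(checkpoint)

image = Image.open("frame.jpg")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth   # shape: (batch, H', W')

# Resize the prediction back to the original image resolution.
depth = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],   # PIL size is (W, H); interpolate expects (H, W)
    mode="bicubic",
    align_corners=False,
).squeeze()

# Normalize to an 8-bit map (relative depth) for visualization or transfer.
depth_u8 = (255 * (depth - depth.min()) / (depth.max() - depth.min())).to(torch.uint8)
Image.fromarray(depth_u8.numpy()).save("frame_depth.png")
```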

Model Configuration:

| Model | Device | Model Link |
| --- | --- | --- |
| DepthAnythingV2-Small | N1-655 | Model_Link |
| DepthAnythingV2-Small | CV72 | Model_Link |
| DepthAnythingV2-Small | CV75 | Model_Link |