---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: depth-estimation
tags:
- depth
- relative depth
---

# Depth-Anything-V2-Large

## Introduction

Depth Anything V2 is trained on 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features:
- more fine-grained details than Depth Anything V1
- more robust than Depth Anything V1 and SD-based models (e.g., Marigold, GeoWizard)
- more efficient (10x faster) and more lightweight than SD-based models
- impressive fine-tuned performance when starting from our pre-trained models

## Installation

```bash
git clone https://huggingface.co/spaces/depth-anything/Depth-Anything-V2
cd Depth-Anything-V2
pip install -r requirements.txt
```

## Usage

Download the [model](https://huggingface.co/depth-anything/Depth-Anything-V2-Large/resolve/main/depth_anything_v2_vitl.pth?download=true) first and put it under the `checkpoints` directory.

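Alternatively, the checkpoint can be fetched programmatically. A minimal sketch using `huggingface_hub`, assuming the package is installed (`pip install huggingface_hub`); the repo id and filename match the download link above:

```python
from huggingface_hub import hf_hub_download

# Fetch the ViT-L checkpoint into the local `checkpoints` directory
hf_hub_download(
    repo_id="depth-anything/Depth-Anything-V2-Large",
    filename="depth_anything_v2_vitl.pth",
    local_dir="checkpoints",
)
```

Then load the model and run inference:
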
```python
import cv2
import torch

from depth_anything_v2.dpt import DepthAnythingV2

# ViT-L ('vitl') configuration for Depth-Anything-V2-Large
model = DepthAnythingV2(encoder='vitl', features=256, out_channels=[256, 512, 1024, 1024])
model.load_state_dict(torch.load('checkpoints/depth_anything_v2_vitl.pth', map_location='cpu'))
model.eval()

# Read an image with OpenCV (BGR, HxWx3) and predict depth
raw_img = cv2.imread('your/image/path')
depth = model.infer_image(raw_img)  # HxW raw depth map
```
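The predicted map is relative depth (see the model tags), not metric depth. To save a quick visualization, one option is to normalize it to [0, 255] and apply a colormap; a minimal sketch with OpenCV and NumPy, where `depth` comes from the snippet above:

```python
import cv2
import numpy as np

# Scale the raw depth values to [0, 255] for display
depth_vis = (depth - depth.min()) / (depth.max() - depth.min()) * 255.0
depth_vis = depth_vis.astype(np.uint8)

# Apply a colormap and write to disk; saving `depth_vis` alone gives grayscale
cv2.imwrite('depth_colored.png', cv2.applyColorMap(depth_vis, cv2.COLORMAP_INFERNO))
```
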
## Citation

If you find this project useful, please consider citing:

```bibtex
@article{depth_anything_v2,
  title={Depth Anything V2},
  author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Zhao, Zhen and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
  journal={arXiv:2406.09414},
  year={2024}
}

@inproceedings{depth_anything_v1,
  title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
  author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
  booktitle={CVPR},
  year={2024}
}
```