cyun9286 committed on
Commit 32df3f5 · verified · 1 Parent(s): 2936afd

Update README.md

Files changed (1)
  1. README.md +46 -3
README.md CHANGED
@@ -6,6 +6,49 @@ tags:
  - pytorch_model_hub_mixin
  ---
 
- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- - Library: https://github.com/jiah-cloud/Align3R
- - Docs: [More Information Needed]
+
+ <a href='https://arxiv.org/abs/2412.03079'><img src='https://img.shields.io/badge/arXiv-2412.03079-b31b1b.svg'></a> &nbsp;
+ <a href='https://igl-hkust.github.io/Align3R.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> &nbsp;
+ <a href='https://github.com/jiah-cloud/Align3R'><img src='https://img.shields.io/badge/Github-Repo-blue'></a> &nbsp;
+
+
+ [**Align3R: Aligned Monocular Depth Estimation for Dynamic Videos**](https://arxiv.org/abs/2412.03079)
+ [*Jiahao Lu*\*](https://github.com/jiah-cloud),
+ [*Tianyu Huang*\*](https://scholar.google.com/citations?view_op=list_works&hl=en&user=nhbSplwAAAAJ),
+ [*Peng Li*](https://scholar.google.com/citations?user=8eTLCkwAAAAJ&hl=zh-CN),
+ [*Zhiyang Dou*](https://frank-zy-dou.github.io/),
+ [*Cheng Lin*](https://clinplayer.github.io/),
+ [*Zhiming Cui*](),
+ [*Zhen Dong*](https://dongzhenwhu.github.io/index.html),
+ [*Sai-Kit Yeung*](https://saikit.org/index.html),
+ [*Wenping Wang*](https://scholar.google.com/citations?user=28shvv0AAAAJ&hl=en),
+ [*Yuan Liu*](https://liuyuan-pal.github.io/)
+ arXiv, 2024.
+
+ **Align3R** estimates temporally consistent video depth, dynamic point clouds, and camera poses from monocular videos.
+ <video controls>
+ <source src="https://igl-hkust.github.io/Align3R.github.io/static/video/converted/output_video.mp4" type="video/mp4">
+ </video>
+
+
+
+ ```bibtex
+ @article{lu2024align3r,
+   title={Align3R: Aligned Monocular Depth Estimation for Dynamic Videos},
+   author={Lu, Jiahao and Huang, Tianyu and Li, Peng and Dou, Zhiyang and Lin, Cheng and Cui, Zhiming and Dong, Zhen and Yeung, Sai-Kit and Wang, Wenping and Liu, Yuan},
+   journal={arXiv preprint arXiv:2412.03079},
+   year={2024}
+ }
+ ```
+
+ ### How to use
+
+ First, [install Align3R](https://github.com/jiah-cloud/Align3R).
+ To load the model:
+
+ ```python
+ import torch
+ from dust3r.model import AsymmetricCroCo3DStereo
+
+ # Download the Align3R checkpoint from the Hub and instantiate the model
+ model = AsymmetricCroCo3DStereo.from_pretrained("cyun9286/Align3R_DepthAnythingV2_ViTLarge_BaseDecoder_512_dpt")
+
+ # Run on GPU if one is available
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ model.to(device)
+ ```
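+
+ Align3R builds on the DUSt3R codebase, so the loaded model can likely be driven with DUSt3R-style inference utilities. The sketch below is only a rough guide: it assumes `dust3r.inference.inference`, `dust3r.utils.image.load_images`, `dust3r.image_pairs.make_pairs`, and `dust3r.cloud_opt.global_aligner` are available unchanged from DUSt3R, uses placeholder frame paths, and omits the external monocular depth maps that Align3R additionally conditions on; consult the repository for the exact pipeline.
+
+ ```python
+ # Continues from the loading snippet above (`model`, `device`).
+ from dust3r.inference import inference
+ from dust3r.utils.image import load_images
+ from dust3r.image_pairs import make_pairs
+ from dust3r.cloud_opt import global_aligner, GlobalAlignerMode
+
+ # Placeholder paths: replace with consecutive frames extracted from your video
+ frames = load_images(["frame_000.png", "frame_001.png"], size=512)
+
+ # Build symmetric image pairs and predict pairwise pointmaps
+ pairs = make_pairs(frames, scene_graph="complete", prefilter=None, symmetrize=True)
+ output = inference(pairs, model, device, batch_size=1)
+
+ # Globally align the pairwise predictions to recover depth, poses, and points
+ scene = global_aligner(output, device=device, mode=GlobalAlignerMode.PointCloudOptimizer)
+ scene.compute_global_alignment(init="mst", niter=300, schedule="cosine", lr=0.01)
+
+ depth_maps = scene.get_depthmaps()   # per-frame depth maps
+ camera_poses = scene.get_im_poses()  # camera-to-world poses
+ point_clouds = scene.get_pts3d()     # per-frame 3D points
+ ```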