akrao9 committed on
Commit f0826bb · verified · 1 Parent(s): fa57b71

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +76 -3
README.md CHANGED
@@ -1,3 +1,76 @@
- ---
- license: apache-2.0
- ---

---
license: apache-2.0
library_name: pytorch
base_model: facebook/VGGT-1B
tags:
- vggt
- depth-estimation
- 3d-vision
- camera-pose
- test-time-training
- lact
pipeline_tag: depth-estimation
---

# VGGT LaCT (stage 1) — slim adapter weights

These are **LaCT-block weights only** (~200 MB), not a full VGGT checkpoint. They plug into the public **[facebook/VGGT-1B](https://huggingface.co/facebook/VGGT-1B)** backbone: the DINOv2 patch embedding, frame-wise attention, and prediction heads are kept from Meta's pretrained VGGT-1B; only the **global-attention layers are replaced** by LaCT-style fast-weight GLU blocks, trained with stage-1 distillation against the frozen teacher.

**Code:** [github.com/Akrao9/vggt_ttt](https://github.com/Akrao9/vggt_ttt) (install `vggt` from [facebookresearch/vggt](https://github.com/facebookresearch/vggt) as described in that README).

## Files

| File | Description |
|------|-------------|
| `vggt_ttt_lact_stage1.pt` | Stage 1 distilled LaCT state dict (`lact_state_dict()` format). Keys are prefixed with `aggregator.lact_blocks.`. |

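As a quick sanity check that the file really is a slim, LaCT-only state dict, you can inspect its keys (a minimal sketch assuming a flat name-to-tensor mapping, as described above):

```python
import torch
from huggingface_hub import hf_hub_download

# Fetch the slim checkpoint (~200 MB) and load it on CPU just to inspect the keys.
path = hf_hub_download("akrao9/VGGT-LACT", "vggt_ttt_lact_stage1.pt")
state = torch.load(path, map_location="cpu")

# Every tensor should live under the aggregator's LaCT blocks.
assert all(name.startswith("aggregator.lact_blocks.") for name in state)
print(f"{len(state)} tensors, {sum(t.numel() for t in state.values()) / 1e6:.1f}M parameters")
```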

## Load (Python)

```python
import torch
from huggingface_hub import hf_hub_download

# From the vggt_ttt repo (with `vggt` installed per upstream README):
from model.vggt_ttt import VGGT_TTT
from model.io_utils import torch_load_checkpoint

# Download the LaCT-only checkpoint from this Hub repo.
ckpt_path = hf_hub_download("akrao9/VGGT-LACT", "vggt_ttt_lact_stage1.pt")
device = "cuda"

# Backbone weights come from Meta's pretrained VGGT-1B; only the LaCT blocks are loaded below.
model = VGGT_TTT.from_pretrained("facebook/VGGT-1B", chunk_size=16).to(device).eval()
state = torch_load_checkpoint(ckpt_path, map_location=device)
model.load_lact_state_dict(state, strict=True)
```

Use a local path instead of `hf_hub_download` if you already downloaded the `.pt` file.
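For example, continuing from the snippet above with the file already on disk (the relative path is illustrative):

```python
ckpt_path = "./vggt_ttt_lact_stage1.pt"   # local copy instead of hf_hub_download
state = torch_load_checkpoint(ckpt_path, map_location=device)
model.load_lact_state_dict(state, strict=True)
```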

## Inference CLI

From the [vggt_ttt](https://github.com/Akrao9/vggt_ttt) repo, after downloading this checkpoint locally:

```bash
python scripts/run_inference.py \
    --input path/to/video.mp4 --fps 2 \
    --checkpoint ./vggt_ttt_lact_stage1.pt \
    --out ./out
```

(`--checkpoint` accepts this LaCT-only dict; see `scripts/run_inference.py`.)

## Training summary

- **Stage 1:** distillation from the frozen `facebook/VGGT-1B` teacher (pose / depth / world points), with trainable parameters confined to the 24 LaCT blocks; `c_proj` is zero-initialized for a near-identity start (see the sketch below).
- **Checkpoints:** saved with `torch.save(model.lact_state_dict(), path)` — same tensor layout as this Hub file.

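The near-identity start can be illustrated with a toy residual GLU block (names like `w1`, `w2`, `c_proj` are illustrative here, not the repo's actual LaCT implementation): with `c_proj` zero-initialized, the residual branch outputs zero, so at initialization the block simply passes its input through and training moves it away from identity only as needed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGLUBlock(nn.Module):
    """Toy residual GLU block with a zero-initialized output projection."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden)
        self.w2 = nn.Linear(dim, hidden)
        self.c_proj = nn.Linear(hidden, dim)
        # Zero-init: the residual branch contributes nothing at step 0,
        # so the block starts out as the identity map.
        nn.init.zeros_(self.c_proj.weight)
        nn.init.zeros_(self.c_proj.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.c_proj(F.silu(self.w1(x)) * self.w2(x))

x = torch.randn(2, 8, 64)
block = ToyGLUBlock(dim=64, hidden=256)
assert torch.allclose(block(x), x)  # exact identity before any training
```
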
## Hardware / scaling

The LaCT path targets **longer frame sequences** with more favorable VRAM scaling than full global attention; see the GitHub README for benchmark tables (DL3DV-style eval).

## License and attribution

- This **adapter** repository and the training code release are under **Apache 2.0** (see project `LICENSE` / `NOTICE` on GitHub).
- **VGGT-1B** is subject to Meta's license and terms on its model card; you must comply with those when using the backbone.
- Method builds on **VGGT** and **LaCT**-style components as described in the upstream README.

## Citation

If you use these weights or the [vggt_ttt](https://github.com/Akrao9/vggt_ttt) codebase, cite the original **VGGT** paper/repo and credit this adapter as appropriate for your venue.