Instructions to use VAST-AI/GeoSAM2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sam2
How to use VAST-AI/GeoSAM2 with sam2:
```python
# Use SAM2 with images
import torch
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("VAST-AI/GeoSAM2")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(<your_image>)
    masks, _, _ = predictor.predict(<input_prompts>)
```

```python
# Use SAM2 with videos
import torch
from sam2.sam2_video_predictor import SAM2VideoPredictor

predictor = SAM2VideoPredictor.from_pretrained("VAST-AI/GeoSAM2")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(<your_video>)

    # add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points(state, <your_prompts>)

    # propagate the prompts to get masklets throughout the video
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        ...
```

- Notebooks
- Google Colab
- Kaggle
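The `<input_prompts>` placeholder in the image snippet above is typically a set of point prompts. A minimal sketch of how these are built, assuming the upstream SAM2 `predict` keyword arguments (`point_coords`, `point_labels`); the coordinates here are illustrative, not from the model card:

```python
import numpy as np

# One foreground click (label 1) and one background click (label 0),
# given as (x, y) pixel coordinates in the input image.
point_coords = np.array([[320, 240], [50, 50]], dtype=np.float32)
point_labels = np.array([1, 0], dtype=np.int32)

# These arrays would then be passed to the predictor, e.g.:
# masks, scores, logits = predictor.predict(
#     point_coords=point_coords, point_labels=point_labels
# )
print(point_coords.shape, point_labels.shape)  # (2, 2) (2,)
```

Each coordinate pair needs a matching label, so `point_coords` has shape `(N, 2)` and `point_labels` has shape `(N,)`.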
README.md (updated):

```
---
license: mit
---

GeoSAM2: Unleashing the Power of SAM2 for 3D Part Segmentation

Project Page: https://detailgen3d.github.io/GeoSAM2/

Github Code: https://github.com/VAST-AI-Research/GeoSAM2
```