This repo contains the checkpoints for SAT.
We offer SAT-Pro and SAT-Nano (both trained on 72 datasets), as well as five additional variants of SAT-Nano (all trained on 49 datasets):
- SAT-Pro: ./Pro
- SAT-Nano: ./Nano
- UNET-Ours: ./Others/UNET-Ours
- UNET-CPT: ./Others/UNET-CPT
- UNET-BB: ./Others/UNET-BaseBERT
- UMamba-CPT: ./Others/UMamba-CPT
- SwinUNETR-CPT: ./Others/SwinUNETR-CPT
Check our [paper](https://arxiv.org/abs/2312.17183) for more details, and the [GitHub repo](https://github.com/zhaoziheng/SAT/tree/main?tab=readme-ov-file) for usage instructions.
⚠️ Each model must be used with its paired model checkpoint and text encoder checkpoint.
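For illustration, here is a minimal PyTorch sketch of loading such a pair. The file names under `./Pro` are hypothetical (substitute the actual checkpoint files in this repo), and the real inference pipeline lives in the GitHub repo:

```python
import torch

# Hypothetical file names -- replace with the actual checkpoint files in ./Pro.
model_ckpt = torch.load("Pro/sat_pro.pth", map_location="cpu")
text_encoder_ckpt = torch.load("Pro/text_encoder.pth", map_location="cpu")

# Checkpoints are commonly dicts (possibly nested under a 'state_dict' key);
# inspect the keys before wiring them into the SAT codebase.
print(list(model_ckpt.keys())[:10])
print(list(text_encoder_ckpt.keys())[:10])
```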
In addition, we provide multiple pretrained encoders at ./Pretrain. Enhanced with multi-modal human anatomy knowledge, they significantly boost segmentation performance and are potentially beneficial for other tasks (a loading sketch follows the list):
- A version pretrained only with textual knowledge (`textual_only.pth`).
- A version further pretrained with [SAT-DS](https://github.com/zhaoziheng/SAT-DS/tree/main) (`multimodal_sat_ds.pth`). It can be used to reproduce results in our [paper](https://arxiv.org/abs/2312.17183).
- A version further pretrained with 10% training data from [CVPR 2025: FOUNDATION MODELS FOR TEXT-GUIDED 3D BIOMEDICAL IMAGE SEGMENTATION](https://www.codabench.org/competitions/5651/) (`multimodal_cvpr25.pth`). It's explicitly optimized for the challenge.
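As a sketch of reusing one of these encoders, assuming the `.pth` files are standard PyTorch state dicts (possibly wrapped under a `state_dict` key — that wrapping and the `strict=False` load below are assumptions, not a documented API):

```python
import torch

# Path and file name taken from the list above; the internal structure of the
# checkpoint is an assumption -- inspect it before reusing the weights.
ckpt = torch.load("Pretrain/multimodal_sat_ds.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

# Peek at a few parameter names and shapes.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(getattr(tensor, "shape", ())))

# To reuse in your own encoder (parameter names must match, hence strict=False):
# my_text_encoder.load_state_dict(state_dict, strict=False)
```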