
SAM-Med2D [Paper]

Open in OpenXLab Open In Colab

🌀️ Highlights

  • πŸ† Collected and curated the largest medical image segmentation dataset (4.6M images and 19.7M masks) to date for training models.
  • πŸ† The most comprehensive fine-tuning based on Segment Anything Model (SAM).
  • πŸ† Comprehensive evaluation of SAM-Med2D on large-scale datasets.

🔥 Updates

  • (2023.09.02) Test code release
  • (2023.08.31) Pre-trained model release
  • (2023.08.31) Paper release
  • (2023.08.26) Online Demo release

👉 Dataset

SAM-Med2D is trained and tested on a dataset that includes 4.6M images and 19.7M masks. This dataset covers 10 medical data modalities, 4 anatomical structures + lesions, and 31 major human organs. To our knowledge, this is currently the largest and most diverse medical image segmentation dataset in terms of quantity and coverage of categories.

[Figure: overview of the SAM-Med2D dataset]

👉 Framework

The pipeline of SAM-Med2D. We freeze the image encoder and incorporate learnable adapter layers in each Transformer block to acquire domain-specific knowledge in the medical field. We fine-tune the prompt encoder using point, Bbox, and mask information, while updating the parameters of the mask decoder through interactive training.

[Figure: the SAM-Med2D pipeline]
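For readers who want a concrete picture of the adapter mechanism described above, here is a minimal, generic sketch of a bottleneck adapter added to a frozen Transformer block. The module name, bottleneck width, and residual wiring are illustrative assumptions, not the repository's actual implementation:

```python
# Illustrative sketch of a bottleneck adapter (names and sizes are assumptions,
# not SAM-Med2D's actual code).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small trainable module inserted into an otherwise frozen Transformer block."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)  # project to a low-dimensional space
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)    # project back to the block's width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: the frozen block's features pass through unchanged,
        # and the adapter learns a small domain-specific correction on top.
        return x + self.up(self.act(self.down(x)))

# Training-setup sketch: freeze the image encoder, train only the adapters.
# for p in image_encoder.parameters():
#     p.requires_grad = False
# for m in image_encoder.modules():
#     if isinstance(m, Adapter):
#         for p in m.parameters():
#             p.requires_grad = True
```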

👉 Results

Quantitative comparison of different methods on the test set:
| Model | Resolution | Bbox (%) | 1 pt (%) | 3 pts (%) | 5 pts (%) | FPS | Checkpoint |
|---|---|---|---|---|---|---|---|
| SAM | $256\times256$ | 61.63 | 18.94 | 28.28 | 37.47 | 51 | Official |
| SAM | $1024\times1024$ | 74.49 | 36.88 | 42.00 | 47.57 | 8 | Official |
| FT-SAM | $256\times256$ | 73.56 | 60.11 | 70.95 | 75.51 | 51 | FT-SAM |
| SAM-Med2D | $256\times256$ | 79.30 | 70.01 | 76.35 | 78.68 | 35 | SAM-Med2D |
Generalization validation on 9 MICCAI 2023 datasets, where "*" denotes that we drop the adapter layer of SAM-Med2D in the test phase:
| Datasets | SAM (Bbox, %) | SAM-Med2D (Bbox, %) | SAM-Med2D* (Bbox, %) | SAM (1 pt, %) | SAM-Med2D (1 pt, %) | SAM-Med2D* (1 pt, %) |
|---|---|---|---|---|---|---|
| CrossMoDA23 | 78.98 | 70.51 | 84.62 | 18.49 | 46.08 | 73.98 |
| KiTS23 | 84.80 | 76.32 | 87.93 | 38.93 | 48.81 | 79.87 |
| FLARE23 | 86.11 | 83.51 | 90.95 | 51.05 | 62.86 | 85.10 |
| ATLAS2023 | 82.98 | 73.70 | 86.56 | 46.89 | 34.72 | 70.42 |
| SEG2023 | 75.98 | 68.02 | 84.31 | 11.75 | 48.05 | 69.85 |
| LNQ2023 | 72.31 | 63.84 | 81.33 | 3.81 | 44.81 | 59.84 |
| CAS2023 | 52.34 | 46.11 | 60.38 | 0.45 | 28.79 | 15.19 |
| TDSC-ABUS2023 | 71.66 | 64.65 | 76.65 | 12.11 | 35.99 | 61.84 |
| ToothFairy2023 | 65.86 | 57.45 | 75.29 | 1.01 | 32.12 | 47.32 |
| Weighted sum | 85.35 | 81.93 | 90.12 | 48.08 | 60.31 | 83.41 |

👉 Visualization

[Figure: visualization results]

👉 Test

Prepare your own dataset by following the samples in SAM-Med2D/data_demo and replacing them to fit your specific scenario. You need to generate the "label2image_test.json" file before running "test.py". An example invocation is sketched after the argument list below.

cd ./SAM-Med2D
python test.py
  • work_dir: Working directory for the testing process. Default value is "workdir".
  • batch_size: Batch size. Default value is 1.
  • image_size: Input image size. Default value is 256.
  • boxes_prompt: Use Bbox prompts to get segmentation results.
  • point_num: Number of point prompts. Default value is 1.
  • iter_point: Number of interaction iterations for point prompts.
  • sam_checkpoint: Path of the SAM or SAM-Med2D checkpoint to load.
  • encoder_adapter: Set to True when loading SAM-Med2D's pretrained weights (they include adapter layers).
  • save_pred: Whether to save the prediction results.
  • prompt_path: Path to a fixed prompt file. If None, prompts are generated automatically during prediction.
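A minimal invocation sketch combining the arguments above (the checkpoint path, filename, and exact flag syntax are assumptions; check the argparse definitions in test.py for the authoritative names and defaults):

```bash
# Sketch only: adjust paths and the checkpoint filename to your own setup.
python test.py \
    --work_dir workdir \
    --image_size 256 \
    --boxes_prompt True \
    --sam_checkpoint pretrain_model/sam-med2d_b.pth \
    --encoder_adapter True \
    --save_pred True
```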

🚀 Try SAM-Med2D

🗓️ Ongoing

  • Train code release
  • Test code release
  • Pre-trained model release
  • Paper release
  • Online Demo release

🎫 License

This project is released under the Apache 2.0 license.

💬 Discussion Group

If you have any inquiries regarding SAM-Med2D, you are welcome to join our WeChat group discussion by adding the contact below:

[Image: WeChat contact]

🤝 Acknowledgement

  • We thank all medical workers and dataset owners for making public datasets available to the community.
  • Thanks to the following open-source projects: Segment Anything

👋 Hiring & Global Collaboration

  • Hiring: We are hiring researchers, engineers, and interns in the General Vision Group, Shanghai AI Lab. If you are interested in Medical Foundation Models and General Medical AI, including designing benchmark datasets, general models, evaluation systems, and efficient tools, please contact us.
  • Global Collaboration: We're on a mission to redefine medical research, aiming for a more universally adaptable model. Our passionate team is delving into foundational healthcare models, promoting the development of the medical community. Collaborate with us to increase competitiveness, reduce risk, and expand markets.
  • Contact: Junjun He (hejunjun@pjlab.org.cn), Jin Ye (yejin@pjlab.org.cn), and Tianbin Li (litianbin@pjlab.org.cn).

Reference

@misc{cheng2023sammed2d,
      title={SAM-Med2D}, 
      author={Junlong Cheng and Jin Ye and Zhongying Deng and Jianpin Chen and Tianbin Li and Haoyu Wang and Yanzhou Su and
              Ziyan Huang and Jilong Chen and Lei Jiang and Hui Sun and Junjun He and Shaoting Zhang and Min Zhu and Yu Qiao},
      year={2023},
      eprint={2308.16184},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}