---
license: apache-2.0
---

# Model Card for Med-Segment Anything Model (MedSAM) - ViT Base (ViT-B) version

<p>
<img src="https://s3.amazonaws.com/moonup/production/uploads/62441d1d9fdefb55a0b7d12c/F1LWM9MXjHJsiAtgBFpDP.png" alt="Model architecture">
<em>Detailed architecture of the Segment Anything Model (SAM).</em>
</p>

# Table of Contents

0. [TL;DR](#tldr)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Citation](#citation)

# TL;DR

[Link to original repository](https://github.com/facebookresearch/segment-anything)

| <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-beancans.png" alt="Example mask prediction" width="600" height="600"> | <img src="https://s3.amazonaws.com/moonup/production/uploads/62441d1d9fdefb55a0b7d12c/wHXbJx1oXqHCYNeUNKHs8.png" alt="Example mask prediction" width="600" height="600"> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-car-seg.png" alt="Example mask prediction" width="600" height="600"> |
|---|---|---|

The **Segment Anything Model (SAM)** produces high-quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and it has strong zero-shot performance on a variety of segmentation tasks.

The abstract of the paper states:

> We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at [https://segment-anything.com](https://segment-anything.com) to foster research into foundation models for computer vision.

MedSAM is a SAM model fine-tuned on medical images. The original paper can be found [here](https://t.co/QOC9KaCg41), and the code can be found [here](https://github.com/bowang-lab/MedSAM).

**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copied from the original [SAM model card](https://github.com/facebookresearch/segment-anything).

# Model Details

The SAM model is made up of the following modules (see the sketch after this list):
- The `VisionEncoder`: a ViT-based image encoder. It computes image embeddings using attention over patches of the image, with relative positional embeddings.
- The `PromptEncoder`: generates embeddings for points and bounding boxes.
- The `MaskDecoder`: a two-way transformer that performs cross-attention between the image embeddings and the point embeddings, and vice versa. Its outputs are fed to the `Neck`.
- The `Neck`: predicts the output masks based on the contextualized masks produced by the `MaskDecoder`.
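
As a minimal sketch (assuming the `transformers` implementation of SAM), the modules described above surface as submodules of the `SamModel` class and can be inspected directly:

```python
from transformers import SamModel

model = SamModel.from_pretrained("nielsr/medsam-vit-base")

# Each module described above maps onto a submodule of SamModel
print(type(model.vision_encoder).__name__)  # ViT-based image encoder
print(type(model.prompt_encoder).__name__)  # embeds point and box prompts
print(type(model.mask_decoder).__name__)    # two-way transformer decoder
```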

# Usage

## Prompted-Mask-Generation

```python
from PIL import Image
import requests
from transformers import SamModel, SamProcessor

model = SamModel.from_pretrained("nielsr/medsam-vit-base")
processor = SamProcessor.from_pretrained("nielsr/medsam-vit-base")

img_url = "https://path/to/your/image.png"  # replace with the URL of your image
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # 2D location of the object of interest
```

```python
model = model.to("cuda")  # move the model to GPU so it matches the inputs below
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to("cuda")
outputs = model(**inputs)
masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu())
scores = outputs.iou_scores
```
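
As a quick follow-up, here is a minimal sketch of how the post-processed masks could be visualized, reusing the `masks`, `scores`, and `raw_image` variables from the snippets above (the indexing assumes a single image with a single input point and three candidate masks per prediction):

```python
import numpy as np

# pick the candidate mask with the highest predicted IoU score
best_idx = scores[0, 0].argmax().item()
best_mask = masks[0][0, best_idx].numpy()  # boolean array of shape (H, W)

# overlay the selected mask on the original image in red
overlay = np.array(raw_image).copy()
overlay[best_mask] = [255, 0, 0]
Image.fromarray(overlay).save("mask_overlay.png")
```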

Among other arguments to generate masks, you can pass 2D points at the approximate position of your object of interest, a bounding box wrapping the object of interest (in the format `x_min, y_min, x_max, y_max`, i.e. the coordinates of the top-left and bottom-right corners of the box), or a segmentation mask. At the time of writing, passing text as input is not supported by the official model, according to [the official repository](https://github.com/facebookresearch/segment-anything/issues/4#issuecomment-1497626844).
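
Since MedSAM is typically prompted with bounding boxes rather than points, here is a minimal sketch of a box prompt, reusing the `model`, `processor`, and `raw_image` from above (the box coordinates are illustrative placeholders):

```python
# one box per image, in x_min, y_min, x_max, y_max format
input_boxes = [[[300, 450, 600, 750]]]

inputs = processor(raw_image, input_boxes=input_boxes, return_tensors="pt").to("cuda")
outputs = model(**inputs)
masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu())
```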

For more details, refer to this notebook, which shows a walkthrough of how to use the model, with a visual example!

# Citation

If you use this model, please use the following BibTeX entries.

```
@misc{ma2023segment,
      title={Segment Anything in Medical Images},
      author={Jun Ma and Bo Wang},
      year={2023},
      eprint={2304.12306},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}
```

```
@article{kirillov2023segany,
  title={Segment Anything},
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv:2304.02643},
  year={2023}
}
```