
obsxrver bodhicitta committed on
Commit 6530e19 · 0 Parent(s):

Duplicate from bodhicitta/sam3

Co-authored-by: Hu <bodhicitta@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,51 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
5
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.model filter=lfs diff=lfs merge=lfs -text
12
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
13
+ *.onnx filter=lfs diff=lfs merge=lfs -text
14
+ *.ot filter=lfs diff=lfs merge=lfs -text
15
+ *.parquet filter=lfs diff=lfs merge=lfs -text
16
+ *.pb filter=lfs diff=lfs merge=lfs -text
17
+
18
+ *.pth filter=lfs diff=lfs merge=lfs -text
19
+ *.rar filter=lfs diff=lfs merge=lfs -text
20
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
21
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
22
+ *.tflite filter=lfs diff=lfs merge=lfs -text
23
+ *.tgz filter=lfs diff=lfs merge=lfs -text
24
+ *.xz filter=lfs diff=lfs merge=lfs -text
25
+ *.zip filter=lfs diff=lfs merge=lfs -text
26
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
27
+ *.tfevents* filter=lfs diff=lfs merge=lfs -text
28
+ *.db* filter=lfs diff=lfs merge=lfs -text
29
+ *.ark* filter=lfs diff=lfs merge=lfs -text
30
+ **/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
31
+ **/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
32
+ **/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
33
+
34
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
35
+ *.gguf* filter=lfs diff=lfs merge=lfs -text
36
+ *.ggml filter=lfs diff=lfs merge=lfs -text
37
+ *.llamafile* filter=lfs diff=lfs merge=lfs -text
38
+ *.pt2 filter=lfs diff=lfs merge=lfs -text
39
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
40
+ *.npy filter=lfs diff=lfs merge=lfs -text
41
+ *.npz filter=lfs diff=lfs merge=lfs -text
42
+ *.pickle filter=lfs diff=lfs merge=lfs -text
43
+ *.pkl filter=lfs diff=lfs merge=lfs -text
44
+ *.tar filter=lfs diff=lfs merge=lfs -text
45
+ *.wasm filter=lfs diff=lfs merge=lfs -text
46
+ *.zst filter=lfs diff=lfs merge=lfs -text
47
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
48
+
49
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
50
+ sam3.pt filter=lfs diff=lfs merge=lfs -text
51
+ model.safetensors filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
@@ -0,0 +1,61 @@
1
+ SAM License
2
+ Last Updated: November 19, 2025
3
+
4
+ “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the SAM Materials set forth herein.
5
+
6
+
7
+ “SAM Materials” means, collectively, Documentation and the models, software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code, and other elements of the foregoing distributed by Meta and made available under this Agreement.
8
+
9
+ “Documentation” means the specifications, manuals and documentation accompanying
10
+ SAM Materials distributed by Meta.
11
+
12
+
13
+ “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
14
+
15
+
16
+ “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) or Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
17
+
18
+
19
+ “Sanctions” means any economic or trade sanctions or restrictions administered or enforced by the United States (including the Office of Foreign Assets Control of the U.S. Department of the Treasury (“OFAC”), the U.S. Department of State and the U.S. Department of Commerce), the United Nations, the European Union, or the United Kingdom.
20
+
21
+
22
+ “Trade Controls” means any of the following: Sanctions and applicable export and import controls.
23
+
24
+ By using or distributing any portion or element of the SAM Materials, you agree to be bound by this Agreement.
25
+
26
+
27
+ 1. License Rights and Redistribution.
28
+
29
+
30
+ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the SAM Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the SAM Materials.
31
+
32
+ b. Redistribution and Use.
33
+ i. Distribution of SAM Materials, and any derivative works thereof, are subject to the terms of this Agreement. If you distribute or make the SAM Materials, or any derivative works thereof, available to a third party, you may only do so under the terms of this Agreement and you shall provide a copy of this Agreement with any such SAM Materials.
34
+
35
+
36
+ ii. If you submit for publication the results of research you perform on, using, or otherwise in connection with SAM Materials, you must acknowledge the use of SAM Materials in your publication.
37
+
38
+
39
+ iii. Your use of the SAM Materials must comply with applicable laws and regulations, including Trade Control Laws and applicable privacy and data protection laws.
40
+ iv. Your use of the SAM Materials will not involve or encourage others to reverse engineer, decompile or discover the underlying components of the SAM Materials.
41
+ v. You are not the target of Trade Controls and your use of SAM Materials must comply with Trade Controls. You agree not to use, or permit others to use, SAM Materials for any activities subject to the International Traffic in Arms Regulations (ITAR) or end uses prohibited by Trade Controls, including those related to military or warfare purposes, nuclear industries or applications, espionage, or the development or use of guns or illegal weapons.
42
+ 2. User Support. Your use of the SAM Materials is done at your own discretion; Meta does not process any information nor provide any service in relation to such use. Meta is under no obligation to provide any support services for the SAM Materials. Any support provided is “as is”, “with all faults”, and without warranty of any kind.
43
+
44
+
45
+ 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE SAM MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE SAM MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE SAM MATERIALS AND ANY OUTPUT AND RESULTS.
46
+
47
+ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY DIRECT OR INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
48
+
49
+ 5. Intellectual Property.
50
+
51
+
52
+ a. Subject to Meta’s ownership of SAM Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the SAM Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.
53
+
54
+ b. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the SAM Materials, outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the SAM Materials.
55
+
56
+ 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the SAM Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the SAM Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.
57
+
58
+ 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
59
+
60
+
61
+ 8. Modifications and Amendments. Meta may modify this Agreement from time to time; provided that they are similar in spirit to the current version of the Agreement, but may differ in detail to address new problems or concerns. All such changes will be effective immediately. Your continued use of the SAM Materials after any modification to this Agreement constitutes your agreement to such modification. Except as provided in this Agreement, no modification or addition to any provision of this Agreement will be binding unless it is in writing and signed by an authorized representative of both you and Meta.
README.md ADDED
@@ -0,0 +1,723 @@
1
+ ---
2
+ license: other
3
+ extra_gated_fields:
4
+   First Name: text
5
+   Last Name: text
6
+   Date of birth: date_picker
7
+   Country: country
8
+   Affiliation: text
9
+   Job title:
10
+     type: select
11
+     options:
12
+       - Student
13
+       - Research Graduate
14
+       - AI researcher
15
+       - AI developer/engineer
16
+       - Reporter
17
+       - Other
18
+   geo: ip_location
19
+   By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
20
+ extra_gated_description: >-
21
+   The information you provide will be collected, stored, processed and shared in
22
+   accordance with the [Meta Privacy
23
+   Policy](https://www.facebook.com/privacy/policy/).
24
+ extra_gated_button_content: Submit
25
+ language:
26
+ - en
27
+ pipeline_tag: mask-generation
28
+ library_name: transformers
29
+ tags:
30
+ - sam3
31
+ ---
32
+
33
+ **💛💛💛 南無阿彌陀佛 💛** <br/>
34
+ **💛💛💛 Namo Amituofo 💛**
35
+ > 讓我們以慈心善待一切衆生,拒絕肉食、喫素放生,阿彌陀佛 💛 <br/>
36
+ > Let's treat all sentient beings with loving-kindness: refuse meat, eat vegetarian, set captive animals free, and stay away from all suffering. Amituofo 💛 <br/>
37
+ > 如此,我們將獲得無量無邊利益,因爲衆生一體,因果相續。
38
+ > By doing so, we shall gain immeasurable benefits and peace for ourselves and all beings, for all beings are one and cause and effect follow one another.
39
+
40
+ SAM 3 is a unified foundation model for promptable segmentation in images and videos. It can detect, segment, and track objects using text or visual prompts such as points, boxes, and masks. Compared to its predecessor [SAM 2](https://github.com/facebookresearch/sam2), SAM 3 introduces the ability to exhaustively segment all instances of an open-vocabulary concept specified by a short text phrase or exemplars. Unlike prior work, SAM 3 can handle a vastly larger set of open-vocabulary prompts. It achieves 75-80% of human performance on our new [SA-CO benchmark](https://github.com/facebookresearch/sam3/edit/main_readme/README.md#sa-co-dataset), which contains 270K unique concepts, over 50 times more than existing benchmarks.
41
+
42
+ [Hugging Face 🤗 app](https://huggingface.co/spaces/akhaliq/sam3)
43
+
44
+ ### Basic Usage
45
+
46
+ ```python
47
+ import torch
48
+ #################################### For Image ####################################
49
+ from PIL import Image
50
+ from sam3.model_builder import build_sam3_image_model
51
+ from sam3.model.sam3_image_processor import Sam3Processor
52
+ # Load the model
53
+ model = build_sam3_image_model()
54
+ processor = Sam3Processor(model)
55
+ # Load an image
56
+ image = Image.open("<YOUR_IMAGE_PATH.jpg>")
57
+ inference_state = processor.set_image(image)
58
+ # Prompt the model with text
59
+ output = processor.set_text_prompt(state=inference_state, prompt="<YOUR_TEXT_PROMPT>")
60
+
61
+ # Get the masks, bounding boxes, and scores
62
+ masks, boxes, scores = output["masks"], output["boxes"], output["scores"]
63
+
64
+ #################################### For Video ####################################
65
+
66
+ from sam3.model_builder import build_sam3_video_predictor
67
+
68
+ video_predictor = build_sam3_video_predictor()
69
+ video_path = "<YOUR_VIDEO_PATH>" # a JPEG folder or an MP4 video file
70
+ # Start a session
71
+ response = video_predictor.handle_request(
72
+ request=dict(
73
+ type="start_session",
74
+ resource_path=video_path,
75
+ )
76
+ )
77
+ response = video_predictor.handle_request(
78
+ request=dict(
79
+ type="add_prompt",
80
+ session_id=response["session_id"],
81
+ frame_index=0, # Arbitrary frame index
82
+ text="<YOUR_TEXT_PROMPT>",
83
+ )
84
+ )
85
+ output = response["outputs"]
86
+ ```
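+
+ As a quick sanity check on the image output above, you can filter detections by score. A minimal sketch, assuming `masks`, `boxes`, and `scores` are returned as torch tensors (exact shapes and dtypes may differ in the official release):
+
+ ```python
+ # Hypothetical post-filtering of the Basic Usage image output above
+ keep = scores > 0.5                                   # assumed confidence threshold
+ masks, boxes, scores = masks[keep], boxes[keep], scores[keep]
+
+ for box, score in zip(boxes.tolist(), scores.tolist()):
+     print(f"box (xyxy): {box}, score: {score:.2f}")
+ ```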
87
+
88
+ The official code is publicly released in the [sam3 repo](https://github.com/facebookresearch/sam3).
89
+
90
+
91
+ ## Usage with 🤗 Transformers
92
+
93
+ ### SAM3 - Promptable Concept Segmentation (PCS) for Images
94
+
95
+ SAM3 performs Promptable Concept Segmentation (PCS) on images, taking text and/or image exemplars as prompts and returning segmentation masks for **all matching object instances** in the image.
96
+
97
+ #### Text-Only Prompts
98
+
99
+ ```python
100
+ >>> from transformers import Sam3Processor, Sam3Model
101
+ >>> import torch
102
+ >>> from PIL import Image
103
+ >>> import requests
104
+
105
+ >>> device = "cuda" if torch.cuda.is_available() else "cpu"
106
+
107
+ >>> model = Sam3Model.from_pretrained("facebook/sam3").to(device)
108
+ >>> processor = Sam3Processor.from_pretrained("facebook/sam3")
109
+
110
+ >>> # Load image
111
+ >>> image_url = "http://images.cocodataset.org/val2017/000000077595.jpg"
112
+ >>> image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
113
+
114
+ >>> # Segment using text prompt
115
+ >>> inputs = processor(images=image, text="ear", return_tensors="pt").to(device)
116
+
117
+ >>> with torch.no_grad():
118
+ ... outputs = model(**inputs)
119
+
120
+ >>> # Post-process results
121
+ >>> results = processor.post_process_instance_segmentation(
122
+ ... outputs,
123
+ ... threshold=0.5,
124
+ ... mask_threshold=0.5,
125
+ ... target_sizes=inputs.get("original_sizes").tolist()
126
+ ... )[0]
127
+
128
+ >>> print(f"Found {len(results['masks'])} objects")
129
+ >>> # Results contain:
130
+ >>> # - masks: Binary masks resized to original image size
131
+ >>> # - boxes: Bounding boxes in absolute pixel coordinates (xyxy format)
132
+ >>> # - scores: Confidence scores
133
+ ```
134
+
135
+ You can display masks using a simple helper like the following:
136
+
137
+ ```python
138
+ import numpy as np
139
+ import matplotlib
+ from PIL import Image  # used below for Image.fromarray / Image.new
140
+
141
+ def overlay_masks(image, masks):
142
+ image = image.convert("RGBA")
143
+ masks = 255 * masks.cpu().numpy().astype(np.uint8)
144
+
145
+ n_masks = masks.shape[0]
146
+ cmap = matplotlib.colormaps.get_cmap("rainbow").resampled(n_masks)
147
+ colors = [
148
+ tuple(int(c * 255) for c in cmap(i)[:3])
149
+ for i in range(n_masks)
150
+ ]
151
+
152
+ for mask, color in zip(masks, colors):
153
+ mask = Image.fromarray(mask)
154
+ overlay = Image.new("RGBA", image.size, color + (0,))
155
+ alpha = mask.point(lambda v: int(v * 0.5))
156
+ overlay.putalpha(alpha)
157
+ image = Image.alpha_composite(image, overlay)
158
+ return image
159
+ ```
160
+
161
+ Then you can save the resulting composite image or display it in a notebook:
162
+
163
+ ```python
164
+ >>> overlay_masks(image, results["masks"])
165
+ ```
166
+
167
+ #### Single Bounding Box Prompt
168
+
169
+ Segment objects using a bounding box:
170
+
171
+ ```python
172
+ >>> # Box in xyxy format: [x1, y1, x2, y2] in pixel coordinates
173
+ >>> # Example: laptop region
174
+ >>> box_xyxy = [100, 150, 500, 450]
175
+ >>> input_boxes = [[box_xyxy]] # [batch, num_boxes, 4]
176
+ >>> input_boxes_labels = [[1]] # 1 = positive box
177
+
178
+ >>> inputs = processor(
179
+ ... images=image,
180
+ ... input_boxes=input_boxes,
181
+ ... input_boxes_labels=input_boxes_labels,
182
+ ... return_tensors="pt"
183
+ ... ).to(device)
184
+
185
+ >>> with torch.no_grad():
186
+ ... outputs = model(**inputs)
187
+
188
+ >>> # Post-process results
189
+ >>> results = processor.post_process_instance_segmentation(
190
+ ... outputs,
191
+ ... threshold=0.5,
192
+ ... mask_threshold=0.5,
193
+ ... target_sizes=inputs.get("original_sizes").tolist()
194
+ ... )[0]
195
+ ```
196
+
197
+ #### Multiple Box Prompts (Positive and Negative)
198
+
199
+ Use multiple boxes with positive and negative labels to refine the concept:
200
+
201
+ ```python
202
+ >>> # Load kitchen image
203
+ >>> kitchen_url = "http://images.cocodataset.org/val2017/000000136466.jpg"
204
+ >>> kitchen_image = Image.open(requests.get(kitchen_url, stream=True).raw).convert("RGB")
205
+
206
+ >>> # Define two positive boxes (e.g., dial and button on oven)
207
+ >>> # Boxes are in xyxy format [x1, y1, x2, y2] in pixel coordinates
208
+ >>> box1_xyxy = [59, 144, 76, 163] # Dial box
209
+ >>> box2_xyxy = [87, 148, 104, 159] # Button box
210
+ >>> input_boxes = [[box1_xyxy, box2_xyxy]]
211
+ >>> input_boxes_labels = [[1, 1]] # Both positive
212
+
213
+ >>> inputs = processor(
214
+ ... images=kitchen_image,
215
+ ... input_boxes=input_boxes,
216
+ ... input_boxes_labels=input_boxes_labels,
217
+ ... return_tensors="pt"
218
+ ... ).to(device)
219
+
220
+ >>> with torch.no_grad():
221
+ ... outputs = model(**inputs)
222
+
223
+ >>> # Post-process results
224
+ >>> results = processor.post_process_instance_segmentation(
225
+ ... outputs,
226
+ ... threshold=0.5,
227
+ ... mask_threshold=0.5,
228
+ ... target_sizes=inputs.get("original_sizes").tolist()
229
+ ... )[0]
230
+ >>> overlay_masks(kitchen_image, results["masks"])
231
+ ```
232
+
233
+ #### Combined Prompts (Text + Negative Box)
234
+
235
+ Use text prompts with negative visual prompts to refine the concept:
236
+
237
+ ```python
238
+ >>> # Segment "handle" but exclude the oven handle using a negative box
239
+ >>> text = "handle"
240
+ >>> # Negative box covering oven handle area (xyxy): [40, 183, 318, 204]
241
+ >>> oven_handle_box = [40, 183, 318, 204]
242
+ >>> input_boxes = [[oven_handle_box]]
243
+
244
+ >>> inputs = processor(
245
+ ... images=kitchen_image,
246
+ ... text=text,
247
+ ... input_boxes=input_boxes,
248
+ ... input_boxes_labels=[[0]], # 0 = negative (exclude this region)
249
+ ... return_tensors="pt"
250
+ ... ).to(device)
251
+
252
+ >>> with torch.no_grad():
253
+ ... outputs = model(**inputs)
254
+
255
+ >>> # Post-process results
256
+ >>> results = processor.post_process_instance_segmentation(
257
+ ... outputs,
258
+ ... threshold=0.5,
259
+ ... mask_threshold=0.5,
260
+ ... target_sizes=inputs.get("original_sizes").tolist()
261
+ ... )[0]
262
+ >>> # This will segment pot handles but exclude the oven handle
263
+ ```
264
+
265
+ #### Batched Inference with Text Prompts
266
+
267
+ Process multiple images with different text prompts in a single batch:
268
+
269
+ ```python
270
+ >>> cat_url = "http://images.cocodataset.org/val2017/000000077595.jpg"
271
+ >>> kitchen_url = "http://images.cocodataset.org/val2017/000000136466.jpg"
272
+ >>> images = [
273
+ ... Image.open(requests.get(cat_url, stream=True).raw).convert("RGB"),
274
+ ... Image.open(requests.get(kitchen_url, stream=True).raw).convert("RGB")
275
+ ... ]
276
+
277
+ >>> text_prompts = ["ear", "dial"]
278
+
279
+ >>> inputs = processor(images=images, text=text_prompts, return_tensors="pt").to(device)
280
+
281
+ >>> with torch.no_grad():
282
+ ... outputs = model(**inputs)
283
+
284
+ >>> # Post-process results for both images
285
+ >>> results = processor.post_process_instance_segmentation(
286
+ ... outputs,
287
+ ... threshold=0.5,
288
+ ... mask_threshold=0.5,
289
+ ... target_sizes=inputs.get("original_sizes").tolist()
290
+ ... )
291
+
292
+ >>> print(f"Image 1: {len(results[0]['masks'])} objects found")
293
+ >>> print(f"Image 2: {len(results[1]['masks'])} objects found")
294
+ ```
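+
+ Each entry of `results` has the same structure as in the single-image case, so you can reuse the `overlay_masks` helper defined earlier to visualize every image in the batch:
+
+ ```python
+ >>> # Overlay the predicted masks on each image of the batch
+ >>> composites = [
+ ...     overlay_masks(img, res["masks"])
+ ...     for img, res in zip(images, results)
+ ... ]
+ >>> composites[0]  # display (or .save()) the first composite
+ ```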
295
+
296
+ #### Batched Mixed Prompts
297
+
298
+ Use different prompt types for different images in the same batch:
299
+
300
+ ```python
301
+ >>> # Image 1: text prompt "laptop"
302
+ >>> # Image 2: visual prompt (dial box)
303
+ >>> box2_xyxy = [59, 144, 76, 163]
304
+
305
+ >>> inputs = processor(
306
+ ... images=images,
307
+ ... text=["laptop", None], # Only first image has text
308
+ ... input_boxes=[None, [box2_xyxy]], # Only second image has box
309
+ ... input_boxes_labels=[None, [1]], # Positive box for second image
310
+ ... return_tensors="pt"
311
+ ... ).to(device)
312
+
313
+ >>> with torch.no_grad():
314
+ ... outputs = model(**inputs)
315
+
316
+ >>> # Post-process results for both images
317
+ >>> results = processor.post_process_instance_segmentation(
318
+ ... outputs,
319
+ ... threshold=0.5,
320
+ ... mask_threshold=0.5,
321
+ ... target_sizes=inputs.get("original_sizes").tolist()
322
+ ... )
323
+ >>> # Both images processed in single forward pass
324
+ ```
325
+
326
+ #### Semantic Segmentation Output
327
+
328
+ SAM3 also provides semantic segmentation alongside instance masks:
329
+
330
+ ```python
331
+ >>> inputs = processor(images=image, text="ear", return_tensors="pt").to(device)
332
+
333
+ >>> with torch.no_grad():
334
+ ... outputs = model(**inputs)
335
+
336
+ >>> # Instance segmentation masks
337
+ >>> instance_masks = torch.sigmoid(outputs.pred_masks) # [batch, num_queries, H, W]
338
+
339
+ >>> # Semantic segmentation (single channel)
340
+ >>> semantic_seg = outputs.semantic_seg # [batch, 1, H, W]
341
+
342
+ >>> print(f"Instance masks: {instance_masks.shape}")
343
+ >>> print(f"Semantic segmentation: {semantic_seg.shape}")
344
+ ```
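+
+ If you need a single foreground mask for the prompted concept, a minimal sketch is to binarize the semantic map and reuse the `overlay_masks` helper above. This assumes `semantic_seg` contains logits and may need adapting to the exact output format:
+
+ ```python
+ >>> import torch.nn.functional as F
+
+ >>> # Resize the semantic map to the original image size and threshold it
+ >>> h, w = inputs["original_sizes"][0].tolist()
+ >>> semantic_prob = torch.sigmoid(semantic_seg)  # assumed to be logits
+ >>> semantic_prob = F.interpolate(semantic_prob, size=(h, w), mode="bilinear")
+ >>> binary_mask = semantic_prob[0] > 0.5  # [1, H, W] boolean mask
+ >>> overlay_masks(image, binary_mask)
+ ```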
345
+
346
+ ### SAM3 Video - Promptable Concept Segmentation (PCS) for Videos
347
+
348
+ SAM3 Video performs Promptable Concept Segmentation (PCS) on videos, taking text as prompts and detecting and tracking **all matching object instances** across video frames.
349
+
350
+ #### Pre-loaded Video Inference
351
+
352
+ Process a video with all frames already available using text prompts:
353
+
354
+ ```python
355
+ >>> from transformers import Sam3VideoModel, Sam3VideoProcessor
356
+ >>> from accelerate import Accelerator
357
+ >>> import torch
358
+
359
+ >>> device = Accelerator().device
360
+ >>> model = Sam3VideoModel.from_pretrained("facebook/sam3").to(device, dtype=torch.bfloat16)
361
+ >>> processor = Sam3VideoProcessor.from_pretrained("facebook/sam3")
362
+
363
+ >>> # Load video frames
364
+ >>> from transformers.video_utils import load_video
365
+ >>> video_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/bedroom.mp4"
366
+ >>> video_frames, _ = load_video(video_url)
367
+
368
+ >>> # Initialize video inference session
369
+ >>> inference_session = processor.init_video_session(
370
+ ... video=video_frames,
371
+ ... inference_device=device,
372
+ ... processing_device="cpu",
373
+ ... video_storage_device="cpu",
374
+ ... dtype=torch.bfloat16,
375
+ ... )
376
+
377
+ >>> # Add text prompt to detect and track objects
378
+ >>> text = "person"
379
+ >>> inference_session = processor.add_text_prompt(
380
+ ... inference_session=inference_session,
381
+ ... text=text,
382
+ ... )
383
+
384
+ >>> # Process all frames in the video
385
+ >>> outputs_per_frame = {}
386
+ >>> for model_outputs in model.propagate_in_video_iterator(
387
+ ... inference_session=inference_session, max_frame_num_to_track=50
388
+ ... ):
389
+ ... processed_outputs = processor.postprocess_outputs(inference_session, model_outputs)
390
+ ... outputs_per_frame[model_outputs.frame_idx] = processed_outputs
391
+
392
+ >>> print(f"Processed {len(outputs_per_frame)} frames")
393
+ Processed 51 frames
394
+
395
+ >>> # Access results for a specific frame
396
+ >>> frame_0_outputs = outputs_per_frame[0]
397
+ >>> print(f"Detected {len(frame_0_outputs['object_ids'])} objects")
398
+ >>> print(f"Object IDs: {frame_0_outputs['object_ids'].tolist()}")
399
+ >>> print(f"Scores: {frame_0_outputs['scores'].tolist()}")
400
+ >>> print(f"Boxes shape (XYXY format, absolute coordinates): {frame_0_outputs['boxes'].shape}")
401
+ >>> print(f"Masks shape: {frame_0_outputs['masks'].shape}")
402
+ ```
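+
+ Because object identities persist across frames, you can aggregate the per-frame outputs, for example to count how many distinct objects were tracked:
+
+ ```python
+ >>> # Collect all object IDs seen across the processed frames
+ >>> all_ids = set()
+ >>> for frame_outputs in outputs_per_frame.values():
+ ...     all_ids.update(frame_outputs["object_ids"].tolist())
+ >>> print(f"Tracked {len(all_ids)} unique objects across {len(outputs_per_frame)} frames")
+ ```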
403
+
404
+ #### Streaming Video Inference
405
+
406
+ For real-time applications, the Transformers implementation of SAM3 Video supports processing video frames as they arrive:
407
+
408
+ ```python
409
+ >>> # Initialize session for streaming
410
+ >>> streaming_inference_session = processor.init_video_session(
411
+ ... inference_device=device,
412
+ ... processing_device="cpu",
413
+ ... video_storage_device="cpu",
414
+ ... dtype=torch.bfloat16,
415
+ ... )
416
+
417
+ >>> # Add text prompt
418
+ >>> text = "person"
419
+ >>> streaming_inference_session = processor.add_text_prompt(
420
+ ... inference_session=streaming_inference_session,
421
+ ... text=text,
422
+ ... )
423
+
424
+ >>> # Process frames one by one (streaming mode)
425
+ >>> streaming_outputs_per_frame = {}
426
+ >>> for frame_idx, frame in enumerate(video_frames[:50]): # Process first 50 frames
427
+ ... # First, process the frame using the processor
428
+ ... inputs = processor(images=frame, device=device, return_tensors="pt")
429
+ ...
430
+ ... # Process frame using streaming inference - pass the processed pixel_values
431
+ ... model_outputs = model(
432
+ ... inference_session=streaming_inference_session,
433
+ ... frame=inputs.pixel_values[0], # Provide processed frame - this enables streaming mode
434
+ ... reverse=False,
435
+ ... )
436
+ ...
437
+ ... # Post-process outputs with original_sizes for proper resolution handling
438
+ ... processed_outputs = processor.postprocess_outputs(
439
+ ... streaming_inference_session,
440
+ ... model_outputs,
441
+ ... original_sizes=inputs.original_sizes, # Required for streaming inference
442
+ ... )
443
+ ... streaming_outputs_per_frame[frame_idx] = processed_outputs
444
+ ...
445
+ ... if (frame_idx + 1) % 10 == 0:
446
+ ... print(f"Processed {frame_idx + 1} frames...")
447
+
448
+ >>> print(f"✓ Streaming inference complete! Processed {len(streaming_outputs_per_frame)} frames")
449
+ ✓ Streaming inference complete! Processed 50 frames
450
+
451
+ >>> # Access results
452
+ >>> frame_0_outputs = streaming_outputs_per_frame[0]
453
+ >>> print(f"Detected {len(frame_0_outputs['object_ids'])} objects in first frame")
454
+ >>> print(f"Boxes are in XYXY format (absolute pixel coordinates): {frame_0_outputs['boxes'].shape}")
455
+ >>> print(f"Masks are at original video resolution: {frame_0_outputs['masks'].shape}")
456
+ ```
457
+
458
+ <div class="warning">
459
+ ⚠️ **Note on Streaming Inference Quality**: Streaming inference disables hotstart heuristics that remove unmatched and duplicate objects, as these require access to future frames to make informed decisions. This may result in more false positive detections and duplicate object tracks compared to pre-loaded video inference. For best results, use pre-loaded video inference when all frames are available.
460
+ </div>
461
+
462
+ ### SAM3 Tracker - Promptable Visual Segmentation (PVS) for Images
463
+
464
+ Sam3Tracker performs Promptable Visual Segmentation (PVS) on images, taking interactive visual prompts (points, boxes, masks) to segment a **specific object instance** per prompt. It is an updated version of SAM2 that maintains the same API while providing improved performance, making it a drop-in replacement for SAM2 workflows.
465
+
466
+ #### Automatic Mask Generation with Pipeline
467
+
468
+ ```python
469
+ >>> from transformers import pipeline
470
+
471
+ >>> generator = pipeline("mask-generation", model="facebook/sam3", device=0)
472
+ >>> image_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/truck.jpg"
473
+ >>> outputs = generator(image_url, points_per_batch=64)
474
+
475
+ >>> len(outputs["masks"]) # Number of masks generated
476
+ ```
477
+
478
+ #### Basic Image Segmentation
479
+
480
+ ##### Single Point Click
481
+
482
+ ```python
483
+ >>> from transformers import Sam3TrackerProcessor, Sam3TrackerModel
484
+ >>> from accelerate import Accelerator
485
+ >>> import torch
486
+ >>> from PIL import Image
487
+ >>> import requests
488
+
489
+ >>> device = Accelerator().device
490
+
491
+ >>> model = Sam3TrackerModel.from_pretrained("facebook/sam3").to(device)
492
+ >>> processor = Sam3TrackerProcessor.from_pretrained("facebook/sam3")
493
+
494
+ >>> image_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/truck.jpg"
495
+ >>> raw_image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
496
+
497
+ >>> input_points = [[[[500, 375]]]] # Single point click, 4 dimensions (image_dim, object_dim, point_per_object_dim, coordinates)
498
+ >>> input_labels = [[[1]]] # 1 for positive click, 0 for negative click, 3 dimensions (image_dim, object_dim, point_label)
499
+
500
+ >>> inputs = processor(images=raw_image, input_points=input_points, input_labels=input_labels, return_tensors="pt").to(model.device)
501
+
502
+ >>> with torch.no_grad():
503
+ ... outputs = model(**inputs)
504
+
505
+ >>> masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])[0]
506
+
507
+ >>> # The model outputs multiple mask predictions ranked by quality score
508
+ >>> print(f"Generated {masks.shape[1]} masks with shape {masks.shape}")
509
+ ```
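+
+ Since Sam3Tracker mirrors the SAM2 API, a minimal sketch for keeping only the top-ranked prediction could look like the following. This assumes the output exposes per-mask `iou_scores` as in SAM2; check the actual output fields of your installed version:
+
+ ```python
+ >>> # Pick the mask with the highest predicted quality score (field name assumed from the SAM2 API)
+ >>> best_idx = outputs.iou_scores[0, 0].argmax().item()
+ >>> best_mask = masks[0, best_idx]
+ >>> print(f"Best mask index: {best_idx}, mask shape: {tuple(best_mask.shape)}")
+ ```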
510
+
511
+ ##### Multiple Points for Refinement
512
+
513
+ ```python
514
+ >>> # Add more points to refine the mask (both positive here; use label 0 for a negative click)
515
+ >>> input_points = [[[[500, 375], [1125, 625]]]] # Multiple points for refinement
516
+ >>> input_labels = [[[1, 1]]] # Both positive clicks
517
+
518
+ >>> inputs = processor(images=raw_image, input_points=input_points, input_labels=input_labels, return_tensors="pt").to(device)
519
+
520
+ >>> with torch.no_grad():
521
+ ... outputs = model(**inputs)
522
+
523
+ >>> masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])[0]
524
+ ```
525
+
526
+ ##### Bounding Box Input
527
+
528
+ ```python
529
+ >>> # Define bounding box as [x_min, y_min, x_max, y_max]
530
+ >>> input_boxes = [[[75, 275, 1725, 850]]]
531
+
532
+ >>> inputs = processor(images=raw_image, input_boxes=input_boxes, return_tensors="pt").to(device)
533
+
534
+ >>> with torch.no_grad():
535
+ ... outputs = model(**inputs)
536
+
537
+ >>> masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])[0]
538
+ ```
539
+
540
+ ##### Multiple Objects Segmentation
541
+
542
+ ```python
543
+ >>> # Define points for two different objects
544
+ >>> input_points = [[[[500, 375]], [[650, 750]]]] # Points for two objects in same image
545
+ >>> input_labels = [[[1], [1]]] # Positive clicks for both objects
546
+
547
+ >>> inputs = processor(images=raw_image, input_points=input_points, input_labels=input_labels, return_tensors="pt").to(model.device)
548
+
549
+ >>> with torch.no_grad():
550
+ ... outputs = model(**inputs, multimask_output=False)
551
+
552
+ >>> # Each object gets its own mask
553
+ >>> masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])[0]
554
+ >>> print(f"Generated masks for {masks.shape[0]} objects")
555
+ Generated masks for 2 objects
556
+ ```
557
+
558
+ #### Batch Inference
559
+
560
+
561
+ ```python
562
+ >>> # Load multiple images
563
+ >>> image_urls = [
564
+ ... "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/truck.jpg",
565
+ ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dog-sam.png"
566
+ ... ]
567
+ >>> raw_images = [Image.open(requests.get(url, stream=True).raw).convert("RGB") for url in image_urls]
568
+
569
+ >>> # Single point per image
570
+ >>> input_points = [[[[500, 375]]], [[[770, 200]]]] # One point for each image
571
+ >>> input_labels = [[[1]], [[1]]] # Positive clicks for both images
572
+
573
+ >>> inputs = processor(images=raw_images, input_points=input_points, input_labels=input_labels, return_tensors="pt").to(model.device)
574
+
575
+ >>> with torch.no_grad():
576
+ ... outputs = model(**inputs, multimask_output=False)
577
+
578
+ >>> # Post-process masks for each image
579
+ >>> all_masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])
580
+ >>> print(f"Processed {len(all_masks)} images, each with {all_masks[0].shape[0]} objects")
581
+ ```
582
+
583
+ ### SAM3 Tracker Video - Promptable Visual Segmentation (PVS) for Videos
584
+
585
+ Sam3TrackerVideo performs Promptable Visual Segmentation (PVS) on videos, taking interactive visual prompts (points, boxes, masks) to track a **specific object instance** per prompt across video frames. It is an updated version of SAM2 Video that maintains the same API while providing improved performance, making it a drop-in replacement for SAM2 Video workflows.
586
+
587
+ #### Basic Video Tracking
588
+
589
+ ```python
590
+ >>> from transformers import Sam3TrackerVideoModel, Sam3TrackerVideoProcessor
591
+ >>> from accelerate import Accelerator
592
+ >>> import torch
593
+
594
+ >>> device = Accelerator().device
595
+ >>> model = Sam3TrackerVideoModel.from_pretrained("facebook/sam3").to(device, dtype=torch.bfloat16)
596
+ >>> processor = Sam3TrackerVideoProcessor.from_pretrained("facebook/sam3")
597
+
598
+ >>> # Load video frames
599
+ >>> from transformers.video_utils import load_video
600
+ >>> video_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/bedroom.mp4"
601
+ >>> video_frames, _ = load_video(video_url)
602
+
603
+ >>> # Initialize video inference session
604
+ >>> inference_session = processor.init_video_session(
605
+ ... video=video_frames,
606
+ ... inference_device=device,
607
+ ... dtype=torch.bfloat16,
608
+ ... )
609
+
610
+ >>> # Add click on first frame to select object
611
+ >>> ann_frame_idx = 0
612
+ >>> ann_obj_id = 1
613
+ >>> points = [[[[210, 350]]]]
614
+ >>> labels = [[[1]]]
615
+
616
+ >>> processor.add_inputs_to_inference_session(
617
+ ... inference_session=inference_session,
618
+ ... frame_idx=ann_frame_idx,
619
+ ... obj_ids=ann_obj_id,
620
+ ... input_points=points,
621
+ ... input_labels=labels,
622
+ ... )
623
+
624
+ >>> # Segment the object on the first frame (optional, you can also propagate the masks through the video directly)
625
+ >>> outputs = model(
626
+ ... inference_session=inference_session,
627
+ ... frame_idx=ann_frame_idx,
628
+ ... )
629
+ >>> video_res_masks = processor.post_process_masks(
630
+ ... [outputs.pred_masks], original_sizes=[[inference_session.video_height, inference_session.video_width]], binarize=False
631
+ ... )[0]
632
+ >>> print(f"Segmentation shape: {video_res_masks.shape}")
633
+ Segmentation shape: torch.Size([1, 1, 480, 854])
634
+
635
+ >>> # Propagate through the entire video
636
+ >>> video_segments = {}
637
+ >>> for sam3_tracker_video_output in model.propagate_in_video_iterator(inference_session):
638
+ ... video_res_masks = processor.post_process_masks(
639
+ ... [sam3_tracker_video_output.pred_masks], original_sizes=[[inference_session.video_height, inference_session.video_width]], binarize=False
640
+ ... )[0]
641
+ ... video_segments[sam3_tracker_video_output.frame_idx] = video_res_masks
642
+
643
+ >>> print(f"Tracked object through {len(video_segments)} frames")
644
+ Tracked object through 180 frames
645
+ ```
646
+
647
+ #### Multi-Object Video Tracking
648
+
649
+ Track multiple objects simultaneously across video frames:
650
+
651
+ ```python
652
+ >>> # Reset for new tracking session
653
+ >>> inference_session.reset_inference_session()
654
+
655
+ >>> # Add multiple objects on the first frame
656
+ >>> ann_frame_idx = 0
657
+ >>> obj_ids = [2, 3]
658
+ >>> input_points = [[[[200, 300]], [[400, 150]]]] # Points for two objects (batched)
659
+ >>> input_labels = [[[1], [1]]]
660
+
661
+ >>> processor.add_inputs_to_inference_session(
662
+ ... inference_session=inference_session,
663
+ ... frame_idx=ann_frame_idx,
664
+ ... obj_ids=obj_ids,
665
+ ... input_points=input_points,
666
+ ... input_labels=input_labels,
667
+ ... )
668
+
669
+ >>> # Get masks for both objects on first frame (optional, you can also propagate the masks through the video directly)
670
+ >>> outputs = model(
671
+ ... inference_session=inference_session,
672
+ ... frame_idx=ann_frame_idx,
673
+ ... )
674
+
675
+ >>> # Propagate both objects through video
676
+ >>> video_segments = {}
677
+ >>> for sam3_tracker_video_output in model.propagate_in_video_iterator(inference_session):
678
+ ... video_res_masks = processor.post_process_masks(
679
+ ... [sam3_tracker_video_output.pred_masks], original_sizes=[[inference_session.video_height, inference_session.video_width]], binarize=False
680
+ ... )[0]
681
+ ... video_segments[sam3_tracker_video_output.frame_idx] = {
682
+ ... obj_id: video_res_masks[i]
683
+ ... for i, obj_id in enumerate(inference_session.obj_ids)
684
+ ... }
685
+
686
+ >>> print(f"Tracked {len(inference_session.obj_ids)} objects through {len(video_segments)} frames")
687
+ Tracked 2 objects through 180 frames
688
+ ```
689
+
690
+ #### Streaming Video Inference
691
+
692
+ For real-time applications, Sam3TrackerVideo supports processing video frames as they arrive:
693
+
694
+ ```python
695
+ >>> # Initialize session for streaming
696
+ >>> inference_session = processor.init_video_session(
697
+ ... inference_device=device,
698
+ ... dtype=torch.bfloat16,
699
+ ... )
700
+
701
+ >>> # Process frames one by one
702
+ >>> for frame_idx, frame in enumerate(video_frames[:10]): # Process first 10 frames
703
+ ... inputs = processor(images=frame, device=device, return_tensors="pt")
704
+ ...
705
+ ... if frame_idx == 0:
706
+ ... # Add point input on first frame
707
+ ... processor.add_inputs_to_inference_session(
708
+ ... inference_session=inference_session,
709
+ ... frame_idx=0,
710
+ ... obj_ids=1,
711
+ ... input_points=[[[[210, 350], [250, 220]]]],
712
+ ... input_labels=[[[1, 1]]],
713
+ ... original_size=inputs.original_sizes[0], # need to be provided when using streaming video inference
714
+ ... )
715
+ ...
716
+ ... # Process current frame
717
+ ... sam3_tracker_video_output = model(inference_session=inference_session, frame=inputs.pixel_values[0])
718
+ ...
719
+ ... video_res_masks = processor.post_process_masks(
720
+ ... [sam3_tracker_video_output.pred_masks], original_sizes=inputs.original_sizes, binarize=False
721
+ ... )[0]
722
+ ... print(f"Frame {frame_idx}: mask shape {video_res_masks.shape}")
723
+ ```
config.json ADDED
@@ -0,0 +1,896 @@
1
+ {
2
+ "architectures": [
3
+ "Sam3VideoModel"
4
+ ],
5
+ "assoc_iou_thresh": 0.1,
6
+ "decrease_trk_keep_alive_for_empty_masklets": false,
7
+ "det_nms_thresh": 0.1,
8
+ "detector_config": {
9
+ "detr_decoder_config": {
10
+ "_name_or_path": "",
11
+ "add_cross_attention": false,
12
+ "architectures": null,
13
+ "bad_words_ids": null,
14
+ "begin_suppress_tokens": null,
15
+ "bos_token_id": null,
16
+ "box_rpb_mode": "log",
17
+ "chunk_size_feed_forward": 0,
18
+ "cross_attention_hidden_size": null,
19
+ "decoder_start_token_id": null,
20
+ "diversity_penalty": 0.0,
21
+ "do_sample": false,
22
+ "dropout": 0.1,
23
+ "dtype": null,
24
+ "early_stopping": false,
25
+ "encoder_no_repeat_ngram_size": 0,
26
+ "eos_token_id": null,
27
+ "exponential_decay_length_penalty": null,
28
+ "finetuning_task": null,
29
+ "forced_bos_token_id": null,
30
+ "forced_eos_token_id": null,
31
+ "hidden_act": "relu",
32
+ "hidden_dropout": 0.0,
33
+ "hidden_size": 256,
34
+ "id2label": {
35
+ "0": "LABEL_0",
36
+ "1": "LABEL_1"
37
+ },
38
+ "initializer_range": 0.02,
39
+ "intermediate_size": 2048,
40
+ "is_decoder": false,
41
+ "is_encoder_decoder": false,
42
+ "label2id": {
43
+ "LABEL_0": 0,
44
+ "LABEL_1": 1
45
+ },
46
+ "layer_norm_eps": 1e-06,
47
+ "length_penalty": 1.0,
48
+ "max_length": 20,
49
+ "min_length": 0,
50
+ "model_type": "sam3_detr_decoder",
51
+ "no_repeat_ngram_size": 0,
52
+ "num_attention_heads": 8,
53
+ "num_beam_groups": 1,
54
+ "num_beams": 1,
55
+ "num_layers": 6,
56
+ "num_queries": 200,
57
+ "num_return_sequences": 1,
58
+ "output_attentions": false,
59
+ "output_hidden_states": false,
60
+ "output_scores": false,
61
+ "pad_token_id": null,
62
+ "prefix": null,
63
+ "problem_type": null,
64
+ "remove_invalid_values": false,
65
+ "repetition_penalty": 1.0,
66
+ "return_dict": true,
67
+ "return_dict_in_generate": false,
68
+ "sep_token_id": null,
69
+ "suppress_tokens": null,
70
+ "task_specific_params": null,
71
+ "temperature": 1.0,
72
+ "tie_encoder_decoder": false,
73
+ "tie_word_embeddings": true,
74
+ "tokenizer_class": null,
75
+ "top_k": 50,
76
+ "top_p": 1.0,
77
+ "typical_p": 1.0,
78
+ "use_presence_token": true
79
+ },
80
+ "detr_encoder_config": {
81
+ "_name_or_path": "",
82
+ "add_cross_attention": false,
83
+ "architectures": null,
84
+ "bad_words_ids": null,
85
+ "begin_suppress_tokens": null,
86
+ "bos_token_id": null,
87
+ "chunk_size_feed_forward": 0,
88
+ "cross_attention_hidden_size": null,
89
+ "decoder_start_token_id": null,
90
+ "diversity_penalty": 0.0,
91
+ "do_sample": false,
92
+ "dropout": 0.1,
93
+ "dtype": null,
94
+ "early_stopping": false,
95
+ "encoder_no_repeat_ngram_size": 0,
96
+ "eos_token_id": null,
97
+ "exponential_decay_length_penalty": null,
98
+ "finetuning_task": null,
99
+ "forced_bos_token_id": null,
100
+ "forced_eos_token_id": null,
101
+ "hidden_act": "relu",
102
+ "hidden_dropout": 0.0,
103
+ "hidden_size": 256,
104
+ "id2label": {
105
+ "0": "LABEL_0",
106
+ "1": "LABEL_1"
107
+ },
108
+ "initializer_range": 0.02,
109
+ "intermediate_size": 2048,
110
+ "is_decoder": false,
111
+ "is_encoder_decoder": false,
112
+ "label2id": {
113
+ "LABEL_0": 0,
114
+ "LABEL_1": 1
115
+ },
116
+ "layer_norm_eps": 1e-06,
117
+ "length_penalty": 1.0,
118
+ "max_length": 20,
119
+ "min_length": 0,
120
+ "model_type": "sam3_detr_encoder",
121
+ "no_repeat_ngram_size": 0,
122
+ "num_attention_heads": 8,
123
+ "num_beam_groups": 1,
124
+ "num_beams": 1,
125
+ "num_layers": 6,
126
+ "num_return_sequences": 1,
127
+ "output_attentions": false,
128
+ "output_hidden_states": false,
129
+ "output_scores": false,
130
+ "pad_token_id": null,
131
+ "prefix": null,
132
+ "problem_type": null,
133
+ "remove_invalid_values": false,
134
+ "repetition_penalty": 1.0,
135
+ "return_dict": true,
136
+ "return_dict_in_generate": false,
137
+ "sep_token_id": null,
138
+ "suppress_tokens": null,
139
+ "task_specific_params": null,
140
+ "temperature": 1.0,
141
+ "tie_encoder_decoder": false,
142
+ "tie_word_embeddings": true,
143
+ "tokenizer_class": null,
144
+ "top_k": 50,
145
+ "top_p": 1.0,
146
+ "typical_p": 1.0
147
+ },
148
+ "geometry_encoder_config": {
149
+ "_name_or_path": "",
150
+ "add_cross_attention": false,
151
+ "architectures": null,
152
+ "bad_words_ids": null,
153
+ "begin_suppress_tokens": null,
154
+ "bos_token_id": null,
155
+ "chunk_size_feed_forward": 0,
156
+ "cross_attention_hidden_size": null,
157
+ "decoder_start_token_id": null,
158
+ "diversity_penalty": 0.0,
159
+ "do_sample": false,
160
+ "dropout": 0.1,
161
+ "dtype": null,
162
+ "early_stopping": false,
163
+ "encoder_no_repeat_ngram_size": 0,
164
+ "eos_token_id": null,
165
+ "exponential_decay_length_penalty": null,
166
+ "finetuning_task": null,
167
+ "forced_bos_token_id": null,
168
+ "forced_eos_token_id": null,
169
+ "hidden_act": "relu",
170
+ "hidden_dropout": 0.0,
171
+ "hidden_size": 256,
172
+ "id2label": {
173
+ "0": "LABEL_0",
174
+ "1": "LABEL_1"
175
+ },
176
+ "initializer_range": 0.02,
177
+ "intermediate_size": 2048,
178
+ "is_decoder": false,
179
+ "is_encoder_decoder": false,
180
+ "label2id": {
181
+ "LABEL_0": 0,
182
+ "LABEL_1": 1
183
+ },
184
+ "layer_norm_eps": 1e-06,
185
+ "length_penalty": 1.0,
186
+ "max_length": 20,
187
+ "min_length": 0,
188
+ "model_type": "sam3_geometry_encoder",
189
+ "no_repeat_ngram_size": 0,
190
+ "num_attention_heads": 8,
191
+ "num_beam_groups": 1,
192
+ "num_beams": 1,
193
+ "num_layers": 3,
194
+ "num_return_sequences": 1,
195
+ "output_attentions": false,
196
+ "output_hidden_states": false,
197
+ "output_scores": false,
198
+ "pad_token_id": null,
199
+ "prefix": null,
200
+ "problem_type": null,
201
+ "remove_invalid_values": false,
202
+ "repetition_penalty": 1.0,
203
+ "return_dict": true,
204
+ "return_dict_in_generate": false,
205
+ "roi_size": 7,
206
+ "sep_token_id": null,
207
+ "suppress_tokens": null,
208
+ "task_specific_params": null,
209
+ "temperature": 1.0,
210
+ "tie_encoder_decoder": false,
211
+ "tie_word_embeddings": true,
212
+ "tokenizer_class": null,
213
+ "top_k": 50,
214
+ "top_p": 1.0,
215
+ "typical_p": 1.0
216
+ },
217
+ "initializer_range": 0.02,
218
+ "mask_decoder_config": {
219
+ "_name_or_path": "",
220
+ "add_cross_attention": false,
221
+ "architectures": null,
222
+ "bad_words_ids": null,
223
+ "begin_suppress_tokens": null,
224
+ "bos_token_id": null,
225
+ "chunk_size_feed_forward": 0,
226
+ "cross_attention_hidden_size": null,
227
+ "decoder_start_token_id": null,
228
+ "diversity_penalty": 0.0,
229
+ "do_sample": false,
230
+ "dropout": 0.0,
231
+ "dtype": null,
232
+ "early_stopping": false,
233
+ "encoder_no_repeat_ngram_size": 0,
234
+ "eos_token_id": null,
235
+ "exponential_decay_length_penalty": null,
236
+ "finetuning_task": null,
237
+ "forced_bos_token_id": null,
238
+ "forced_eos_token_id": null,
239
+ "hidden_size": 256,
240
+ "id2label": {
241
+ "0": "LABEL_0",
242
+ "1": "LABEL_1"
243
+ },
244
+ "initializer_range": 0.02,
245
+ "is_decoder": false,
246
+ "is_encoder_decoder": false,
247
+ "label2id": {
248
+ "LABEL_0": 0,
249
+ "LABEL_1": 1
250
+ },
251
+ "layer_norm_eps": 1e-06,
252
+ "length_penalty": 1.0,
253
+ "max_length": 20,
254
+ "min_length": 0,
255
+ "model_type": "sam3_mask_decoder",
256
+ "no_repeat_ngram_size": 0,
257
+ "num_attention_heads": 8,
258
+ "num_beam_groups": 1,
259
+ "num_beams": 1,
260
+ "num_return_sequences": 1,
261
+ "num_upsampling_stages": 3,
262
+ "output_attentions": false,
263
+ "output_hidden_states": false,
264
+ "output_scores": false,
265
+ "pad_token_id": null,
266
+ "prefix": null,
267
+ "problem_type": null,
268
+ "remove_invalid_values": false,
269
+ "repetition_penalty": 1.0,
270
+ "return_dict": true,
271
+ "return_dict_in_generate": false,
272
+ "sep_token_id": null,
273
+ "suppress_tokens": null,
274
+ "task_specific_params": null,
275
+ "temperature": 1.0,
276
+ "tie_encoder_decoder": false,
277
+ "tie_word_embeddings": true,
278
+ "tokenizer_class": null,
279
+ "top_k": 50,
280
+ "top_p": 1.0,
281
+ "typical_p": 1.0
282
+ },
283
+ "model_type": "sam3",
284
+ "text_config": {
285
+ "_name_or_path": "",
286
+ "add_cross_attention": false,
287
+ "architectures": null,
288
+ "attention_dropout": 0.0,
289
+ "bad_words_ids": null,
290
+ "begin_suppress_tokens": null,
291
+ "bos_token_id": 49406,
292
+ "chunk_size_feed_forward": 0,
293
+ "cross_attention_hidden_size": null,
294
+ "decoder_start_token_id": null,
295
+ "diversity_penalty": 0.0,
296
+ "do_sample": false,
297
+ "dtype": null,
298
+ "early_stopping": false,
299
+ "encoder_no_repeat_ngram_size": 0,
300
+ "eos_token_id": 49407,
301
+ "exponential_decay_length_penalty": null,
302
+ "finetuning_task": null,
303
+ "forced_bos_token_id": null,
304
+ "forced_eos_token_id": null,
305
+ "hidden_act": "gelu",
306
+ "hidden_size": 1024,
307
+ "id2label": {
308
+ "0": "LABEL_0",
309
+ "1": "LABEL_1"
310
+ },
311
+ "initializer_factor": 1.0,
312
+ "initializer_range": 0.02,
313
+ "intermediate_size": 4096,
314
+ "is_decoder": false,
315
+ "is_encoder_decoder": false,
316
+ "label2id": {
317
+ "LABEL_0": 0,
318
+ "LABEL_1": 1
319
+ },
320
+ "layer_norm_eps": 1e-05,
321
+ "length_penalty": 1.0,
322
+ "max_length": 20,
323
+ "max_position_embeddings": 32,
324
+ "min_length": 0,
325
+ "model_type": "clip_text_model",
326
+ "no_repeat_ngram_size": 0,
327
+ "num_attention_heads": 16,
328
+ "num_beam_groups": 1,
329
+ "num_beams": 1,
330
+ "num_hidden_layers": 24,
331
+ "num_return_sequences": 1,
332
+ "output_attentions": false,
333
+ "output_hidden_states": false,
334
+ "output_scores": false,
335
+ "pad_token_id": 1,
336
+ "prefix": null,
337
+ "problem_type": null,
338
+ "projection_dim": 512,
339
+ "remove_invalid_values": false,
340
+ "repetition_penalty": 1.0,
341
+ "return_dict": true,
342
+ "return_dict_in_generate": false,
343
+ "sep_token_id": null,
344
+ "suppress_tokens": null,
345
+ "task_specific_params": null,
346
+ "temperature": 1.0,
347
+ "tie_encoder_decoder": false,
348
+ "tie_word_embeddings": true,
349
+ "tokenizer_class": null,
350
+ "top_k": 50,
351
+ "top_p": 1.0,
352
+ "typical_p": 1.0,
353
+ "vocab_size": 49408
354
+ },
355
+ "vision_config": {
356
+ "_name_or_path": "",
357
+ "add_cross_attention": false,
358
+ "architectures": null,
359
+ "backbone_config": {
360
+ "_name_or_path": "",
361
+ "add_cross_attention": false,
362
+ "architectures": null,
363
+ "attention_dropout": 0.0,
364
+ "bad_words_ids": null,
365
+ "begin_suppress_tokens": null,
366
+ "bos_token_id": null,
367
+ "chunk_size_feed_forward": 0,
368
+ "cross_attention_hidden_size": null,
369
+ "decoder_start_token_id": null,
370
+ "diversity_penalty": 0.0,
371
+ "do_sample": false,
372
+ "dtype": null,
373
+ "early_stopping": false,
374
+ "encoder_no_repeat_ngram_size": 0,
375
+ "eos_token_id": null,
376
+ "exponential_decay_length_penalty": null,
377
+ "finetuning_task": null,
378
+ "forced_bos_token_id": null,
379
+ "forced_eos_token_id": null,
380
+ "global_attn_indexes": [
381
+ 7,
382
+ 15,
383
+ 23,
384
+ 31
385
+ ],
386
+ "hidden_act": "gelu",
387
+ "hidden_dropout": 0.0,
388
+ "hidden_size": 1024,
389
+ "id2label": {
390
+ "0": "LABEL_0",
391
+ "1": "LABEL_1"
392
+ },
393
+ "image_size": 1008,
394
+ "initializer_range": 0.02,
395
+ "intermediate_size": 4736,
396
+ "is_decoder": false,
397
+ "is_encoder_decoder": false,
398
+ "label2id": {
399
+ "LABEL_0": 0,
400
+ "LABEL_1": 1
401
+ },
402
+ "layer_norm_eps": 1e-06,
403
+ "layer_scale_init_value": null,
404
+ "length_penalty": 1.0,
405
+ "max_length": 20,
406
+ "min_length": 0,
407
+ "model_type": "sam3_vit_model",
408
+ "no_repeat_ngram_size": 0,
409
+ "num_attention_heads": 16,
410
+ "num_beam_groups": 1,
411
+ "num_beams": 1,
412
+ "num_channels": 3,
413
+ "num_hidden_layers": 32,
414
+ "num_return_sequences": 1,
415
+ "output_attentions": false,
416
+ "output_hidden_states": false,
417
+ "output_scores": false,
418
+ "pad_token_id": null,
419
+ "patch_size": 14,
420
+ "prefix": null,
421
+ "pretrain_image_size": 336,
422
+ "problem_type": null,
423
+ "qkv_bias": true,
424
+ "remove_invalid_values": false,
425
+ "repetition_penalty": 1.0,
426
+ "return_dict": true,
427
+ "return_dict_in_generate": false,
428
+ "rope_theta": 10000.0,
429
+ "sep_token_id": null,
430
+ "suppress_tokens": null,
431
+ "task_specific_params": null,
432
+ "temperature": 1.0,
433
+ "tie_encoder_decoder": false,
434
+ "tie_word_embeddings": true,
435
+ "tokenizer_class": null,
436
+ "top_k": 50,
437
+ "top_p": 1.0,
438
+ "typical_p": 1.0,
439
+ "window_size": 24
440
+ },
441
+ "backbone_feature_sizes": [
442
+ [
443
+ 288,
444
+ 288
445
+ ],
446
+ [
447
+ 144,
448
+ 144
449
+ ],
450
+ [
451
+ 72,
452
+ 72
453
+ ]
454
+ ],
455
+ "bad_words_ids": null,
456
+ "begin_suppress_tokens": null,
457
+ "bos_token_id": null,
458
+ "chunk_size_feed_forward": 0,
459
+ "cross_attention_hidden_size": null,
460
+ "decoder_start_token_id": null,
461
+ "diversity_penalty": 0.0,
462
+ "do_sample": false,
463
+ "dtype": null,
464
+ "early_stopping": false,
465
+ "encoder_no_repeat_ngram_size": 0,
466
+ "eos_token_id": null,
467
+ "exponential_decay_length_penalty": null,
468
+ "finetuning_task": null,
469
+ "forced_bos_token_id": null,
470
+ "forced_eos_token_id": null,
471
+ "fpn_hidden_size": 256,
472
+ "fpn_kernel_size": 2,
473
+ "fpn_stride": 2,
474
+ "hidden_act": "gelu",
475
+ "id2label": {
476
+ "0": "LABEL_0",
477
+ "1": "LABEL_1"
478
+ },
479
+ "initializer_range": 0.02,
480
+ "is_decoder": false,
481
+ "is_encoder_decoder": false,
482
+ "label2id": {
483
+ "LABEL_0": 0,
484
+ "LABEL_1": 1
485
+ },
486
+ "layer_norm_eps": 1e-06,
487
+ "length_penalty": 1.0,
488
+ "max_length": 20,
489
+ "min_length": 0,
490
+ "model_type": "sam3_vision_model",
491
+ "no_repeat_ngram_size": 0,
492
+ "num_beam_groups": 1,
493
+ "num_beams": 1,
494
+ "num_feature_levels": 3,
495
+ "num_return_sequences": 1,
496
+ "output_attentions": false,
497
+ "output_hidden_states": false,
498
+ "output_scores": false,
499
+ "pad_token_id": null,
500
+ "prefix": null,
501
+ "problem_type": null,
502
+ "remove_invalid_values": false,
503
+ "repetition_penalty": 1.0,
504
+ "return_dict": true,
505
+ "return_dict_in_generate": false,
506
+ "scale_factors": [
507
+ 4.0,
508
+ 2.0,
509
+ 1.0,
510
+ 0.5
511
+ ],
512
+ "sep_token_id": null,
513
+ "suppress_tokens": null,
514
+ "task_specific_params": null,
515
+ "temperature": 1.0,
516
+ "tie_encoder_decoder": false,
517
+ "tie_word_embeddings": true,
518
+ "tokenizer_class": null,
519
+ "top_k": 50,
520
+ "top_p": 1.0,
521
+ "typical_p": 1.0
522
+ }
523
+ },
524
+ "dtype": "float32",
525
+ "fill_hole_area": 16,
526
+ "high_conf_thresh": 0.8,
527
+ "high_iou_thresh": 0.8,
528
+ "hotstart_delay": 15,
529
+ "hotstart_dup_thresh": 8,
530
+ "hotstart_unmatch_thresh": 8,
531
+ "init_trk_keep_alive": 30,
532
+ "initializer_range": 0.02,
533
+ "low_res_mask_size": 288,
534
+ "max_num_objects": 10000,
535
+ "max_trk_keep_alive": 30,
536
+ "min_trk_keep_alive": -1,
537
+ "model_type": "sam3_video",
538
+ "new_det_thresh": 0.7,
539
+ "recondition_every_nth_frame": 16,
540
+ "recondition_on_trk_masks": false,
541
+ "score_threshold_detection": 0.5,
542
+ "suppress_overlapping_based_on_recent_occlusion_threshold": 0.7,
543
+ "suppress_unmatched_only_within_hotstart": true,
544
+ "tracker_config": {
545
+ "enable_occlusion_spatial_embedding": true,
546
+ "enable_temporal_pos_encoding_for_object_pointers": true,
547
+ "image_size": 1008,
548
+ "initializer_range": 0.02,
549
+ "mask_decoder_config": {
550
+ "_name_or_path": "",
551
+ "add_cross_attention": false,
552
+ "architectures": null,
553
+ "attention_downsample_rate": 2,
554
+ "bad_words_ids": null,
555
+ "begin_suppress_tokens": null,
556
+ "bos_token_id": null,
557
+ "chunk_size_feed_forward": 0,
558
+ "cross_attention_hidden_size": null,
559
+ "decoder_start_token_id": null,
560
+ "diversity_penalty": 0.0,
561
+ "do_sample": false,
562
+ "dtype": null,
563
+ "dynamic_multimask_stability_delta": 0.05,
564
+ "dynamic_multimask_stability_thresh": 0.98,
565
+ "dynamic_multimask_via_stability": true,
566
+ "early_stopping": false,
567
+ "encoder_no_repeat_ngram_size": 0,
568
+ "eos_token_id": null,
569
+ "exponential_decay_length_penalty": null,
570
+ "finetuning_task": null,
571
+ "forced_bos_token_id": null,
572
+ "forced_eos_token_id": null,
573
+ "hidden_act": "gelu",
574
+ "hidden_size": 256,
575
+ "id2label": {
576
+ "0": "LABEL_0",
577
+ "1": "LABEL_1"
578
+ },
579
+ "iou_head_depth": 3,
580
+ "iou_head_hidden_dim": 256,
581
+ "is_decoder": false,
582
+ "is_encoder_decoder": false,
583
+ "label2id": {
584
+ "LABEL_0": 0,
585
+ "LABEL_1": 1
586
+ },
587
+ "length_penalty": 1.0,
588
+ "max_length": 20,
589
+ "min_length": 0,
590
+ "mlp_dim": 2048,
591
+ "model_type": "",
592
+ "no_repeat_ngram_size": 0,
593
+ "num_attention_heads": 8,
594
+ "num_beam_groups": 1,
595
+ "num_beams": 1,
596
+ "num_hidden_layers": 2,
597
+ "num_multimask_outputs": 3,
598
+ "num_return_sequences": 1,
599
+ "output_attentions": false,
600
+ "output_hidden_states": false,
601
+ "output_scores": false,
602
+ "pad_token_id": null,
603
+ "prefix": null,
604
+ "problem_type": null,
605
+ "remove_invalid_values": false,
606
+ "repetition_penalty": 1.0,
607
+ "return_dict": true,
608
+ "return_dict_in_generate": false,
609
+ "sep_token_id": null,
610
+ "suppress_tokens": null,
611
+ "task_specific_params": null,
612
+ "temperature": 1.0,
613
+ "tie_encoder_decoder": false,
614
+ "tie_word_embeddings": true,
615
+ "tokenizer_class": null,
616
+ "top_k": 50,
617
+ "top_p": 1.0,
618
+ "typical_p": 1.0
619
+ },
620
+ "mask_downsampler_embed_dim": 256,
621
+ "mask_downsampler_hidden_act": "gelu",
622
+ "mask_downsampler_kernel_size": 3,
623
+ "mask_downsampler_padding": 1,
624
+ "mask_downsampler_stride": 2,
625
+ "mask_downsampler_total_stride": 16,
626
+ "max_cond_frame_num": 4,
627
+ "max_object_pointers_in_encoder": 16,
628
+ "memory_attention_downsample_rate": 1,
629
+ "memory_attention_dropout": 0.1,
630
+ "memory_attention_feed_forward_hidden_act": "relu",
631
+ "memory_attention_feed_forward_hidden_size": 2048,
632
+ "memory_attention_hidden_size": 256,
633
+ "memory_attention_num_attention_heads": 1,
634
+ "memory_attention_num_layers": 4,
635
+ "memory_attention_rope_dropout": 0.1,
636
+ "memory_attention_rope_feat_sizes": [
637
+ 72,
638
+ 72
639
+ ],
640
+ "memory_attention_rope_theta": 10000,
641
+ "memory_encoder_hidden_size": 256,
642
+ "memory_encoder_output_channels": 64,
643
+ "memory_fuser_embed_dim": 256,
644
+ "memory_fuser_hidden_act": "gelu",
645
+ "memory_fuser_intermediate_dim": 1024,
646
+ "memory_fuser_kernel_size": 7,
647
+ "memory_fuser_layer_scale_init_value": 1e-06,
648
+ "memory_fuser_num_layers": 2,
649
+ "memory_fuser_padding": 3,
650
+ "model_type": "sam3_tracker_video",
651
+ "multimask_max_pt_num": 1,
652
+ "multimask_min_pt_num": 0,
653
+ "multimask_output_for_tracking": true,
654
+ "multimask_output_in_sam": true,
655
+ "num_maskmem": 7,
656
+ "prompt_encoder_config": {
657
+ "_name_or_path": "",
658
+ "add_cross_attention": false,
659
+ "architectures": null,
660
+ "bad_words_ids": null,
661
+ "begin_suppress_tokens": null,
662
+ "bos_token_id": null,
663
+ "chunk_size_feed_forward": 0,
664
+ "cross_attention_hidden_size": null,
665
+ "decoder_start_token_id": null,
666
+ "diversity_penalty": 0.0,
667
+ "do_sample": false,
668
+ "dtype": null,
669
+ "early_stopping": false,
670
+ "encoder_no_repeat_ngram_size": 0,
671
+ "eos_token_id": null,
672
+ "exponential_decay_length_penalty": null,
673
+ "finetuning_task": null,
674
+ "forced_bos_token_id": null,
675
+ "forced_eos_token_id": null,
676
+ "hidden_act": "gelu",
677
+ "hidden_size": 256,
678
+ "id2label": {
679
+ "0": "LABEL_0",
680
+ "1": "LABEL_1"
681
+ },
682
+ "image_size": 1008,
683
+ "is_decoder": false,
684
+ "is_encoder_decoder": false,
685
+ "label2id": {
686
+ "LABEL_0": 0,
687
+ "LABEL_1": 1
688
+ },
689
+ "layer_norm_eps": 1e-06,
690
+ "length_penalty": 1.0,
691
+ "mask_input_channels": 16,
692
+ "max_length": 20,
693
+ "min_length": 0,
694
+ "model_type": "",
695
+ "no_repeat_ngram_size": 0,
696
+ "num_beam_groups": 1,
697
+ "num_beams": 1,
698
+ "num_point_embeddings": 4,
699
+ "num_return_sequences": 1,
700
+ "output_attentions": false,
701
+ "output_hidden_states": false,
702
+ "output_scores": false,
703
+ "pad_token_id": null,
704
+ "patch_size": 14,
705
+ "prefix": null,
706
+ "problem_type": null,
707
+ "remove_invalid_values": false,
708
+ "repetition_penalty": 1.0,
709
+ "return_dict": true,
710
+ "return_dict_in_generate": false,
711
+ "scale": 1,
712
+ "sep_token_id": null,
713
+ "suppress_tokens": null,
714
+ "task_specific_params": null,
715
+ "temperature": 1.0,
716
+ "tie_encoder_decoder": false,
717
+ "tie_word_embeddings": true,
718
+ "tokenizer_class": null,
719
+ "top_k": 50,
720
+ "top_p": 1.0,
721
+ "typical_p": 1.0
722
+ },
723
+ "sigmoid_bias_for_mem_enc": -10.0,
724
+ "sigmoid_scale_for_mem_enc": 20.0,
725
+ "vision_config": {
726
+ "_name_or_path": "",
727
+ "add_cross_attention": false,
728
+ "architectures": null,
729
+ "backbone_config": {
730
+ "_name_or_path": "",
731
+ "add_cross_attention": false,
732
+ "architectures": null,
733
+ "attention_dropout": 0.0,
734
+ "bad_words_ids": null,
735
+ "begin_suppress_tokens": null,
736
+ "bos_token_id": null,
737
+ "chunk_size_feed_forward": 0,
738
+ "cross_attention_hidden_size": null,
739
+ "decoder_start_token_id": null,
740
+ "diversity_penalty": 0.0,
741
+ "do_sample": false,
742
+ "dtype": null,
743
+ "early_stopping": false,
744
+ "encoder_no_repeat_ngram_size": 0,
745
+ "eos_token_id": null,
746
+ "exponential_decay_length_penalty": null,
747
+ "finetuning_task": null,
748
+ "forced_bos_token_id": null,
749
+ "forced_eos_token_id": null,
750
+ "global_attn_indexes": [
751
+ 7,
752
+ 15,
753
+ 23,
754
+ 31
755
+ ],
756
+ "hidden_act": "gelu",
757
+ "hidden_dropout": 0.0,
758
+ "hidden_size": 1024,
759
+ "id2label": {
760
+ "0": "LABEL_0",
761
+ "1": "LABEL_1"
762
+ },
763
+ "image_size": 1008,
764
+ "initializer_range": 0.02,
765
+ "intermediate_size": 4736,
766
+ "is_decoder": false,
767
+ "is_encoder_decoder": false,
768
+ "label2id": {
769
+ "LABEL_0": 0,
770
+ "LABEL_1": 1
771
+ },
772
+ "layer_norm_eps": 1e-06,
773
+ "layer_scale_init_value": null,
774
+ "length_penalty": 1.0,
775
+ "max_length": 20,
776
+ "min_length": 0,
777
+ "model_type": "sam3_vit_model",
778
+ "no_repeat_ngram_size": 0,
779
+ "num_attention_heads": 16,
780
+ "num_beam_groups": 1,
781
+ "num_beams": 1,
782
+ "num_channels": 3,
783
+ "num_hidden_layers": 32,
784
+ "num_return_sequences": 1,
785
+ "output_attentions": false,
786
+ "output_hidden_states": false,
787
+ "output_scores": false,
788
+ "pad_token_id": null,
789
+ "patch_size": 14,
790
+ "prefix": null,
791
+ "pretrain_image_size": 336,
792
+ "problem_type": null,
793
+ "qkv_bias": true,
794
+ "remove_invalid_values": false,
795
+ "repetition_penalty": 1.0,
796
+ "return_dict": true,
797
+ "return_dict_in_generate": false,
798
+ "rope_theta": 10000.0,
799
+ "sep_token_id": null,
800
+ "suppress_tokens": null,
801
+ "task_specific_params": null,
802
+ "temperature": 1.0,
803
+ "tie_encoder_decoder": false,
804
+ "tie_word_embeddings": true,
805
+ "tokenizer_class": null,
806
+ "top_k": 50,
807
+ "top_p": 1.0,
808
+ "typical_p": 1.0,
809
+ "window_size": 24
810
+ },
811
+ "backbone_feature_sizes": [
812
+ [
813
+ 288,
814
+ 288
815
+ ],
816
+ [
817
+ 144,
818
+ 144
819
+ ],
820
+ [
821
+ 72,
822
+ 72
823
+ ]
824
+ ],
825
+ "bad_words_ids": null,
826
+ "begin_suppress_tokens": null,
827
+ "bos_token_id": null,
828
+ "chunk_size_feed_forward": 0,
829
+ "cross_attention_hidden_size": null,
830
+ "decoder_start_token_id": null,
831
+ "diversity_penalty": 0.0,
832
+ "do_sample": false,
833
+ "dtype": null,
834
+ "early_stopping": false,
835
+ "encoder_no_repeat_ngram_size": 0,
836
+ "eos_token_id": null,
837
+ "exponential_decay_length_penalty": null,
838
+ "finetuning_task": null,
839
+ "forced_bos_token_id": null,
840
+ "forced_eos_token_id": null,
841
+ "fpn_hidden_size": 256,
842
+ "fpn_kernel_size": 2,
843
+ "fpn_stride": 2,
844
+ "hidden_act": "gelu",
845
+ "id2label": {
846
+ "0": "LABEL_0",
847
+ "1": "LABEL_1"
848
+ },
849
+ "initializer_range": 0.02,
850
+ "is_decoder": false,
851
+ "is_encoder_decoder": false,
852
+ "label2id": {
853
+ "LABEL_0": 0,
854
+ "LABEL_1": 1
855
+ },
856
+ "layer_norm_eps": 1e-06,
857
+ "length_penalty": 1.0,
858
+ "max_length": 20,
859
+ "min_length": 0,
860
+ "model_type": "sam3_vision_model",
861
+ "no_repeat_ngram_size": 0,
862
+ "num_beam_groups": 1,
863
+ "num_beams": 1,
864
+ "num_feature_levels": 3,
865
+ "num_return_sequences": 1,
866
+ "output_attentions": false,
867
+ "output_hidden_states": false,
868
+ "output_scores": false,
869
+ "pad_token_id": null,
870
+ "prefix": null,
871
+ "problem_type": null,
872
+ "remove_invalid_values": false,
873
+ "repetition_penalty": 1.0,
874
+ "return_dict": true,
875
+ "return_dict_in_generate": false,
876
+ "scale_factors": [
877
+ 4.0,
878
+ 2.0,
879
+ 1.0,
880
+ 0.5
881
+ ],
882
+ "sep_token_id": null,
883
+ "suppress_tokens": null,
884
+ "task_specific_params": null,
885
+ "temperature": 1.0,
886
+ "tie_encoder_decoder": false,
887
+ "tie_word_embeddings": true,
888
+ "tokenizer_class": null,
889
+ "top_k": 50,
890
+ "top_p": 1.0,
891
+ "typical_p": 1.0
892
+ }
893
+ },
894
+ "transformers_version": "5.0.0.dev0",
895
+ "trk_assoc_iou_thresh": 0.5
896
+ }
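The block above closes config.json: a composite "sam3_video" configuration whose tracker section embeds its own ViT backbone (hidden_size 1024, 32 layers, 1008-px input, 14-px patches), an FPN-style vision neck, and SAM-style prompt-encoder / mask-decoder sub-configs. As a quick sanity check, the nested values can be read back with transformers' generic AutoConfig API; a minimal sketch, assuming a transformers build new enough to register the sam3_video model type (the file was written by 5.0.0.dev0) and that "." is a local checkout of this repository:

    # Minimal sketch: read back the nested SAM3 video configuration.
    # Assumes transformers registers the "sam3_video" model type (the config
    # was produced by 5.0.0.dev0) and that "." is a local checkout of this repo.
    from transformers import AutoConfig

    config = AutoConfig.from_pretrained(".")
    cfg = config.to_dict()
    print(cfg["model_type"])                    # sam3_video
    print(cfg["tracker_config"]["image_size"])  # 1008
    print(cfg["tracker_config"]["vision_config"]["backbone_config"]["hidden_size"])  # 1024

Reading through to_dict() sidesteps whether the sub-configs are exposed as attributes or plain dicts, which can differ between transformers versions.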
configuration.json ADDED
@@ -0,0 +1 @@
1
+ {"framework": "pytorch", "task": "mask-generation", "allow_remote": true}
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6d06f0a5f84e435071fe6603e61d0b4cc7b40e0d39d487cfd4d67d8cc11cc14a
3
+ size 3439938512
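The three lines above are a Git LFS pointer, not the weights themselves; the real ~3.4 GB safetensors file is fetched by `git lfs pull` (or the Hub download APIs) and should match the digest and size recorded in the pointer. A small check using only those two values:

    # Verify a downloaded model.safetensors against the Git LFS pointer above
    # (the sha256 oid and byte size are taken verbatim from the pointer file).
    import hashlib
    import os

    EXPECTED_SHA256 = "6d06f0a5f84e435071fe6603e61d0b4cc7b40e0d39d487cfd4d67d8cc11cc14a"
    EXPECTED_SIZE = 3439938512

    path = "model.safetensors"
    assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch - did git lfs pull run?"

    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    assert h.hexdigest() == EXPECTED_SHA256, "sha256 mismatch"
    print("model.safetensors matches the LFS pointer")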
processor_config.json ADDED
@@ -0,0 +1,80 @@
1
+ {
2
+ "image_processor": {
3
+ "crop_size": null,
4
+ "data_format": "channels_first",
5
+ "device": null,
6
+ "disable_grouping": null,
7
+ "do_center_crop": null,
8
+ "do_convert_rgb": true,
9
+ "do_normalize": true,
10
+ "do_pad": null,
11
+ "do_rescale": true,
12
+ "do_resize": true,
13
+ "image_mean": [
14
+ 0.5,
15
+ 0.5,
16
+ 0.5
17
+ ],
18
+ "image_processor_type": "Sam3ImageProcessorFast",
19
+ "image_seq_length": null,
20
+ "image_std": [
21
+ 0.5,
22
+ 0.5,
23
+ 0.5
24
+ ],
25
+ "input_data_format": null,
26
+ "mask_size": {
27
+ "height": 288,
28
+ "width": 288
29
+ },
30
+ "pad_size": null,
31
+ "processor_class": "Sam3VideoProcessor",
32
+ "resample": 2,
33
+ "rescale_factor": 0.00392156862745098,
34
+ "return_tensors": null,
35
+ "size": {
36
+ "height": 1008,
37
+ "width": 1008
38
+ }
39
+ },
40
+ "processor_class": "Sam3VideoProcessor",
41
+ "target_size": 1008,
42
+ "video_processor": {
43
+ "crop_size": null,
44
+ "data_format": "channels_first",
45
+ "default_to_square": true,
46
+ "device": null,
47
+ "do_center_crop": null,
48
+ "do_convert_rgb": true,
49
+ "do_normalize": true,
50
+ "do_pad": null,
51
+ "do_rescale": true,
52
+ "do_resize": true,
53
+ "do_sample_frames": null,
54
+ "fps": null,
55
+ "image_mean": [
56
+ 0.5,
57
+ 0.5,
58
+ 0.5
59
+ ],
60
+ "image_std": [
61
+ 0.5,
62
+ 0.5,
63
+ 0.5
64
+ ],
65
+ "input_data_format": null,
66
+ "num_frames": null,
67
+ "pad_size": null,
68
+ "processor_class": "Sam3VideoProcessor",
69
+ "resample": 2,
70
+ "rescale_factor": 0.00392156862745098,
71
+ "return_metadata": false,
72
+ "return_tensors": null,
73
+ "size": {
74
+ "height": 1008,
75
+ "width": 1008
76
+ },
77
+ "video_metadata": null,
78
+ "video_processor_type": "Sam2VideoVideoProcessor"
79
+ }
80
+ }
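In plain terms, the processor config above says: convert to RGB, resize to 1008×1008 with bilinear resampling (resample=2), rescale by 1/255 (0.00392…), normalize each channel with mean 0.5 / std 0.5, and emit channels-first tensors; masks are handled at 288×288. A minimal hand-rolled equivalent for a single image (the Sam3 processor classes named in the config are the intended entry point; this NumPy/PIL version is only an illustration, and example.jpg is a hypothetical local file):

    # Reproduce the image preprocessing declared in processor_config.json by hand:
    # resize to 1008x1008 (resample=2 -> bilinear), scale to [0, 1], then
    # normalize with mean=std=0.5 per channel, "channels_first" layout.
    from PIL import Image
    import numpy as np

    def preprocess(path: str) -> np.ndarray:
        img = Image.open(path).convert("RGB")                   # do_convert_rgb
        img = img.resize((1008, 1008), Image.BILINEAR)          # size / resample=2
        arr = np.asarray(img).astype(np.float32) * (1.0 / 255)  # rescale_factor
        arr = (arr - 0.5) / 0.5                                 # image_mean / image_std
        return arr.transpose(2, 0, 1)[None]                     # channels_first + batch dim

    pixel_values = preprocess("example.jpg")  # hypothetical local image
    print(pixel_values.shape)  # (1, 3, 1008, 1008)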
sam3.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9999e2341ceef5e136daa386eecb55cb414446a00ac2b55eb2dfd2f7c3cf8c9e
3
+ size 3450062241
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<|startoftext|>",
4
+ "lstrip": false,
5
+ "normalized": true,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|endoftext|>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": {
17
+ "content": "<|endoftext|>",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ },
23
+ "unk_token": {
24
+ "content": "<|endoftext|>",
25
+ "lstrip": false,
26
+ "normalized": false,
27
+ "rstrip": false,
28
+ "single_word": false
29
+ }
30
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,33 @@
1
+ {
2
+ "add_prefix_space": false,
3
+ "added_tokens_decoder": {
4
+ "49406": {
5
+ "content": "<|startoftext|>",
6
+ "lstrip": false,
7
+ "normalized": true,
8
+ "rstrip": false,
9
+ "single_word": false,
10
+ "special": true
11
+ },
12
+ "49407": {
13
+ "content": "<|endoftext|>",
14
+ "lstrip": false,
15
+ "normalized": false,
16
+ "rstrip": false,
17
+ "single_word": false,
18
+ "special": true
19
+ }
20
+ },
21
+ "bos_token": "<|startoftext|>",
22
+ "clean_up_tokenization_spaces": false,
23
+ "do_lower_case": true,
24
+ "eos_token": "<|endoftext|>",
25
+ "errors": "replace",
26
+ "extra_special_tokens": {},
27
+ "max_length": 32,
28
+ "model_max_length": 32,
29
+ "pad_token": "<|endoftext|>",
30
+ "processor_class": "Sam3VideoProcessor",
31
+ "tokenizer_class": "CLIPTokenizer",
32
+ "unk_token": "<|endoftext|>"
33
+ }
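Text prompts therefore go through a stock CLIP BPE tokenizer: lower-cased input, a 32-token maximum, and <|startoftext|> / <|endoftext|> (ids 49406 / 49407) as BOS and EOS/pad. A minimal sketch of loading it straight from a local checkout of this repository, which also provides the vocab.json, merges.txt and tokenizer.json added in this commit:

    # Load the prompt tokenizer declared in tokenizer_config.json from a local
    # checkout of this repository ("." is assumed to be that checkout).
    from transformers import CLIPTokenizer

    tokenizer = CLIPTokenizer.from_pretrained(".")
    enc = tokenizer("a red bicycle", padding="max_length", truncation=True,
                    max_length=32, return_tensors="pt")
    print(enc["input_ids"].shape)  # (1, 32); bos/eos ids 49406 / 49407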
vocab.json ADDED
The diff for this file is too large to render. See raw diff