davanstrien (HF Staff) and Claude Opus 4.6 (1M context) committed
Commit a83bb1b · 1 Parent(s): 9b79ff8

Add segment-objects.py for pixel-level image segmentation


New script that produces segmentation masks (semantic maps or per-instance
binary masks) using SAM3 with text prompts. Tested on HF Jobs with wildlife
camera trap images. Also updates README to document both scripts and adds
example segmentation image.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Files changed (3)
  1. README.md +135 -77
  2. example-segmentation.png +3 -0
  3. segment-objects.py +558 -0
README.md CHANGED
@@ -1,14 +1,27 @@
 ---
 viewer: false
-tags: [uv-script, computer-vision, object-detection, sam3, image-processing, hf-jobs]
 license: apache-2.0
 ---
 
-# SAM3 Object Detection
-
-Detect objects in images using Meta's [sam3](https://huggingface.co/facebook/sam3) (Segment Anything Model 3) with text prompts. Process HuggingFace datasets with zero-shot object detection using natural language descriptions.
-
-## Quick Start
 
 **Requires GPU.** Use HuggingFace Jobs for cloud execution:
 
@@ -21,9 +34,7 @@ hf jobs uv run --flavor a100-large \
   --class-name photograph
 ```
 
-## Example Output
-
-Here's an example of detected objects (photographs in historical newspapers) with bounding boxes and confidence scores:
 
 <div style="max-width: 400px;">
 <img src="./example-detection.png" alt="Example Detection" style="width: 100%; height: auto;"/>
@@ -32,35 +43,101 @@ _Photograph detected in a historical newspaper with bounding box and confidence
 
 </div>
 
-## Local Execution
-
-If you have a CUDA GPU locally:
 
 ```bash
-uv run detect-objects.py INPUT OUTPUT --class-name CLASSNAME
 ```
 
-## Arguments
 
 **Required:**
 
 - `input_dataset` - Input HF dataset ID
 - `output_dataset` - Output HF dataset ID
-- `--class-name` - Object class to detect (e.g., `"photograph"`, `"animal"`, `"table"`)
 
 **Common options:**
 
 - `--confidence-threshold FLOAT` - Min confidence (default: 0.5)
 - `--batch-size INT` - Batch size (default: 4)
 - `--max-samples INT` - Limit samples for testing
-- `--image-column STR` - Image column name (default: "image")
 - `--private` - Make output private
 
 <details>
 <summary>All options</summary>
 
 ```
---mask-threshold FLOAT  Mask generation threshold (default: 0.5)
 --split STR             Dataset split (default: "train")
 --shuffle               Shuffle before processing
 --model STR             Model ID (default: "facebook/sam3")
@@ -70,48 +147,59 @@ uv run detect-objects.py INPUT OUTPUT --class-name CLASSNAME
 
 </details>
 
-## HuggingFace Jobs Examples
-
-### Historical Newspapers
-
-Detect photographs in historical newspaper scans:
 
 ```bash
 hf jobs uv run --flavor a100-large \
   -s HF_TOKEN=HF_TOKEN \
-  https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
-  davanstrien/newspapers-with-images-after-photography \
-  my-username/newspapers-detected \
-  --class-name photograph \
-  --confidence-threshold 0.6 \
-  --batch-size 8
 ```
 
-### Document Tables
-
-Extract tables from document scans:
 
 ```bash
 hf jobs uv run --flavor a100-large \
   -s HF_TOKEN=HF_TOKEN \
   https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
-  my-documents \
-  documents-with-tables \
-  --class-name table
 ```
 
 ### Wildlife Camera Traps
 
-Detect animals in camera trap images:
-
 ```bash
 hf jobs uv run --flavor a100-large \
   -s HF_TOKEN=HF_TOKEN \
-  https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
   wildlife-images \
-  wildlife-detections \
   --class-name animal \
-  --confidence-threshold 0.5
 ```
 
 ### Quick Testing
@@ -121,14 +209,14 @@ Test on a small subset before full run:
 ```bash
 hf jobs uv run --flavor a100-large \
   -s HF_TOKEN=HF_TOKEN \
-  https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
   large-dataset \
   test-output \
   --class-name object \
   --max-samples 20
 ```
 
-### Using Different GPU Flavors
 
 ```bash
 # L4 (cost-effective)
@@ -140,50 +228,25 @@ hf jobs uv run --flavor a100-large \
 
 See [HF Jobs pricing](https://huggingface.co/pricing#spaces-compute).
 
-## Output Format
-
-Adds `objects` column with ClassLabel-based detections:
-
-```python
-{
-    "objects": [
-        {
-            "bbox": [x, y, width, height],
-            "category": 0,  # Always 0 for single class
-            "score": 0.87
-        }
-    ]
-}
-```
-
-Load and use:
-
-```python
-from datasets import load_dataset
-
-ds = load_dataset("username/output", split="train")
-
-# ClassLabel feature preserves your class name
-class_name = ds.features["objects"].feature["category"].names[0]
-print(f"Detected class: {class_name}")
-
-for sample in ds:
-    for obj in sample["objects"]:
-        print(f"{class_name}: {obj['score']:.2f} at {obj['bbox']}")
 ```
 
-## Detecting Multiple Object Types
-
-To detect multiple object types, run the script multiple times with different `--class-name` values:
 
 ```bash
-# Detect photographs
 hf jobs uv run ... --class-name photograph
-
-# Detect illustrations
 hf jobs uv run ... --class-name illustration
-
-# Merge results as needed
 ```
 
 ## Performance
@@ -211,17 +274,12 @@ _Varies by image size and detection complexity_
 
 ## About SAM3
 
-[SAM3](https://huggingface.co/facebook/sam3) is Meta's zero-shot vision model. Describe any object in natural language and it will detect it—no training required.
-
-**Note:** This script uses transformers from git (SAM3 not yet in stable release).
 
 ## See Also
 
-More UV scripts at [huggingface.co/uv-scripts](https://huggingface.co/uv-scripts):
-
-- **dataset-creation** - Create HF datasets from files
-- **vllm** - Fast LLM inference
-- **ocr** - Document OCR
 
 ## License
 ---
 viewer: false
+tags: [uv-script, computer-vision, object-detection, image-segmentation, sam3, image-processing, hf-jobs]
 license: apache-2.0
 ---
 
+# SAM3 Vision Scripts
+
+Detect and segment objects in images using Meta's **SAM3** (Segment Anything Model 3) with text prompts. Process HuggingFace datasets with zero-shot detection and segmentation using natural language descriptions.
+
+| Script | What it does | Output |
+|--------|-------------|--------|
+| `detect-objects.py` | Object detection with bounding boxes | `objects` column with bbox, category, score |
+| `segment-objects.py` | Pixel-level segmentation masks | Segmentation maps or per-instance masks |
+
+Browse results interactively: **[SAM3 Results Browser](https://huggingface.co/spaces/uv-scripts/sam3-detection-browser)**
+
+---
+
+## Object Detection (`detect-objects.py`)
+
+Detect objects and output bounding boxes in HuggingFace object detection format.
+
+### Quick Start
 
 **Requires GPU.** Use HuggingFace Jobs for cloud execution:
 
⋮
   --class-name photograph
 ```
 
+### Example Output
 
 <div style="max-width: 400px;">
 <img src="./example-detection.png" alt="Example Detection" style="width: 100%; height: auto;"/>
⋮
 </div>
 
+### Arguments
 
+**Required:**
+
+- `input_dataset` - Input HF dataset ID
+- `output_dataset` - Output HF dataset ID
+- `--class-name` - Object class to detect (e.g., `"photograph"`, `"animal"`, `"table"`)
+
+**Common options:**
+
+- `--confidence-threshold FLOAT` - Min confidence (default: 0.5)
+- `--batch-size INT` - Batch size (default: 4)
+- `--max-samples INT` - Limit samples for testing
+- `--image-column STR` - Image column name (default: "image")
+- `--private` - Make output private
+
+<details>
+<summary>All options</summary>
+
+```
+--mask-threshold FLOAT  Mask generation threshold (default: 0.5)
+--split STR             Dataset split (default: "train")
+--shuffle               Shuffle before processing
+--model STR             Model ID (default: "facebook/sam3")
+--dtype STR             Precision: float32|float16|bfloat16
+--hf-token STR          HF token (or use HF_TOKEN env var)
+```
+
+</details>
+
+### Output Format
+
+Adds `objects` column with ClassLabel-based detections:
+
+```python
+{
+    "objects": [
+        {
+            "bbox": [x, y, width, height],
+            "category": 0,  # Always 0 for single class
+            "score": 0.87
+        }
+    ]
+}
+```
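The `objects` records are plain Python structures once loaded, so common post-processing needs no special tooling. A minimal sketch of score filtering and `[x, y, width, height]` to corner-format conversion (the `filter_detections` helper and sample values are illustrative, not part of the scripts):

```python
def filter_detections(objects, min_score=0.5):
    """Keep detections at or above min_score; add corner-format boxes."""
    kept = []
    for obj in objects:
        if obj["score"] >= min_score:
            x, y, w, h = obj["bbox"]
            # bbox is [x, y, width, height]; corners are [x1, y1, x2, y2]
            kept.append({**obj, "bbox_xyxy": [x, y, x + w, y + h]})
    return kept

sample = {"objects": [
    {"bbox": [10.0, 20.0, 100.0, 50.0], "category": 0, "score": 0.87},
    {"bbox": [5.0, 5.0, 30.0, 30.0], "category": 0, "score": 0.42},
]}
# Keeps only the 0.87 detection at the default threshold
print(filter_detections(sample["objects"]))
```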
+---
+
+## Image Segmentation (`segment-objects.py`)
+
+Produce pixel-level segmentation masks for objects matching a text prompt. Two output formats are available.
+
+### Quick Start
 
 ```bash
+hf jobs uv run --flavor a100-large \
+  -s HF_TOKEN=HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/sam3/raw/main/segment-objects.py \
+  input-dataset \
+  output-dataset \
+  --class-name deer
 ```
 
+### Example Output
+
+<div style="max-width: 400px;">
+<img src="./example-segmentation.png" alt="Example Segmentation" style="width: 100%; height: auto;"/>
+
+_Deer segmented in a wildlife camera trap image with pixel-level mask and bounding box. Generated from [davanstrien/ena24-detection](https://huggingface.co/datasets/davanstrien/ena24-detection)._
+
+</div>
+
+### Arguments
 
 **Required:**
 
 - `input_dataset` - Input HF dataset ID
 - `output_dataset` - Output HF dataset ID
+- `--class-name` - Object class to segment (e.g., `"deer"`, `"animal"`, `"table"`)
 
 **Common options:**
 
+- `--output-format` - `semantic-mask` (default) or `instance-masks`
 - `--confidence-threshold FLOAT` - Min confidence (default: 0.5)
+- `--include-boxes` - Also output bounding boxes
 - `--batch-size INT` - Batch size (default: 4)
 - `--max-samples INT` - Limit samples for testing
 - `--private` - Make output private
 
 <details>
 <summary>All options</summary>
 
 ```
+--mask-threshold FLOAT  Mask binarization threshold (default: 0.5)
+--image-column STR      Image column name (default: "image")
 --split STR             Dataset split (default: "train")
 --shuffle               Shuffle before processing
 --model STR             Model ID (default: "facebook/sam3")
 --dtype STR             Precision: float32|float16|bfloat16
 --hf-token STR          HF token (or use HF_TOKEN env var)
 ```
 
 </details>
 
+### Output Formats
+
+**Semantic mask** (`--output-format semantic-mask`, default):
+
+- Adds a `segmentation_map` column: a single image per sample where the pixel value is the instance ID (0 = background)
+- More compact, and viewable in the HF dataset viewer
+- Also adds `num_instances` and `scores` columns
+
+**Instance masks** (`--output-format instance-masks`):
+
+- Adds a `segmentation_masks` column: a list of binary mask images (one per detected instance)
+- Also adds `scores` and `category` columns
+- Best for extracting individual objects or creating training data
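The two formats are two views of the same detections: the script's `masks_to_semantic_map` builds the semantic map by stamping each instance mask onto a single array, with later instances taking priority in overlaps. A simplified NumPy sketch of that combination step (boolean arrays stand in for the script's tensors):

```python
import numpy as np

def semantic_map_from_instances(masks):
    """Combine boolean instance masks of shape (H, W) into one uint8 map.

    Pixel values: 0 = background, i + 1 = instance i.
    Later instances take priority in overlapping regions.
    """
    if not masks:
        return np.zeros((1, 1), dtype=np.uint8)
    seg = np.zeros(masks[0].shape, dtype=np.uint8)
    for i, mask in enumerate(masks):
        seg[mask] = i + 1
    return seg

# Two overlapping 4x4 instance masks
m1 = np.zeros((4, 4), dtype=bool); m1[0:2, 0:2] = True
m2 = np.zeros((4, 4), dtype=bool); m2[1:3, 1:3] = True
seg = semantic_map_from_instances([m1, m2])
print(seg[0, 0], seg[1, 1], seg[3, 3])  # 1 2 0 (instance 2 wins the overlap)
```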
+
+### Example
+
+Segment deer in wildlife camera trap images:
+
 ```bash
 hf jobs uv run --flavor a100-large \
   -s HF_TOKEN=HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/sam3/raw/main/segment-objects.py \
+  davanstrien/ena24-detection \
+  my-username/wildlife-segmented \
+  --class-name deer \
+  --include-boxes
 ```
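A `segmentation_map` produced by a run like this can be split back into per-instance boolean masks, since each instance has a distinct pixel value. A hedged sketch, assuming the map has already been converted to a NumPy array (e.g. with `np.array(sample["segmentation_map"])`):

```python
import numpy as np

def instances_from_semantic_map(seg):
    """Split a semantic map (uint8, 0 = background) into boolean masks keyed by instance ID."""
    ids = [int(i) for i in np.unique(seg) if i != 0]
    return {i: seg == i for i in ids}

# Toy 3x3 map with two instances
seg = np.array([[0, 1, 1],
                [0, 2, 2],
                [0, 0, 0]], dtype=np.uint8)
masks = instances_from_semantic_map(seg)
print(sorted(masks), int(masks[1].sum()), int(masks[2].sum()))  # [1, 2] 2 2
```

Note that pixels where instances overlapped belong only to the later instance in the map, so masks recovered this way may differ from the original `instance-masks` output.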
+---
+
+## HuggingFace Jobs Examples
 
+### Historical Newspapers
 
 ```bash
 hf jobs uv run --flavor a100-large \
   -s HF_TOKEN=HF_TOKEN \
   https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
+  davanstrien/newspapers-with-images-after-photography \
+  my-username/newspapers-detected \
+  --class-name photograph \
+  --confidence-threshold 0.6 \
+  --batch-size 8
 ```
 
 ### Wildlife Camera Traps
 
 ```bash
 hf jobs uv run --flavor a100-large \
   -s HF_TOKEN=HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/sam3/raw/main/segment-objects.py \
   wildlife-images \
+  wildlife-segmented \
   --class-name animal \
+  --include-boxes
 ```
 
 ### Quick Testing
 
 Test on a small subset before full run:
 
 ```bash
 hf jobs uv run --flavor a100-large \
   -s HF_TOKEN=HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/sam3/raw/main/segment-objects.py \
   large-dataset \
   test-output \
   --class-name object \
   --max-samples 20
 ```
 
+### GPU Flavors
 
 ```bash
 # L4 (cost-effective)
⋮
 See [HF Jobs pricing](https://huggingface.co/pricing#spaces-compute).
 
+## Local Execution
 
+If you have a CUDA GPU locally:
 
+```bash
+# Detection
+uv run detect-objects.py INPUT OUTPUT --class-name CLASSNAME
+
+# Segmentation
+uv run segment-objects.py INPUT OUTPUT --class-name CLASSNAME
 ```
 
+## Multiple Object Types
 
+Run the script multiple times with different `--class-name` values:
 
 ```bash
 hf jobs uv run ... --class-name photograph
 hf jobs uv run ... --class-name illustration
 ```
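Each single-class run labels every detection with category 0, so combining runs means reassigning category IDs. A hypothetical merge helper (not part of the scripts) might look like:

```python
def merge_runs(runs):
    """Merge per-class detection lists into one multi-class list.

    runs: list of (class_name, objects) pairs, each from a single-class run
    where every object has category 0. Returns (class_names, merged_objects)
    with category reassigned to the run's index.
    """
    class_names = [name for name, _ in runs]
    merged = []
    for class_id, (_, objects) in enumerate(runs):
        for obj in objects:
            merged.append({**obj, "category": class_id})
    return class_names, merged

names, merged = merge_runs([
    ("photograph", [{"bbox": [0, 0, 10, 10], "category": 0, "score": 0.9}]),
    ("illustration", [{"bbox": [5, 5, 4, 4], "category": 0, "score": 0.8}]),
])
print(names, [o["category"] for o in merged])  # ['photograph', 'illustration'] [0, 1]
```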
 ## Performance
⋮
 ## About SAM3
 
+[SAM3](https://huggingface.co/facebook/sam3) is Meta's zero-shot vision model. Describe any object in natural language and it will detect and segment it, no training required.
 
 ## See Also
 
+- **[SAM3 Results Browser](https://huggingface.co/spaces/uv-scripts/sam3-detection-browser)** - Browse detection and segmentation results interactively
+- More UV scripts at [huggingface.co/uv-scripts](https://huggingface.co/uv-scripts)
 
 ## License
example-segmentation.png ADDED

Git LFS Details

  • SHA256: cce8df1c43e0bbd061d2796ed643d649952a297d5482134776e1c7454f3ff8c6
  • Pointer size: 131 Bytes
  • Size of remote file: 714 kB
segment-objects.py ADDED
@@ -0,0 +1,558 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "transformers>=5.4.0",
#     "datasets",
#     "huggingface-hub[hf_transfer]",
#     "pillow",
#     "torch",
#     "torchvision",
#     "accelerate",
# ]
# ///

"""
Segment objects in images using Meta's SAM3 (Segment Anything Model 3).

This script processes images from a HuggingFace dataset and produces pixel-level
segmentation masks for objects matching a text prompt. Outputs either per-instance
binary masks or a combined semantic segmentation map.

Examples:
    # Segment photographs in historical newspapers (semantic map by default)
    uv run segment-objects.py \\
        davanstrien/newspapers-with-images-after-photography \\
        my-username/newspapers-segmented \\
        --class-name photograph

    # Segment with semantic map output (single image per sample)
    uv run segment-objects.py \\
        wildlife-images \\
        wildlife-segmented \\
        --class-name animal \\
        --output-format semantic-mask

    # Include bounding boxes alongside masks
    uv run segment-objects.py \\
        input-dataset output-dataset \\
        --class-name table \\
        --include-boxes

    # Test on small subset
    uv run segment-objects.py input output \\
        --class-name table \\
        --max-samples 10

    # Run on HF Jobs with GPU
    hf jobs uv run --flavor a100-large \\
        -s HF_TOKEN=HF_TOKEN \\
        https://huggingface.co/datasets/uv-scripts/sam3/raw/main/segment-objects.py \\
        input-dataset output-dataset \\
        --class-name photograph

Note: To segment multiple object types, run the script multiple times with different
--class-name values.
"""

import argparse
import logging
import os
import sys
import time
from typing import Any

import numpy as np
import torch
from datasets import ClassLabel, Dataset, Sequence, Value, load_dataset
from datasets import Image as ImageFeature
from huggingface_hub import DatasetCard, login
from PIL import Image
from transformers import Sam3Model, Sam3Processor

os.environ["HF_XET_HIGH_PERFORMANCE"] = "1"

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
    datefmt="%H:%M:%S",
)
logger = logging.getLogger(__name__)

if not torch.cuda.is_available():
    logger.error("CUDA is not available. This script requires a GPU.")
    logger.error("For cloud execution, use HF Jobs with --flavor l4x1 or similar.")
    sys.exit(1)


def parse_args():
    parser = argparse.ArgumentParser(
        description="Segment objects in images using SAM3",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__,
    )

    parser.add_argument(
        "input_dataset", help="Input HuggingFace dataset ID (e.g., 'username/dataset')"
    )
    parser.add_argument(
        "output_dataset", help="Output HuggingFace dataset ID (e.g., 'username/output')"
    )

    parser.add_argument(
        "--class-name",
        required=True,
        help="Object class to segment (e.g., 'photograph', 'animal', 'table')",
    )
    parser.add_argument(
        "--output-format",
        default="semantic-mask",
        choices=["instance-masks", "semantic-mask"],
        help="Output format: 'instance-masks' (one binary mask per object) or "
        "'semantic-mask' (single image, pixel value = instance ID). Default: semantic-mask",
    )
    parser.add_argument(
        "--confidence-threshold",
        type=float,
        default=0.5,
        help="Minimum confidence score for detections (default: 0.5)",
    )
    parser.add_argument(
        "--mask-threshold",
        type=float,
        default=0.5,
        help="Threshold for mask binarization (default: 0.5)",
    )
    parser.add_argument(
        "--include-boxes",
        action="store_true",
        help="Also include bounding boxes in output",
    )

    parser.add_argument(
        "--image-column",
        default="image",
        help="Name of the column containing images (default: 'image')",
    )
    parser.add_argument(
        "--split", default="train", help="Dataset split to process (default: 'train')"
    )
    parser.add_argument(
        "--max-samples",
        type=int,
        default=None,
        help="Maximum number of samples to process (for testing)",
    )
    parser.add_argument(
        "--shuffle", action="store_true", help="Shuffle dataset before processing"
    )

    parser.add_argument(
        "--batch-size",
        type=int,
        default=4,
        help="Batch size for processing (default: 4)",
    )
    parser.add_argument(
        "--model",
        default="facebook/sam3",
        help="SAM3 model ID (default: 'facebook/sam3')",
    )
    parser.add_argument(
        "--dtype",
        default="bfloat16",
        choices=["float32", "float16", "bfloat16"],
        help="Model precision (default: 'bfloat16')",
    )

    parser.add_argument(
        "--private", action="store_true", help="Make output dataset private"
    )
    parser.add_argument(
        "--hf-token",
        default=None,
        help="HuggingFace token (default: uses HF_TOKEN env var or cached token)",
    )

    return parser.parse_args()


def masks_to_semantic_map(masks: torch.Tensor) -> Image.Image:
    """Combine per-instance binary masks into a single semantic segmentation map.

    Pixel values: 0=background, 1=first instance, 2=second instance, etc.
    Later instances take priority in overlapping regions.
    """
    if len(masks) == 0:
        return Image.new("L", (1, 1), 0)

    h, w = masks.shape[1], masks.shape[2]
    seg_map = np.zeros((h, w), dtype=np.uint8)

    for i, mask in enumerate(masks):
        binary = mask.cpu().numpy().astype(bool)
        seg_map[binary] = i + 1  # 0 is background

    return Image.fromarray(seg_map, mode="L")


def masks_to_instance_images(masks: torch.Tensor) -> list[Image.Image]:
    """Convert per-instance mask tensors to a list of binary PIL Images."""
    images = []
    for mask in masks:
        binary = (mask.cpu().numpy() * 255).astype(np.uint8)
        images.append(Image.fromarray(binary, mode="L"))
    return images


def process_batch(
    batch: dict[str, list[Any]],
    image_column: str,
    class_name: str,
    processor: Sam3Processor,
    model: Sam3Model,
    confidence_threshold: float,
    mask_threshold: float,
    output_format: str,
    include_boxes: bool,
) -> dict[str, list]:
    """Process a batch of images and return segmentation masks."""
    images = batch[image_column]

    pil_images = []
    for img in images:
        if isinstance(img, str):
            img = Image.open(img)
        if img.mode != "RGB":
            img = img.convert("RGB")
        pil_images.append(img)

    try:
        inputs = processor(
            images=pil_images,
            text=[class_name] * len(pil_images),
            return_tensors="pt",
        ).to(model.device, dtype=model.dtype)

        with torch.no_grad():
            outputs = model(**inputs)

        results = processor.post_process_instance_segmentation(
            outputs,
            threshold=confidence_threshold,
            mask_threshold=mask_threshold,
            target_sizes=inputs.get("original_sizes").tolist(),
        )

    except Exception as e:
        logger.warning(f"Failed to process batch: {e}")
        return _empty_batch_result(len(pil_images), output_format, include_boxes)

    batch_result: dict[str, list] = {}

    if output_format == "semantic-mask":
        batch_result["segmentation_map"] = []
        batch_result["num_instances"] = []
    else:
        batch_result["segmentation_masks"] = []

    batch_result["scores"] = []
    batch_result["category"] = []
    if include_boxes:
        batch_result["boxes"] = []

    for result in results:
        masks = result.get("masks", torch.tensor([]))
        scores = result.get("scores", torch.tensor([]))
        boxes = result.get("boxes", torch.tensor([]))

        scores_np = scores.cpu().float().numpy() if len(scores) > 0 else np.array([])
        score_list = [float(s) for s in scores_np]
        category_list = [0] * len(score_list)

        if output_format == "semantic-mask":
            batch_result["segmentation_map"].append(masks_to_semantic_map(masks))
            batch_result["num_instances"].append(len(score_list))
        else:
            batch_result["segmentation_masks"].append(
                masks_to_instance_images(masks) if len(masks) > 0 else []
            )

        batch_result["scores"].append(score_list)
        batch_result["category"].append(category_list)

        if include_boxes:
            if len(boxes) > 0:
                boxes_np = boxes.cpu().float().numpy()
                box_list = []
                for box in boxes_np:
                    x1, y1, x2, y2 = box
                    box_list.append(
                        [float(x1), float(y1), float(x2 - x1), float(y2 - y1)]
                    )
                batch_result["boxes"].append(box_list)
            else:
                batch_result["boxes"].append([])

    return batch_result


def _empty_batch_result(
    n: int, output_format: str, include_boxes: bool
) -> dict[str, list]:
    result: dict[str, list] = {}
    if output_format == "semantic-mask":
        result["segmentation_map"] = [Image.new("L", (1, 1), 0)] * n
        result["num_instances"] = [0] * n
    else:
        result["segmentation_masks"] = [[]] * n
    result["scores"] = [[]] * n
    result["category"] = [[]] * n
    if include_boxes:
        result["boxes"] = [[]] * n
    return result


def load_and_validate_dataset(
    dataset_id: str,
    split: str,
    image_column: str,
    max_samples: int | None = None,
    shuffle: bool = False,
    hf_token: str | None = None,
) -> Dataset:
    logger.info(f"Loading dataset: {dataset_id} (split: {split})")

    try:
        dataset = load_dataset(dataset_id, split=split, token=hf_token)
    except Exception as e:
        logger.error(f"Failed to load dataset '{dataset_id}': {e}")
        sys.exit(1)

    if image_column not in dataset.column_names:
        logger.error(f"Column '{image_column}' not found in dataset")
        logger.error(f"Available columns: {dataset.column_names}")
        sys.exit(1)

    if shuffle:
        logger.info("Shuffling dataset")
        dataset = dataset.shuffle()

    if max_samples is not None:
        logger.info(f"Limiting to {max_samples} samples")
        dataset = dataset.select(range(min(max_samples, len(dataset))))

    logger.info(f"Loaded {len(dataset)} samples")
    return dataset


def create_dataset_card(
    source_dataset: str,
    model: str,
    class_name: str,
    output_format: str,
    num_samples: int,
    total_detections: int,
    images_with_detections: int,
    processing_time: str,
    confidence_threshold: float,
    mask_threshold: float,
    include_boxes: bool,
) -> str:
    from datetime import datetime

    detection_rate = (
        (images_with_detections / num_samples * 100) if num_samples > 0 else 0
    )
    avg_detections = total_detections / num_samples if num_samples > 0 else 0

    format_desc = (
        "per-instance binary masks"
        if output_format == "instance-masks"
        else "semantic segmentation maps"
    )

    return f"""---
tags:
- image-segmentation
- sam3
- segment-anything
- segmentation-masks
- uv-script
- generated
---

# Image Segmentation: {class_name.title()} using SAM3

This dataset contains **{format_desc}** for **{class_name}** segmented in images from [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using Meta's SAM3.

**Generated using**: [uv-scripts/sam3](https://huggingface.co/datasets/uv-scripts/sam3) segmentation script

## Statistics

- **Objects Segmented**: {class_name}
- **Total Instances**: {total_detections:,}
- **Images with Detections**: {images_with_detections:,} / {num_samples:,} ({detection_rate:.1f}%)
- **Average Instances per Image**: {avg_detections:.2f}
- **Output Format**: {output_format}

## Processing Details

- **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
- **Model**: [{model}](https://huggingface.co/{model})
- **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
- **Processing Time**: {processing_time}
- **Confidence Threshold**: {confidence_threshold}
- **Mask Threshold**: {mask_threshold}
- **Includes Bounding Boxes**: {"Yes" if include_boxes else "No"}

## Reproduction

```bash
uv run https://huggingface.co/datasets/uv-scripts/sam3/raw/main/segment-objects.py \\
    {source_dataset} \\
    <output-dataset> \\
    --class-name {class_name} \\
    --output-format {output_format} \\
    --confidence-threshold {confidence_threshold} \\
    --mask-threshold {mask_threshold}{" --include-boxes" if include_boxes else ""}
```

---

Generated with [UV Scripts](https://huggingface.co/uv-scripts)
"""


def main():
    args = parse_args()

    class_name = args.class_name.strip()
    if not class_name:
        logger.error("Invalid --class-name argument. Provide a class name.")
        sys.exit(1)

    logger.info("SAM3 Image Segmentation")
    logger.info(f"  Input: {args.input_dataset}")
    logger.info(f"  Output: {args.output_dataset}")
    logger.info(f"  Class: {class_name}")
    logger.info(f"  Format: {args.output_format}")
    logger.info(f"  Confidence threshold: {args.confidence_threshold}")
    logger.info(f"  Batch size: {args.batch_size}")

    if args.hf_token:
        login(token=args.hf_token)
    elif os.getenv("HF_TOKEN"):
        login(token=os.getenv("HF_TOKEN"))

    dataset = load_and_validate_dataset(
        args.input_dataset,
        args.split,
        args.image_column,
        args.max_samples,
        args.shuffle,
        args.hf_token,
    )

    logger.info(f"Loading SAM3 model: {args.model}")
    try:
        processor = Sam3Processor.from_pretrained(args.model)
        model = Sam3Model.from_pretrained(
            args.model, torch_dtype=getattr(torch, args.dtype), device_map="auto"
        )
        logger.info(f"Model loaded on {model.device}")
    except Exception as e:
        logger.error(f"Failed to load model: {e}")
        logger.error("Ensure the model exists and you have access permissions")
        sys.exit(1)

    # Build output features
    new_features = dataset.features.copy()
    if args.output_format == "semantic-mask":
        new_features["segmentation_map"] = ImageFeature()
        new_features["num_instances"] = Value("int32")
    else:
        new_features["segmentation_masks"] = Sequence(ImageFeature())

    new_features["scores"] = Sequence(Value("float32"))
    new_features["category"] = Sequence(ClassLabel(names=[class_name]))

    if args.include_boxes:
        new_features["boxes"] = Sequence(Sequence(Value("float32"), length=4))

    logger.info("Processing images...")
    start_time = time.time()
    processed_dataset = dataset.map(
        lambda batch: process_batch(
            batch,
            args.image_column,
            class_name,
            processor,
            model,
            args.confidence_threshold,
            args.mask_threshold,
            args.output_format,
            args.include_boxes,
        ),
        batched=True,
        batch_size=args.batch_size,
        features=new_features,
        desc="Segmenting objects",
    )
    end_time = time.time()
    processing_time_str = f"{(end_time - start_time) / 60:.1f} minutes"

    # Calculate statistics
    if args.output_format == "semantic-mask":
        total_detections = sum(processed_dataset["num_instances"])
        images_with_detections = sum(
            1 for n in processed_dataset["num_instances"] if n > 0
        )
    else:
        total_detections = sum(
            len(masks) for masks in processed_dataset["segmentation_masks"]
        )
        images_with_detections = sum(
            1 for masks in processed_dataset["segmentation_masks"] if len(masks) > 0
        )

    logger.info("Segmentation complete!")
    logger.info(f"  Total instances: {total_detections}")
    logger.info(
        f"  Images with detections: {images_with_detections}/{len(processed_dataset)}"
    )

    logger.info(f"Pushing to HuggingFace Hub: {args.output_dataset}")
    try:
        processed_dataset.push_to_hub(args.output_dataset, private=args.private)
        logger.info(
            f"Dataset available at: https://huggingface.co/datasets/{args.output_dataset}"
        )
    except Exception as e:
        logger.error(f"Failed to push to hub: {e}")
        logger.info("Saving locally as backup...")
        processed_dataset.save_to_disk("./output_dataset")
        logger.info("Saved to ./output_dataset")
        sys.exit(1)

    logger.info("Creating dataset card...")
    card_content = create_dataset_card(
        source_dataset=args.input_dataset,
        model=args.model,
        class_name=class_name,
        output_format=args.output_format,
        num_samples=len(processed_dataset),
        total_detections=total_detections,
        images_with_detections=images_with_detections,
        processing_time=processing_time_str,
        confidence_threshold=args.confidence_threshold,
        mask_threshold=args.mask_threshold,
        include_boxes=args.include_boxes,
    )
    card = DatasetCard(card_content)
    card.push_to_hub(args.output_dataset, token=args.hf_token or os.getenv("HF_TOKEN"))
    logger.info("Dataset card created and pushed!")


if __name__ == "__main__":
    main()