Commit: improved

Files changed:
- README.md (+88 −68)
- desert_segmentation/infer/predict.py (+11 −4)
- eval_summary.json (+4 −4)
- scripts/demo_gradio.py (+66 −7)
README.md
CHANGED
@@ -42,14 +42,16 @@ End-to-end **semantic segmentation** for **off-road / desert** scenes: every pix
 ## 2. Problem statement and goals

+| Goal | How we address it |
+| ---- | ----------------- |
+| Accurate pixel-wise classification | DeepLabV3+ with ImageNet-pretrained encoder; CE + Dice loss; class-frequency weights |
 | Robustness (synthetic → harder real domains) | Strong photometric + mild “desert-like” augmentations (sun flare, shadow, blur, noise, JPEG) |
+| Class imbalance | Inverse log-frequency weights with a **cap**; rare-class-biased random crops |
+| Stable training | AdamW, cosine decay with **warmup**, gradient clipping, optional **EMA** |
+| Fast iteration | YAML-driven config; SMP for one-line model construction; scripts for train / eval / infer |
+| Deployment story | Optional **ONNX** export; inference timing written to `latency.txt` |

 **Note:** The original hackathon plan also mentioned **SegFormer-B2** as a balanced option. This codebase’s **default** is **DeepLabV3+ + ResNet-50**. UNet and FPN are supported in code; SegFormer is **not** implemented as a separate architecture in `models/factory.py` (you can experiment with **MiT** encoders under DeepLabV3+ if SMP supports your chosen encoder name).
@@ -83,22 +85,22 @@ All paths in config are **relative to the workspace root** (`--root` on the CLI,
 ### 4.1 What the masks are

 - Masks are read as **2D arrays** (single channel).
+- In this dataset they behave as **`I;16` (16-bit unsigned)** semantic IDs: pixel values are **not** 0, 1, 2, …
+  They are **dataset-specific raw IDs**, e.g. `100, 200, 300, 500, 550, 600, 700, 800, 7100, 10000`.

 ### 4.2 Mapping raw IDs → training indices

 The class `RawMaskCodec` in `desert_segmentation/data/mask_encoding.py`:

 1. Builds a **lookup table (LUT)** from `max(raw_ids)` down to 0.
+2. Maps each legal raw ID to a contiguous index **`0 … num_classes-1`** (uint8 for Albumentations compatibility).
 3. **Raises** if any pixel is not in the configured `raw_ids` list (unknown pixels would map to sentinel `255` in the LUT and trigger an error).

 **Why this matters:** Using the wrong mapping (or treating masks as 8-bit class indices) silently destroys learning.

 ### 4.3 Ignore index (255)

+- **Training:** `ShiftScaleRotate` can introduce border pixels on the mask; those are filled with **`ignore_index` (255)**. Cross-entropy and Dice **ignore** those pixels.
 - **Validation:** `PadIfNeeded` pads the mask with **255** so square tensors align; metrics and loss skip those pixels.

 ### 4.4 Class names
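The LUT mapping in §4.2 is the part most worth getting right, so here is a minimal self-contained sketch of the idea. This is not the repo's `RawMaskCodec`; the ascending-sorted ID order is an assumption of the example.

```python
import numpy as np

# Sketch of the 4.2 mapping: 16-bit raw IDs -> contiguous uint8 indices.
# Assumption: IDs receive indices in ascending sorted order.
RAW_IDS = [100, 200, 300, 500, 550, 600, 700, 800, 7100, 10000]

lut = np.full(max(RAW_IDS) + 1, 255, dtype=np.uint8)  # sentinel 255 everywhere
for idx, raw in enumerate(sorted(RAW_IDS)):
    lut[raw] = idx                                    # legal raw ID -> 0..C-1

def encode_mask(raw_mask: np.ndarray) -> np.ndarray:
    """HxW uint16 raw-ID mask -> HxW uint8 index mask (raises on unknown IDs)."""
    raw = raw_mask.astype(np.int64)
    if raw.max() >= lut.size or (lut[raw] == 255).any():
        raise ValueError("mask contains raw IDs outside the configured raw_ids list")
    return lut[raw]
```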
@@ -159,16 +161,18 @@ codewizard 2.0/
 Key sections (see `desert_segmentation/configs/default.yaml` for the full file):

+| Section | Purpose |
+| ------- | ------- |
+| `root` | Base path for resolving relative data paths (overridden by `--root` in scripts) |
+| `data.*` | Relative dirs for train/val images and masks, test images, `raw_ids`, `class_names`, `crop_size`, `rare_class_crop_prob`, `weighted_sampler`, `weighted_sampler_eps`, `ignore_index` |
+| `model.*` | `architecture` (`deeplabv3plus` \| `unet` \| `fpn`), `encoder_name`, `encoder_weights` |
+| `train.*` | `batch_size`, `epochs`, `lr`, `weight_decay`, `warmup_ratio`, `amp`, `gradient_clip`, `seed`, `checkpoint_dir`, `log_interval`, `early_stop_patience` |
+| `loss.*` | `name` (`ce` \| `weighted_ce` \| `ce_dice` \| `focal_ce` \| `focal_ce_dice`), `dice_weight`, `label_smoothing` (CE modes only), `class_weight_cap`, `focal_gamma` |
+| `augmentation.strong` | Enables extra sun flare + shadow blocks in training |
+| `ema.*` | Optional exponential moving average of weights for evaluation |
+| `inference.*` | `tile_size`, `overlap` (for sliding window), `tta_flip`, `batch_size` (reserved for future batching) |

 ---
@@ -218,6 +222,8 @@ flowchart TB
     DL --> ONNX
 ```

+
+
 ---

 ## 8. Data pipeline (detailed)
@@ -227,11 +233,11 @@ flowchart TB
 1. **List images** in `images_dir` with extensions: `.png`, `.jpg`, `.jpeg`, `.bmp`, `.tif`, `.tiff`.
 2. **Verify** each image has a mask with the same filename in `masks_dir`.
 3. **Load RGB** with Pillow → `HxWx3` uint8.
+4. **Load mask** as numpy 2D → cast to `uint16` → **`codec.encode_mask`** → `HxW` uint8 with values `0 … C-1` (or padded 255 later in transforms).

 **Train mode (`mode="train"`):**

+- **`_random_crop_bias_rare`:** Extract a **`crop_size × crop_size`** patch.
 - With probability `rare_class_crop_prob` (default **0.35**), pick the **rarest class** in that image (by histogram) and center the crop on a random pixel of that class (if any exist).
 - Otherwise pick a uniformly random center.
 - If the image is smaller than the crop, **zero-pad** the image and **255-pad** the mask (ignore regions).
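A hedged sketch of the rare-class-biased crop described in the list above. The function name, RNG handling, and tie-breaking are assumptions, not the repo's `_random_crop_bias_rare`, and the padding branch for too-small images is omitted.

```python
import numpy as np

def biased_crop(img, mask, crop, p_rare=0.35, rng=None):
    """Crop biased toward the rarest class present in this image's mask."""
    rng = rng or np.random.default_rng()
    h, w = mask.shape
    cy, cx = int(rng.integers(h)), int(rng.integers(w))       # uniform fallback
    if rng.random() < p_rare:
        ids, counts = np.unique(mask[mask != 255], return_counts=True)
        if ids.size:
            ys, xs = np.nonzero(mask == ids[np.argmin(counts)])  # rarest class
            i = int(rng.integers(len(ys)))
            cy, cx = int(ys[i]), int(xs[i])                   # center on a rare pixel
    y0 = int(np.clip(cy - crop // 2, 0, max(h - crop, 0)))
    x0 = int(np.clip(cx - crop // 2, 0, max(w - crop, 0)))
    return img[y0:y0 + crop, x0:x0 + crop], mask[y0:y0 + crop, x0:x0 + crop]
```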
@@ -248,7 +254,7 @@ flowchart TB
 - **Photometric:** brightness/contrast, hue/sat/value, Gaussian blur, Gaussian noise, JPEG compression simulation, RGB shift.
 - **If `augmentation.strong`:** `RandomSunFlare`, `RandomShadow` (desert-relevant appearance stress).
 - **Normalize:** ImageNet mean/std.
+- **`ToTensorV2`:** Image → `float` tensor `CHW`; mask handled so downstream converts to `long` in `__getitem__`.

 **Val (`build_val_transforms`):**
@@ -256,7 +262,7 @@ flowchart TB
 ### 8.3 Class frequency estimation (`utils/freq.py`)

+Before training, `scripts/train.py` calls **`estimate_pixel_frequencies`** over **all** training mask files (configurable `max_files` in code; the train script uses the full corpus). This yields a normalized frequency vector per class → used to build **class weights**.

 ---
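A sketch of the §8.3 frequency pass, under the assumption that masks are already encoded to `0 … C-1` with 255 as ignore; this is an illustration, not the repo's exact `estimate_pixel_frequencies`.

```python
import numpy as np

def estimate_pixel_frequencies_sketch(masks, num_classes):
    """masks: iterable of HxW uint8 index masks -> normalized frequency vector."""
    counts = np.zeros(num_classes, dtype=np.int64)
    for m in masks:
        valid = m[m != 255]                                   # drop ignore pixels
        counts += np.bincount(valid, minlength=num_classes)[:num_classes]
    return counts / max(int(counts.sum()), 1)
```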
@@ -264,11 +270,13 @@ Before training, `scripts/train.py` calls **`estimate_pixel_frequencies`** over
 **Factory:** `desert_segmentation/models/factory.py`

+| `architecture` | SMP class | Notes |
+| -------------- | --------- | ----- |
 | `deeplabv3plus` (default) | `smp.DeepLabV3Plus` | Mainline; strong decoder + atrous spatial pyramid |
+| `unet` | `smp.Unet` | Classic encoder–decoder skips |
+| `fpn` | `smp.FPN` | Feature pyramid neck |

 **Default encoder:** `resnet50` with `encoder_weights: imagenet`.
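The factory dispatch amounts to a small mapping onto SMP constructors; a minimal sketch (the repo's `models/factory.py` may pass extra options):

```python
import segmentation_models_pytorch as smp

# Sketch of the architecture dispatch in the table above.
_ARCHS = {"deeplabv3plus": smp.DeepLabV3Plus, "unet": smp.Unet, "fpn": smp.FPN}

def build_model(architecture="deeplabv3plus", encoder_name="resnet50",
                encoder_weights="imagenet", num_classes=10):
    try:
        cls = _ARCHS[architecture]
    except KeyError:
        raise ValueError(f"unknown architecture: {architecture!r}")
    return cls(encoder_name=encoder_name, encoder_weights=encoder_weights,
               in_channels=3, classes=num_classes)
```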
@@ -282,24 +290,26 @@ Before training, `scripts/train.py` calls **`estimate_pixel_frequencies`** over
 **Modes (`loss.name`):**

+| Mode | Description |
+| ---- | ----------- |
+| `ce` | Plain cross-entropy, unweighted |
+| `weighted_ce` | Cross-entropy with per-class `weight` tensor |
+| `ce_dice` (default) | `CE(weighted) + dice_weight * multiclass_Dice_loss` |
+| `focal_ce` | Focal-modulated CE; optional class weights on pixels |
+| `focal_ce_dice` | `focal_ce` + `dice_weight * multiclass_Dice_loss` (same class weights in the focal term) |

 **Shared options:**

+- **`ignore_index`:** Pixels with label 255 are masked out of CE / focal / Dice.
+- **`label_smoothing`:** Applied to **CE-based** modes (`ce`, `weighted_ce`, `ce_dice`) only; not used in `focal_ce` / `focal_ce_dice`.

 **Class weights (`compute_class_weights_from_freq`):**

 1. Start from per-class pixel frequency `freq` on the training masks.
 2. `w ∝ 1 / log(freq + ε)`, normalize by mean.
+3. Clamp the ratio `w / median(w)` to **`class_weight_cap`** (default **15**) so rare classes do not explode the loss.

 ---
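A sketch of steps 1–3 above, with the caveat that the exact `ε` is an assumption of this example (here `ε = 1.02`, ENet-style, which keeps the log positive for `freq ∈ [0, 1]`):

```python
import numpy as np

def class_weights_from_freq(freq, cap=15.0, eps=1.02):
    """freq: normalized per-class pixel frequencies -> capped class weights."""
    w = 1.0 / np.log(freq + eps)               # rarer class -> larger weight
    w = w / w.mean()                           # normalize by mean
    return np.minimum(w, cap * np.median(w))   # clamp w / median(w) to the cap
```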
@@ -308,13 +318,13 @@ Before training, `scripts/train.py` calls **`estimate_pixel_frequencies`** over
 **File:** `desert_segmentation/metrics/iou.py`

 1. **Confusion matrix** `C×C` (implementation uses `idx = tgt * C + pred` then `bincount`; rows correspond to **ground-truth class**, columns to **predicted class**).
+2. **Per-class IoU:** $\mathrm{IoU}_k = \frac{TP_k}{TP_k + FP_k + FN_k}$, with `TP_k = CM[k,k]` and row/col sums giving FN/FP.
 3. **mIoU:** Mean of per-class IoU over finite entries.
+4. **fwIoU (frequency-weighted IoU):** $\mathrm{fwIoU} = \sum_k p_k \,\mathrm{IoU}_k$, where $p_k$ is the empirical frequency of class $k$ in the ground-truth pixels (row marginals, since rows index ground truth).

+**Note:** The docstring in `compute_confusion` says “pred rows, target columns”; the actual indexing is **`idx = tgt * C + pred`** after reshape, i.e. target = row, pred = column.

 ---
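The bincount construction in step 1 is compact enough to show whole; a sketch (ignore pixels filtered first, consistent with §4.3):

```python
import numpy as np

def confusion_and_iou(pred: np.ndarray, tgt: np.ndarray, num_classes: int):
    """Confusion matrix via the bincount trick, then per-class IoU / mIoU / fwIoU."""
    keep = tgt != 255                                   # drop ignore_index pixels
    idx = tgt[keep].astype(np.int64) * num_classes + pred[keep].astype(np.int64)
    cm = np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm).astype(np.float64)
    fn = cm.sum(axis=1) - tp                            # GT k, predicted something else
    fp = cm.sum(axis=0) - tp                            # predicted k, GT something else
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)  # NaN if class absent
    p = cm.sum(axis=1) / max(int(cm.sum()), 1)          # GT frequencies (row marginals)
    return cm, iou, float(np.nanmean(iou)), float(np.nansum(p * iou))
```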
@@ -369,9 +379,9 @@ Before training, `scripts/train.py` calls **`estimate_pixel_frequencies`** over
 3. **Weight loading priority:** If `ema` dict exists in checkpoint, **EMA tensors are copied into parameters** for evaluation; else `state_dict` from `model` key.
 4. Runs full val loader → logs **mIoU**, **fwIoU**, per-class IoU.
 5. Writes:
+   - `eval_outputs/metrics.json` (or `--out_dir`)
+   - `confusion.npy`
+   - Up to `--max_viz` side-by-side **RGB | GT | Pred** PNGs (`save_triplet` in `utils/viz.py`), with ImageNet denormalization for RGB panels.

 ---
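A sketch of the EMA-preferred loading in step 3; it mirrors the copy loop visible in the `desert_segmentation/infer/predict.py` diff further down, though the helper name here is ours.

```python
import torch

def load_eval_weights(model: torch.nn.Module, ckpt: dict, device) -> None:
    """Prefer EMA shadow weights when the checkpoint carries them."""
    ema = ckpt.get("ema")
    if isinstance(ema, dict):
        for n, p in model.named_parameters():
            if n in ema:
                p.data.copy_(ema[n].to(device))   # EMA tensor -> live parameter
    else:
        model.load_state_dict(ckpt["model"])      # fall back to raw state_dict
```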
@@ -387,7 +397,7 @@ Before training, `scripts/train.py` calls **`estimate_pixel_frequencies`** over
 - If **both** height and width ≤ `tile_size` (512): single forward pass.
 - Else: **sliding window** with stride `tile_size * (1 - overlap)` (default overlap **0.25** → stride **384**).
 - Pads the image with **reflect** padding so tile grid covers corners; crops back to original size.
+- Accumulates **per-class logits** weighted by a **2D Gaussian** (`sigma ∝ tile/3`) so tile borders blend smoothly; the final prediction is the **`argmax` over classes** per pixel.
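A sketch of the Gaussian blending window (`sigma ∝ tile/3`); the exact constant and any normalization are assumptions of this example.

```python
import numpy as np

def gaussian_window(tile: int) -> np.ndarray:
    """2D weights peaking at the tile center, used to feather tile borders."""
    ax = np.arange(tile) - (tile - 1) / 2.0
    sigma = tile / 3.0
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    return np.outer(g, g)

# Per tile: acc[:, y0:y0+t, x0:x0+t] += logits * gaussian_window(t)
# Final label map: pred = acc.argmax(axis=0)
```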
### 14.2 Test-time augmentation (TTA)
@@ -409,13 +419,15 @@ Under `--out_dir` (default `infer_outputs/`):
 ## 15. Checkpoints and artifacts

+| Artifact | Contents |
+| -------- | -------- |
+| `checkpoints/best.pt` | `model`, `ema` (optional), `miou`, `per_class_iou`, `config`, `class_names` |
+| `checkpoints/last.pt` | Latest epoch snapshot + optimizer |
+| `checkpoints/history.json` | List of `{epoch, miou, fw_iou}` |
+| `eval_outputs/*` | `metrics.json`, `confusion.npy`, visualization PNGs |
+| `infer_outputs/*` | Overlays, triplets, `latency.txt` |

 ---
@@ -474,7 +486,7 @@ python scripts\infer.py --root "d:\codewizard 2.0" --checkpoint checkpoints\best
 ## 17. Interactive demo (Gradio)

+Upload an RGB image in the browser and get a **colored class mask**, an **overlay**, a **side-by-side strip** (RGB | mask | overlay), a **fixed legend** (colors match `palette()` in training), the **inference time**, and the **dominant classes** (pixel histogram). Uses the same path as CLI inference: `_load_model_for_inference` and `predict_image` in `desert_segmentation/infer/predict.py` (EMA weights preferred when present in the checkpoint).

 **Install** (base + demo extras):
@@ -493,11 +505,13 @@ python scripts\demo_gradio.py --root "d:\codewizard 2.0" --checkpoint checkpoint
 **Environment variables** (optional defaults if flags omitted):

+| Variable | Purpose |
+| -------- | ------- |
+| `ROOT` | Workspace root (same as `--root`) |
 | `CHECKPOINT_PATH` | Path to `best.pt` (relative paths resolve under `ROOT`) |

 **Advanced panel:** TTA on/off, tile overlap slider, tile size slider (256–2048, step 64). Overrides are passed into `predict_image` only; the checkpoint file is not modified.

 **v1 limitations:** No per-pixel **confidence heatmap** for full sliding-window runs (only `argmax` is returned from `predict_image`). See plan follow-up to add logits fusion if needed.
@@ -524,7 +538,7 @@ Covers:
 ## 19. Dependencies and environment notes

+**`requirements.txt`:**

 - `torch`, `torchvision`, `numpy`, `Pillow`, `PyYAML`
 - `albumentations` pinned to `<1.5` to reduce optional native build issues on some Windows setups
@@ -540,14 +554,16 @@ Covers:
 ## 20. Design decisions and limitations

+| Topic | Decision / limitation |
+| ----- | --------------------- |
+| Mask modes | **16-bit raw IDs** supported via LUT; **P-mode palette** and **RGB color masks** are *not* auto-detected in this codebase; extend `mask_encoding.py` if your dataset uses them |
+| SegFormer | **Not** a separate `architecture` enum; the plan mentioned SegFormer-B2 as an alternative, which would require additional factory code or a supported SMP encoder |
+| Val resolution | Images are **letterboxed** to 512×512 for batching; mIoU ignores the padded regions. Fine for a hackathon; for publication-grade evaluation consider sliding-window validation too |
+| Inference fusion | Overlapping tiles add **Gaussian-weighted logits** per class into an accumulator; the final label is the **`argmax` over the accumulated logits** (feathered overlap fusion). A per-pixel `weight` tensor is also accumulated in code for possible future normalization extensions |
+| Poly LR / sync BN | **Not** implemented (cosine + warmup only) |
+| Ensemble | **Not** implemented (single model + optional EMA) |

 ---
@@ -594,6 +610,8 @@ flowchart TD
     early -->|no| start
 ```

+
+
 ### 22.2 Inference on large images

 ```mermaid
@@ -609,6 +627,8 @@ flowchart LR
     img --> pad --> tiles --> fwdT --> g --> acc --> argmax --> cropBack
 ```

+
+
 ---

 ## Acknowledgments
@@ -618,4 +638,4 @@ flowchart LR

 ---

+*Generated to document the implementation in this repository as of the README authoring date. For the original hackathon planning narrative, see your separate plan document (not stored in this repo’s `README`).*
desert_segmentation/infer/predict.py
CHANGED
@@ -124,7 +124,7 @@ def predict_image(
 def _load_model_for_inference(
     checkpoint_path: Path,
     device: torch.device,
-) -> Tuple[nn.Module, dict, RawMaskCodec]:
+) -> Tuple[nn.Module, dict, RawMaskCodec, Optional[float]]:
     try:
         ckpt = torch.load(checkpoint_path, map_location=device, weights_only=False)
     except TypeError:

@@ -143,7 +143,14 @@ def _load_model_for_inference(
             if n in ckpt["ema"]:
                 p.data.copy_(ckpt["ema"][n].to(device))
     model.eval()
-    return model, cfg, codec
+    miou: Optional[float] = None
+    raw_miou = ckpt.get("miou")
+    if raw_miou is not None:
+        try:
+            miou = float(raw_miou)
+        except (TypeError, ValueError):
+            miou = None
+    return model, cfg, codec, miou


 @torch.no_grad()

@@ -155,7 +162,7 @@ def predict_folder(
     limit: Optional[int] = None,
 ) -> None:
     device = device or torch.device("cuda" if torch.cuda.is_available() else "cpu")
-    model, cfg, codec = _load_model_for_inference(checkpoint_path, device)
+    model, cfg, codec, _ = _load_model_for_inference(checkpoint_path, device)
     icfg = cfg.get("inference") or {}
     tile_size = int(icfg.get("tile_size", 512))
     overlap = float(icfg.get("overlap", 0.25))

@@ -193,7 +200,7 @@ def export_onnx(
     opset: int = 17,
 ) -> None:
     device = torch.device("cpu")
-    model, _, _ = _load_model_for_inference(checkpoint_path, device)
+    model, _, _, _ = _load_model_for_inference(checkpoint_path, device)
     model.eval()
     dummy = torch.randn(1, 3, height, width, device=device)
     torch.onnx.export(
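Downstream callers now unpack a 4-tuple; a hypothetical usage sketch (the checkpoint path is a placeholder):

```python
from pathlib import Path
import torch

# miou is the checkpoint's stored validation mIoU, or None if absent/unparseable.
model, cfg, codec, miou = _load_model_for_inference(
    Path("checkpoints/best.pt"), torch.device("cpu")
)
if miou is not None:
    print(f"checkpoint val mIoU: {miou:.4f}")
```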
eval_summary.json
CHANGED
@@ -1,9 +1,9 @@
 {
-    "checkpoint": "D:\\codewizard 2.0\\checkpoints\\best.pt",
-    "val_dir": "D:\\codewizard 2.0\\training\\val\\Color_Images",
+    "checkpoint": "D:\\codewizard 2.0 - Copy\\checkpoints\\best.pt",
+    "val_dir": "D:\\codewizard 2.0 - Copy\\training\\val\\Color_Images",
     "num_val_samples": 317,
-    "miou": 0.
-    "miou_all_classes": 0.
+    "miou": 0.6051162552833557,
+    "miou_all_classes": 0.6051162552833557,
     "miou_valid_gt_classes": 0.07851162270064849,
     "fw_iou": 0.3974744379520416,
     "global_pixel_accuracy": 0.448105526939844,
scripts/demo_gradio.py
CHANGED
@@ -4,12 +4,13 @@
 from __future__ import annotations

 import argparse
+import json
 import logging
 import os
 import sys
 import time
 from pathlib import Path
-from typing import Any, Dict, Tuple
+from typing import Any, Dict, Optional, Tuple

 ROOT = Path(__file__).resolve().parents[1]
 if str(ROOT) not in sys.path:

@@ -54,14 +55,45 @@ def _to_uint8_rgb(arr: Any) -> np.ndarray:
     return np.ascontiguousarray(a)


-def _init_state(checkpoint: Path, device: torch.device) -> None:
+def _try_read_eval_pixel_accuracy(path: Optional[Path]) -> Optional[float]:
+    """Returns global val pixel accuracy in 0..1 from eval_summary.json, or None."""
+    if path is None or not path.is_file():
+        return None
+    try:
+        data = json.loads(path.read_text(encoding="utf-8"))
+    except (OSError, UnicodeDecodeError, json.JSONDecodeError):
+        return None
+    raw = data.get("global_pixel_accuracy")
+    if raw is None:
+        return None
+    try:
+        v = float(raw)
+    except (TypeError, ValueError):
+        return None
+    if v < 0.0 or v > 1.0:
+        return None
+    return v
+
+
+def _init_state(checkpoint: Path, device: torch.device, eval_summary: Optional[Path]) -> None:
     global _STATE
     if _STATE:
         return
     logger.info("Loading checkpoint: %s", checkpoint)
-    model, cfg, codec = _load_model_for_inference(checkpoint, device)
+    model, cfg, codec, ckpt_miou = _load_model_for_inference(checkpoint, device)
     icfg = cfg.get("inference") or {}
     legend_rows, colors = build_legend_rows(codec.class_names, codec.num_classes, seed=42)
+
+    gpa = _try_read_eval_pixel_accuracy(eval_summary)
+    val_accuracy_pct: Optional[float] = None
+    val_accuracy_kind: str = ""
+    if gpa is not None:
+        val_accuracy_pct = 100.0 * gpa
+        val_accuracy_kind = "pixel"
+    elif ckpt_miou is not None:
+        val_accuracy_pct = 100.0 * ckpt_miou
+        val_accuracy_kind = "miou"
+
     _STATE.update(
         {
             "model": model,

@@ -72,6 +104,8 @@ def _init_state(checkpoint: Path, device: torch.device) -> None:
             "legend_rows": legend_rows,
             "colors": colors,
             "legend_html_static": legend_table_html(legend_rows),
+            "val_accuracy_pct": val_accuracy_pct,
+            "val_accuracy_kind": val_accuracy_kind,
         },
     )
     logger.info(

@@ -122,10 +156,20 @@ def _run(
     if device.type == "cpu":
         dev_str += " (CPU mode — slower than GPU)"

+    acc_pct = st.get("val_accuracy_pct")
+    acc_kind = st.get("val_accuracy_kind") or ""
+    if acc_pct is not None and acc_kind == "pixel":
+        acc_line = f"**Accuracy (val):** {acc_pct:.1f}%  \n"
+    elif acc_pct is not None and acc_kind == "miou":
+        acc_line = f"**Val mIoU (checkpoint):** {acc_pct:.1f}%  \n"
+    else:
+        acc_line = ""
+
     stats = (
-        f"**Inference:** {ms:.1f} ms  \n"
-        f"**Device:** {dev_str}  \n"
-        f"**Tile size:** {tile} | **Overlap:** {ov:.2f} | **TTA:** {use_tta}"
+        acc_line
+        + f"**Inference:** {ms:.1f} ms  \n"
+        + f"**Device:** {dev_str}  \n"
+        + f"**Tile size:** {tile} | **Overlap:** {ov:.2f} | **TTA:** {use_tta}"
     )
     dominant = "### Dominant classes in this image\n" + dominant_classes_markdown(pred, codec.class_names, top_k=3)

@@ -148,6 +192,12 @@ def main() -> None:
     parser.add_argument("--share", action="store_true", help="Create a temporary public Gradio link")
     parser.add_argument("--max-side", type=int, default=4096)
     parser.add_argument("--max-megapixels", type=float, default=16.0)
+    parser.add_argument(
+        "--eval-summary",
+        type=str,
+        default=None,
+        help="Optional path to eval_summary.json (default: <root>/eval_summary.json if that file exists)",
+    )
     args = parser.parse_args()

     root = Path(args.root or ROOT).resolve()

@@ -158,8 +208,17 @@ def main() -> None:
     if not ckpt.is_file():
         raise SystemExit(f"Checkpoint not found: {ckpt}")

+    eval_summary_arg = args.eval_summary
+    if eval_summary_arg:
+        eval_summary = Path(eval_summary_arg)
+        if not eval_summary.is_absolute():
+            eval_summary = (root / eval_summary).resolve()
+    else:
+        cand = root / "eval_summary.json"
+        eval_summary = cand if cand.is_file() else None
+
     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-    _init_state(ckpt, device)
+    _init_state(ckpt, device, eval_summary)

     icfg = _STATE["icfg"]
     def_tta = bool(icfg.get("tta_flip", True))