Model description
These models detect both standing and fallen deadwood from UAV RGB images. All model configurations and weights here are fine-tuned from models available in the Detectron2 Model Zoo.
The models are trained on 512x512 px RGB image patches with a spatial resolution between 3.9 cm and 4.3 cm, with hand-annotated deadwood data based on visual inspection. The training dataset is located in the vicinity of Hiidenportti National Park, Sotkamo, Finland, and the images were acquired during the leaf-on season on 16 and 17 July 2019. The models are therefore most likely best suited for imagery from the leaf-on season with a similar ground sampling distance.
An example app using the R101 backbone without TTA is running at https://huggingface.co/spaces/mayrajeo/maskrcnn-deadwood.
Training data
These models were trained on expert-annotated deadwood data acquired during the leaf-on season on 16–17 July 2019 from Hiidenportti, Sotkamo, Eastern Finland. The ground resolution of the data varied between 3.9 and 4.4 cm. In addition, the models were tested with data collected from Evo, Hämeenlinna, Southern Finland, acquired on 11 July 2018. The Evo data were used only for testing the models.
Results
Patch-level data are non-overlapping 512x512 pixel tiles extracted from larger virtual plots. The results presented here were run with test-time augmentation.
Scene-level data are the full virtual plots extracted from the full images. For Hiidenportti, the virtual plot sizes vary between 2560x2560 px and 8192x4864 px. These scenes also contain non-annotated buffer areas in order to cover the complete annotated area. For Sudenpesänkangas, all 71 scenes are 100x100 meters (2063x2062 pixels), and during inference they are extracted from the full mosaic with enough buffer to cover the full area. The results presented here were run for 512x512 pixel tiles with 256 px overlap, with both the edge filtering and mask merging described in the workflow.
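For reference, the sketch below illustrates how such an overlapping 512x512 px tile grid can be laid over a scene. It is a plain-Python illustration only, not the project's actual tiling code.

def tile_origins(width, height, tile_size=512, overlap=256):
    # top-left corners of tile_size x tile_size tiles with the given overlap
    step = tile_size - overlap
    xs = list(range(0, max(width - tile_size, 0) + 1, step))
    ys = list(range(0, max(height - tile_size, 0) + 1, step))
    # make sure the right and bottom edges are also covered
    if xs[-1] + tile_size < width:
        xs.append(width - tile_size)
    if ys[-1] + tile_size < height:
        ys.append(height - tile_size)
    return [(x, y) for y in ys for x in xs]

origins = tile_origins(2560, 2560)  # a 2560x2560 px scene yields a 9x9 grid of overlapping tiles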
Hiidenportti test set
The Hiidenportti test set contains 241 non-overlapping 512x512 pixel image patches, extracted from 5 scenes that cover 11 circular field plots.
Model | Patch AP50 | Patch AP | Patch AP-groundwood | Patch AP-uprightwood | Scene AP50 | Scene AP | Scene AP-groundwood | Scene AP-uprightwood |
---|---|---|---|---|---|---|---|---|
mask_rcnn_R_50_FPN_3x | 0.654 | 0.339 | 0.316 | 0.363 | 0.640 | 0.315 | 0.235 | 0.396 |
mask_rcnn_R_101_FPN_3x | 0.704 | 0.366 | 0.326 | 0.406 | 0.683 | 0.341 | 0.246 | 0.436 |
mask_rcnn_X_101_32x8d_FPN_3x | 0.679 | 0.355 | 0.333 | 0.377 | 0.661 | 0.333 | 0.255 | 0.412 |
cascade_mask_rcnn_R_50_FPN_3x | 0.652 | 0.345 | 0.306 | 0.384 | 0.623 | 0.317 | 0.223 | 0.411 |
Evo dataset
The Sudenpesänkangas dataset from Evo contains 798 non-overlapping 512x512 pixel image patches, extracted from 71 scenes.
Model | Patch AP50 | Patch AP | Patch AP-groundwood | Patch AP-uprightwood | Scene AP50 | Scene AP | Scene AP-groundwood | Scene AP-uprightwood |
---|---|---|---|---|---|---|---|---|
mask_rcnn_R_50_FPN_3x | 0.486 | 0.237 | 0.175 | 0.299 | 0.474 | 0.221 | 0.152 | 0.290 |
mask_rcnn_R_101_FPN_3x | 0.519 | 0.252 | 0.183 | 0.321 | 0.511 | 0.236 | 0.160 | 0.311 |
mask_rcnn_X_101_32x8d_FPN_3x | 0.502 | 0.245 | 0.182 | 0.307 | 0.494 | 0.232 | 0.159 | 0.305 |
cascade_mask_rcnn_R_50_FPN_3x | 0.497 | 0.248 | 0.172 | 0.323 | 0.473 | 0.225 | 0.148 | 0.302 |
How to use
Running for small patches
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
import cv2

cfg = get_cfg()
cfg.merge_from_file('<path_to_model_config>')
cfg.OUTPUT_DIR = '<path_to_output>'
cfg.MODEL.WEIGHTS = '<path_to_weights>'
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # score threshold for detections
predictor = DefaultPredictor(cfg)

img = cv2.imread('<path_to_image_patch>') # BGR image, as DefaultPredictor expects by default
outputs = predictor(img)
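The returned dictionary contains a standard Detectron2 Instances object, so the detections can be read out for example like this:

instances = outputs['instances'].to('cpu')
pred_classes = instances.pred_classes.numpy()   # predicted class indices
scores = instances.scores.numpy()               # detection confidences
masks = instances.pred_masks.numpy()            # one boolean HxW mask per detection
boxes = instances.pred_boxes.tensor.numpy()     # XYXY bounding boxes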
Running for larger scenes
Running on larger scenes requires the following steps:
- Tiling the scenes into smaller image patches, optionally with overlap
- Running the model on these smaller patches
- Gathering the predictions into a single GIS data file (a rough sketch of this step follows below)
- Optionally post-processing the results
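As an illustration of the gathering step only, the sketch below runs the patch-level predictor from above on one georeferenced tile, polygonizes the predicted masks, and writes them to a GeoJSON file. It assumes rasterio, shapely and geopandas are installed; these are assumptions for the sketch, not part of the drone_detector workflow itself.

import geopandas as gpd
import rasterio
from rasterio import features
from rasterio.windows import Window
from shapely.geometry import shape

records = []
with rasterio.open('<path_to_image>') as src:
    crs = src.crs
    window = Window(col_off=0, row_off=0, width=512, height=512)  # one example tile
    transform = src.window_transform(window)                      # pixel -> map coordinates
    patch = src.read([1, 2, 3], window=window)                    # (3, H, W) RGB, assumed uint8
    img = patch.transpose(1, 2, 0)[:, :, ::-1].copy()             # to HxWx3 BGR for DefaultPredictor
    outputs = predictor(img)
    instances = outputs['instances'].to('cpu')
    for mask, cls, score in zip(instances.pred_masks.numpy(),
                                instances.pred_classes.numpy(),
                                instances.scores.numpy()):
        # vectorize each binary mask into georeferenced polygons
        for geom, _ in features.shapes(mask.astype('uint8'), mask=mask, transform=transform):
            records.append({'geometry': shape(geom), 'class_id': int(cls), 'score': float(score)})

gpd.GeoDataFrame(records, geometry='geometry', crs=crs).to_file('<name_for_predictions>.geojson', driver='GeoJSON')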
The drone_detector package has helpers that cover the full workflow, including overlap handling and post-processing:
from drone_detector.engines.detectron2.predict import predict_instance_masks

predict_instance_masks(
    path_to_model_config='<path_to_model_config>', # model config file
    path_to_image='<path_to_image>', # which image to process
    outfile='<name_for_predictions>.geojson', # where to save the results
    processing_dir='temp', # directory for temporary files, deleted afterwards. Default: temp
    tile_size=512, # image patch size in pixels, square patches. Default: 400
    tile_overlap=256, # overlap between tiles in pixels. Default: 100
    smooth_preds=False, # not yet implemented; will at some point run dilation+erosion to smooth polygons. Default: False
    coco_set='<path_to_coco>', # the COCO set the model was trained on, used to infer class names. None defaults to dummy categories. Default: None
    postproc_results=True # whether to discard masks in the edge regions of patches. Default: False
)
Also, after installing the package, predict_instance_masks_detectron2 can be used as a CLI command with identical syntax.
When the provided models are used with the default dummy classes, class 1 is standing deadwood and class 2 is fallen deadwood.
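The resulting GeoJSON can then be inspected with e.g. geopandas. Note that the name of the attribute column holding the class id depends on the drone_detector output; the label column used below is only an assumption, so check the actual columns first.

import geopandas as gpd

preds = gpd.read_file('<name_for_predictions>.geojson')
print(preds.columns)  # check the attribute names actually written by drone_detector
# hypothetical mapping, assuming a column named 'label' holds the dummy class ids
class_names = {1: 'standing deadwood', 2: 'fallen deadwood'}
preds['deadwood_type'] = preds['label'].map(class_names)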