mayrajeo committed on
Commit b0f2458
1 Parent(s): 8d1fd84

Update README.md

Files changed (1):
  1. README.md +64 -14

README.md CHANGED
@@ -8,27 +8,46 @@ tags:
 
 ## Model description
 
- This model is trained for detecting both standing and fallen deadwood from UAV RGB images. More thorough description is available on [https://mayrajeo.github.io/maskrcnn-deadwood](https://mayrajeo.github.io/maskrcnn-deadwood).
 
 ## Training data
 
- The model was trained on expert-annotated deadwood data, acquired on during leaf-on season 16.-17.7.2019 from Hiidenportti, Sotkamo, Eastern-Finland. The ground resolution for the data varied between 3.9 and 4.4cm. In addition, the model was tested with data collected from Evo, Hämeenlinna, Southern-Finland, acquired on 11.7.2018. The data from Evo was used only for testing the models.
 
- ## Metrics
 
- |Metric|Hiidenportti|Evo|
- |------|------------|---|
- |Patch AP50|0.704|0.519|
- |Patch AP|0.366|0.252|
- |Patch AP groundwood|0.326|0.183|
- |Patch AP uprightwood|0.406|0.321|
- |Scene AP50|0.683|0.511|
- |Scene AP|0.341|0.236|
- |Scene AP groundwood|0.246|0.160|
- |Scene AP uprightwood|0.436|0.311|
 
 ## How to use
 
 ```python
 from detectron2.engine import DefaultPredictor
 from detectron2.config import get_cfg
@@ -44,4 +63,35 @@ predictor = DefaultPredictor(cfg)
 
 img = cv2.imread('<path_to_image_patch>')
 outputs = predictor(img)
- ```
 
 ## Model description
 
+ These models were trained for detecting both standing and fallen deadwood from UAV RGB images. All model configurations and weights here are fine-tuned from models available in the [Detectron2 Model Zoo](https://github.com/facebookresearch/detectron2/blob/main/MODEL_ZOO.md).
+
+ The models were trained on 512x512 px RGB image patches with a spatial resolution between 3.9 and 4.3 cm, using hand-annotated deadwood data based on visual inspection. The training dataset is located in the vicinity of Hiidenportti National Park, Sotkamo, Finland, and the images were acquired during the leaf-on season, on 16. and 17.7.2019. The models are most likely best suited for imagery from the leaf-on season with a similar ground sampling distance.
 
 ## Training data
 
+ These models were trained on expert-annotated deadwood data, acquired during the leaf-on season 16.-17.7.2019 from Hiidenportti, Sotkamo, eastern Finland. The ground resolution of the data varied between 3.9 and 4.4 cm. In addition, the models were tested with data collected from Evo, Hämeenlinna, southern Finland, acquired on 11.7.2018. The data from Evo was used only for testing the models.
+
+ ## Results
+
+ Patch-level data are non-overlapping 512x512 pixel tiles extracted from larger virtual plots. The results presented here are run with test-time augmentation.
+
+ Scene-level data are the full virtual plots extracted from the full images. For Hiidenportti, the virtual plot sizes vary between 2560x2560 px and 8192x4864 px. These patches also contain non-annotated buffer areas in order to extract the complete annotated area. For Sudenpesänkangas, all 71 scenes are 100x100 meters (2063x2062 pixels), and during inference they are extracted from the full mosaic with enough buffer to cover the full area. The results presented here are run on 512x512 pixel tiles with 256 px overlap, with both the edge filtering and mask merging described in the workflow.
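
The edge filtering mentioned above can be illustrated with a short sketch. This is only a rough illustration of the idea, not the actual implementation used in the workflow: masks touching a patch border are discarded, since an overlapping neighboring tile will see the same object whole.

```python
import numpy as np

def touches_edge(mask):
    """True if a binary mask has any positive pixel on the patch border."""
    mask = np.asarray(mask, dtype=bool)
    return bool(mask[0, :].any() or mask[-1, :].any()
                or mask[:, 0].any() or mask[:, -1].any())

def filter_edge_masks(masks):
    """Keep only masks that lie fully inside the patch."""
    return [m for m in masks if not touches_edge(m)]
```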
+
+ ### Hiidenportti test set
+
+ The Hiidenportti test set contains 241 non-overlapping 512x512 pixel image patches, extracted from 5 scenes that cover 11 circular field plots.
+
+ |Model|Patch AP50|Patch AP|Patch AP-groundwood|Patch AP-uprightwood|Scene AP50|Scene AP|Scene AP-groundwood|Scene AP-uprightwood|
+ |----|-----------|--------|-------------------|--------------------|-----------|--------|-------------------|--------------------|
+ |mask_rcnn_R_50_FPN_3x|0.654|0.339|0.316|0.363|0.640|0.315|0.235|0.396|
+ |mask_rcnn_R_101_FPN_3x|**0.704**|**0.366**|0.326|**0.406**|**0.683**|**0.341**|0.246|**0.436**|
+ |mask_rcnn_X_101_32x8d_FPN_3x|0.679|0.355|**0.333**|0.377|0.661|0.333|**0.255**|0.412|
+ |cascade_mask_rcnn_R_50_FPN_3x|0.652|0.345|0.306|0.384|0.623|0.317|0.223|0.411|
+
+ ### Evo dataset
+
+ The Sudenpesänkangas dataset contains 798 non-overlapping 512x512 pixel image patches, extracted from 71 scenes.
+
+ |Model|Patch AP50|Patch AP|Patch AP-groundwood|Patch AP-uprightwood|Scene AP50|Scene AP|Scene AP-groundwood|Scene AP-uprightwood|
+ |----|-----------|--------|-------------------|--------------------|-----------|--------|-------------------|--------------------|
+ |mask_rcnn_R_50_FPN_3x|0.486|0.237|0.175|0.299|0.474|0.221|0.152|0.290|
+ |mask_rcnn_R_101_FPN_3x|**0.519**|**0.252**|**0.183**|0.321|**0.511**|**0.236**|**0.160**|**0.311**|
+ |mask_rcnn_X_101_32x8d_FPN_3x|0.502|0.245|0.182|0.307|0.494|0.232|0.159|0.305|
+ |cascade_mask_rcnn_R_50_FPN_3x|0.497|0.248|0.172|**0.323**|0.473|0.225|0.148|0.302|
 
 ## How to use
 
+ ### Running for small patches
+
 ```python
 from detectron2.engine import DefaultPredictor
 from detectron2.config import get_cfg
 
 img = cv2.imread('<path_to_image_patch>')
 outputs = predictor(img)
+ ```
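
The `outputs` returned by detectron2's `DefaultPredictor` is a dict whose `instances` entry carries `pred_classes`, `scores` and `pred_masks`. A hypothetical helper (not part of this repo) for collecting the detections into plain tuples might look like:

```python
import numpy as np

def summarize_instances(pred_classes, scores, pred_masks, score_thresh=0.5):
    """Collect (class_id, score, mask_area_px) for detections above a threshold.

    Works on plain lists/arrays, e.g. from a detectron2 predictor:
        inst = outputs['instances'].to('cpu')
        summarize_instances(inst.pred_classes.tolist(),
                            inst.scores.tolist(),
                            inst.pred_masks.numpy())
    """
    kept = []
    for cls, score, mask in zip(pred_classes, scores, pred_masks):
        if score >= score_thresh:
            kept.append((int(cls), float(score), int(np.asarray(mask).sum())))
    return kept
```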
+
+ ### Running for larger scenes
+
+ Running on larger scenes requires the following steps:
+
+ 1. Tiling the scenes into smaller image patches, optionally with overlap
+ 2. Running the model on these smaller patches
+ 3. Gathering the predictions into a single GIS data file
+ 4. Optionally post-processing the results
+
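Step 1 can be sketched as follows; `tile_origins` is a hypothetical helper for illustration, not part of the drone_detector package:

```python
def tile_origins(extent, tile_size=512, overlap=256):
    """Top-left origins of overlapping tiles along one axis.

    Consecutive tiles overlap by `overlap` pixels; the last tile is
    shifted back so it ends exactly at `extent`.
    """
    stride = tile_size - overlap
    origins = []
    pos = 0
    while pos + tile_size < extent:
        origins.append(pos)
        pos += stride
    origins.append(max(extent - tile_size, 0))
    return origins

# A 2560x2560 px scene tiled into 512 px patches with 256 px overlap
# gives 9 origins per axis, i.e. 81 patches:
grid = [(x, y)
        for y in tile_origins(2560)
        for x in tile_origins(2560)]
```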
+ The [drone_detector](https://jaeeolma.github.io/drone_detector) package has helpers for this:
+
+ ```python
+ from drone_detector.engines.detectron2.predict import predict_instance_masks
+
+ predict_instance_masks(
+     path_to_model_config='<path_to_model_config>', # model config file
+     path_to_image='<path_to_image>', # which image to process
+     outfile='<name_for_predictions>.geojson', # where to save the results
+     processing_dir='temp', # directory for temporary files, deleted afterwards. Default: temp
+     tile_size=512, # image patch size in pixels, square patches. Default: 400
+     tile_overlap=256, # overlap between tiles in pixels. Default: 100
+     smooth_preds=False, # not yet implemented; at some point runs dilation+erosion to smooth polygons. Default: False
+     coco_set='<path_to_coco>', # the COCO set the model was trained on, used to infer class names. None defaults to dummy categories. Default: None
+     postproc_results=True # whether to discard masks in the edge regions of patches. Default: False
+ )
+ ```
+
+ Also, after installing the package, `predict_instance_masks_detectron2` can be used as a CLI command with identical syntax.
+
+ When the provided models default to dummy classes, **1** is *standing deadwood* and **2** is *fallen deadwood*.
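
As a hypothetical post-processing example, the dummy class ids in the resulting GeoJSON could be replaced with readable names; the `label` property name is an assumption about the output format, not confirmed by the package docs:

```python
import json

# Assumed class-id mapping from the dummy categories described above.
CLASS_NAMES = {1: 'standing deadwood', 2: 'fallen deadwood'}

def relabel(geojson, key='label'):
    """Replace numeric dummy class ids with readable names in place."""
    for feat in geojson['features']:
        cls = feat['properties'].get(key)
        if cls in CLASS_NAMES:
            feat['properties'][key] = CLASS_NAMES[cls]
    return geojson

# Usage (path and property name are hypothetical):
# with open('<name_for_predictions>.geojson') as f:
#     preds = relabel(json.load(f))
```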