sgiordano committed
Commit 9035b02
1 Parent(s): 7f5b41e

Update README.md

Files changed (1):
  1. README.md +51 -65
README.md CHANGED
@@ -7,7 +7,7 @@ tags:
  - landcover
  - IGN
  model-index:
- - name: FLAIR-INC_rgb_12cl_resnet34-unet
  results:
  - task:
  type: semantic-segmentation
@@ -17,62 +17,56 @@ model-index:
  metrics:
  - name: mIoU
  type: mIoU
- value: 58.63
  - name: Overall Accuracy
  type: OA
- value: 76.3711
  - name: Fscore
  type: Fscore
- value: 72.4353
  - name: Precision
  type: Precision
- value: 74.3015
  - name: Recall
  type: Recall
- value: 72.4891

  - name: IoU Buildings
  type: IoU
- value: 82.6313
  - name: IoU Pervious surface
  type: IoU
- value: 53.2351
  - name: IoU Impervious surface
  type: IoU
- value: 74.1742
  - name: IoU Bare soil
  type: IoU
- value: 60.3958
  - name: IoU Water
  type: IoU
- value: 87.5887
  - name: IoU Coniferous
  type: IoU
- value: 46.3504
  - name: IoU Deciduous
  type: IoU
- value: 67.4473
  - name: IoU Brushwood
  type: IoU
- value: 30.2346
  - name: IoU Vineyard
  type: IoU
- value: 82.9251
  - name: IoU Herbaceous vegetation
  type: IoU
- value: 55.0283
  - name: IoU Agricultural land
  type: IoU
- value: 52.0145
  - name: IoU Plowed land
  type: IoU
- value: 40.8387
- - name: IoU Swimming pool
- type: IoU
- value: 48.4433
- - name: IoU Greenhouse
- type: IoU
- value: 39.4447

  pipeline_tag: image-segmentation
  ---
@@ -91,13 +85,13 @@ pipeline_tag: image-segmentation
  <br>

  <div style="border:1px solid black; padding:25px; background-color:#FDFFF4 ; padding-top:10px; padding-bottom:1px;">
- <h1>FLAIR-INC_rgbie_15cl_resnet34-unet</h1>
- <p>The general characteristics of this specific model <strong>FLAIR-INC_rgbie_15cl_resnet34-unet</strong> are :</p>
  <ul style="list-style-type:disc;">
  <li>Trained with the FLAIR-INC dataset</li>
  <li>RGBIE images (true colours + infrared + elevation)</li>
  <li>U-Net with a Resnet-34 encoder</li>
- <li>15 class nomenclature : [building, pervious surface, impervious surface, bare soil, water, coniferous, deciduous, brushwood, vineyard, herbaceous, agricultural land, plowed land, swimming pool, snow, greenhouse]</li>
  </ul>
  </div>

@@ -119,7 +113,7 @@ The product called ([BD ORTHO®](https://geoservices.ign.fr/bdortho)) has its ow
  Consequently, the model’s predictions will improve if the user's images are similar to the original ones.

  _**Radiometry of input images**_ :
- The BD ORTHO input images are distributed in 8-bit encoding format per channel. When traning the model, input normalization was performed (see section **Trainingg Details**).
  It is recommended that the user apply the same type of input normalization when running inference with the model.

  _**Multi-domain model**_ :
@@ -133,23 +127,23 @@ When decoded to [0,255] ints, a difference of 1 should correspond to 0.2 meters

  _**Land Cover classes of prediction**_ :
  The original class nomenclature of the FLAIR Dataset encompasses 19 classes (see the [FLAIR dataset](https://huggingface.co/datasets/IGNF/FLAIR) page for details).
- However 3 classes corresponding to uncertain labelisation (Mixed (16), Ligneous (17) and Other (19)) and 1 class with very poor labelling (Clear cut (15)) were desactivated during training.
- As a result, the logits produced by the model are of size 19x1, but classes n° 15, 16, 17 and 19 should appear at 0 in the logits and should not be present in the final argmax product.



  ## Bias, Risks, Limitations and Recommendations

  _**Using the model on input images with other spatial resolution**_ :
- The FLAIR-INC_rgbie_15cl_resnet34-unet model was trained with fixed scale conditions. All patches used for training are derived from aerial images with 0.2 meters spatial resolution. Only flip and rotate augmentations were performed during the training process.
  No data augmentation method concerning scale change was used during training. The user should be aware that generalization issues can occur when applying this model to images with different spatial resolutions.

  _**Using the model for other remote sensing sensors**_ :
- The FLAIR-INC_rgbie_15cl_resnet34-unet model was trained with aerial images of the ([BD ORTHO® product](https://geoservices.ign.fr/bdortho)) that encopass very specific radiometric image processing.
  Using the model on other types of aerial or satellite images may require transfer learning or domain adaptation techniques.

  _**Using the model on other spatial areas**_ :
- The FLAIR-INC_rgbie_15cl_resnet34-unet model was trained on patches reprensenting the French Metropolitan territory.
  The user should be aware that applying the model to other types of landscape may imply a drop in model metrics.

  ---
@@ -166,7 +160,7 @@ Fine-tuning and prediction tasks are detailed in the README file.

  ### Training Data

- 218 400 patches of 512 x 512 pixels were used to train the **FLAIR-INC_RVBIE_resnet34_unet_15cl_norm** model.
  The train/validation split was performed patchwise to obtain an 80% / 20% distribution between train and validation.
  Annotation was performed at the _zone_ level (~100 patches per _zone_). Spatial independence between patches is guaranteed, as patches from the same _zone_ were assigned to the same set (TRAIN or VALIDATION).
  The following number of patches were used for train and validation :
@@ -218,17 +212,17 @@ Statistics of the TRAIN+VALIDATION set :

  #### Speeds, Sizes, Times

- The FLAIR-INC_rgbie_15cl_resnet34-unet model was trained on a HPC/AI resources provided by GENCI-IDRIS (Grant 2022-A0131013803).
  16 V100 GPUs were used (4 nodes, 4 GPUs per node). With this configuration, the approximate training time is 6 minutes per epoch.

- FLAIR-INC_rgbie_15cl_resnet34-unet was obtained for num_epoch=76 with corresponding val_loss=0.56.


  <div style="position: relative; text-align: center;">
  <p style="margin: 0;">TRAIN loss</p>
- <img src="figs/train_loss_FLAIR-INC_RGBIE_resnet34_unet_15cl_norm.png" alt="TRAIN loss" style="width: 60%; display: block; margin: 0 auto;"/>
  <p style="margin: 0;">VALIDATION loss</p>
- <img src="figs/val_loss_FLAIR-INC_RGBIE_resnet34_unet_15cl_norm.png" alt="VALIDATION loss" style="width: 60%; display: block; margin: 0 auto;"/>
  </div>

 
@@ -243,37 +237,29 @@ The evaluation was performed on a TEST set of 31 750 patches that are independan
  The TEST set corresponds to the union of the TEST sets of the scientific challenges FLAIR#1 and FLAIR#2. See the [FLAIR challenge page](https://ignf.github.io/FLAIR/) for more details.

  The choice of a separate TEST set instead of cross-validation was made to remain consistent with the FLAIR challenges.
- However the metrics for the Challenge were calculated on 12 classes and the TEST set acordingly.
- As a result the _Snow_ class is absent from the TEST set.

  #### Metrics

- With the evaluation protocol, the **FLAIR-INC_RVBIE_resnet34_unet_15cl_norm** have been evaluated to **OA= 76.37%** and **mIoU=58.63%**.
- The _snow_ class is discarded from the average metrics.

  The following table gives the class-wise metrics :
 
- | Modalities | IoU (%) | Fscore (%) | Precision (%) | Recall (%) |
- | ----------------------- | ----------|---------|---------|---------|
- | building | 82.63 | 90.49 | 90.26 | 90.72 |
- | pervious surface | 53.24 | 69.48 | 68.97 | 70.00 |
- | impervious surface | 74.17 | 85.17 | 86.28 | 84.09 |
- | bare soil | 60.40 | 75.31 | 80.49 | 70.75 |
- | water | 87.59 | 93.38 | 93.16 | 93.61 |
- | coniferous | 46.35 | 63.34 | 63.52 | 63.16 |
- | deciduous | 67.45 | 80.56 | 77.44 | 83.94 |
- | brushwood | 30.23 | 46.43 | 63.55 | 36.58 |
- | vineyard | 82.93 | 90.67 | 91.35 | 89.99 |
- | herbaceous vegetation | 55.03 | 70.99 | 70.59 | 71.40 |
- | agricultural land | 52.01 | 68.43 | 59.18 | 81.12 |
- | plowed land | 40.84 | 57.99 | 68.28 | 50.40 |
- | swimming_pool | 48.44 | 65.27 | 81.62 | 54.37 |
- | _snow_ | _00.00_ | _00.00_ | _00.00_ | _00.00_ |
- | greenhouse | 39.45 | 56.57 | 45.52 | 74.72 |
- | **average** | **58.63** | **72.44** | **74.3** | **72.49** |
-
-
-

@@ -286,9 +272,9 @@ The following illustration gives the resulting confusion matrix :

  <div style="position: relative; text-align: center;">
  <p style="margin: 0;">Normalized Confusion Matrix (precision)</p>
- <img src="figs/FLAIR-INC_RVBIE_resnet34_unet_15cl_norm_cm-precision.png" alt="drawing" style="width: 70%; display: block; margin: 0 auto;"/>
  <p style="margin: 0;">Normalized Confusion Matrix (recall)</p>
- <img src="figs/FLAIR-INC_RVBIE_resnet34_unet_15cl_norm_cm-recall.png" alt="drawing" style="width: 70%; display: block; margin: 0 auto;"/>
  </div>

 
  - landcover
  - IGN
  model-index:
+ - name: FLAIR-INC_rgbie_12cl_resnet34-unet
  results:
  - task:
  type: semantic-segmentation
 
  metrics:
  - name: mIoU
  type: mIoU
+ value: 62.716
  - name: Overall Accuracy
  type: OA
+ value: 76.509
  - name: Fscore
  type: Fscore
+ value: 75.907
  - name: Precision
  type: Precision
+ value: 76.525
  - name: Recall
  type: Recall
+ value: 75.714

  - name: IoU Buildings
  type: IoU
+ value: 82.564
  - name: IoU Pervious surface
  type: IoU
+ value: 54.149
  - name: IoU Impervious surface
  type: IoU
+ value: 73.807
  - name: IoU Bare soil
  type: IoU
+ value: 59.013
  - name: IoU Water
  type: IoU
+ value: 87.216
  - name: IoU Coniferous
  type: IoU
+ value: 61.591
  - name: IoU Deciduous
  type: IoU
+ value: 72.225
  - name: IoU Brushwood
  type: IoU
+ value: 31.187
  - name: IoU Vineyard
  type: IoU
+ value: 76.105
  - name: IoU Herbaceous vegetation
  type: IoU
+ value: 51.340
  - name: IoU Agricultural land
  type: IoU
+ value: 57.558
  - name: IoU Plowed land
  type: IoU
+ value: 45.840

  pipeline_tag: image-segmentation
  ---
 
  <br>

  <div style="border:1px solid black; padding:25px; background-color:#FDFFF4 ; padding-top:10px; padding-bottom:1px;">
+ <h1>FLAIR-INC_rgbie_12cl_resnet34-unet</h1>
+ <p>The general characteristics of this specific model <strong>FLAIR-INC_rgbie_12cl_resnet34-unet</strong> are :</p>
  <ul style="list-style-type:disc;">
  <li>Trained with the FLAIR-INC dataset</li>
  <li>RGBIE images (true colours + infrared + elevation)</li>
  <li>U-Net with a Resnet-34 encoder</li>
+ <li>12 class nomenclature : [building, pervious surface, impervious surface, bare soil, water, coniferous, deciduous, brushwood, vineyard, herbaceous, agricultural land, plowed land]</li>
  </ul>
  </div>

 
  Consequently, the model’s predictions will improve if the user's images are similar to the original ones.

  _**Radiometry of input images**_ :
+ The BD ORTHO input images are distributed in 8-bit encoding format per channel. When training the model, input normalization was performed (see section **Training Details**).
  It is recommended that the user apply the same type of input normalization when running inference with the model.

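The per-channel normalization recommended above can be sketched as follows. This is illustrative only: the actual statistics are listed in the card's Training Details section, and the mean/std values below are hypothetical placeholders for the five RGBIE channels.

```python
import numpy as np

# HYPOTHETICAL per-channel statistics (R, G, B, IR, Elevation); the real
# values are given in the model card's Training Details section.
CHANNEL_MEAN = np.array([0.44, 0.46, 0.43, 0.39, 0.28], dtype=np.float32)
CHANNEL_STD = np.array([0.21, 0.18, 0.17, 0.16, 0.11], dtype=np.float32)

def normalize_patch(patch_uint8):
    """Scale an 8-bit (H, W, 5) RGBIE patch to [0, 1], then standardize per channel."""
    x = patch_uint8.astype(np.float32) / 255.0
    return (x - CHANNEL_MEAN) / CHANNEL_STD
```

Applying the same transform at inference time as at training time keeps the input distribution the model saw unchanged.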
  _**Multi-domain model**_ :

  _**Land Cover classes of prediction**_ :
  The original class nomenclature of the FLAIR Dataset encompasses 19 classes (see the [FLAIR dataset](https://huggingface.co/datasets/IGNF/FLAIR) page for details).
+ This model was trained to be consistent with the FLAIR#1 scientific challenge, in which contestants were evaluated on the first 12 classes of the nomenclature. Classes with a label greater than 12 were deactivated during training.
+ As a result, the logits produced by the model are of size 19x1, but classes n° 13, 14, 15, 16, 17, 18 and 19 should be 0 in the logits and should not be present in the final argmax product.

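Restricting the argmax to the 12 trained classes, as described above, can be sketched like this (a minimal illustration; the array layout and the helper name are assumptions, not part of the card):

```python
import numpy as np

N_ACTIVE = 12  # first 12 classes of the FLAIR nomenclature were trained

def predict_labels(logits):
    """logits: (19, H, W) array -> (H, W) map of 1-based labels in 1..12."""
    active = logits[:N_ACTIVE]            # drop the deactivated classes 13..19
    return np.argmax(active, axis=0) + 1  # back to 1-based class labels
```

Even if a deactivated class produced a spurious logit, slicing before the argmax guarantees it cannot appear in the prediction map.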
133
 
134
 
135
  ## Bias, Risks, Limitations and Recommendations
136
 
137
  _**Using the model on input images with other spatial resolution**_ :
138
+ The FLAIR-INC_rgbie_12cl_resnet34-unet model was trained with fixed scale conditions. All patches used for training are derived from aerial images with 0.2 meters spatial resolution. Only flip and rotate augmentations were performed during the training process.
139
  No data augmentation method concerning scale change was used during training. The user should pay attention that generalization issues can occur while applying this model to images that have different spatial resolutions.
140
 
141
  _**Using the model for other remote sensing sensors**_ :
142
+ The FLAIR-INC_rgbie_12cl_resnet34-unet was trained with aerial images of the ([BD ORTHO® product](https://geoservices.ign.fr/bdortho)) that encopass very specific radiometric image processing.
143
  Using the model on other type of aerial images or satellite images may imply the use of transfer learning or domain adaptation techniques.
144
 
145
  _**Using the model on other spatial areas**_ :
146
+ The FLAIR-INC_rgbie_12cl_resnet34-unet model was trained on patches reprensenting the French Metropolitan territory.
147
  The user should be aware that applying the model to other type of landscapes may imply a drop in model metrics.
148
 
149
  ---
 

  ### Training Data

+ 218 400 patches of 512 x 512 pixels were used to train the **FLAIR-INC_rgbie_12cl_resnet34-unet** model.
  The train/validation split was performed patchwise to obtain an 80% / 20% distribution between train and validation.
  Annotation was performed at the _zone_ level (~100 patches per _zone_). Spatial independence between patches is guaranteed, as patches from the same _zone_ were assigned to the same set (TRAIN or VALIDATION).
  The following number of patches were used for train and validation :
 
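The zone-wise split described above can be sketched as follows (hypothetical data layout; `split_by_zone` is an illustrative helper, not part of the FLAIR codebase):

```python
import random

def split_by_zone(patch_to_zone, val_ratio=0.2, seed=0):
    """patch_to_zone: dict mapping patch id -> zone id.

    All patches of a zone land in the same set, preserving spatial
    independence between TRAIN and VALIDATION.
    """
    zones = sorted(set(patch_to_zone.values()))
    random.Random(seed).shuffle(zones)
    val_zones = set(zones[:int(len(zones) * val_ratio)])
    train = [p for p, z in patch_to_zone.items() if z not in val_zones]
    val = [p for p, z in patch_to_zone.items() if z in val_zones]
    return train, val
```

Splitting on zones rather than individual patches is what prevents near-duplicate neighbouring patches from leaking across the two sets.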

  #### Speeds, Sizes, Times

+ The FLAIR-INC_rgbie_12cl_resnet34-unet model was trained on HPC/AI resources provided by GENCI-IDRIS (Grant 2022-A0131013803).
  16 V100 GPUs were used (4 nodes, 4 GPUs per node). With this configuration, the approximate training time is 6 minutes per epoch.

+ FLAIR-INC_rgbie_12cl_resnet34-unet was obtained for num_epoch=65 with corresponding val_loss=0.55.


  <div style="position: relative; text-align: center;">
  <p style="margin: 0;">TRAIN loss</p>
+ <img src="FLAIR-INC_rgbie_12cl_resnet34-unet_train-loss.png" alt="TRAIN loss" style="width: 60%; display: block; margin: 0 auto;"/>
  <p style="margin: 0;">VALIDATION loss</p>
+ <img src="FLAIR-INC_rgbie_12cl_resnet34-unet_val-loss.png" alt="VALIDATION loss" style="width: 60%; display: block; margin: 0 auto;"/>
  </div>

 
  The TEST set corresponds to the union of the TEST sets of the scientific challenges FLAIR#1 and FLAIR#2. See the [FLAIR challenge page](https://ignf.github.io/FLAIR/) for more details.

  The choice of a separate TEST set instead of cross-validation was made to remain consistent with the FLAIR challenges.
+

  #### Metrics

+ With the evaluation protocol, the **FLAIR-INC_rgbie_12cl_resnet34-unet** model has been evaluated at **OA=76.509%** and **mIoU=62.716%**.

  The following table gives the class-wise metrics :

+ | Classes | IoU (%) | Fscore (%) | Precision (%) | Recall (%) |
+ | ------------------- | ----------|---------|---------|---------|
+ | building | 82.564 | 90.449 | 90.412 | 90.486 |
+ | pervious_surface | 54.149 | 70.255 | 70.723 | 69.794 |
+ | impervious_surface | 73.807 | 84.930 | 85.848 | 84.032 |
+ | bare_soil | 59.013 | 74.224 | 78.795 | 70.154 |
+ | water | 87.216 | 93.172 | 91.692 | 94.699 |
+ | coniferous | 61.591 | 76.231 | 79.304 | 73.387 |
+ | deciduous | 72.225 | 83.873 | 81.728 | 86.133 |
+ | brushwood | 31.187 | 47.546 | 57.056 | 40.754 |
+ | vineyard | 76.105 | 86.432 | 84.264 | 88.714 |
+ | herbaceous | 51.340 | 67.847 | 70.134 | 65.705 |
+ | agricultural_land | 57.558 | 73.063 | 67.249 | 79.978 |
+ | plowed_land | 45.840 | 62.863 | 61.094 | 64.737 |
+ | **average** | **62.716** | **75.907** | **76.525** | **75.714** |
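As a quick consistency check, the reported mIoU is the plain mean of the twelve class-wise IoU values in the table:

```python
# Class-wise IoU values copied from the table above, in row order.
class_iou = [82.564, 54.149, 73.807, 59.013, 87.216, 61.591,
             72.225, 31.187, 76.105, 51.340, 57.558, 45.840]
miou = sum(class_iou) / len(class_iou)
print(round(miou, 3))  # 62.716, matching the reported mIoU
```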
 
 
 
 
 
 
 
 
  <div style="position: relative; text-align: center;">
  <p style="margin: 0;">Normalized Confusion Matrix (precision)</p>
+ <img src="FLAIR-INC_rgbie_12cl_resnet34-unet_confmat_norm-precision.png" alt="drawing" style="width: 70%; display: block; margin: 0 auto;"/>
  <p style="margin: 0;">Normalized Confusion Matrix (recall)</p>
+ <img src="FLAIR-INC_rgbie_12cl_resnet34-unet_confmat_norm-recall.png" alt="drawing" style="width: 70%; display: block; margin: 0 auto;"/>
  </div>
