---
license: etalab-2.0
tags:
- pytorch
- segmentation
- point clouds
- aerial lidar scanning
- IGN
model-index:
- name: FRACTAL-LidarHD_7cl_randlanet
  results:
  - task:
      type: semantic-segmentation
    dataset:
      name: IGNF/FRACTAL
      type: point-cloud-segmentation-dataset
    metrics:
    - name: mIoU
      type: mIoU
      value: 77.2
    - name: IoU Other
      type: IoU
      value: 48.1
    - name: IoU Ground
      type: IoU
      value: 91.7
    - name: IoU Vegetation
      type: IoU
      value: 93.7
    - name: IoU Building
      type: IoU
      value: 90.0
    - name: IoU Water
      type: IoU
      value: 90.8
    - name: IoU Bridge
      type: IoU
      value: 63.5
    - name: IoU Permanent Structure
      type: IoU
      value: 59.9
---

<div style="border:1px solid black; padding:25px; background-color:#FDFFF4 ; padding-top:10px; padding-bottom:1px;">
  <h1>FRACTAL-LidarHD_7cl_randlanet</h1> 
  <p>The general characteristics of this specific model <strong>FRACTAL-LidarHD_7cl_randlanet</strong> are:</p>
  <ul style="list-style-type:disc;">
    <li>Trained with the FRACTAL dataset</li>
    <li>Aerial lidar point clouds, colorized with RGB + near-infrared, with high point density (~40 pts/m²)</li>
    <li>RandLa-Net architecture as implemented in the Myria3D library</li>
    <li>7-class nomenclature: [other, ground, vegetation, building, water, bridge, permanent structure]</li>
  </ul>
</div>

## Model Information
- **Code repository:** https://github.com/IGNF/myria3d (V3.8)
- **Paper:** TBD
- **Developed by:** IGN
- **Compute infrastructure:** 
    - software: python, pytorch-lightning
    - hardware: in-house HPC/AI resources
- **License:** Etalab 2.0

---

## Uses
The model was specifically trained and designed for the **semantic segmentation of aerial lidar point clouds from the Lidar HD program (2020-2025)**. 
Lidar HD is an ambitious initiative that aims to obtain a 3D description of the French territory by 2026. 
While the model could be applied to other types of point clouds, [Lidar HD](https://geoservices.ign.fr/lidarhd) data have specific geometric specifications. Furthermore, the training data was colorized 
with very-high-definition aerial images from the [BD ORTHO®](https://geoservices.ign.fr/bdortho), which have their own spatial and radiometric specifications. Consequently, predictions are expected to be best for aerial lidar point clouds with densities and colorimetry similar to the original data.

**_Multi-domain model_**: The FRACTAL dataset used for training covers 5 spatial domains from 5 southern regions of metropolitan France. 
The 250 km² of data in FRACTAL were sampled from an original 17,440 km² area and cover a wide diversity of landscapes and scenes. 
While large and diverse, this data only covers a fraction of the French territory, and the model should be used with adequate verification when applied to new domains.
That being said, while domain shifts are frequent for aerial imagery due to varying acquisition conditions and downstream data processing, 
aerial lidar point clouds are expected to have more consistent characteristics 
(density, range of acquisition angles, etc.) across spatial domains.

## Bias, Risks, Limitations and Recommendations

---

## How to Get Started with the Model

The model was trained with an open-source deep learning code repository developed in-house: [github.com/IGNF/myria3d](https://github.com/IGNF/myria3d). 
Inference is only supported in this library, and inference instructions are detailed in the code repository's documentation.
Patched inference on large point clouds (e.g. 1 x 1 km Lidar HD tiles) is supported, with or without overlapping sliding windows (non-overlapping by default). 
The original point cloud is augmented with several dimensions: a PredictedClassification dimension, an entropy dimension, and (optionally) class probability dimensions (e.g. building, ground, ...). 
For convenience and scalable model deployment, Myria3D comes with a Dockerfile.
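
For example, the augmented LAS file produced by inference can be inspected with `laspy`. Below is a minimal sketch with a hypothetical file name; the exact spelling of the extra dimensions should be checked against the Myria3D documentation:

```python
import laspy

# Hypothetical path to a tile that went through Myria3D inference.
las = laspy.read("tile_0897_6269_predicted.las")

# List all dimensions; the augmented ones (PredictedClassification,
# entropy, and optional class probabilities) should appear here.
print(list(las.point_format.dimension_names))

# Access the predicted labels and the per-point prediction entropy.
predicted = las["PredictedClassification"]
entropy = las["entropy"]
print(predicted[:10], entropy[:10])
```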

---

## Training Details

The data comes from the Lidar HD program, more specifically from acquisition areas that underwent automated classification followed by manual correction (so-called "optimized Lidar HD").
It meets the quality requirements of the Lidar HD program, which accepts a controlled level of classification errors for each semantic class.

### Training Data

80,000 point cloud patches of 50 x 50 meters (i.e. 200 km² in total) were used to train the **FRACTAL-LidarHD_7cl_randlanet** model.
10,000 additional patches were used for model validation.

### Training Procedure

#### Preprocessing

Point clouds were preprocessed for training with point subsampling, filtering of artefact points, on-the-fly creation of colorimetric features, and normalization of features and coordinates. 
For inference, preprocessing should match the training preprocessing as closely as possible. Refer to the inference configuration file and to the Myria3D code repository (V3.8); a rough sketch of these steps follows.
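
The sketch below reproduces these steps in NumPy, with the constants taken from the Training Hyperparameters list below. The function signature is hypothetical, and details such as the NDVI epsilon and whether standardization uses per-patch or dataset-level statistics are assumptions; the authoritative implementation is in Myria3D.

```python
import numpy as np

def preprocess_patch(xyz, echo_number, num_echoes, reflectance, rgbnir, rng=None):
    """Sketch of the input normalization listed under Training Hyperparameters.
    All constants are taken from this model card; the rest is assumption."""
    rng = rng or np.random.default_rng()

    # Grid sampling at 0.25 m: keep one point per occupied voxel.
    voxels = np.floor(xyz / 0.25).astype(np.int64)
    _, keep = np.unique(voxels, axis=0, return_index=True)

    # Random subsampling to at most 40,000 points.
    if len(keep) > 40_000:
        keep = rng.choice(keep, 40_000, replace=False)
    xyz, echo_number, num_echoes = xyz[keep], echo_number[keep], num_echoes[keep]
    reflectance, rgbnir = reflectance[keep], rgbnir[keep].astype(np.float32)

    # Horizontal normalization (mean xy subtraction), vertical normalization
    # (min z subtraction), then division of all coordinates by 25 m.
    xyz = xyz.astype(np.float32).copy()
    xyz[:, :2] -= xyz[:, :2].mean(axis=0)
    xyz[:, 2] -= xyz[:, 2].min()
    xyz /= 25.0

    # Basic occlusion model: nullify color channels for non-first echoes.
    rgbnir[echo_number > 1] = 0.0

    # Scale features to the 0-1 range.
    echo_number = echo_number / 7.0
    num_echoes = num_echoes / 7.0
    rgbnir = rgbnir / 65280.0  # 255 * 256

    # Derived colorimetric features: average color and NDVI.
    r, g, b, nir = rgbnir.T
    avg_color = (r + g + b) / 3.0
    ndvi = (nir - r) / (nir + r + 1e-8)

    # Reflectance and average color: log-normalization, standardization,
    # clipping of amplitude above 3 standard deviations.
    def log_standardize(x):
        x = np.log(x.astype(np.float32) + 1e-6)
        x = (x - x.mean()) / (x.std() + 1e-8)
        return np.clip(x, -3.0, 3.0)

    reflectance = log_standardize(reflectance)
    avg_color = log_standardize(avg_color)

    features = np.column_stack(
        [echo_number, num_echoes, reflectance, r, g, b, nir, avg_color, ndvi]
    )
    return xyz, features
```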

#### Training Hyperparameters
* Model architecture: RandLa-Net (implemented with the Pytorch-Geometric framework in [Myria3D](https://github.com/IGNF/myria3d/blob/main/myria3d/models/modules/pyg_randla_net.py))
* Augmentation:
  * VerticalFlip(p=0.5)
  * HorizontalFlip(p=0.5)
* Features:
  * Lidar: x, y, z, echo number (1-based numbering), number of echoes, reflectance (a.k.a. intensity)
  * Colors:
    * Original: RGB + Near-Infrared (colorization from aerial images by vertical pixel-point alignment)
    * Derived: average color = (R+G+B)/3 and NDVI.
* Input preprocessing:
  * grid sampling: 0.25 m
  * random sampling: 40,000 points (when the patch contains more)
  * horizontal normalization: mean xy subtraction
  * vertical normalization: min z subtraction
  * coordinates normalization: division by 25 meters
  * basic occlusion model: nullify color channels if echo_number > 1
  * features scaling (0-1 range):
    * echo number and number of echoes: division by 7
    * color (r, g, b, near-infrared, average color): division by 65280 (i.e. 255*256)
  * features normalization:
    * reflectance: log-normalization, standardization, clipping of amplitude above 3 standard deviations.
    * average color: same as reflectance. 
* Batch size: 10
* Number of epochs: 100 (min) to 150 (max)
* Early stopping: patience of 6 epochs, with val_loss as the monitored criterion
* Optimizer: Adam (see the configuration sketch after this list)
* Scheduler: mode = "min", factor = 0.5, patience = 20, cooldown = 5
* Learning rate: 0.004
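
The optimizer, scheduler, and early-stopping settings above map onto standard PyTorch and PyTorch Lightning components. A minimal sketch, assuming the scheduler is torch's ReduceLROnPlateau (its parameters match) and that early stopping uses Lightning's callback; the actual wiring lives in Myria3D's configuration:

```python
import torch
from pytorch_lightning.callbacks import EarlyStopping

# Stand-in module; the real model is Myria3D's RandLa-Net implementation.
model = torch.nn.Linear(9, 7)

# Adam with the learning rate listed above.
optimizer = torch.optim.Adam(model.parameters(), lr=0.004)

# Halve the learning rate when the monitored loss plateaus for 20 epochs,
# then wait 5 epochs before resuming plateau detection.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=20, cooldown=5
)

# Stop training if val_loss does not improve for 6 consecutive epochs.
early_stopping = EarlyStopping(monitor="val_loss", patience=6, mode="min")
```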

#### Speeds, Sizes, Times

The **FRACTAL-LidarHD_7cl_randlanet** model was trained on an in-house HPC cluster. 
6 V100 GPUs were used (2 nodes, 3 GPUs per node). With this configuration, the approximate training time is 30 minutes per epoch.

The best checkpoint was obtained at num_epoch=21, with a corresponding val_loss of 0.112.

<div style="position: relative; text-align: center;">
    <p style="margin: 0;">Train and validation losses</p>
    <img src="FRACTAL-LidarHD_7cl_randlanet-train_val_losses.excalidraw.png" alt="train and val losses" style="width: 60%; display: block; margin: 0 auto;"/>
</div>

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Metrics


### Results

IoU per class on the FRACTAL dataset:

| Class | IoU |
|---|---|
| Other | 48.1 |
| Ground | 91.7 |
| Vegetation | 93.7 |
| Building | 90.0 |
| Water | 90.8 |
| Bridge | 63.5 |
| Permanent Structure | 59.9 |

Mean IoU (mIoU): **77.2**


---

## Citation


**BibTeX:**

```

```


**APA:**
```

```

## Contact: TBD