---
license: etalab-2.0
tags:
- pytorch
- segmentation
- point clouds
- aerial lidar scanning
- IGN
model-index:
- name: FRACTAL-LidarHD_7cl_randlanet
results:
- task:
type: semantic-segmentation
dataset:
name: IGNF/FRACTAL
type: point-cloud-segmentation-dataset
metrics:
- name: mIoU
type: mIoU
value: 77.2
- name: IoU Other
type: IoU
value: 48.1
- name: IoU Ground
type: IoU
value: 91.7
- name: IoU Vegetation
type: IoU
value: 93.7
- name: IoU Building
type: IoU
value: 90.0
- name: IoU Water
type: IoU
value: 90.8
- name: IoU Bridge
type: IoU
value: 63.5
- name: IoU Permanent Structure
type: IoU
value: 59.9
---
# FRACTAL-LidarHD_7cl_randlanet

The general characteristics of the FRACTAL-LidarHD_7cl_randlanet model are:
- Trained with the FRACTAL dataset
- Aerial lidar point clouds colorized with RGB + near-infrared, with high point density (~40 pts/m²)
- RandLa-Net architecture as implemented in the Myria3D library
- 7-class nomenclature: [other, ground, vegetation, building, water, bridge, permanent structure]
## Model Information
- **Code repository:** https://github.com/IGNF/myria3d (V3.8)
- **Paper:** TBD
- **Developed by:** IGN
- **Compute infrastructure:**
- software: python, pytorch-lightning
- hardware: in-house HPC/AI resources
- **License:** Etalab 2.0
---
## Uses
The model was specifically trained and designed for the segmentation of aerial lidar point clouds from the Lidar HD program (2020-2025),
an ambitious initiative that aims to obtain a 3D description of the French territory by 2026.
While the model could be applied to other types of point clouds, Lidar HD data have specific geometric specifications. Furthermore, the training data was colorized
with very-high-definition aerial images from the [BD ORTHO®](https://geoservices.ign.fr/bdortho), which have their own spatial and radiometric specifications.
Consequently, the model's predictions are expected to be best on aerial lidar point clouds with densities and colorimetry similar to those of the original data.
**_Data preprocessing_**: Point clouds were preprocessed for training with point subsampling, filtering of artefact points, on-the-fly creation of colorimetric features, and normalization of features and coordinates.
For inference, the same preprocessing should be used (refer to the inference configuration and to the code repository).
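The sketch below illustrates, in plain numpy, the kind of subsampling and coordinate normalization this refers to. It is illustrative only: the function names, voxel size and exact steps are assumptions, not the Myria3D implementation (the actual preprocessing is defined in the library's configuration files).

```python
# Illustrative sketch only: not the Myria3D preprocessing code.
# Function names and the voxel size are hypothetical.
import numpy as np

def grid_subsample(xyz: np.ndarray, voxel_size: float = 0.25) -> np.ndarray:
    """Keep one point per occupied voxel of side `voxel_size` (a common subsampling scheme)."""
    voxel_idx = np.floor(xyz / voxel_size).astype(np.int64)
    # Index of the first point encountered in each occupied voxel
    _, keep = np.unique(voxel_idx, axis=0, return_index=True)
    return np.sort(keep)

def center_coordinates(xyz: np.ndarray) -> np.ndarray:
    """Center coordinates on the patch centroid so the network sees small, local values."""
    return xyz - xyz.mean(axis=0, keepdims=True)

# Example on a dummy 50 m x 50 m patch with 100k points
xyz = np.random.rand(100_000, 3) * 50.0
keep = grid_subsample(xyz, voxel_size=0.25)
xyz_ready = center_coordinates(xyz[keep])
```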
**_Inference library: Myria3D_**: The model was trained with an open source deep learning library developed in-house, and inference is only supported in this library.
Myria3D comes with a Dockerfile as well as detailed documentation for inference.
Patched inference from large point clouds (e.g. 1 x 1 km Lidar HD tiles) is supported, with or without overlapping sliding windows (no overlap by default).
The original point cloud is augmented with several dimensions: a PredictedClassification dimension, an entropy dimension, and (optionally) class probability dimensions (e.g. building, ground...).
Refer to Myria3D's documentation for custom settings.
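Once inference has produced an augmented point cloud, the added dimensions can be inspected with standard tooling. Below is a minimal sketch using laspy; the file name is a placeholder and the exact dimension names should be checked against the Myria3D output schema (only PredictedClassification and entropy are named above).

```python
# Minimal sketch: inspect the dimensions added by a Myria3D prediction run.
# Assumes laspy >= 2.0; "tile_predicted.las" is a placeholder path.
import laspy
import numpy as np

las = laspy.read("tile_predicted.las")

# Extra dimensions are accessible by name alongside the standard LAS fields.
predicted = np.asarray(las["PredictedClassification"])
entropy = np.asarray(las["entropy"])

# Quick summary: points per predicted class, and how uncertain the model is.
classes, counts = np.unique(predicted, return_counts=True)
print(dict(zip(classes.tolist(), counts.tolist())))
print("mean entropy:", float(entropy.mean()))
```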
**_Multi-domain model_**: The FRACTAL dataset used for training covers 5 spatial domains from 5 southern regions of metropolitan France.
The 250 km² of data in FRACTAL were sampled from an original 17440 km² area, and cover a wide diversity of landscapes and scenes.
While large and diverse, this data only covers a fraction of the French territory, and the model should be used with adequate verifications when applied to new domains.
That said, while domain shifts are frequent for aerial imagery due to differing acquisition conditions and downstream data processing,
aerial lidar point clouds are expected to have more consistent characteristics
(density, range of acquisition angles, etc.) across spatial domains.
## Bias, Risks, Limitations and Recommendations
---
## How to Get Started with the Model
Visit [https://github.com/IGNF/myria3d](https://github.com/IGNF/myria3d) to use the model.
Fine-tuning and prediction tasks are detailed in the README file.
---
## Training Details
### Training Data
### Training Procedure
#### Preprocessing
#### Training Hyperparameters
#### Speeds, Sizes, Times
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Metrics
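The reported figures are the per-class Intersection over Union (IoU) and their mean over the 7 classes (mIoU). The sketch below shows the standard way such metrics are derived from a confusion matrix; it is illustrative only and not the project's evaluation code.

```python
# Illustrative sketch: per-class IoU and mIoU from a confusion matrix.
# Not the project's evaluation code; the matrix here is random dummy data.
import numpy as np

def iou_per_class(confusion: np.ndarray) -> np.ndarray:
    """confusion[i, j] = number of points of true class i predicted as class j."""
    tp = np.diag(confusion).astype(float)
    fp = confusion.sum(axis=0) - tp
    fn = confusion.sum(axis=1) - tp
    denom = tp + fp + fn
    return np.where(denom > 0, tp / np.maximum(denom, 1.0), 0.0)

# Dummy 7x7 confusion matrix for the 7-class nomenclature
confusion = np.random.randint(0, 1000, size=(7, 7))
ious = iou_per_class(confusion)
print("per-class IoU:", np.round(ious, 3))
print("mIoU:", round(float(ious.mean()), 3))
```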
### Results
On the FRACTAL dataset, the model reaches 77.2 mIoU, with the following per-class IoU (also reported in the model card metadata above):

| Class | IoU |
|---|---|
| Other | 48.1 |
| Ground | 91.7 |
| Vegetation | 93.7 |
| Building | 90.0 |
| Water | 90.8 |
| Bridge | 63.5 |
| Permanent structure | 59.9 |

Samples of results: TBD
---
## Citation
**BibTeX:**
```
```
**APA:**
```
```
## Contact: TBD