---
license: etalab-2.0
tags:
- pytorch
- segmentation
- point clouds
- aerial lidar scanning
- IGN
model-index:
- name: FRACTAL-LidarHD_7cl_randlanet
  results:
  - task:
      type: semantic-segmentation
    dataset:
      name: IGNF/FRACTAL
      type: point-cloud-segmentation-dataset
    metrics:
    - name: mIoU
      type: mIoU
      value: 77.2
    - name: IoU Other
      type: IoU
      value: 48.1
    - name: IoU Ground
      type: IoU
      value: 91.7
    - name: IoU Vegetation
      type: IoU
      value: 93.7
    - name: IoU Building
      type: IoU
      value: 90.0
    - name: IoU Water
      type: IoU
      value: 90.8
    - name: IoU Bridge
      type: IoU
      value: 63.5
    - name: IoU Permanent Structure
      type: IoU
      value: 59.9
---

# FRACTAL-LidarHD_7cl_randlanet

The general characteristics of the FRACTAL-LidarHD_7cl_randlanet model are:

## Model Information

- **Code repository:** https://github.com/IGNF/myria3d (V3.8)
- **Paper:** TBD
- **Developed by:** IGN
- **Compute infrastructure:**
  - software: python, pytorch-lightning
  - hardware: in-house HPC/AI resources
- **License:** Etalab 2.0

---

## Uses

The model was specifically trained and designed for the segmentation of aerial lidar point clouds from the Lidar HD program (2020-2025), an ambitious initiative that aims to obtain a 3D description of the French territory by 2026. While the model could be applied to other types of point clouds, Lidar HD data have specific geometric characteristics. Furthermore, the training data was colorized with very-high-definition aerial images from the [BD ORTHO®](https://geoservices.ign.fr/bdortho), which has its own spatial and radiometric specifications. Consequently, the model's predictions are expected to be most reliable on aerial lidar point clouds with densities and colorimetry similar to those of the original data.

**Data preprocessing**

Point clouds were preprocessed for training with point subsampling, filtering of artifact points, on-the-fly creation of colorimetric features, and normalization of features and coordinates. The same preprocessing should be applied at inference time (refer to the inference configuration and to the code repository).

**Inference library: Myria3D**

The model was trained with an open-source deep learning library developed in-house, and inference is only supported through this library. The library comes with a Dockerfile as well as detailed documentation for inference. Patched inference on large point clouds (e.g. 1 x 1 km Lidar HD tiles) is supported, with or without (default) overlapping sliding windows. The original point cloud is augmented with several dimensions: a PredictedClassification dimension, an entropy dimension, and (optionally) class probability dimensions (e.g. building, ground...). Refer to Myria3D's documentation for custom settings.
## Bias, Risks, Limitations and Recommendations

---

## How to Get Started with the Model

Visit [https://github.com/IGNF/myria3d](https://github.com/IGNF/myria3d) to use the model. Fine-tuning and prediction tasks are detailed in the README file.

---

## Training Details

### Training Data

### Training Procedure

#### Preprocessing

#### Training Hyperparameters

#### Speeds, Sizes, Times

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Metrics

### Results

Samples of results

---

## Citation

**BibTeX:**

```
```

**APA:**

```
```

## Contact

TBD