CharlesGaydon committed
Commit 610f7f8
1 Parent(s): 316101d

Update README.md

Files changed (1)
  1. README.md +29 -29
README.md CHANGED
@@ -110,34 +110,34 @@ Point clouds were preprocessed for training with point subsampling, filtering of
  For inference, the preprocessing should be as close as possible to the training preprocessing. Refer to the inference configuration file and to the Myria3D code repository (V3.8).

  #### Training Hyperparameters
- * Model architecture: RandLa-Net (implemented with the Pytorch-Geometric framework in [Myria3D](https://github.com/IGNF/myria3d/blob/main/myria3d/models/modules/pyg_randla_net.py))
- * Augmentation:
-   * VerticalFlip(p=0.5)
-   * HorizontalFlip(p=0.5)
- * Features:
-   * Lidar: x, y, z, echo number (1-based numbering), number of echoes, reflectance (a.k.a. intensity)
-   * Colors:
-     * Original: RGB + Near-Infrared (colorization from aerial images by vertical pixel-point alignment)
-     * Derived: average color = (R+G+B)/3 and NDVI
- * Input preprocessing:
-   * grid sampling: 0.25 m
-   * random sampling: 40,000 points (if the patch contains more)
-   * horizontal normalization: mean xy subtraction
-   * vertical normalization: min z subtraction
-   * coordinates normalization: division by 25 meters
-   * basic occlusion model: nullify color channels if echo_number > 1
-   * feature scaling (0-1 range):
-     * echo number and number of echoes: division by 7
-     * color (r, g, b, near-infrared, average color): division by 65280 (i.e. 255*256)
-   * feature normalization:
-     * reflectance: log-normalization, standardization, clipping of amplitude above 3 standard deviations
-     * average color: same as reflectance
- * Batch size: 10
- * Number of epochs: 100 (min) to 150 (max)
- * Early stopping: patience of 6 epochs, monitoring val_loss
- * Optimizer: Adam
- * Scheduler: mode = "min", factor = 0.5, patience = 20, cooldown = 5
- * Learning rate: 0.004
+ - Model architecture: RandLa-Net (implemented with the Pytorch-Geometric framework in [Myria3D](https://github.com/IGNF/myria3d/blob/main/myria3d/models/modules/pyg_randla_net.py))
+ - Augmentation:
+   - VerticalFlip(p=0.5)
+   - HorizontalFlip(p=0.5)
+ - Features:
+   - Lidar: x, y, z, echo number (1-based numbering), number of echoes, reflectance (a.k.a. intensity)
+   - Colors:
+     - Original: RGB + Near-Infrared (colorization from aerial images by vertical pixel-point alignment)
+     - Derived: average color = (R+G+B)/3 and NDVI
+ - Input preprocessing:
+   - grid sampling: 0.25 m
+   - random sampling: 40,000 points (if the patch contains more)
+   - horizontal normalization: mean xy subtraction
+   - vertical normalization: min z subtraction
+   - coordinates normalization: division by 25 meters
+   - basic occlusion model: nullify color channels if echo_number > 1
+   - feature scaling (0-1 range):
+     - echo number and number of echoes: division by 7
+     - color (r, g, b, near-infrared, average color): division by 65280 (i.e. 255*256)
+   - feature normalization:
+     - reflectance: log-normalization, standardization, clipping of amplitude above 3 standard deviations
+     - average color: same as reflectance
+ - Batch size: 10 (x 6 GPUs)
+ - Number of epochs: 100 (min) to 150 (max)
+ - Early stopping: patience of 6 epochs, monitoring val_loss
+ - Optimizer: Adam
+ - Scheduler: mode = "min", factor = 0.5, patience = 20, cooldown = 5
+ - Learning rate: 0.004

  #### Speeds, Sizes, Times
 
@@ -183,7 +183,7 @@ The following illustration gives the resulting confusion matrix :

  ### Results

- From test patches with at least 10k points (i.e. at least 4 pts/m²), we sample without cherry-picking,
+ From test patches with at least 10k points (i.e. at least 4 pts/m²), we sample patches without cherry-picking,
  matching the following metadata: a) URBAN, b) WATER & BRIDGE, c) OTHER_PARKING, d) BUILD_GREENHOUSE, e) HIGHSLOPE.

  <div style="position: relative; text-align: center;">
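
The derived color features listed in the diff above are simple per-point combinations of the raw channels. The sketch below is illustrative only: the card names NDVI without defining it, so the standard NDVI formula and the small epsilon guard are assumptions.

```python
import numpy as np

def derived_colors(r: np.ndarray, g: np.ndarray, b: np.ndarray, nir: np.ndarray, eps: float = 1e-6):
    """Per-point derived color features: average color and NDVI."""
    average_color = (r + g + b) / 3.0
    ndvi = (nir - r) / (nir + r + eps)  # standard NDVI definition (assumed; not spelled out in the card)
    return average_color, ndvi
```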
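The "Input preprocessing" items describe a per-patch pipeline: grid sampling at 0.25 m, capping at 40,000 points, xy/z centering, division by 25 m, a basic occlusion model, and 0-1 feature scaling. Here is a minimal NumPy sketch of those steps, not the actual Myria3D implementation; the array layout, the choice of keeping the first point per voxel, and the function name are assumptions.

```python
import numpy as np

def preprocess_patch(xyz, echo_number, num_echoes, colors,
                     grid_size=0.25, max_points=40_000, norm_radius=25.0, rng=None):
    """Illustrative sketch of the per-patch preprocessing listed in the model card.

    xyz: (N, 3) coordinates; echo_number, num_echoes: (N,) integers;
    colors: (N, 5) raw r, g, b, near-infrared and average-color values.
    """
    if rng is None:
        rng = np.random.default_rng()

    # Grid sampling at 0.25 m: keep one point per occupied voxel (first occurrence here).
    voxels = np.floor(xyz / grid_size).astype(np.int64)
    _, keep = np.unique(voxels, axis=0, return_index=True)
    xyz, echo_number, num_echoes, colors = xyz[keep], echo_number[keep], num_echoes[keep], colors[keep]

    # Random sampling: cap the patch at 40,000 points if it contains more.
    if len(xyz) > max_points:
        idx = rng.choice(len(xyz), size=max_points, replace=False)
        xyz, echo_number, num_echoes, colors = xyz[idx], echo_number[idx], num_echoes[idx], colors[idx]

    # Horizontal (mean xy) and vertical (min z) normalization, then division by 25 m.
    xyz = xyz.astype(np.float32)
    xyz[:, :2] -= xyz[:, :2].mean(axis=0)
    xyz[:, 2] -= xyz[:, 2].min()
    xyz /= norm_radius

    # Basic occlusion model: nullify color channels for points that are not first echoes.
    colors = colors.astype(np.float32)
    colors[echo_number > 1] = 0.0

    # Feature scaling to the 0-1 range.
    echo_feats = np.stack([echo_number, num_echoes], axis=1).astype(np.float32) / 7.0
    colors /= 65280.0  # i.e. 255 * 256

    return xyz, echo_feats, colors
```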
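For reflectance and the average color, the card lists log-normalization, standardization, and clipping of amplitudes above 3 standard deviations. A minimal sketch of that chain; whether the statistics are computed per patch or over the whole dataset is not stated in the card, so per-input statistics are assumed here.

```python
import numpy as np

def normalize_log_feature(values: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Log-normalize, standardize, then clip amplitudes beyond 3 standard deviations."""
    x = np.log1p(values.astype(np.float32))   # log-normalization
    x = (x - x.mean()) / (x.std() + eps)      # standardization
    return np.clip(x, -3.0, 3.0)              # clipping above 3 standard deviations
```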
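The optimizer, scheduler, and early-stopping settings map onto standard PyTorch / PyTorch Lightning objects. The sketch below is an assumption about how they fit together (Myria3D wires them through its own configuration files, and the placeholder model stands in for the RandLa-Net module).

```python
import torch
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping

model = torch.nn.Linear(10, 7)  # placeholder standing in for the RandLa-Net module

# Adam at lr = 0.004, with a plateau scheduler (mode="min", factor=0.5, patience=20, cooldown=5).
optimizer = torch.optim.Adam(model.parameters(), lr=0.004)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=20, cooldown=5
)

# Early stopping on val_loss with a patience of 6 epochs; 100-150 epoch budget from the list above.
early_stopping = EarlyStopping(monitor="val_loss", mode="min", patience=6)
trainer = pl.Trainer(min_epochs=100, max_epochs=150, callbacks=[early_stopping])
```

The plateau scheduler is stepped with the monitored validation loss (`scheduler.step(val_loss)`), and the "(x 6 GPUs)" note suggests a per-GPU batch size of 10 in a multi-GPU setup, set at the DataLoader/Trainer level.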