
CENet: Toward Concise and Efficient LiDAR Semantic Segmentation for Autonomous Driving

Abstract

Accurate and fast scene understanding is one of the most challenging tasks for autonomous driving, and it requires taking full advantage of LiDAR point clouds for semantic segmentation. In this paper, we present a concise and efficient image-based semantic segmentation network, named CENet. To improve the descriptive power of learned features and reduce computational and time complexity, our CENet integrates convolutions with larger kernel sizes instead of MLPs, carefully selected activation functions, and multiple auxiliary segmentation heads with corresponding loss functions into the architecture. Quantitative and qualitative experiments conducted on the publicly available SemanticKITTI and SemanticPOSS benchmarks demonstrate that our pipeline achieves much better mIoU and inference performance than state-of-the-art models. The code is available at https://github.com/huixiancheng/CENet.

Introduction

We implement CENet and provide its results and pretrained checkpoints on the SemanticKITTI dataset.

Usage

Training commands

In MMDetection3D's root directory, run the following command to train the model:

python tools/train.py projects/CENet/configs/cenet-64x512_4xb4_semantickitti.py
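
Optionally, you can redirect outputs or enable mixed-precision training. A minimal sketch, assuming the standard MMEngine-style --work-dir and --amp flags of tools/train.py (verify against your MMDetection3D version; the output directory below is hypothetical):

python tools/train.py projects/CENet/configs/cenet-64x512_4xb4_semantickitti.py --work-dir ./work_dirs/cenet_64x512 --amp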

For multi-GPU training, run:

python -m torch.distributed.launch --nnodes=1 --node_rank=0 --nproc_per_node=${NUM_GPUS} --master_port=29506 --master_addr="127.0.0.1" tools/train.py projects/CENet/configs/cenet-64x512_4xb4_semantickitti.py --launcher pytorch
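
Note that torch.distributed.launch is deprecated in recent PyTorch releases. An equivalent single-node torchrun invocation, assuming the same --launcher pytorch plumbing, would be:

torchrun --nnodes=1 --nproc_per_node=${NUM_GPUS} --master_port=29506 tools/train.py projects/CENet/configs/cenet-64x512_4xb4_semantickitti.py --launcher pytorch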

Testing commands

In MMDetection3D's root directory, run the following command to test the model:

python tools/test.py projects/CENet/configs/cenet-64x512_4xb4_semantickitti.py ${CHECKPOINT_PATH}
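
For multi-GPU evaluation, a sketch using the distributed test script that ships with standard MMDetection3D checkouts (tools/dist_test.sh):

bash tools/dist_test.sh projects/CENet/configs/cenet-64x512_4xb4_semantickitti.py ${CHECKPOINT_PATH} ${NUM_GPUS}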

Results and models

SemanticKITTI

Backbone | Input resolution | Mem (GB) | Inf time (fps) | mIoU  | Download
CENet    | 64*512           | -        | 41.7           | 61.10 | model | log
CENet    | 64*1024          | -        | 26.8           | 62.20 | model | log
CENet    | 64*2048          | -        | 14.1           | 62.64 | model | log

Note

  • We report point-based mIoU instead of range-view-based mIoU.
  • The reported mIoU is the best result obtained by running inference after each training epoch, which is consistent with the official code.
  • If your setup differs from ours, we strongly suggest enabling auto_scale_lr to achieve comparable results; see the example below.
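
As a sketch, assuming the standard --auto-scale-lr flag of MMDetection3D's tools/train.py, which scales the learning rate against the base_batch_size declared in the config (presumably 16 for this 4xb4 config, i.e. 4 GPUs x 4 samples per GPU):

python tools/train.py projects/CENet/configs/cenet-64x512_4xb4_semantickitti.py --auto-scale-lr

Equivalently, setting auto_scale_lr = dict(enable=True, base_batch_size=16) in the config should have the same effect.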

Citation

@inproceedings{cheng2022cenet,
  title={CENet: Toward Concise and Efficient LiDAR Semantic Segmentation for Autonomous Driving},
  author={Cheng, Hui-Xian and Han, Xian-Feng and Xiao, Guo-Qiang},
  booktitle={2022 IEEE International Conference on Multimedia and Expo (ICME)},
  pages={01--06},
  year={2022},
  organization={IEEE}
}

Checklist

  • Milestone 1: PR-ready, and acceptable to be one of the projects/.

    • Finish the code

    • Basic docstrings & proper citation

    • Test-time correctness

    • A full README

  • Milestone 2: Indicates a successful model implementation.

    • Training-time correctness

  • Milestone 3: Good to be a part of our core package!

    • Type hints and docstrings

    • Unit tests

    • Code polishing

    • Metafile.yml

  • Refactor your modules into the core package following the codebase's file hierarchy structure.