# PointRend: Image Segmentation as Rendering

Alexander Kirillov, Yuxin Wu, Kaiming He, Ross Girshick

[[`arXiv`](https://arxiv.org/abs/1912.08193)] [[`BibTeX`](#CitingPointRend)]

In this repository, we release code for PointRend in Detectron2. PointRend can be flexibly applied to both instance and semantic segmentation tasks by building on top of existing state-of-the-art models.

## Quick start and visualization

This [Colab Notebook](https://colab.research.google.com/drive/1isGPL5h5_cKoPPhVL9XhMokRtHDvmMVL) tutorial contains examples of PointRend usage and visualizations of its point sampling stages.

## Training

To train a model with 8 GPUs run:

```bash
cd /path/to/detectron2/projects/PointRend
python train_net.py --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml --num-gpus 8
```

## Evaluation

Model evaluation can be done similarly:

```bash
cd /path/to/detectron2/projects/PointRend
python train_net.py --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml --eval-only MODEL.WEIGHTS /path/to/model_checkpoint
```
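For programmatic inference with a trained PointRend model, the minimal sketch below follows the pattern used in the Colab tutorial: register the PointRend config options, load a config and checkpoint, and run Detectron2's `DefaultPredictor`. It assumes detectron2 is installed from this repository so that `detectron2.projects.point_rend` is importable; the checkpoint path and `input.jpg` are placeholders and the score threshold is an illustrative choice.

```python
# Minimal inference sketch; checkpoint path and input image are placeholders.
import cv2

from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.projects import point_rend

cfg = get_cfg()
point_rend.add_pointrend_config(cfg)  # register PointRend-specific config keys
cfg.merge_from_file("configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml")
cfg.MODEL.WEIGHTS = "/path/to/model_checkpoint"
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # illustrative confidence threshold

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))  # DefaultPredictor expects a BGR image
print(outputs["instances"].pred_masks.shape)  # per-instance masks at image resolution
```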
# Pretrained Models

## Instance Segmentation

#### COCO

| Mask head | Backbone | lr sched | Output resolution | mask AP | mask AP* | model id | download |
| :-------- | :------- | :------: | :---------------: | :-----: | :------: | :------: | :------- |
| PointRend | R50-FPN  | 1× | 224×224 | 36.2 | 39.7 | 164254221 | model \| metrics |
| PointRend | R50-FPN  | 3× | 224×224 | 38.3 | 41.6 | 164955410 | model \| metrics |
| PointRend | R101-FPN | 3× | 224×224 | 40.1 | 43.8 |  | model \| metrics |
| PointRend | X101-FPN | 3× | 224×224 | 41.1 | 44.7 |  | model \| metrics |
AP* is COCO mask AP evaluated against the higher-quality LVIS annotations; see the paper for details. Run `python detectron2/datasets/prepare_cocofied_lvis.py` to prepare GT files for AP* evaluation. Since LVIS annotations are not exhaustive, `lvis-api` and not `cocoapi` should be used to evaluate AP*.
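A hedged sketch of the AP* workflow is below. The registered dataset name `lvis_v0.5_val_cocofied` and the `DATASETS.TEST` override are assumptions based on Detectron2's built-in LVIS dataset registration, and the checkpoint path is a placeholder.

```bash
# Prepare COCO-style ("cocofied") LVIS ground-truth files.
python detectron2/datasets/prepare_cocofied_lvis.py

# Evaluate AP* by overriding the test dataset (dataset name assumed here).
cd /path/to/detectron2/projects/PointRend
python train_net.py \
  --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml \
  --eval-only MODEL.WEIGHTS /path/to/model_checkpoint \
  DATASETS.TEST '("lvis_v0.5_val_cocofied",)'
```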
#### Cityscapes

Cityscapes model is trained with ImageNet pretraining.

| Mask head | Backbone | lr sched | Output resolution | mask AP | model id | download |
| :-------- | :------- | :------: | :---------------: | :-----: | :------: | :------- |
| PointRend | R50-FPN  |  | 224×224 | 35.9 | 164255101 | model \| metrics |
## Semantic Segmentation

#### Cityscapes

Cityscapes model is trained with ImageNet pretraining.

| Method | Backbone | Output resolution | mIoU | model id | download |
| :----- | :------- | :---------------: | :--: | :------: | :------- |
| SemanticFPN + PointRend | R101-FPN | 1024×2048 | 78.9 | 202576688 | model \| metrics |
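The semantic segmentation model is trained through the same `train_net.py` entry point. The sketch below assumes the config file name `configs/SemanticSegmentation/pointrend_semantic_R_101_FPN_1x_cityscapes.yaml` from the repository layout and that the Cityscapes dataset has been set up for Detectron2.

```bash
cd /path/to/detectron2/projects/PointRend
# Config file name assumed from the repository layout; adjust if it differs.
python train_net.py \
  --config-file configs/SemanticSegmentation/pointrend_semantic_R_101_FPN_1x_cityscapes.yaml \
  --num-gpus 8
```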
## <a name="CitingPointRend"></a>Citing PointRend

If you use PointRend, please use the following BibTeX entry.

```BibTeX
@InProceedings{kirillov2019pointrend,
  title={{PointRend}: Image Segmentation as Rendering},
  author={Alexander Kirillov and Yuxin Wu and Kaiming He and Ross Girshick},
  journal={ArXiv:1912.08193},
  year={2019}
}
```

## Citing Implicit PointRend

If you use Implicit PointRend, please use the following BibTeX entry.

```BibTeX
@InProceedings{cheng2021pointly,
  title={Pointly-Supervised Instance Segmentation},
  author={Bowen Cheng and Omkar Parkhi and Alexander Kirillov},
  journal={ArXiv},
  year={2021}
}
```