mAP drop

#1
by mhyatt000 - opened

I tried to reproduce the results mentioned on this model card, but the mAP I obtained does not match the mAP claimed in the model card.

  • Claimed mAP: 36.1
  • Obtained mAP: 32.0

Here are the details of my validation setup:

  • I instantiated the pre-trained model with transformers.pipeline() and used the COCO API to compute AP from the detected bounding boxes (a sketch of this setup follows the list).
  • Evaluation was performed on CPU on macOS.
  • The dataset was downloaded from cocodataset.org.
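
For reference, here is a minimal sketch of that setup, assuming the model is hustvl/yolos-small and that the COCO val2017 images and annotations sit in the paths below; the model id and paths are illustrative assumptions, not confirmed details of my run:

```python
from PIL import Image
from transformers import pipeline
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Model id is an assumption for illustration.
detector = pipeline("object-detection", model="hustvl/yolos-small")

coco_gt = COCO("annotations/instances_val2017.json")
# Map COCO category names back to category ids, since the pipeline
# returns string labels (e.g. "person") rather than ids.
name_to_id = {c["name"]: c["id"] for c in coco_gt.loadCats(coco_gt.getCatIds())}

results = []
for img_id in coco_gt.getImgIds():
    info = coco_gt.loadImgs(img_id)[0]
    image = Image.open(f"val2017/{info['file_name']}").convert("RGB")
    # Note: the pipeline applies its default score threshold here,
    # so low-confidence boxes are dropped before evaluation.
    for det in detector(image):
        box = det["box"]
        results.append({
            "image_id": img_id,
            "category_id": name_to_id[det["label"]],
            # COCO expects boxes as [x, y, width, height]
            "bbox": [box["xmin"], box["ymin"],
                     box["xmax"] - box["xmin"], box["ymax"] - box["ymin"]],
            "score": det["score"],
        })

coco_dt = coco_gt.loadRes(results)
coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
```

The resulting COCOeval summary: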

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.320
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.513
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.327
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.114
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.340
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.523
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.276
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.404
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.416
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.162
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.444
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.655

Hi,

We have reproduced the YOLOS numbers on the open object detection leaderboard: https://huggingface.co/spaces/hf-vision/object_detection_leaderboard

There are a lot of details involved in evaluation; see this blog post: https://huggingface.co/blog/object-detection-leaderboard
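
As an illustrative guess at one such detail (not a confirmed diagnosis of this particular gap): the object-detection pipeline filters predictions by a score threshold before returning them, while COCO mAP is computed across all confidence levels, so evaluating only high-confidence boxes typically lowers the reported number. Passing a low threshold keeps all boxes for the evaluator:

```python
from transformers import pipeline

# Model id and image path are hypothetical placeholders.
detector = pipeline("object-detection", model="hustvl/yolos-small")
# threshold=0.0 disables the pipeline's score filtering, so the COCO
# evaluator sees low-confidence detections as well.
predictions = detector("val2017/000000039769.jpg", threshold=0.0)
```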
