

\section*{Rebuttal}
\small

We would like to thank the Reddit commenters, labmates, emailers, and passing shouts in the hallway for their lovely, heartfelt words. If you, like me, are reviewing for ICCV, then we know you probably have 37 other papers you could be reading that you'll invariably put off until the last week and then have some legend in the field email you about how you really should finish those reviews except it won't entirely be clear what they're saying and maybe they're from the future? Anyway, this paper won't have become what it will in time be without all the work your past selves will have done also in the past but only a little bit further forward, not like all the way until now forward. And if you tweeted about it I wouldn't know. Just sayin.

Reviewer \#2 AKA Dan Grossman (lol blinding who does that) insists that I point out here that our graphs have not one but two non-zero origins. You're absolutely right Dan, that's because it looks way better than admitting to ourselves that we're all just here battling over 2-3\% mAP. But here are the requested graphs. I threw in one with FPS too because we look just like super good when we plot on FPS.

\begin{figure}[t]
\vspace{-4mm}
\includegraphics[width=\linewidth]{timemap50.pdf}
\caption{Zero-axis charts are probably more intellectually honest...}
\label{fig:timemap50}
\end{figure}


Reviewer \#4 AKA JudasAdventus on Reddit writes ``Entertaining read but the arguments against the MSCOCO metrics seem a bit weak''. Well, I always knew you would be the one to turn on me, Judas. You know how when you work on a project and it only comes out alright, so you have to figure out some way to justify how what you did actually was pretty cool? I was basically trying to do that and I lashed out at the COCO metrics a little bit. But now that I've staked out this hill I may as well die on it.

See, here's the thing: mAP is already sort of broken, so an update to it should maybe address some of its issues, or at least justify why the updated version is better in some way. And that's the big thing I took issue with: the lack of justification. For \textsc{Pascal} VOC, the IOU threshold was ``set deliberately low to account for inaccuracies in bounding boxes in the ground truth data'' \cite{pascal}. Does COCO have better labelling than VOC? That's definitely possible: since COCO has segmentation masks, maybe the labels are more trustworthy and thus we aren't as worried about inaccuracy. But again, my problem was the lack of justification.

\begin{figure}[t]
\vspace{-4mm}
\includegraphics[width=\linewidth]{fpsmap50.pdf}
\caption{...and we can still screw with the variables to make ourselves look good!}
\label{fig:fpsmap50}
\end{figure}

The COCO metric emphasizes better bounding boxes, but that emphasis must mean it de-emphasizes something else, in this case classification accuracy. Is there a good reason to think that more precise bounding boxes are more important than better classification? A misclassified example is much more obvious than a bounding box that is slightly shifted.
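To put a number on ``slightly shifted'': here is a back-of-the-envelope IoU calculation (the boxes are made up for illustration, not from any dataset) showing that a box nudged by 10\% of its width and height still clears the old 0.5 threshold comfortably while failing the stricter COCO thresholds.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt = (0, 0, 100, 100)         # hypothetical ground-truth box
shifted = (10, 10, 110, 110)  # same box, shifted 10% right and down
print(iou(gt, shifted))       # ~0.68: a hit at IOU 0.5, a miss at 0.75
```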

mAP is already screwed up because all that matters is the per-class rank ordering of detections. For example, if your test set only has these two images, then according to mAP two detectors that produce these results are JUST AS GOOD:

\begin{figure}[h]
\includegraphics[width=\linewidth]{baddet2}
\caption{These two hypothetical detectors are perfect according to mAP over these two images. They are both perfect. Totally equal.}
\label{chart}
\end{figure}

Now this is OBVIOUSLY an exaggeration of the problems with mAP, but I guess my newly retconned point is that there are such obvious discrepancies between what people in the ``real world'' care about and our current metrics that I think if we're going to come up with new metrics we should focus on those discrepancies. Also, like, it's already mean average precision, what do we even call the COCO metric, average mean average precision?

Here's a proposal: what people actually care about is, given an image and a detector, how well the detector will find and classify objects in that image. What about getting rid of the per-class AP and just doing a global average precision? Or doing an AP calculation per image and averaging over that?
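One way the global-AP idea could be sketched (all detections and counts below are hypothetical, and this is just one possible reading of the proposal): pool every detection across classes into a single ranked list and compute one AP. Unlike per-class mAP, this makes cross-class confidence calibration matter, so a detector that ranks its false positives above other classes' true positives actually gets penalized.

```python
def average_precision(dets, num_gt):
    """dets: list of (confidence, is_true_positive), one class or pooled."""
    dets = sorted(dets, key=lambda d: -d[0])
    tp, precisions = 0, []
    for i, (_, hit) in enumerate(dets, 1):
        if hit:
            tp += 1
            precisions.append(tp / i)
    return sum(precisions) / num_gt if num_gt else 0.0

# Hypothetical detections for two classes: (confidence, correct?)
per_class = {
    "dog":   [(0.90, True), (0.40, False)],
    "zebra": [(0.95, False), (0.30, True)],
}
num_gt = {"dog": 1, "zebra": 1}

# Standard mAP: one AP per class, then the mean.
mAP = sum(average_precision(d, num_gt[c])
          for c, d in per_class.items()) / len(per_class)

# "Global AP": pool everything into one ranked list, compute one AP.
pooled = [d for dets in per_class.values() for d in dets]
gAP = average_precision(pooled, sum(num_gt.values()))

print(mAP, gAP)  # 0.75 vs 0.5: the over-confident zebra false positive
                 # outranks the dog hit, which only global AP notices
```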

Boxes are stupid anyway though; I'm probably a true believer in masks, except I can't get YOLO to learn them.
