\section{How We Do}


\begin{table}[b]
\begin{minipage}{\textwidth}
\tablestyle{4pt}{1.05}
\begin{tabular}{l|c|x{22}x{22}x{22}|x{22}x{22}x{22}}
 & backbone
 & AP & AP$_{50}$ & AP$_{75}$
 & AP$_S$ & AP$_M$ &  AP$_L$\\ [.1em]
\shline
\emph{Two-stage methods} & & & & & & & \\
 ~Faster R-CNN+++ \cite{resnet} & ResNet-101-C4
  & 34.9 & 55.7 & 37.4 & 15.6 & 38.7 & 50.9\\
 ~Faster R-CNN w FPN \cite{lin2017feature} & ResNet-101-FPN
  & 36.2 & 59.1 & 39.0 & 18.2 & 39.0 & 48.2\\
 ~Faster R-CNN by G-RMI \cite{huang2017speed} & Inception-ResNet-v2 \cite{szegedy2017inception}
  & 34.7 & 55.5 & 36.7 & 13.5 & 38.1 & 52.0\\
 ~Faster R-CNN w TDM \cite{shrivastava2016beyond} & Inception-ResNet-v2-TDM
  & 36.8 & 57.7 & 39.2 & 16.2 & 39.8 & \bd{52.1}\\
\hline
\emph{One-stage methods} & & & & & & & \\
 ~YOLOv2 \cite{redmon2017yolo9000} & DarkNet-19 \cite{redmon2017yolo9000}
  & 21.6 & 44.0 & 19.2 & 5.0 & 22.4 & 35.5 \\
 ~SSD513 \cite{liu2016ssd,fu2017dssd} & ResNet-101-SSD
  & 31.2 & 50.4 & 33.3 & 10.2 & 34.5 & 49.8 \\
 ~DSSD513 \cite{fu2017dssd} & ResNet-101-DSSD
  & 33.2 & 53.3 & 35.2 & 13.0 & 35.4 & 51.1 \\
 ~RetinaNet \cite{focal} & ResNet-101-FPN
  & 39.1 & 59.1 & 42.3 & 21.8 & 42.7 & 50.2 \\
 ~RetinaNet \cite{focal} & ResNeXt-101-FPN
  & \bd{40.8} & \bd{61.1} & \bd{44.1} & \bd{24.1} & \bd{44.2} & 51.2 \\
  ~YOLOv3 $608 \times 608$ & Darknet-53
  & 33.0 & 57.9 & 34.4 & 18.3 & 35.4 & 41.9 \\
\end{tabular}
\vspace{1mm}
\caption{I'm seriously just stealing all these tables from \cite{focal}, they take soooo long to make from scratch. Ok, YOLOv3 is doing alright. Keep in mind that RetinaNet takes about $3.8\x$ longer to process an image. YOLOv3 is much better than the SSD variants and comparable to state-of-the-art models on the AP$_{50}$ metric.}
\label{results}
\end{minipage}
\end{table}


YOLOv3 is pretty good! See table \ref{results}. In terms of COCO's weird average mean AP metric it is on par with the SSD variants but is $3\x$ faster. It is still quite a bit behind other models like RetinaNet in this metric though.
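For readers unfamiliar with that metric: COCO's primary AP averages per-threshold AP over ten IOU thresholds from 0.50 to 0.95 in steps of 0.05, which is why a detector that draws decent but imperfect boxes scores lower here than on AP$_{50}$ alone. A minimal sketch (the function name and the example AP values are illustrative, not from any paper's code):

```python
# COCO's primary metric averages AP over ten IOU thresholds,
# 0.50 to 0.95 in steps of 0.05 (AP50 and AP75 are two of them).
IOU_THRESHOLDS = [0.50 + 0.05 * i for i in range(10)]

def coco_ap(ap_at_iou):
    """Mean of per-threshold APs; expects one value per threshold."""
    assert len(ap_at_iou) == len(IOU_THRESHOLDS)
    return sum(ap_at_iou) / len(ap_at_iou)

# A detector whose AP falls off quickly at stricter thresholds is
# penalized by the averaged metric (made-up numbers for illustration):
falls_off = [0.58, 0.55, 0.50, 0.44, 0.37, 0.30, 0.22, 0.14, 0.07, 0.02]
print(coco_ap(falls_off))  # 0.319, well below the 0.58 at IOU=0.5
```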

\enlargethispage{-15\baselineskip}


However, when we look at the ``old'' detection metric of mAP at IOU$=.5$ (or AP$_{50}$ in the chart) YOLOv3 is very strong. It is almost on par with RetinaNet and far above the SSD variants. This indicates that YOLOv3 is a very strong detector that excels at producing decent boxes for objects. However, performance drops significantly as the IOU threshold increases, indicating that YOLOv3 struggles to get the boxes perfectly aligned with the object.
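The IOU thresholds in question compare a predicted box against ground truth by intersection-over-union. A small sketch (box format and numbers are illustrative) shows how a ``decent'' box can clear the 0.5 threshold yet fail at 0.75, which is exactly the regime where a detector with loose localization loses AP:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction shifted 2 px off a 10x10 ground-truth box:
print(iou((0, 0, 10, 10), (2, 0, 12, 10)))  # 0.666...: a hit at IOU=.5, a miss at .75
```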

In the past YOLO struggled with small objects; now we see a reversal in that trend. With the new multi-scale predictions, YOLOv3 has relatively high AP$_S$ performance. However, it has comparatively worse performance on medium- and large-sized objects. More investigation is needed to get to the bottom of this.
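For reference, the AP$_S$/AP$_M$/AP$_L$ breakdown buckets objects by area with cutoffs at $32^2$ and $96^2$ pixels. A quick sketch of the bucketing (COCO actually uses the annotation's segmentation area; treating it as box width $\times$ height is an assumption made here for illustration):

```python
def coco_size_bucket(area):
    """COCO size buckets: small < 32^2 px, medium < 96^2 px, else large.

    `area` is assumed to be box width * height here; COCO proper
    uses the annotated segmentation area.
    """
    if area < 32 ** 2:
        return "small"
    if area < 96 ** 2:
        return "medium"
    return "large"

# E.g. a 20x20 box is "small", a 50x50 box "medium", a 200x100 box "large".
```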

When we plot accuracy vs.\ speed on the AP$_{50}$ metric (see figure \ref{chart}) we see YOLOv3 has significant benefits over other detection systems. Namely, it's faster and better.

\begin{figure*}[]
\hspace{-11mm}
\vspace{-8mm}

\includegraphics[width=1.16\linewidth]{map50.pdf}
\caption{Again adapted from \cite{focal}, this time displaying the speed/accuracy tradeoff on the mAP at .5 IOU metric. You can tell YOLOv3 is good because it's very high and far to the left. Can you cite your own paper? Guess who's going to try, this guy $\rightarrow$ \cite{yolov3}. Oh, I forgot, we also fixed a data loading bug in YOLOv2, which helped by like 2 mAP. Just sneaking this in here to not throw off layout.}
\label{chart}
\end{figure*}
