\section{Things We Tried That Didn't Work}

We tried lots of stuff while we were working on YOLOv3. A lot of it didn't work. Here's the stuff we can remember.

\textbf{Anchor box $x,y$ offset predictions.} We tried using the normal anchor box prediction mechanism where you predict the $x,y$ offset as a multiple of the box width or height using a linear activation. We found this formulation decreased model stability and didn't work very well.

\textbf{Linear $x,y$ predictions instead of logistic.} We tried using a linear activation to directly predict the $x,y$ offset instead of the logistic activation. This led to a couple-point drop in mAP.
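For concreteness, the two parameterizations look like this (notation as in our bounding box prediction section; $(x_a, y_a, w_a, h_a)$ is an anchor box and $(c_x, c_y)$ is the grid cell offset). The linear, anchor-relative version predicts
\[
x = x_a + t_x w_a, \qquad y = y_a + t_y h_a,
\]
while the logistic version we kept predicts
\[
b_x = \sigma(t_x) + c_x, \qquad b_y = \sigma(t_y) + c_y.
\]
The sigmoid bounds the offset to $(0, 1)$, so each cell can only place a box center inside itself; the linear form is unbounded and can place the center anywhere in the image, which may be where the instability comes from.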

\textbf{Focal loss.} We tried using focal loss. It dropped our mAP about 2 points. YOLOv3 may already be robust to the problem focal loss is trying to solve because it has separate objectness predictions and conditional class predictions. Thus for most examples there is no loss from the class predictions? Or something? We aren't totally sure.
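To make the comparison concrete, here is a minimal sketch of binary focal loss as described by Lin et al.; the function names and the default $\alpha$, $\gamma$ values here are illustrative, not the exact configuration we ran:

```python
import math

def binary_cross_entropy(p, y):
    """Standard log loss for predicted probability p and label y in {0, 1}."""
    p_t = p if y == 1 else 1.0 - p
    return -math.log(p_t)

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss: down-weights easy examples by the factor (1 - p_t)^gamma.

    With gamma = 0 and alpha = 1 this reduces to plain cross-entropy.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With $\gamma = 2$, an easy example at $p_t = 0.95$ gets its loss scaled by $(0.05)^2 = 0.0025$, so it contributes almost nothing. The guess above is that YOLOv3's separate objectness score already suppresses most easy examples before the class loss is computed, so this down-weighting has little left to do.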

\textbf{Dual IOU thresholds and truth assignment.} Faster R-CNN uses two IOU thresholds during training. If a prediction overlaps a ground truth box by at least .7 IOU it is a positive example; if its best overlap falls in $[.3, .7]$ it is ignored; and if it is less than .3 for all ground truth objects it is a negative example. We tried a similar strategy but couldn't get good results.
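A sketch of that assignment rule, assuming boxes are $(x_1, y_1, x_2, y_2)$ tuples; the function names and thresholds just mirror the description above:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def assign_label(pred, truths, pos_thresh=0.7, neg_thresh=0.3):
    """Dual-threshold truth assignment in the style of Faster R-CNN.

    Returns "positive" if the prediction overlaps some ground truth by
    at least pos_thresh, "negative" if it overlaps every ground truth
    by less than neg_thresh, and "ignored" otherwise (the ignored band
    contributes no loss during training).
    """
    best = max((iou(pred, t) for t in truths), default=0.0)
    if best >= pos_thresh:
        return "positive"
    if best < neg_thresh:
        return "negative"
    return "ignored"
```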

We quite like our current formulation; it seems to be at a local optimum at least. It is possible that some of these techniques could eventually produce good results; perhaps they just need some tuning to stabilize the training.