In image classification tasks, evaluating a model's robustness under increasing dataset shift within a probabilistic framework is well studied. Object detection (OD) tasks, however, pose additional challenges for uncertainty estimation and evaluation. For example, one needs to evaluate both the quality of the label uncertainty (i.e., what?) and the spatial uncertainty (i.e., where?) for a given bounding box, and that evaluation cannot be performed with traditional average precision metrics (e.g., mAP).
In this talk, we will discuss how to adapt well-established object detection models to produce uncertainty estimates by introducing stochasticity in the form of Monte Carlo Dropout (MC-Drop). We will also discuss how such techniques could be extended to a broad class of embedded vision tasks to improve robustness.
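To make the idea concrete, the following is a minimal NumPy sketch of MC-Drop: dropout is kept active at inference time, the network is run `T` times on the same input, and the sample mean and variance of the outputs serve as the prediction and its uncertainty. The toy "detector head" below (layer sizes, the 4-coordinate/1-score output split, and all names) is entirely hypothetical and stands in for a real OD model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy detector head: one hidden layer whose activations we
# perturb with dropout at *inference* time (the core of MC-Drop).
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 5))  # e.g., 4 box coordinates + 1 class logit

def forward(x, p_drop=0.5, mc_mode=True):
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    if mc_mode:                            # dropout stays on at test time
        mask = rng.random(h.shape) > p_drop
        h = h * mask / (1.0 - p_drop)      # inverted-dropout rescaling
    return h @ W2

def mc_dropout_predict(x, T=100):
    """Run T stochastic forward passes; return predictive mean and variance."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.var(axis=0)

x = rng.standard_normal((1, 4))
mean, var = mc_dropout_predict(x)
# mean[:, :4] plays the role of box regression (spatial uncertainty via
# var[:, :4]); mean[:, 4] plays the role of the class score (label
# uncertainty via var[:, 4]).
```

The per-output variance is what lets the "what?" and "where?" uncertainties be read off separately: variance over the box coordinates reflects spatial uncertainty, while variance over the class output reflects label uncertainty.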