Out-of-Distribution Detection with Adversarial Outlier Exposure

Code: Here · Paper: Here

Our paper Out-of-Distribution Detection with Adversarial Outlier Exposure has been accepted at the CVPR workshop for Safe Artificial Intelligence for All Domains (SAIAD).

The experiments in the paper were mostly conducted by Thomas Botschen, who is currently a master's student at our lab.

Abstract

Machine learning models typically perform reliably only on inputs drawn from the distribution they were trained on, making Out-of-Distribution (OOD) detection essential for safety-critical applications. While exposing models to example outliers during training is one of the most effective ways to enhance OOD detection, recent studies suggest that synthetically generated outliers can also act as regularizers for deep neural networks. In this paper, we propose an augmentation scheme for synthetic outliers that regularizes a classifier’s energy function by adversarially lowering the outliers’ energy during training. We demonstrate that our method improves OOD detection performance and adversarial robustness on OOD data on several image classification benchmarks. Additionally, we show that our approach preserves in-distribution generalization.
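To make the idea concrete, here is a minimal sketch of the general recipe the abstract describes: an energy score derived from the classifier's logits, a PGD-style perturbation that adversarially lowers the energy of synthetic outliers, and an energy-margin regularizer added to the usual cross-entropy loss. Function names, step sizes, and margin values (`eps`, `m_in`, `m_out`, `lam`) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def energy(logits):
    # Energy score E(x) = -logsumexp(f(x)); lower energy ~ more in-distribution.
    return -torch.logsumexp(logits, dim=1)

def adversarial_outlier_augment(model, x_out, eps=0.01, steps=1):
    # Hypothetical augmentation: perturb synthetic outliers so their energy
    # decreases, turning them into harder negatives for the regularizer.
    x_adv = x_out.clone().detach().requires_grad_(True)
    for _ in range(steps):
        e = energy(model(x_adv)).sum()
        grad, = torch.autograd.grad(e, x_adv)
        # Step against the energy gradient, i.e. in the direction that lowers energy.
        x_adv = (x_adv - eps * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach()

def training_step(model, x_in, y_in, x_out, m_in=-25.0, m_out=-7.0, lam=0.1):
    # Cross-entropy on in-distribution data plus a hinge-style energy-margin
    # term that pushes energies apart on ID data and augmented outliers
    # (margins and weight are placeholder values).
    x_out_adv = adversarial_outlier_augment(model, x_out)
    logits_in, logits_out = model(x_in), model(x_out_adv)
    ce = F.cross_entropy(logits_in, y_in)
    e_in, e_out = energy(logits_in), energy(logits_out)
    reg = (F.relu(e_in - m_in) ** 2).mean() + (F.relu(m_out - e_out) ** 2).mean()
    return ce + lam * reg
```

The augmentation step is the part specific to this work: by lowering the outliers' energy before the regularizer sees them, the model is trained against outliers that already look more in-distribution, which is what drives the reported gains in OOD detection and adversarial robustness on OOD data.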

Poster for AOE (PDF).



Last Updated: 06 Jun. 2025
Categories: Anomaly Detection
Tags: CVPR · Generative Models · Anomaly Detection