Language Models as Reasoners for Out-of-Distribution Detection

Paper: Here

Our paper, Language Models as Reasoners for Out-of-Distribution Detection, was presented at the Workshop on AI Safety Engineering (WAISE) 2024 and received the best paper award by popular vote.

It extends our earlier idea from Out-of-Distribution Detection with Logical Reasoning, replacing the Prolog-based reasoning component with an LLM.

Abstract

Deep neural networks (DNNs) are prone to making wrong predictions with high confidence for data that does not stem from their training distribution. Consequently, out-of-distribution (OOD) detection is important in safety-critical applications, as it identifies such inputs. Using prior knowledge about the training distribution through formal constraints has shown promise in enhancing OOD detection. However, developing and maintaining formal knowledge bases can be cumbersome. Large language models (LLMs) have recently excelled in various natural language processing tasks. In this study, we investigate the use of LLMs for OOD detection, where domain constraints are expressed in natural language. Our results indicate that LLMs can outperform random guessing by leveraging general world knowledge learned during training. Moreover, LLMs can be on par with methods based on formal constraints when supplemented with domain-specific constraints articulated in natural language.
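The prompting pattern behind this idea can be sketched in a few lines: express the domain constraints in natural language, show the LLM an observation (e.g., a detector's predictions), and ask whether the constraints are violated. The sketch below only illustrates this pattern and is not the paper's actual implementation; the `query_llm` helper and the example constraints are hypothetical.

```python
# Minimal sketch of LLM-based OOD detection with natural-language
# constraints. `query_llm` is a hypothetical stand-in for any chat-style
# LLM API; the constraints below are invented examples, not the ones
# used in the paper.

CONSTRAINTS = """\
Domain: road traffic scenes.
- Speed-limit signs show values between 5 and 130 km/h.
- Vehicles drive on the road, not on sidewalks or buildings.
"""

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError

def is_ood(observation: str) -> bool:
    """Ask the LLM whether an observation violates the constraints."""
    prompt = (
        "You are given domain constraints and an observation.\n\n"
        f"Constraints:\n{CONSTRAINTS}\n"
        f"Observation: {observation}\n\n"
        "Does the observation violate the constraints? Answer YES or NO."
    )
    # A YES answer is interpreted as a constraint violation, i.e. OOD.
    return query_llm(prompt).strip().upper().startswith("YES")

# Example: a perception stack reporting a 250 km/h speed-limit sign
# would be expected to be flagged as out-of-distribution:
#   is_ood("A speed-limit sign reading 250 km/h")
```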

Presentation

The presentation slides are available here.


Last Updated: 17 Sep. 2024
Categories: Anomaly Detection · Neuro-Symbolic
Tags: WAISE · Anomaly Detection · Large Language Models · Neuro-Symbolic