Conformal prediction in AI

Conformal prediction offers a principled approach to uncertainty quantification: instead of a bare point prediction, it produces a set-valued prediction with a statistical coverage guarantee for each individual instance.

Depending on the task, we obtain a prediction region rather than a single prediction: in regression, a prediction interval; in classification, a set of plausible classes.
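For the classification case, the common split (inductive) conformal recipe computes nonconformity scores on a held-out calibration set and thresholds them at a corrected quantile. Below is a minimal NumPy sketch of that recipe; the function name and the choice of score (one minus the probability of the true class) are illustrative, not taken from the article.

```python
import numpy as np

def conformal_classification_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets for classification (illustrative sketch).

    cal_probs:  (n, K) predicted class probabilities on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, K) predicted probabilities for new inputs
    alpha:      miscoverage level (0.1 targets ~90% coverage)
    Returns a list of prediction sets (arrays of class indices)."""
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level for the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, q_level, method="higher")
    # Include every class whose score (1 - prob) falls at or below the threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]
```

Note that the underlying classifier only enters through its predicted probabilities, so any model that outputs class probabilities can be calibrated this way.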

Source: https://towardsdatascience.com/conformal-prediction-for-machine-learning-classification-from-the-ground-up-a12fcf6860d0

Conformal prediction fosters transparency in machine learning models. By explicitly acknowledging and addressing uncertainty, it enhances the interpretability of predictions. This transparency is vital in domains where decisions impact human lives, such as healthcare or autonomous systems. Stakeholders can better understand the potential variability in predictions and factor it into their decision-making processes.

Conformal prediction has several practical advantages. It is model-agnostic: it can prove useful regardless of the model’s architecture, since it treats the predictor as a black box. It is also non-parametric, making no assumptions about the underlying data distribution, and because it is applied post hoc to an already fitted model, no re-training is required. Finally, its flexibility across many tasks, such as regression, image classification, and tabular classification, is a very strong aspect of conformal prediction.
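The model-agnostic, post-hoc nature described above can be sketched for regression: the predictor enters only as a black-box function, and calibration uses held-out absolute residuals. The function name and the residual-based score are assumptions for illustration, not prescribed by the article.

```python
import numpy as np

def conformal_interval(predict, X_cal, y_cal, X_new, alpha=0.1):
    """Split conformal regression intervals around any point predictor (sketch).

    predict: already-fitted prediction function, used as a black box
             (no re-training happens here)
    Returns (lower, upper) interval bounds for X_new."""
    # Nonconformity score: absolute residual on the held-out calibration set.
    scores = np.abs(y_cal - predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, q_level, method="higher")
    preds = predict(X_new)
    return preds - qhat, preds + qhat
```

Any fitted model works as `predict`, e.g. a least-squares fit via `np.polyfit` standing in for an arbitrary regressor, which is exactly what "model-agnostic" buys you.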


[1] Angelopoulos, A. N., & Bates, S. (2021). A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification. CoRR, abs/2107.07511. Retrieved from https://arxiv.org/abs/2107.07511