Quantification of Uncertainty with Adversarial Models

Kajetan Schweighofer *1, Lukas Aichberger *1, Mykyta Ielanskyi *1, Günter Klambauer 1, Sepp Hochreiter 1,2

1 ELLIS Unit Linz and LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria
2 Institute of Advanced Research in Artificial Intelligence (IARAI)
* Joint first author


This is the official repository for reproducing the experiments of the paper "Quantification of Uncertainty with Adversarial Models".

The paper is available on arXiv.


What is QUAM?

Quantification of Uncertainty with Adversarial Models (QUAM) searches for adversarial models (not adversarial examples!) to better estimate epistemic uncertainty, the uncertainty about the chosen model parameters. Adversarial models predict differently for a specific test point but explain the training data similarly well. High epistemic uncertainty is detected when adversarial models assign a test point to different classes; as a result, the true class remains ambiguous.

We illustrate quantifying the predictive uncertainty of a given, pre-selected model (blue), a classifier for images of cats and dogs. For each input image, we search for adversarial models (orange) that make different predictions than the given model while explaining the training data equally well (i.e., having a high likelihood). The adversarial models found for an image of a dog or a cat still make similar predictions (low epistemic uncertainty), while the adversarial model found for an image of a lion makes a highly different prediction (high epistemic uncertainty), since features present in images of both cats and dogs can be used to classify the image of a lion.
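The search can be pictured as penalized optimization: starting from the given model's weights, push the prediction on the test point away from the reference prediction while a penalty keeps the training loss low. The PyTorch sketch below is only a conceptual illustration, not the code in this repository; the function name, the KL-based disagreement term, and all hyperparameters are hypothetical.

```python
from itertools import cycle
import copy

import torch
import torch.nn.functional as F

def find_adversarial_model(model, x_test, train_loader,
                           n_steps=100, penalty=10.0, lr=1e-3):
    """Search for an adversarial model: one that predicts differently
    on x_test while still explaining the training data well."""
    adv = copy.deepcopy(model)  # start the search from the reference model
    opt = torch.optim.Adam(adv.parameters(), lr=lr)

    with torch.no_grad():
        ref_pred = F.softmax(model(x_test), dim=-1)  # reference prediction

    for _, (x, y) in zip(range(n_steps), cycle(train_loader)):
        adv_log_pred = F.log_softmax(adv(x_test), dim=-1)
        # Reward disagreement with the reference model on the test point ...
        disagreement = F.kl_div(adv_log_pred, ref_pred, reduction="batchmean")
        # ... but penalize any loss of fit on the training data.
        train_loss = F.cross_entropy(adv(x), y)
        loss = -disagreement + penalty * train_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adv
```

A large disagreement that survives the training-loss penalty signals high epistemic uncertainty at the test point; repeating the search from different starting points, or towards different target classes, yields a set of adversarial models.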

Reproducing the Experiments

Installing Dependencies

conda env create -f environment.yaml
conda activate quam

Experiments on Synthetic Datasets

All experiments on synthetic datasets are available as notebooks in the notebooks folder.

Run MNIST Benchmarks

chmod +x reproduce_mnist_experiments.sh
./reproduce_mnist_experiments.sh
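Conceptually, the benchmark turns the predictions of the reference model and of the adversarial models into an epistemic uncertainty score via their disagreement. Below is a minimal sketch of the standard mutual-information decomposition; the helper is hypothetical, assumes plain softmax outputs, and the repository's actual estimator may weight models differently.

```python
import torch

def epistemic_uncertainty(ref_probs, adv_probs, eps=1e-12):
    """Jensen gap between the entropy of the mean prediction and the
    mean entropy of the individual predictions (mutual information).

    ref_probs: softmax output of the reference model, shape (C,)
    adv_probs: softmax outputs of M adversarial models, shape (M, C)
    """
    probs = torch.cat([ref_probs.unsqueeze(0), adv_probs], dim=0)  # (M+1, C)
    mean = probs.mean(dim=0)
    entropy_of_mean = -(mean * mean.clamp_min(eps).log()).sum()
    mean_of_entropies = -(probs * probs.clamp_min(eps).log()).sum(-1).mean()
    return (entropy_of_mean - mean_of_entropies).item()
```

The score is zero when all models agree and grows as the adversarial models spread a test point over different classes.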

Run ImageNet Benchmarks

To reproduce the results for the ImageNet benchmarks, check the corresponding section in the technical manual.
