---
license: cc-by-4.0
---

# LoRA-Ensemble: Uncertainty Modelling for Self-attention Networks

Michelle Halbheer, Dominik J. Mühlematter, Alexander Becker, Dominik Narnhofer, Helge Aasen, Konrad Schindler and Mehmet Ozgur Turkoglu - 2024

## Pretrained models

This repository contains the pretrained models corresponding to the code we released on GitHub, where the usage of the models with our pipeline is also described. Note that this repository only contains the models for our final experiments per dataset, not those for intermediate results.
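As an illustration only (the released pipeline on GitHub is the supported entry point), the sketch below downloads a single checkpoint from this repository and inspects it with PyTorch. The `repo_id` is a placeholder assumption, and the file name follows the pattern from the tables below:

```python
# Minimal sketch: fetch one checkpoint from this repository and inspect it.
# Assumptions: `repo_id` is a placeholder, and the checkpoint is a plain
# pickled PyTorch object; see the GitHub pipeline for supported usage.
import torch
from huggingface_hub import hf_hub_download

repo_id = "MichelleHalbheer/LoRA-Ensemble"  # hypothetical repo id
filename = "LoRA_Former_ViT_base_32_16_members_CIFAR100_settings_LoRA1.pt"

checkpoint_path = hf_hub_download(repo_id=repo_id, filename=filename)
state = torch.load(checkpoint_path, map_location="cpu")

# Inspect the top-level structure of the checkpoint.
print(type(state))
if isinstance(state, dict):
    print(list(state.keys())[:10])
```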

## Citation

If you find our work useful or interesting, or use our code, please cite our paper as follows:

```bibtex
@misc{halbheer2024loraensemble,
  title  = {LoRA-Ensemble: Uncertainty Modelling for Self-attention Networks},
  author = {Halbheer, Michelle and M\"uhlematter, Dominik Jan and Becker, Alexander and Narnhofer, Dominik and Aasen, Helge and Schindler, Konrad and Turkoglu, Mehmet Ozgur},
  year   = {2024},
  note   = {arXiv: <arxiv code>}
}
```

## CIFAR-100

The table below shows the evaluation results obtained using different methods. Each method was trained five times with varying random seeds.

| Method (ViT) | Accuracy | ECE | Settings name* | Model weights* |
| --- | --- | --- | --- | --- |
| Single Network | 76.6 ± 0.2 | 0.144 ± 0.001 | `CIFAR100_settings_explicit` | `Deep_Ensemble_ViT_base_32_1_members_CIFAR100_settings_explicit<seed>.pt` |
| Single Network with LoRA | 79.6 ± 0.2 | **0.014** ± 0.003 | `CIFAR100_settings_LoRA` | `LoRA_Former_ViT_base_32_1_members_CIFAR100_settings_LoRA<seed>.pt` |
| MC Dropout | 77.1 ± 0.5 | 0.055 ± 0.002 | `CIFAR100_settings_MCDropout` | `MCDropout_ViT_base_32_16_members_CIFAR100_settings_MCDropout<seed>.pt` |
| Explicit Ensemble | <u>79.8</u> ± 0.2 | 0.098 ± 0.001 | `CIFAR100_settings_explicit` | `Deep_Ensemble_ViT_base_32_16_members_CIFAR100_settings_explicit<seed>.pt` |
| LoRA-Ensemble | **82.5** ± 0.1 | <u>0.035</u> ± 0.001 | `CIFAR100_settings_LoRA` | `LoRA_Former_ViT_base_32_16_members_CIFAR100_settings_LoRA<seed>.pt` |

\* Settings and model names are followed by a number in the range 1–5 indicating the random seed used. **Bold** marks the best value per column; <u>underlined</u> marks the second-best.
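Throughout this README, ECE denotes the Expected Calibration Error, where lower values indicate better-calibrated confidence estimates. For reference, the standard binned definition, which we assume matches the metric reported in these tables, over $M$ confidence bins $B_m$ of $n$ samples in total is:

```latex
% Standard binned Expected Calibration Error (ECE); assumed to match the
% metric reported in the tables above.
\mathrm{ECE} = \sum_{m=1}^{M} \frac{\lvert B_m \rvert}{n}
  \,\bigl\lvert \operatorname{acc}(B_m) - \operatorname{conf}(B_m) \bigr\rvert
```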

## HAM10000

The table below shows the evaluation results obtained using different methods. Each method was trained five times with varying random seeds.

| Method (ViT) | Accuracy | ECE | Settings name* | Model weights* |
| --- | --- | --- | --- | --- |
| Single Network | 84.3 ± 0.5 | 0.136 ± 0.006 | `HAM10000_settings_explicit` | `Deep_Ensemble_ViT_base_32_1_members_HAM10000_settings_explicit<seed>.pt` |
| Single Network with LoRA | 83.2 ± 0.7 | 0.085 ± 0.004 | `HAM10000_settings_LoRA` | `LoRA_Former_ViT_base_32_1_members_HAM10000_settings_LoRA<seed>.pt` |
| MC Dropout | 83.7 ± 0.4 | <u>0.099</u> ± 0.007 | `HAM10000_settings_MCDropout` | `MCDropout_ViT_base_32_16_members_HAM10000_settings_MCDropout<seed>.pt` |
| Explicit Ensemble | <u>85.7</u> ± 0.3 | 0.106 ± 0.002 | `HAM10000_settings_explicit` | `Deep_Ensemble_ViT_base_32_16_members_HAM10000_settings_explicit<seed>.pt` |
| LoRA-Ensemble | **88.0** ± 0.2 | **0.037** ± 0.002 | `HAM10000_settings_LoRA` | `LoRA_Former_ViT_base_32_16_members_HAM10000_settings_LoRA<seed>.pt` |

\* Settings and model names are followed by a number in the range 1–5 indicating the random seed used. **Bold** marks the best value per column; <u>underlined</u> marks the second-best.
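Since every configuration was trained with seeds 1–5, a small loop can enumerate all five per-seed checkpoints of one configuration, for example to compare runs across seeds. A minimal sketch, again with a placeholder `repo_id` and the file-name pattern taken from the table above:

```python
# Sketch: enumerate the five per-seed checkpoints of one configuration.
# The file-name pattern comes from the tables above; repo_id is a placeholder.
from huggingface_hub import hf_hub_download

repo_id = "MichelleHalbheer/LoRA-Ensemble"  # hypothetical repo id
pattern = "LoRA_Former_ViT_base_32_16_members_HAM10000_settings_LoRA{seed}.pt"

paths = [
    hf_hub_download(repo_id=repo_id, filename=pattern.format(seed=seed))
    for seed in range(1, 6)  # seeds 1-5, one per training run
]
print(paths)
```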