RedTachyon committed on
Commit
363b627
1 Parent(s): 88feacb

Upload folder using huggingface_hub

Browse files
EcuwtinFs9/18_image_0.png ADDED

Git LFS Details

  • SHA256: f1ec732b2272ff7e6031f04309111dfb6f6c94e73952fa19c41c99f7980fab1a
  • Pointer size: 129 Bytes
  • Size of remote file: 9.53 kB
EcuwtinFs9/1_image_0.png ADDED

Git LFS Details

  • SHA256: 2eab08cddb11e58e77dd633966eb75dac9b5d6c400c4016efcbe4e36210856fd
  • Pointer size: 130 Bytes
  • Size of remote file: 22.7 kB
EcuwtinFs9/20_image_0.png ADDED

Git LFS Details

  • SHA256: 97bdb1992e572da7df90e029fce7d4cda7343dc278b81b863d35c184ab0c2b3c
  • Pointer size: 131 Bytes
  • Size of remote file: 129 kB
EcuwtinFs9/21_image_0.png ADDED

Git LFS Details

  • SHA256: 0259e5be41b430ead47b8bc03ef4aa32ed1aca20917dfd1713500c18fd409ab9
  • Pointer size: 131 Bytes
  • Size of remote file: 133 kB
EcuwtinFs9/22_image_0.png ADDED

Git LFS Details

  • SHA256: d0c3cc574ad4dcf8b67595d26cbc3d077511cee495be1e2b16b9dd256401b925
  • Pointer size: 131 Bytes
  • Size of remote file: 142 kB
EcuwtinFs9/23_image_0.png ADDED

Git LFS Details

  • SHA256: 0969f959dce59da00bcac3975bd7596b03ee01b1e4f30a5b2d06b0a0c68560a7
  • Pointer size: 131 Bytes
  • Size of remote file: 132 kB
EcuwtinFs9/24_image_0.png ADDED

Git LFS Details

  • SHA256: f9d250a04d281a05e0833ada14e3406b6a78db5556c9b49dca9f5ac5ba98bac8
  • Pointer size: 130 Bytes
  • Size of remote file: 80.2 kB
EcuwtinFs9/24_image_1.png ADDED

Git LFS Details

  • SHA256: be528ea32ceb6a03de33c877e6bcfbc39edc202310b583a7d54b45205c894ad0
  • Pointer size: 131 Bytes
  • Size of remote file: 115 kB
EcuwtinFs9/25_image_0.png ADDED

Git LFS Details

  • SHA256: ff53cc00717a60301e1812e2c10b113dc1e68f754c42124c2e863354cb569178
  • Pointer size: 131 Bytes
  • Size of remote file: 124 kB
EcuwtinFs9/26_image_0.png ADDED

Git LFS Details

  • SHA256: 72c5d357a96c85b8cd77ea3239054cdb8f3c1b35e8543415dca3385e2fc059ce
  • Pointer size: 131 Bytes
  • Size of remote file: 211 kB
EcuwtinFs9/28_image_0.png ADDED

Git LFS Details

  • SHA256: a84c93ce93be17b8cb7161fb3ad8afce636e2ec1cc6f45baa5e566b0ebdf30b5
  • Pointer size: 130 Bytes
  • Size of remote file: 36.9 kB
EcuwtinFs9/28_image_1.png ADDED

Git LFS Details

  • SHA256: 1bd3d1cd468385b769d6010701c34ff08454b091cb5c2f94f92f57dabe12fe16
  • Pointer size: 130 Bytes
  • Size of remote file: 23.7 kB
EcuwtinFs9/32_image_0.png ADDED

Git LFS Details

  • SHA256: 1541460d826db0ac9bba2570134fbf6e7e2d815274464f7ae8d111fdf3688b6d
  • Pointer size: 130 Bytes
  • Size of remote file: 69 kB
EcuwtinFs9/32_image_1.png ADDED

Git LFS Details

  • SHA256: 8fcea4344e27fe181fffd3706455ed9ba0cd8e29adbaedd358fca7d2ef5eb9c7
  • Pointer size: 130 Bytes
  • Size of remote file: 59.6 kB
EcuwtinFs9/33_image_0.png ADDED

Git LFS Details

  • SHA256: e18ef42adf7639db1e01c977b3a6ad5b63d79e8ebc0265a40815f40af5b4480e
  • Pointer size: 130 Bytes
  • Size of remote file: 63 kB
EcuwtinFs9/3_image_0.png ADDED

Git LFS Details

  • SHA256: 0af71d25dc967f46a23be172a2313a6a801cb6f2cd237223c6e369a3ec79c07f
  • Pointer size: 130 Bytes
  • Size of remote file: 26 kB
EcuwtinFs9/4_image_0.png ADDED

Git LFS Details

  • SHA256: a42bbacd16989984c57cb8347c437169fae3c8633c666c34aaebd131ff4bbb91
  • Pointer size: 130 Bytes
  • Size of remote file: 18.5 kB
EcuwtinFs9/5_image_0.png ADDED

Git LFS Details

  • SHA256: 17720bb77b33305510cfc5f3dc048c8cc46e4cb70b4351445d5cecb644ce5034
  • Pointer size: 130 Bytes
  • Size of remote file: 91 kB
EcuwtinFs9/7_image_0.png ADDED

Git LFS Details

  • SHA256: a81a7c09323f165fe2d6856747f6d3cb6524f1820a86d6cf2b7b996ba6a81234
  • Pointer size: 130 Bytes
  • Size of remote file: 66.4 kB
EcuwtinFs9/8_image_0.png ADDED

Git LFS Details

  • SHA256: b87fc0a5ff2b53464bb1a054d3d0302b36c6d1351d3ed833a7ffdb70eaf0709d
  • Pointer size: 130 Bytes
  • Size of remote file: 19.5 kB
EcuwtinFs9/EcuwtinFs9.md ADDED
# Approximations To The Fisher Information Metric Of Deep Generative Models For Out-Of-Distribution Detection

Anonymous authors. Paper under double-blind review.

## Abstract

Likelihood-based deep generative models such as score-based diffusion models and variational autoencoders are state-of-the-art machine learning models for approximating high-dimensional distributions of data such as images, text, or audio. One of the many downstream tasks they can naturally be applied to is out-of-distribution (OOD) detection. However, seminal work by Nalisnick et al., which we reproduce, showed that deep generative models consistently infer higher log-likelihoods for OOD data than for the data they were trained on, marking an open problem. In this work, we analyse using the gradient of a data point with respect to the parameters of the deep generative model for OOD detection, based on the simple intuition that OOD data should have larger gradient norms than training data. We formalise measuring the size of the gradient as approximating the Fisher information metric. We show that the Fisher information matrix (FIM) has large absolute diagonal values, motivating the use of chi-square distributed, layer-wise gradient norms as features. We combine these features to make a simple, model-agnostic and hyperparameter-free method for OOD detection which estimates the joint density of the layer-wise gradient norms for a given data point. We find that these layer-wise gradient norms are weakly correlated, rendering their combined usage informative, and prove that the layer-wise gradient norms satisfy the principle of (data representation) invariance. Our empirical results indicate that this method outperforms the Typicality test for most deep generative models and image dataset pairings.
## 1 Introduction
Neural networks can be highly confident but incorrect when given inputs different from the distribution of data they were trained on (Szegedy et al., 2014; Nguyen et al., 2015). While domain generalisation (Zhou et al., 2021) and domain adaptation (Garg et al., 2023; Ganin et al., 2016) methods tackle this problem by learning machine learning systems which are robust or can actively adapt to domain shift, there may be scenarios where, for specific data points, the domain shift is too severe to draw reliable inferences. Identifying and possibly filtering out anomalies or *out-of-distribution (OOD)* inputs before deploying the model in the wild is a viable strategy in such cases, especially for safety-critical applications (Ulmer et al., 2020; Stilgoe, 2020; Baur et al., 2021).

Deep generative models such as variational autoencoders (Kingma & Welling, 2014), normalising flows (Papamakarios et al., 2021), autoregressive models (Van den Oord et al., 2016; Salimans et al., 2017) and diffusion models (Sohl-Dickstein et al., 2015b; Ho et al., 2020) are an important family of models in machine learning which allow us to generate high-quality samples from high-dimensional, multi-modal conditional or unconditional distributions in domains such as images, videos, text and speech. Many current state-of-the-art methods are probabilistic: they approximate the data log-likelihood, the likelihood of a data sample given the learned model parameters under the data distribution, or a lower bound thereof. This renders them a natural candidate for the task of OOD detection, as they provide an OOD metric 'out of the box' (Bishop, 1994), namely the approximated data likelihood, which they use as the training objective. However, Nalisnick et al. (2019a) and Choi et al. (2018) showed that many of the above-mentioned classes of probabilistic deep generative models consistently infer *higher* log-likelihoods for data points drawn from OOD datasets (non-training data) than for in-distribution (training data) samples.
![1_image_0.png](1_image_0.png)

Figure 1: *Gradients from certain layers are highly informative for OOD detection.* We select two highly informative layers, with parameters θi and θj, of a Glow (Kingma & Dhariwal, 2018) deep generative model trained on CIFAR-10 and plot the squared L2-norm of the layer-wise gradients, fθj(x) = ∥∇θj l(x)∥²₂, for in-distribution (CIFAR-10) and out-of-distribution (SVHN) samples x. The gradient norms of these two layers suffice to separate in-distribution from out-of-distribution samples.
In Fig. 2 we replicate their results with a Glow model and show that score-based diffusion models are likewise affected by this phenomenon. This result is very surprising given that generative models are trained to maximise the log-likelihood of the training data, and are able to generate high-fidelity, diverse samples from the training distribution. This marks an open problem for deep generative models and renders the direct use of their estimated likelihood for out-of-distribution detection infeasible. It also raises the questions of what deep generative models learn during training and how they generalise.

Related work has tackled this problem from two angles: explaining why the log-likelihood estimates of these models fail to discriminate (Kirichenko et al., 2020; Zhang et al., 2021a; Le Lan & Dinh, 2021; Caterini & Loaiza-Ganem, 2022), and proposing likelihood-based OOD detection methods and adaptations of existing ones which may overcome these shortcomings (Ren et al., 2019; Choi et al., 2018; Hendrycks et al., 2018; Liu et al., 2020; Havtorn et al., 2021; Nalisnick et al., 2019b). In §4 we analyse previous work on gradient-based OOD detection, linking independent discoveries by other authors (Nguyen et al., 2019; Kwon et al., 2020; Choi et al., 2021; Bergamin et al., 2022).
Motivation and Intuition. This paper presents an alternative approach for OOD detection, which we motivate in the following. Consider the example of a (linear) regression model fitted on some (in-distribution) training data. If we now include an outlier in our training data and refit the model, the outlier will have a lot of influence on our estimate of the model's parameters compared to other in-distribution data points. One way to formalise this intuition is the *hat-value*: the derivative dŷ/dy of the model's prediction ŷ with respect to a given (dependent) data point y. It describes the leverage of a single data point, used for fitting the model, on the model's prediction for that data point after fitting.
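To make this concrete, we recall the standard least-squares case (a textbook fact, added here for reference): the predictions are a linear map of the targets, and the hat-values are the diagonal entries of the hat matrix,

$$\hat{\mathbf{y}}=H\mathbf{y},\qquad H=X(X^{T}X)^{-1}X^{T},\qquad h_{ii}=\frac{\partial\hat{y}_{i}}{\partial y_{i}},$$

so a data point with a large hat-value h_ii pulls the fitted model strongly towards itself.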
We extend this intuition about OOD data to deep learning by considering the gradient of the log-likelihood of a deep generative model with respect to its parameters, also known as the *score*. If a neural network has converged to a local (or global) minimum, we expect the gradient of the likelihood with respect to the model's parameters to be flat for training data points. Hence, the score is small in norm, and performing an optimiser step for a training data point would not change the parameters much. If after many epochs of training we were now to present the model an OOD data point which it has not seen before, we expect the gradient, resembling the 'hat-value of a neural network', to be steep: the norm of the score would be large, just as hat-values are large (in absolute value) for OOD data. An optimiser step with an OOD data point would change the neural network's parameters a lot. It is this intuition which motivates us to theoretically analyse the use of the gradient for OOD detection. As a preview of our analyses, in Fig. 1 (and later, more thoroughly, in Fig. 4) we observe precisely that (layer-wise) gradient norms are in general larger for OOD than for in-distribution data, which enables the use of gradients for OOD detection. Code to reproduce our experimental results is publicly available on GitHub [anonymised during submission]¹.

¹ https://github.com/anonymous_authors
Our contributions are as follows: (a) We analyse the use of the gradient of a data point with respect to the parameters of a deep generative model for OOD detection, and formalise this as approximating the Fisher information metric, a natural way of measuring the size of the gradient. (b) We show that the Fisher information matrix (FIM) has large absolute diagonal values, motivating the use of layer-wise gradient norms, which are chi-square distributed, as a possible approximation. Our theoretical results show that layer-wise gradients satisfy the principle of (data representation) invariance (Le Lan & Dinh, 2021), a desirable property for OOD methods. We also find that these layer-wise gradient norms are weakly correlated, making their combined usage more informative. (c) We propose a simple, model-agnostic and hyperparameter-free method which estimates the joint density of layer-wise gradient norms for a given data point. In our experiments, we find that this method outperforms the Typicality test for most deep generative models and image dataset pairings.
## 2 Current Methods For OOD Detection

In this section, we define the OOD detection problem, describe the open problem of using deep generative models for OOD detection and how the input representation may explain it, and show how a gradient-based method can be a compelling approach which is invariant to the input representation.
## 2.1 OOD Detection: Problem Formulation

Given training data x1, . . . , xN drawn from a distribution p over the input space X ⊆ ℝ^D, we define the problem of OOD detection as assigning an OOD score S(x) to each x ∈ X such that points with low OOD scores are semantically similar to points sampled from p. OOD detection is *unsupervised* if no class label information is given at training time.
The specific problem we are interested in is leveraging recent advances in deep generative models for unsupervised OOD detection. Here a deep generative model p^θ is trained to approximate the distribution of some training data x1, . . . , xN ∼ p, and S is a statistic derived from p^θ (such as the model likelihood (Nalisnick et al., 2019b), a latent variable hierarchy (Schirrmeister et al., 2020; Havtorn et al., 2021), or combinations thereof (Morningstar et al., 2021)).
In order to evaluate an OOD detection method, one is required to select semantically dissimilar surrogate out-distributions (e.g. a different dataset) to test against. Previous work has sought to define OOD detection as a generic test against data sampled from any differing distribution (Hendrycks & Gimpel, 2017). Our additional requirement that the out-distribution is semantically dissimilar is motivated by recent theoretical work by Zhang et al. (2021a) showing that a single-sample test against all out-distributions is impossible.
## 2.2 Likelihood-Based Methodology For Unsupervised OOD Detection

Likelihood thresholding. Bishop (1994) proposed using the learned model's negative log-likelihood as an OOD score, S(x) = − log p^θ(x). In their seminal paper, Nalisnick et al. (2019a) empirically demonstrated that this approach fails for a wide variety of deep generative models. In particular, they showed that certain image datasets such as SVHN are assigned systematically higher likelihoods than other image datasets such as CIFAR-10, independent of the training distribution. We replicate this result (for a Glow model, a type of normalising flow, and, for the first time, a denoising diffusion model) in Figure 2. In their follow-up work, Nalisnick et al. argue that, in the example of a standard Gaussian of large dimension D, samples close to the origin should be classified as OOD, as the Gaussian annulus result (Blum et al., 2020) demonstrates that the vast majority of samples from it have a distance of ≈ √D from the origin. Generalising this to generative models, they argue that samples with likelihoods much higher than the likelihoods of in-distribution samples must be semantically atypical (Nalisnick et al., 2019b). They use this to motivate OOD scoring based on the likelihood being either too high or too low, defining the typicality (Cover & Thomas, 1991) score as S(x) = | log p^θ(x) − Ĥ |, where Ĥ is the average log-likelihood on some held-out training data.
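For illustration, a minimal sketch (ours, not code from the paper) of this typicality score; the log-likelihood values are assumed to come from any likelihood-based model:

```python
import numpy as np

def fit_typicality(log_liks_heldout):
    """Estimate the entropy term H-hat as the average held-out log-likelihood."""
    return float(np.mean(log_liks_heldout))

def typicality_score(log_lik, h_hat):
    """OOD score S(x) = |log p(x) - H_hat|; large for both too-high and too-low likelihoods."""
    return abs(log_lik - h_hat)
```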
![3_image_0.png](3_image_0.png)

Figure 2: *Counter-intuitive properties of likelihood-based generative models.* Histogram of the negative log-likelihoods inferred by a diffusion model (Ho et al., 2020) [left] and a Glow model (Kingma & Dhariwal, 2018) [right], trained on one of four image datasets (corresponding to the four subplots) and evaluated on the test sets of all four datasets, respectively. For diffusion models we use the negative log-likelihood from one step of the diffusion process, p^θ(x0|x1). For both models we scale the log-likelihoods by the dimensionality of the data, in this case 3 × 32 × 32. This figure replicates the results in the seminal paper by Nalisnick et al. (2019a); our results for diffusion models are novel. We find that the training dataset has a counter-intuitively small impact on the ordering of the datasets as ranked by log-likelihood.
Likelihood ratios. The likelihood assigned by deep generative models has been shown to correlate strongly with complexity metrics such as the compression ratio achieved by simple image compression algorithms (Serrà et al., 2020), and with the likelihoods of other generative models trained on highly diverse image distributions (Schirrmeister et al., 2020), with the highest likelihoods being assigned to constant images. This is somewhat expected, as for discrete data the negative log-likelihood is directly proportional to the number of bits needed to encode the data under arithmetic coding. To add to these findings, in Appendix A.3 we use a very simple complexity metric TV, the total variation obtained by considering the image as a vector in [0, 1]^784, to show that the whole of MNIST is contained in a set of bounded complexity with volume (Lebesgue measure) 10^−116. Thus a model need only assign a very low prior probability mass to this set for high likelihoods to be achieved, demonstrating the important connection between volume, complexity and model likelihoods, which we discuss in §2.3.
73
+
74
+ Ren et al. (2019) argue in favour of using likelihood ratio tests in order to factor out the influence of the
75
+ "background likelihood", the model's bias towards assigning high likelihoods to images with low complexity. In practice, this requires modelling the background likelihood via corruption of training data Ren et al. (2019),
76
+ out-of-the-box and neural compressors Serrà et al. (2020); Zhang et al. (2021b) or the levels of a model's latent variable heirarchy Schirrmeister et al. (2020); Havtorn et al. (2021), leading to restrictions for the data modalities or models to which the method can be applied to. In general there is a limited number of use cases whereby one can pre-specify the OOD distribution p OOD they are interested in well enough that they can evaluate the likelihood of the data under this OOD distribution log p OOD(x) (which is necessary to evaluate the likelihood), without access to samples from this distribution at which point classification becomes a better-studied option.
77
+
## 2.3 Representation Dependence Of The Likelihood

Le Lan & Dinh (2021) emphasise that the definition of likelihood requires choosing a method of assigning volumes to the input space X. Specifically, datapoints could be represented as belonging to some other input space T, linked via a smooth invertible coordinate transformation T: X → T. The model probability density for a given datapoint x ∈ X, which we denote p^θ_X(x), will thus differ from the probability density p^θ_T(t) of the corresponding point t = T(x) by a factor of the Jacobian determinant of T (Le Lan & Dinh, 2021):
$$p_{T}^{\theta}(t)=p_{X}^{\theta}(\mathbf{x})\,\left|\frac{\partial T}{\partial\mathbf{x}}\right|^{-1}.\tag{1}$$

![4_image_0.png](4_image_0.png)

Figure 3: *The log-likelihood heavily depends on the data representation* (Le Lan & Dinh, 2021). Here we plot the first two samples of the CIFAR-10 dataset and the difference in bits per dimension (BPD) induced by changing from an RGB to an HSV colour model:

$$\Delta_{BPD}^{RGB\to HSV}=\frac{\log_{2}p_{RGB}(\mathbf{x})-\log_{2}p_{HSV}(\mathbf{x})}{3\times32\times32}.$$

In Appendix B.1 we provide experimental details, and in Fig. 8 we replicate this for the first 20 samples, where we observe ∆BPD values ranging from 0.18 to 1.76.
The volume element |∂T/∂x|^−1 describes the change of volume local to x as it is passed through T. This term can grow or shrink exponentially with the dimensionality of the problem, making its effect counter-intuitively large. As an empirical example in the case of image distributions, in Fig. 3 and Appendix B.1, Fig. 8, we consider a change of colour model T^{RGB→HSV} from a Red-Green-Blue (RGB) to a Hue-Saturation-Value (HSV) representation. We compute the induced change in bits per dimension as a scaled log-value of the volume element,

$$\Delta_{BPD}^{RGB\to HSV}=\frac{1}{3\times32\times32}\,\log\left|\frac{\partial T^{RGB\to HSV}}{\partial\mathbf{x}}\right|,$$

and report values for 20 non-cherry-picked CIFAR-10 images ranging from 0.18 to 1.76. For comparison, the average BPD reported in the seminal paper by Nalisnick et al. (2019a) was 3.46 on CIFAR-10, compared to 2.39 on SVHN when evaluating with the same model. Hence, if we use the likelihood for OOD detection, whether we classify a sample as OOD or not may flip for some samples merely by changing how the data is represented.
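To illustrate Eq. (1) concretely, here is a minimal sketch (our illustration, not from the paper) for an elementwise change of representation; the logit map stands in for the RGB→HSV transform of Fig. 3:

```python
import math
import torch

def log_det_jacobian_logit(x):
    # For the elementwise map T(x) = logit(x), T'(x) = 1 / (x (1 - x)),
    # so log|det J| = sum_i [-log(x_i) - log(1 - x_i)].
    return (-torch.log(x) - torch.log1p(-x)).sum()

x = torch.rand(3, 32, 32).clamp(1e-3, 1 - 1e-3)   # an "image" with pixel values in (0, 1)
# Per-dimension BPD shift induced by re-representing x in logit space, as in Eq. (1):
delta_bpd = log_det_jacobian_logit(x).item() / (3 * 32 * 32) / math.log(2)
print(delta_bpd)
```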
Motivated by the strong impact of the volume element, Le Lan & Dinh (2021) propose a principle of (representation) invariance: given a perfect model p^θ* of the data distribution, the outcome of an unsupervised OOD detection method should not depend on how we represent the input space X. In theory, likelihood ratios are representation-invariant (Le Lan & Dinh, 2021); in practice, however, the method used to generate the background distribution often re-introduces dependence on the representation. For example, Ren et al. (2019) propose to generate the background distribution by re-sampling randomly chosen pixels from independent uniform distributions, re-introducing the notion of volume.
## 2.4 Invariance Of The Gradient Under Invertible Transformations

To achieve a representation-invariant OOD score (Le Lan & Dinh, 2021), we are thus motivated to quotient out the effect of the volume element in Eq. (1). We now present our first theoretical contribution, which shows that methods based on the gradient of the log-likelihood do precisely this.
Proposition 1. Let p^θ_X(x) and p^θ_T(t) be two probability density functions corresponding to the same model distribution p^θ represented on two different measure spaces X and T. Suppose these representations encode the same information, i.e. there exists a smooth, invertible reparameterization T: X → T such that for x ∈ X and t ∈ T representing the same point we have T(x) = t. Then the gradient vector ∇θ(log p^θ) is invariant to the choice of representation, and in particular, ∇θ(log p^θ_T)(t) = ∇θ(log p^θ_X)(x).

Proof. See Appendix A.1. We prove analogous results for variational lower bounds (e.g. the ELBO of a VAE) in Appendix A.2.
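The essence of the argument (a condensed restatement, consistent with Eq. (1)) is that the volume element does not depend on the model parameters:

$$\nabla_{\theta}\log p_{T}^{\theta}(t)=\nabla_{\theta}\left[\log p_{X}^{\theta}(\mathbf{x})-\log\left|\frac{\partial T}{\partial\mathbf{x}}\right|\right]=\nabla_{\theta}\log p_{X}^{\theta}(\mathbf{x}),$$

since the Jacobian term vanishes under ∇θ.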
Remark 1. Training a generative model p^θ0 with initialisation parameters θ0 and the log-likelihood as the loss via gradient descent produces training trajectories θ0, θ1, . . . , θN which are representation-invariant.
![5_image_0.png](5_image_0.png)

Figure 4: *Layer-wise gradients of the log-likelihood (the score) are highly informative for OOD detection.* Their size differs by orders of magnitude between layers, and they are not strictly correlated, rendering layer-wise gradients (in contrast to the full gradient) discriminatory features for OOD detection. In each row, we randomly select two layers with parameters θi, θj from a Glow (Kingma & Dhariwal, 2018) model [top] or a diffusion model (Ho et al., 2020) [bottom], which have 1353 and 276 layers, respectively. The models are trained on CelebA, a dataset that has proved challenging for OOD detection in previous work (Nalisnick et al., 2019b). We then evaluate each model on batches x1:B (B = 5) drawn from the in-distribution and OOD test datasets and compute the squared layer-wise L2-norm of the gradients of the log-likelihood with respect to the parameters of the layer, i.e. fθj(x1:B) = ∥∇θj (l(x1) + · · · + l(xB))∥²₂. [Left and middle] show the two layer-wise gradients separately; [right] shows their interaction in a scatter plot. In Appendix B, Figures 9-11, we provide our complete results, showing more layers from three likelihood-based generative models, each trained and evaluated on five datasets.
The interpretation of the above results is subtle. We would like to caution the reader that they do not mean that inductive biases are discarded when the gradient is computed, as inductive biases pertaining to distances between data points are frequently encoded in the parameter space. Further, Remark 1 may explain why the likelihood can still be used to train deep generative models which generate convincing samples when using a gradient-based optimisation algorithm, even though the likelihood value itself appears uninformative for detecting whether data is in-distribution.
## 3 Methodology

In this section, we develop a mathematically principled method for gradient-based OOD detection.
## 3.1 Layer-Wise Gradients Are Highly Informative And Differ In Size By Orders Of Magnitude

We are now interested in formulating a method which uses the intuitively plausible (see §1) and data representation-invariant (see §2.4) score ∇θl(x) = ∇θ{log p^θ}(x) for OOD detection. A naïve approach would be to measure the size of the score vector by computing the L2-norm ∥∇θl(x)∥²₂ of the gradient (Nguyen et al., 2019). We can view this L2-norm as the directional derivative of the log-likelihood log p^θ in the direction of its own gradient ∇θl(x), which can be intuited as a measure of how much the model can learn about the given datapoint from one small gradient update.
In the following, we analyse this idea and demonstrate its limitations: we empirically find that the size of the norm of the score vector is dominated by specific neural network layers, which the overall gradient norm cannot capture. In Fig. 4, we train deep generative models (here: Glow (Kingma & Dhariwal, 2018) and diffusion models (Ho et al., 2020)) on a training dataset (here: CelebA). We then draw a batch of items from different evaluation datasets and compute the squared *layer-wise* L2-norm of the gradients of the log-likelihood of the deep generative model with respect to the parameters θj of the corresponding layer, i.e.

$$f_{\theta_{j}}(\mathbf{x}_{1:B})=\left\|\nabla_{\theta_{j}}\left(\sum_{b=1}^{B}l(\mathbf{x}_{b})\right)\right\|_{2}^{2}.$$

The histograms in the left two columns plot fθj(x1:B) for each layer separately; the plots in the rightmost column show their interaction in a scatter plot.
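A minimal PyTorch sketch of this feature computation (our illustration; treating each named parameter tensor as one 'layer' and the existence of a `log_likelihood_fn` for the model are assumptions):

```python
import torch

def layer_wise_gradient_norms(model, log_likelihood_fn, x_batch):
    """Return {layer_name: ||grad_{theta_j} sum_b l(x_b)||_2^2} for all parameter tensors."""
    model.zero_grad()
    log_lik = log_likelihood_fn(model, x_batch)  # assumed shape (B,): one value per sample
    log_lik.sum().backward()                     # gradient of the summed log-likelihood
    return {
        name: p.grad.pow(2).sum().item()
        for name, p in model.named_parameters()
        if p.grad is not None
    }
```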
Two points are worth noting. First, we observe that for a given neural network layer (and different batches) the gradients are of a similar size, but *across* layers the scale of the layer-wise gradient norms differs by orders of magnitude. In particular, taking the norm over the entire score vector would let layers of much larger magnitude overshadow the signal of layers with smaller norms. Second, the layer-wise gradient norms do not strongly correlate for randomly selected layers. In particular, one may find two layers with corresponding features fθj(x1:B) and fθk(x1:B) which allow us to separate training (in-distribution) from evaluation (OOD) data points with a line in this feature space. As an example, fixing fθ1(x1:B) ≈ −7.5, large negative values of fθ2(x1:B) are in-distribution, and as they become more positive they correspond, with very high probability, to the out-of-distribution datasets CIFAR-10 and ImageNet32, respectively. This renders the layer-wise information superior to the overall gradient norm as a source of discriminative features for OOD detection. We present our complete results in Appendix B, Figs. 9-11, showing further layers (histograms), also for other deep generative models (VAEs (Kingma & Welling, 2014)) and training datasets (SVHN, CelebA, GTSRB, CIFAR-10 & ImageNet32).
## 3.2 The Fisher Information Metric: A Principled Way Of Measuring The Size Of The Gradient

Having identified the limitations of using the L2-norm of the gradient, a perhaps mathematically more natural way to measure the score vector's size is to use the norm induced by the *Fisher information metric*, ∥∇θl(x)∥_FIM (Radhakrishna Rao, 1948), defined as

$$\left\|\nabla_{\theta}l(\mathbf{x})\right\|_{FIM}^{2}=\nabla_{\theta}l(\mathbf{x})^{T}F_{\theta}^{-1}\nabla_{\theta}l(\mathbf{x}),\qquad F_{\theta}=E_{\mathbf{y}\sim p^{\theta}}\left(\nabla_{\theta}l(\mathbf{y})\nabla_{\theta}l(\mathbf{y})^{T}\right)\tag{2}$$
where Fθ is called the *Fisher information matrix (FIM)*. Intuitively, the FIM re-scales the gradients to give more equal weighting to the parameters which typically have smaller gradients; thus the Fisher information metric accounts for, and is independent of, how the parameters are scaled. In theory, this prevents a dependence on representation in the gradient space.

The value ∥∇θl(x)∥²_FIM is called the score statistic, which Rao (1948) showed to follow a χ² distribution with |θ| degrees of freedom, assuming the model parameters θ are maximum-likelihood estimates. Thus the score statistic can be used in a statistical test known as the *score test* (Radhakrishna Rao, 1948), as done by Choi et al. (2021) and Bergamin et al. (2022). Deep generative models with a likelihood-based objective perform a form of approximate maximum-likelihood estimation.
## 3.3 Approximating The Fisher Information Matrix

In practice, the full FIM has P × P entries, where P = |θ| is the number of parameters of the deep generative model. This means it is too large to store in memory, and would furthermore be too expensive to compute and invert. For example, our Glow implementation has P ≈ 44 million parameters, so the FIM would require ≈ 7,700 terabytes to store in a float32 representation. To develop a computable method, we therefore need to find a way to approximate the FIM. What would be a good choice for this approximation? This problem is non-trivial due to its dimensionality. We start answering this question by computing the FIM in Eq. (2) restricted to a subset of parameters in two layers θ1, θ2 of a Glow (Kingma & Dhariwal, 2018) model trained on CelebA, using the Monte-Carlo (MC) approximation²
![7_image_0.png](7_image_0.png)

Figure 5: *The layer-wise FIM has large absolute diagonal values.* We randomly select two layers θi and θj from a Glow model trained on CelebA, and randomly select max(50, |θj|) weights from each layer. We then compute slices of the FIM using the method described in Equation (3) and plot the results, with dark blue colours at coordinates (α, β) corresponding to larger values of the corresponding element of the FIM. In order to maintain the visual fidelity of the plot when weights between layers vary by orders of magnitude, we normalise row α by a factor of √Fαα, where Fαα denotes the element of the FIM at coordinates (α, α), and likewise for the columns; this could equivalently be formulated as re-scaling the model parameters by this factor. The same plots for diffusion models and for the raw values Fαβ (without row- and column-wise normalisation) are presented in Appendix B.3, Figures 14 & 15.
$$F_{\theta_{j}}=E_{\mathbf{y}\sim p^{\theta}}\left(\nabla_{\theta_{j}}l(\mathbf{y})\nabla_{\theta_{j}}l(\mathbf{y})^{T}\right)\approx\frac{1}{N}\sum_{i=1}^{N}\nabla_{\theta_{j}}l(\mathbf{y}^{(i)})\,\nabla_{\theta_{j}}l(\mathbf{y}^{(i)})^{T},\qquad\mathbf{y}^{(i)}\sim p^{\theta},\tag{3}$$
where ∇θj denotes the gradient with respect to the parameters θj in layer j, and the y^(i) are samples drawn from the generative model p^θ. This computation is infeasible for larger layers of the network, which may be highly informative for OOD detection, demonstrating the need for a more principled approximation technique. In Fig. 5 we illustrate the resulting (restricted) FIM estimate for two layers with N = 1024. Further layers of this and other models, also trained on other datasets, are presented in Appendix B.3, Figures 5-15.
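A sketch of the MC estimate in Eq. (3) for a single small layer (our illustration; `sample_fn` and `log_likelihood_fn` are assumed helpers for drawing y ~ p^θ and evaluating l(y)):

```python
import torch

def mc_fim_layer(model, layer_param, log_likelihood_fn, sample_fn, n_samples=1024):
    """Monte-Carlo estimate of the FIM restricted to one parameter tensor (Eq. 3).

    Only feasible for small layers: the estimate is a |theta_j| x |theta_j| matrix.
    """
    d = layer_param.numel()
    fim = torch.zeros(d, d)
    for _ in range(n_samples):
        model.zero_grad()
        y = sample_fn(model)                    # y ~ p_theta
        log_likelihood_fn(model, y).backward()  # populates layer_param.grad
        g = layer_param.grad.reshape(-1)
        fim += torch.outer(g, g) / n_samples
    return fim
```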
We observe an interesting pattern of *diagonal dominance*: the diagonal elements are significantly larger in absolute value, on average approximately five times the size of the off-diagonal elements. Hence, a seemingly 'crude' yet, as it turns out, highly efficient approximation of the layer-wise FIM is a multiple of the identity matrix, which corresponds to computing the *layer-wise* L2-norm of the gradient. This reduces the cost from inverting an arbitrary P × P matrix with potentially large P to approximating one value for each of the layers θ1, θ2, . . . , θJ of the model.
## 3.4 A Method For Exploiting Layer-Wise Gradients

We are now interested in operationalising our observations so far into an algorithm that can be used in practice.

² Note that our choice of an MC approximation is just for the sake of being able to compute Fθ; we do not make any further (and more principled) approximations or assumptions here.
![8_image_0.png](8_image_0.png)

Figure 6: *The L2-norms of layer-wise gradients have little correlation.* We select layers with parameters θi, θj and measure the correlation of the logarithmic gradient L2-norms log fθi(x) and log fθj(x). Binning these correlations by the distance |i − j| between the layers and averaging across correlations at each distance gives the above plot. We note that there is a strong correlation in L2-norm between adjacent layers, but that this correlation quickly decays, for both in-distribution and out-of-distribution data. We hypothesise that this is what enables our approximation of the FIM, which assumes independence across layers, to provide good performance.
In addition to the diagonal dominance phenomenon enabling a layer-wise approximation via a diagonal matrix, recall that layer-wise gradients contain more information than the overall gradient norm, as the scale of the gradient norms differs by orders of magnitude. We are therefore motivated to consider each layer θ1, θ2, . . . , θJ of our model separately and to combine the results into an OOD score in a second step. Specifically, if we select a layer θj we can consider a restricted model in which only the parameters in this layer are variable and the other layers are frozen. We can approximate the score statistic (2) on this restricted model, whose parameters are more homogeneous in nature. In practice, we take the score vector for a layer, ∇θj l(x), and attempt to approximate ∥∇θj l(x)∥²_FIM, which should follow a χ² distribution with |θj| degrees of freedom for in-distribution data. As discussed in §3.3, we approximate the FIM restricted to this layer as a multiple of the identity. For a batch of B (possibly equal to 1) data points x1:B we define features fθj, which via our identity-matrix approximation should be proportional to ∥∇θj l(x1:B)∥²_FIM, by

$$f_{\theta_{j}}(\mathbf{x}_{1:B})=\left\|\nabla_{\theta_{j}}\left(\sum_{b=1}^{B}l(\mathbf{x}_{b})\right)\right\|_{2}^{2}.\tag{4}$$
Given that these layer-wise L2-norms fθj should be proportional to a χ²-distributed variable with a large number of degrees of freedom, we expect log fθj to be approximately normally distributed (Bartlett & Kendall, 1946). In Fig. 4 (further results in Appendix B) we observe a good fit of log fθj to a normal distribution, empirically validating that this holds in spite of our approximations.
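This log-normal behaviour is easy to check numerically (a quick illustration of the Bartlett & Kendall (1946) result, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 10_000                                   # degrees of freedom, standing in for |theta_j|
log_samples = np.log(rng.chisquare(k, size=100_000))
# For large k, the log of a chi-square(k) variable is close to normal,
# with mean ~ log(k) and standard deviation ~ sqrt(2 / k).
print(log_samples.mean(), log_samples.std(), np.log(k), np.sqrt(2 / k))
```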
This gives rise to a natural method of combining the layer-wise L2-norms: we simply fit normal distributions to each log-feature log fθj independently, and then use the joint density as an "in-distribution" score. Algorithms 1 to 3 summarise our proposed method. As with other unsupervised OOD detection methods (Nalisnick et al., 2019b), we assume the existence of a small fit set of data, held out during training, on which to accurately fit each feature fj. Note that in practice our method is very straightforward to implement, requiring only a few lines of PyTorch code. Many other methods could be constructed from our theoretical and empirical insights, and we discuss potential future work in §6.
241
+
242
+ In Appendix B.5 we observe a mild performance improvement, uniformly across datasets, with the joint density approach in comparison to using Fisher's method Fisher (1938) when combining these statistics using z-scores. We hypothesise that this could be due to the density being more robust to correlation between adjacent layers as noted in Fig. 6. Our presented method does not enjoy the full invariance under rescaling of the model parameters as the true score statistic (see §3.2). However, in Appendix A.4 we show that it does satisfy invariance when rescaling each layer individually, justifying our use of the density in this setting.
243
+
244
+ Our method satisfies the desiderata of the principle of invariance (see §4), is hyperparameter-free, and is applicable to any data modality and model with a differentiable estimate of the log-likelihood.
245
+
246
+ Algorithm 1 Algorithm for computing features Require: Deep generative model M, with parameters θ1, θ2*, . . .* θJ in each of its J layers.
247
+
248
+ function gradient features(x1 *. . .* xB)
249
+ for xb in batch do l(xb) ← M(xb) ▷ Compute the log-likelihood end for vθ ← ∇θ(l(x1) + *· · ·* + l(xB)) ▷ Compute the gradient via backpropagation for j ← 1 *. . . J* in layers do fj ←vθj 2 2▷ Store the layer-wise L
250
+ 2 norms end for end function Algorithm 2 Algorithm for training models Require: train dataset and held-out fit dataset Train a deep generative model M, with parameters θ1, θ2*, . . .* θJ in each of its J layers.
251
+
252
+ for Batch x n 1
253
+ . . . x n B in fit dataset do f n 1
254
+ . . . f n J ← gradient features(x n 1
255
+ . . . x n B)
256
+ end for for j ← 1 *. . . J* in layers do µj ← mean(log f 1 j
257
+ . . . log f N
258
+ j
259
+ )
260
+ σ 2 j ← variance(log f 1 j
261
+ . . . log f N
262
+ j
263
+ ) ▷ Fit Gaussians to logarithmic features end for Algorithm 3 Algorithm for detecting OOD data Given new batch of samples y1 *. . .* yB
264
+ f ← gradient features(y1 *. . .* yB)
265
+ S(y1 *. . .* yB) = − log N (log f; µ; Diag(σ 2)) ▷ Set OOD score to be the Gaussian negative log likelihood
266
+
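The following is a compact PyTorch-style sketch of Algorithms 2 and 3 (our illustration; it reuses the hypothetical `layer_wise_gradient_norms` helper sketched in §3.1, and the small `eps` guard is our addition, not part of the hyperparameter-free method):

```python
import math

def fit_gaussians(feature_dicts):
    """Algorithm 2: fit a Gaussian to each logarithmic feature over the held-out fit set.

    `feature_dicts` is a list of {layer_name: f_j} dictionaries, one per fit batch.
    """
    mu, var = {}, {}
    for name in feature_dicts[0]:
        logs = [math.log(f[name]) for f in feature_dicts]
        m = sum(logs) / len(logs)
        mu[name] = m
        var[name] = sum((v - m) ** 2 for v in logs) / max(len(logs) - 1, 1)
    return mu, var

def ood_score(features, mu, var, eps=1e-12):
    """Algorithm 3: negative Gaussian log-density of the log-features (higher = more OOD)."""
    score = 0.0
    for name, f in features.items():
        v = var[name] + eps   # guards layers with degenerate variance (our addition)
        score += 0.5 * ((math.log(f) - mu[name]) ** 2 / v + math.log(2 * math.pi * v))
    return score
```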
## 3.5 Application To Diffusion Models

A denoising diffusion model (Sohl-Dickstein et al., 2015a; Ho et al., 2020) uses a chain of latent variables {xt}_{t=0}^{T}, obtained by gradually adding noise (represented by the conditional distributions q(xt|xt−1)) to the initial distribution of images x = x0 until the final latent variable xT is approximately Gaussian with mean zero and identity variance. The inverse process is then learned by a model approximating p^θ(xt−1|xt).

Diffusion models have gained popularity due to their ability to produce samples with high visual fidelity and semantic coherence, making them a natural candidate for OOD detection (Graham et al., 2023). Nonetheless, the full variational lower bound on the log-likelihood defined by Ho et al. (2020) is expensive to compute, as it requires running |T| inference steps of the network modelling p^θ(xt−1|xt). For our setup, |T| = 1000 and a full forward/backward pass requires roughly 1 minute of compute per sample. Thus, we choose one component of the variational lower bound, namely the one-step log-likelihood, l(x) = l(x0) = E_{x1∼q(x1|x0)} log p^θ(x0|x1). We refer to Appendix B.4 for computational details, noting that our implementation, which uses one sample from q(x1|x0), requires only one inference pass of p^θ(x0|x1). Despite this value being very different from the intractable full log-likelihood p^θ(x0), in Figure 2 we observe the same open problem and phenomenon that Nalisnick et al. (2019a) reported for the full likelihood estimates of other deep generative models. In Appendix B.4 we perform an ablation study on which one-step component of the variational lower bound is used, finding that for both our method and the typicality method (Nalisnick et al., 2019b), the components computed with less noise added to the image xt (used as input to the model p^θ(xt−1|xt)) are more informative for OOD detection, which can intuitively be understood as the noised image xt itself being more informative in this regard.
## 4 Related Work

In this section we review related work which uses gradients for unsupervised OOD detection.
In work concurrent to ours, Choi et al. (2021) and Bergamin et al. (2022) each present an approximation to Rao's score test (Radhakrishna Rao, 1948). They independently approached the problem from the directions of training on the given sample of OOD data (Xiao et al., 2020) and of applying tests from classical statistics, respectively. These methods use approximations of the FIM from the field of optimisation (Amari, 1998; Tieleman et al., 2012; Kingma & Ba, 2015), whereas we use a simpler approximation tailored to the task of unsupervised OOD detection, and complement it with our empirical observations of the FIM in §3.3. Bergamin et al. (2022) compute a score test across the whole model by approximating the FIM as a diagonal with elements Fαα = (∂α log p^θ(x))² + ϵ for a small hyperparameter ϵ = 10^−8, which is used in optimisation for its damping effect (Martens, 2020) and helps to mitigate numerical instabilities when dividing by Fαα. Our method differs in that it explicitly encodes the layer-homogeneity of the model (whereby parameters in the same layer have similar gradient sizes and perform similar functions in the network) and the predicted chi-square distribution of the score. We also note that layers whose squared gradient values are ≪ 10^−8 (see Appendix B.3, Figures 14 and 15) would have their information nullified without careful tuning of ϵ; this can further be observed in Figures 10 and 11, where there are entire layers which are informative for OOD detection yet have an L2-norm of < 10^−8. Choi et al. (2021) split the problem of OOD detection layer-wise, but use the more complex EKFAC (George et al., 2018) algorithm to account for dependencies between adjacent parameters. After some normalisation and additional processing steps, the authors compute the ROSE metric by taking the maximum feature over some pre-selected subset of layers. Our method differs in that it uses a holistic score influenced by all the model's layers.
Nguyen et al. (2019) are interested in using VAEs to detect anomalous web traffic in a semi-supervised setting, measuring the difference between a test gradient and labelled anomalous gradients. Our method differs in that it does not require anomalous examples. Kwon et al. (2020) compute a cosine similarity between the gradients in the decoder of a VAE and the average gradients observed during training as their OOD metric of choice.

In this work, we advocate for using the *size* of the gradient vector rather than its angle as done in Kwon et al. (2020): our intuition is that, for a well-trained model evaluated on in-distribution data, we are close to a local minimum where the gradient is flat and the variance of the angle of the gradient vector is high. In particular, when averaging over samples from the model, we have E_{x∼p^θ} ∇θ(log p^θ)(x) = 0, as a distribution minimises its own cross-entropy. In their supplementary material, Nalisnick et al. (2019b) note that the Maximum Mean Discrepancy and Kernelized Stein Discrepancy tests they use to benchmark their typicality test only achieve good performance when using the inner product of the parameter gradients, k(xi, xj) = ∇θl(xi)^T ∇θl(xj), leading them to use the inner product with respect to the data, k(xi, xj) = ∇xl(xi)^T ∇xl(xj), in their experiments. Our method differs from using the inner product with respect to the parameters in that it allows information to be used from all the layers, rather than a few dominant ones, as we discuss in §3.1.
To the best of our knowledge, no previous work has connected these works, bar Bergamin et al. (2022)'s citation of Choi et al. (2021). The theoretical grounding which we provide in Proposition 1 (proved in §A.1) may explain why multiple other authors have independently found efficacy in unsupervised OOD detection with gradient information.

For completeness, in Appendix C we review previous work on the use of gradients of classifiers for the task of supervised OOD detection.
## 5 Experimental Benchmark

In this section, we benchmark our OOD detection method against the typicality test. We postpone a detailed description of our datasets and models to Appendix D.

We follow the consensus of previous literature (Nalisnick et al., 2019a) and evaluate our method on distribution pairs: training a generative image model on one image distribution and testing against a surrogate out-distribution. We choose five natural image datasets, SVHN, CelebA, GTSRB, CIFAR-10 and ImageNet32, used in previous literature (Serrà et al., 2020), and evaluate on all dataset pairings. To the best of our knowledge this makes our evaluation more extensive than any previously published work in unsupervised OOD detection, a field where rigorous evaluation is especially important, as erroneously high performance can be achieved by selectively reporting, or fine-tuning hyperparameters to, certain out-distributions.
| method | test ↓ train → | SVHN | CelebA | GTSRB | CIFAR-10 | ImageNet32 |
|--------|----------------|------|--------|-------|----------|------------|
| typicality (B = 1) | SVHN | - | 0.8735 | 0.3469 | 0.8599 | **0.8915** |
| | CelebA | **0.9989** | - | 0.6506 | 0.3680 | 0.2857 |
| | GTSRB | 0.9261 | 0.8201 | - | 0.6708 | 0.5548 |
| | CIFAR-10 | **0.9829** | 0.7733 | 0.6423 | - | 0.4147 |
| | ImageNet32 | 0.9952 | 0.9251 | 0.8057 | 0.7249 | - |
| ours (B = 1) | SVHN | - | **0.9880** | **0.9858** | **0.8747** | 0.8010 |
| | CelebA | 0.9823 | - | **0.9262** | **0.5155** | **0.2997** |
| | GTSRB | **0.9537** | **1.0000** | - | **0.7546** | **0.9967** |
| | CIFAR-10 | 0.9658 | **0.9462** | **0.9126** | - | **0.4377** |
| | ImageNet32 | **0.9976** | **0.9876** | **0.9683** | **0.7375** | - |
| typicality (B = 5) | SVHN | - | 0.9899 | 0.6119 | 0.9961 | **0.9983** |
| | CelebA | **1.0000** | - | 0.9786 | 0.4737 | 0.4293 |
| | GTSRB | **0.9997** | 0.8987 | - | 0.6639 | 0.6138 |
| | CIFAR-10 | **1.0000** | 0.9082 | 0.9613 | - | 0.4894 |
| | ImageNet32 | **1.0000** | 0.9974 | 0.9954 | 0.9013 | - |
| ours (B = 5) | SVHN | - | **0.9997** | **1.0000** | **0.9989** | 0.9976 |
| | CelebA | 0.9997 | - | **1.0000** | **0.9525** | **0.8514** |
| | GTSRB | 0.9996 | **0.9999** | - | **0.9596** | **0.9999** |
| | CIFAR-10 | 0.9992 | **0.9970** | **1.0000** | - | **0.6712** |
| | ImageNet32 | **1.0000** | **0.9995** | **1.0000** | **0.9480** | - |

Table 1: Comparison of the AUROC values (higher is better) of our method and the typicality test (Nalisnick et al., 2019b) for batch sizes B = 1, 5. We train Glow (Kingma & Dhariwal, 2018) models on five natural image datasets (columns) and evaluate the ability of each model-method combination to reject the other datasets (rows). Bold indicates the element-wise higher value when comparing the two methods.
In Tables 1 & 2 we compare our method against the typicality test (Nalisnick et al., 2019b) using the Area Under the Receiver Operating Characteristic curve (AUROC) statistic for both single-sample (B = 1) and batched (B = 5) OOD detection. We choose typicality as it is, to the best of our knowledge, the most performant method which is both model-agnostic and hyperparameter-free. The performance of unsupervised OOD detection can vary greatly depending on the model and even on the image-resizing algorithm applied to make the inputs of uniform size (Bergamin et al., 2022). To mitigate this problem, we compare directly to our own implementation of Nalisnick et al. (2019b), using the same models and the same dataset implementations where they exist.

For Glow models (Table 1) our method outperforms typicality on most dataset pairings, whereas for diffusion models (Table 2) neither method dominates, although our method achieves a higher average AUROC. We hypothesise that the comparative advantage our method enjoys for Glow models could be related to the model having more layers (1353 vs. 276), so that more gradient features are available. In Appendix E, Table 7, we note poor performance for both methods when applied to a VAE model with poor sample quality, indicating that how well the model captures the dataset is the main factor driving the performance of the downstream OOD detection method.

Single-sample (B = 1) performance for both methods was lower for models trained on the semantically diverse datasets CIFAR-10 and ImageNet32. We mainly include these datasets as in-distributions for consistency with prior work, and we would like to question the implicit assumption made by these prior works that an unsupervised method trained on these datasets *should* consistently reject images from other natural image datasets. There is no obvious, meaningful semantic boundary distinguishing a natural image from CIFAR-10 or ImageNet32, and thus it is not clear that even a human would outperform a random baseline.

As noted in §4, our method is similar to those presented in the concurrent works of Choi et al. (2021) and Bergamin et al. (2022); we use our method as a representative of this class of methods so that we may use our compute resources to robustly investigate its performance over such a wide range of distribution pairings and models. We make no claim of superior performance over these methods.
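For reference, the AUROC computation in this setting reduces to a few lines (a sketch of the standard evaluation protocol, not code from the paper); `scores_in` and `scores_out` are assumed arrays of OOD scores S(x) for the in- and out-distribution test sets:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(scores_in, scores_out):
    """AUROC of an OOD score, with out-of-distribution samples as the positive class."""
    labels = np.concatenate([np.zeros(len(scores_in)), np.ones(len(scores_out))])
    scores = np.concatenate([scores_in, scores_out])
    return roc_auc_score(labels, scores)
```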
| method | test ↓ train → | SVHN | CelebA | GTSRB | CIFAR-10 | ImageNet32 |
|--------|----------------|------|--------|-------|----------|------------|
| typicality (B = 1) | SVHN | - | 0.9357 | 0.4661 | **0.9007** | **0.8777** |
| | CelebA | **0.9990** | - | 0.3860 | 0.3409 | 0.2837 |
| | GTSRB | **0.9335** | 0.8197 | - | **0.6981** | **0.5624** |
| | CIFAR-10 | **0.9920** | 0.6968 | 0.4855 | - | 0.4142 |
| | ImageNet32 | **0.9986** | 0.8759 | **0.6759** | **0.7443** | - |
| ours (B = 1) | SVHN | - | **0.9903** | **0.8526** | 0.5574 | 0.6214 |
| | CelebA | 0.9551 | - | **0.5466** | **0.5655** | **0.3571** |
| | GTSRB | 0.8691 | **0.9684** | - | 0.5622 | 0.5530 |
| | CIFAR-10 | 0.9535 | **0.9639** | **0.5786** | - | **0.4710** |
| | ImageNet32 | 0.9363 | **0.9818** | 0.6651 | 0.5763 | - |
| typicality (B = 5) | SVHN | - | 0.9978 | 0.7943 | **0.9975** | **0.9961** |
| | CelebA | **1.0000** | - | 0.7642 | 0.3156 | 0.3621 |
| | GTSRB | **0.9998** | 0.8336 | - | 0.6809 | 0.5765 |
| | CIFAR-10 | **1.0000** | 0.7808 | **0.8332** | - | 0.4488 |
| | ImageNet32 | **1.0000** | 0.9866 | **0.9675** | **0.9266** | - |
| ours (B = 5) | SVHN | - | **1.0000** | **0.9970** | 0.8457 | 0.9561 |
| | CelebA | 0.9908 | - | **0.8552** | **0.7734** | **0.4202** |
| | GTSRB | 0.9716 | **0.9997** | - | **0.7325** | **0.9007** |
| | CIFAR-10 | 0.9895 | **0.9992** | 0.8104 | - | **0.5733** |
| | ImageNet32 | 0.9862 | **1.0000** | 0.9309 | 0.8532 | - |

Table 2: *Diffusion models.* Comparison of the AUROC values (larger is better) of our method and the typicality test (Nalisnick et al., 2019b) for batch sizes B = 1, 5. We train diffusion (Ho et al., 2020) models on five natural image datasets (columns) and evaluate the ability of each model-method combination to reject the other datasets (rows). Bold indicates the element-wise higher value when comparing the two methods.
369
+
370
+ We analysed an approximation to the Fisher information metric for OOD detection. Our work has two key limitations: First, while we have provided the most extensive empirical benchmark of deep generative models, OOD and in-distribution datasets, datasets beyond images and for instance large language models should be tested. Second, while we focused on comparing it to the best performing, model-agnostic, hyperparameter-free OOD method, further empirical benchmarking against other methods should be conducted. Future work should investigate other, potentially more computationally expensive methods for approximating the Fisher information metric and its use in OOD detection.
371
+
372
+ ## References
373
+
374
+ Shun-ichi Amari. Natural gradient works efficiently in learning. *Neural Computation*, 10(2):251–276, 1998.
375
+
376
+ M. S. Bartlett and D. G. Kendall. The statistical analysis of variance-heterogeneity and the logarithmic transformation. *Supplement to the Journal of the Royal Statistical Society*, 8(1):128–138, 1946.
377
+
378
+ Christoph Baur, Stefan Denner, Benedikt Wiestler, Nassir Navab, and Shadi Albarqouni. Autoencoders for unsupervised anomaly segmentation in brain mr images: a comparative study. *Medical Image Analysis*, 69:
379
+ 101952, 2021.
380
+
381
+ Sima Behpour, Thang Doan, Xin Li, Wenbin He, Liang Gou, and Liu Ren. Gradorth: A simple yet efficient out-of-distribution detection with orthogonal projection of gradients, arXiv, 2023.
382
+
383
+ Federico Bergamin, Pierre-Alexandre Mattei, Jakob Drachmann Havtorn, Hugo Senetaire, Hugo Schmutz, Lars Maaløe, Soren Hauberg, and Jes Frellsen. Model-agnostic out-of-distribution detection using combined statistical tests. In *International Conference on Artificial Intelligence and Statistics*. PMLR, 2022.
384
+
385
+ Christopher M Bishop. Novelty detection and neural network validation. *IEE Proceedings-Vision, Image and* Signal processing, 141(4):217–222, 1994.
386
+
387
+ Avrim Blum, John Hopcroft, and Ravindran Kannan. *Foundations of Data Science*. Cambridge University Press, 2020.
388
+
389
+ Anthony L. Caterini and Gabriel Loaiza-Ganem. Entropic issues in likelihood-based OOD detection. In Melanie F. Pradier, Aaron Schein, Stephanie Hyland, Francisco J. R. Ruiz, and Jessica Z. Forde (eds.),
390
+ Proceedings on "I (Still) Can't Believe It's Not Better!" at NeurIPS 2021 Workshops. 13 Dec 2022.
391
+
392
+ Hyunsun Choi, Eric Jang, and Alexander A Alemi. Waic, but why? generative ensembles for robust anomaly detection. *arXiv preprint arXiv:1810.01392*, 2018.
393
+
394
+ Jaemoo Choi, Changyeon Yoon, Jeongwoo Bae, and Myungjoo Kang. Robust out-of-distribution detection on deep probabilistic generative models, arXiv, 2021.
395
+
396
+ Thomas M. Cover and Joy A. Thomas. *Asymptotic Equipartition Property*, chapter 3, pp. 57–69. John Wiley
397
+ & Sons, Ltd, 1991. ISBN 9780471748823.
398
+
399
+ Antonio De Maio, Steven M. Kay, and Alfonso Farina. On the invariance, coincidence, and statistical equivalence of the glrt, rao test, and wald test. *IEEE Transactions on Signal Processing*, 58(4):1967–1979, 2010.
400
+
401
+ R. A. Fisher. *Statistical methods for research workers*. Oliver and Boyd, 1938. Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky.
402
+
403
+ Domain-adversarial training of neural networks. *Journal of Machine Learning Research*, 17:1–35, 2016.
404
+
405
+ Saurabh Garg, Nick Erickson, James Sharpnack, Alex Smola, Sivaraman Balakrishnan, and Zachary C Lipton.
406
+
407
+ Rlsbench: Domain adaptation under relaxed label shift. *arXiv preprint arXiv:2302.03020*, 2023.
408
+
409
+ Thomas George, César Laurent, Xavier Bouthillier, Nicolas Ballas, and Pascal Vincent. Fast approximate natural gradient descent in a kronecker factored eigenbasis. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*.
410
+
411
+ 2018.
412
+
413
+ Mark S. Graham, Walter H.L. Pinaya, Petru-Daniel Tudosiu, Parashkev Nachev, Sebastien Ourselin, and Jorge Cardoso. Denoising diffusion models for out-of-distribution detection. In *Proceedings of the IEEE/CVF*
414
+ Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2023.
415
+
416
+ Jakob D. Havtorn, Jes Frellsen, Søren Hauberg, and Lars Maaløe. Hierarchical vaes know what they don't know. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning. 18–24 Jul 2021.
417
+
418
+ Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In *International Conference on Learning Representations*, 2017.
419
+
420
+ Dan Hendrycks, Mantas Mazeika, and Thomas G. Dietterich. Deep anomaly detection with outlier exposure.
421
+
422
+ CoRR, abs/1812.04606, 2018.
423
+
424
+ Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
425
+
426
+ Conor Igoe, Youngseog Chung, Ian Char, and Jeff Schneider. How useful are gradients for ood detection really?, arXiv, 2022.
427
+
428
+ Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
429
+
430
+ Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of The 33rd International Conference on Machine Learning, 2014.
431
+
432
+ Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems. 2018.
433
+
434
+ Polina Kirichenko, Pavel Izmailov, and Andrew G Wilson. Why normalizing flows fail to detect out-ofdistribution data. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in* Neural Information Processing Systems. 2020.
435
+
436
+ Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, and Ghassan AlRegib. Backpropagated gradient representations for anomaly detection. In *Proceedings of the European Conference on Computer Vision*
437
+ (ECCV), 2020.
438
+
439
+ Charline Le Lan and Laurent Dinh. Perfect density models cannot guarantee anomaly detection. *Entropy*, 23
440
+ (12), 2021.
441
+
442
+ Erich Leo Lehmann, Joseph P Romano, and George Casella. *Testing statistical hypotheses*, volume 3. Springer, 1986.
443
+
444
+ Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In *International Conference on Learning Representations*, 2018.
445
+
446
+ Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection.
447
+
448
+ Advances in neural information processing systems, 33:21464–21475, 2020.
449
+
450
+ James Martens. New insights and perspectives on the natural gradient method. *Journal of Machine Learning* Research, 21(146):1–76, 2020.
451
+
452
+ Warren Morningstar, Cusuh Ham, Andrew Gallagher, Balaji Lakshminarayanan, Alex Alemi, and Joshua Dillon. Density of states estimation for out of distribution detection. In Arindam Banerjee and Kenji Fukumizu (eds.), *Proceedings of The 24th International Conference on Artificial Intelligence and Statistics*.
453
+
454
+ 13–15 Apr 2021.
455
+
456
+ Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don't know? In *International Conference on Learning Representations*,
457
+ 2019a.
458
+
459
+ Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, and Balaji Lakshminarayanan. Detecting out-ofdistribution inputs to deep generative models using typicality, arXiv, 2019b.
460
+
461
+ Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2015.
462
+
463
+ Quoc Phong Nguyen, Kar Wai Lim, Dinil Mon Divakaran, Kian Hsiang Low, and Mun Choon Chan. Gee:
464
+ A gradient-based explainable variational autoencoder for network anomaly detection. In *2019 IEEE*
465
+ Conference on Communications and Network Security (CNS). IEEE, 2019.
466
+
467
+ George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. The Journal of Machine Learning Research, 22(1):2617–2680, 2021.
468
+
469
+ C. Radhakrishna Rao. Large sample tests of statistical hypotheses concerning several parameters with applications to problems of estimation. *Mathematical Proceedings of the Cambridge Philosophical Society*,
470
+ 44(1):50–57, 1948.
471
+
472
+ Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark Depristo, Joshua Dillon, and Balaji Lakshminarayanan. Likelihood ratios for out-of-distribution detection. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems. 2019.
473
+
474
+ Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. *arXiv preprint arXiv:1701.05517*, 2017.
475
+
476
+ Robin Schirrmeister, Yuxuan Zhou, Tonio Ball, and Dan Zhang. Understanding anomaly detection with deep invertible networks through hierarchies of distributions and features. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*. 2020.
477
+
478
+ Joan Serrà, David Álvarez, Vicenç Gómez, Olga Slizovskaia, José F. Núñez, and Jordi Luque. Input complexity and out-of-distribution detection with likelihood-based generative models. In *International Conference on* Learning Representations, 2020.
479
+
480
+ Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 07–09 Jul 2015a. PMLR.
481
+
482
+ Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *International conference on machine learning*. PMLR, 2015b.
483
+
484
+ Jack Stilgoe. *Who Killed Elaine Herzberg?*, pp. 1–6. Springer International Publishing, Cham, 2020. ISBN
485
+ 978-3-030-32320-2.
486
+
487
+ Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. January 2014. 2nd International Conference on Learning Representations, ICLR 2014 ; Conference date: 14-04-2014 Through 16-04-2014.
488
+
489
+ Tijmen Tieleman, Geoffrey Hinton, et al. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. *COURSERA: Neural networks for machine learning*, 4(2):26–31, 2012.
490
+
491
+ Dennis Ulmer, Lotta Meijerink, and Giovanni Cinà. Trust issues: Uncertainty estimation does not enable reliable ood detection on medical tabular data. In Emily Alsentzer, Matthew B. A. McDermott, Fabian Falck, Suproteem K. Sarkar, Subhrajit Roy, and Stephanie L. Hyland (eds.), *Proceedings of the Machine* Learning for Health NeurIPS Workshop. 11 Dec 2020.
492
+
493
+ Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelcnn decoders. *Advances in neural information processing systems*, 29, 2016.
494
+
495
+ Zhisheng Xiao, Qing Yan, and Yali Amit. Likelihood regret: An out-of-distribution detection score for variational auto-encoder. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.),
496
+ Advances in Neural Information Processing Systems. 2020.
497
+
498
+ Lily Zhang, Mark Goldstein, and Rajesh Ranganath. Understanding failures in out-of-distribution detection with deep generative models. In *International Conference on Machine Learning*. PMLR, 2021a.
499
+
500
+ Mingtian Zhang, Andi Zhang, and Steven McDonagh. On the out-of-distribution generalization of probabilistic image modelling. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.),
501
+ Advances in Neural Information Processing Systems. 2021b.
502
+
503
+ Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization in vision: A
504
+ survey. *arXiv preprint arXiv:2103.02503*, 2021.
505
+
# Appendix for Approximations to the Fisher Information Metric of Deep Generative Models for Out-of-Distribution Detection

## A Proofs and Additional Theoretical Results

## A.1 Proof of Proposition 1

Proposition 1. Let $p_{\mathcal{X}}^{\theta}(\mathbf{x})$ and $p_{\mathcal{T}}^{\theta}(\mathbf{t})$ be two probability density functions corresponding to the same model distribution $p^{\theta}$ being represented on two different measure spaces $\mathcal{X}$ and $\mathcal{T}$. Suppose these representations encode the same information, i.e. there exists a smooth, invertible reparameterization $T: \mathcal{X} \to \mathcal{T}$ such that for $\mathbf{x} \in \mathcal{X}$ and $\mathbf{t} \in \mathcal{T}$ representing the same point we have $T(\mathbf{x}) = \mathbf{t}$. Then, the gradient vector $\nabla_{\theta}(\log p^{\theta})$ is invariant to the choice of representation, and in particular, $\nabla_{\theta}(\log p_{\mathcal{T}}^{\theta})(\mathbf{t}) = \nabla_{\theta}(\log p_{\mathcal{X}}^{\theta})(\mathbf{x})$.

Proof. Via the change-of-variables formula, we obtain

$$p_{\mathcal{T}}^{\theta}(\mathbf{t})=p_{\mathcal{X}}^{\theta}(\mathbf{x})\,\left|{\frac{\partial T^{-1}}{\partial{\mathbf{x}}}}\right|.$$

Applying the logarithm on both sides provides

$$\log p_{\mathcal{T}}^{\theta}(\mathbf{t})=\log p_{\mathcal{X}}^{\theta}(\mathbf{x})+\log\left\vert{\frac{\partial T^{-1}}{\partial\mathbf{x}}}\right\vert,$$

and since the Jacobian term does not depend on $\theta$, we have $\nabla_{\theta}(\log p_{\mathcal{T}}^{\theta})(\mathbf{t}) = \nabla_{\theta}(\log p_{\mathcal{X}}^{\theta})(\mathbf{x})$ as required.

The smoothness assumption could be relaxed by considering the pull-back measure $\mathbb{P}_{\mathcal{X}}^{\theta} \circ T^{-1} = \mathbb{P}_{\mathcal{T}}^{\theta}$ and the corresponding change-of-variables formula for Radon-Nikodym derivatives; however, we omit this for brevity and relevance. This result also trivially extends to the likelihood proxy we use for diffusion models, $\log p^{\theta}(\mathbf{x}_0|\mathbf{x}_1)$.
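
Proposition 1 is easy to verify numerically. The following is a minimal sketch of our own (not code from the paper), using a one-parameter Gaussian model and the hypothetical reparameterization $t = T(x) = e^x$:

```python
import math
import torch

# Model: p_X = N(theta, 1) on X = R; representation change t = T(x) = exp(x).
theta = torch.tensor(0.3, requires_grad=True)

def log_p_x(x):
    return -0.5 * (x - theta) ** 2 - 0.5 * math.log(2 * math.pi)

def log_p_t(t):
    # Change of variables: log p_T(t) = log p_X(T^{-1}(t)) + log |dT^{-1}/dt|,
    # with T^{-1}(t) = log(t) and |dT^{-1}/dt| = 1/t.
    return log_p_x(torch.log(t)) - torch.log(t)

x = torch.tensor(1.7)
(g_x,) = torch.autograd.grad(log_p_x(x), theta)
(g_t,) = torch.autograd.grad(log_p_t(torch.exp(x)), theta)
assert torch.allclose(g_x, g_t)  # the score is representation-invariant
```

The Jacobian correction term carries no dependence on $\theta$, so the two gradients agree exactly, mirroring the proof above.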

## A.2 Representation-Invariance of Variational Lower-Bound Gradients

Assume the same setup as in A.1, but this time with a variational Bayesian method such as a Variational AutoEncoder (Kingma & Welling, 2014) with latent variable $\mathbf{z}$, decoder probability density $p_{\mathcal{X}}^{\theta}(\mathbf{x}|\mathbf{z})$ and encoder probability density $q^{\phi}(\mathbf{z}|\mathbf{x})$, noting that the decoder probability density is the one which depends on $\mathcal{X}$. The Evidence Lower Bound on the log-likelihood $p_{\mathcal{X}}^{\theta}(\mathbf{x})$ is given by

$$ELBO_{\mathcal{X}}^{\theta,\phi}(\mathbf{x})=\mathbb{E}_{\mathbf{z}\sim q^{\phi}(\mathbf{z}|\mathbf{x})}\left(\log{\frac{p_{\mathcal{X}}^{\theta}(\mathbf{x},\mathbf{z})}{q^{\phi}(\mathbf{z}|\mathbf{x})}}\right).$$

We can then state a similar representation invariance for the ELBO.

Proposition 2. Let $ELBO_{\mathcal{X}}^{\theta,\phi}$ be the ELBO of a VAE, and let $ELBO_{\mathcal{T}}^{\theta,\phi}(\mathbf{t})$ be the ELBO under a change of variables with invertible mapping $T: \mathcal{X} \to \mathcal{T}$, corresponding to two sets $\mathcal{X}$ and $\mathcal{T}$. Then, the gradient $\nabla_{\theta,\phi}(ELBO_{\mathcal{T}}^{\theta,\phi})(\mathbf{t})$ is invariant to $T$.

Proof. Noting that $p_{\mathcal{T}}^{\theta}(\mathbf{t},\mathbf{z})=p_{\mathcal{X}}^{\theta}(\mathbf{x},\mathbf{z})\left|\frac{\partial T^{-1}}{\partial\mathbf{x}}\right|$ while $q^{\phi}(\mathbf{z}|\mathbf{t})=q^{\phi}(\mathbf{z}|\mathbf{x})$,³ we obtain

$$ELBO_{\mathcal{T}}^{\theta,\phi}(\mathbf{t})=ELBO_{\mathcal{X}}^{\theta,\phi}(\mathbf{x})+\log\left|{\frac{\partial T^{-1}}{\partial\mathbf{x}}}\right|,$$

and taking the gradient with respect to $\theta$ and $\phi$ gives the result that the gradient of the ELBO with respect to the VAE's parameters is representation-invariant.

³To those familiar with the Borel–Kolmogorov paradox this condition may seem non-obvious, but we can derive it from the fact that $T$ does not require input from $\mathbf{z}$, and thus

$$q^{\phi}(\mathbf{z}|\mathbf{t})=\frac{q^{\phi}(\mathbf{z},\mathbf{t})}{q^{\phi}(\mathbf{t})}=\frac{q^{\phi}(\mathbf{z},\mathbf{x})\left|\frac{\partial T^{-1}}{\partial\mathbf{x}}\right|}{q^{\phi}(\mathbf{x})\left|\frac{\partial T^{-1}}{\partial\mathbf{x}}\right|}=q^{\phi}(\mathbf{z}|\mathbf{x}).$$

## A.3 Lebesgue Measure of a Set of Bounded Total Variation

Proposition 3. For $\mathbf{x} \in \mathbb{R}^{d}$, define the total variation to be $TV(\mathbf{x}) = |x_1| + \sum_{i=2}^{d}|x_i - x_{i-1}|$. Let $E(\alpha)$ be the set of $d$-length arrays whose total variation is bounded by $\alpha$:

$$E(\alpha)=\{\mathbf{x}\in\mathbb{R}^{d}:TV(\mathbf{x})<\alpha\}.$$

The Lebesgue measure of this set is given by $\mu(E(\alpha)) = \frac{(2\alpha)^{d}}{\Gamma(d+1)}$.

Proof. Consider the volume-preserving transformation $(x_1, x_2, \ldots, x_d) \mapsto (x_1, t_2, \ldots, t_d)$, where $t_i = x_i - x_{i-1}$. We thus see that the volume of $E(\alpha)$ is equivalent to the volume of the $d$-ball in the $\ell^1$-metric, with a standard result:

$$\mu(E(\alpha))=\mu(\{(x_{1},t_{2},\ldots,t_{d}):|x_{1}|+|t_{2}|+\cdots+|t_{d}|<\alpha\})=\frac{(2\alpha)^{d}}{\Gamma(d+1)}.$$

![18_image_0.png](18_image_0.png)

**Application to MNIST** We can naïvely apply this result to MNIST images $\mathbf{y} \in [0,1]^{28\times28}$ by setting $d = 28^2$ and drawing a snake pattern through our images as illustrated in Figure 7, setting $y_{ij} = x_{28(j-1)+(-1)^{j+1}(i-14)+14}$. Computing this numerically for the whole MNIST dataset, we see that $\alpha = 102.9$ is sufficiently large such that the whole MNIST dataset is contained in $E(\alpha)$, which we can compute has an approximate measure of $\mu(E(\alpha)) \approx 10^{-116.76} \leq 10^{-116}$. Note that this is not the tightest bound one could give; for example, vertical variations are neglected, and membership in $E(\alpha)$ does not restrict $x_i$ from drifting outside the set $[0, 1]$.

Figure 7: A visual illustration of the snake pattern used to unravel an MNIST image $\mathbf{y} \in [0,1]^{28\times28}$ into a string of values $\mathbf{x} \in [0,1]^{784}$.
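
The numbers above can be reproduced in a few lines. The following sketch is our own illustration (the snake indexing is our reading of Figure 7); the log-measure is computed via the log-gamma function to avoid overflow:

```python
import numpy as np
from scipy.special import gammaln

def snake_unravel(y):
    """Unravel a (28, 28) image column by column, alternating direction (cf. Figure 7)."""
    x = y.copy()
    x[:, 1::2] = x[::-1, 1::2]  # reverse every second column
    return x.T.reshape(-1)      # walk through the columns in order

def total_variation(x):
    return np.abs(x[0]) + np.abs(np.diff(x)).sum()

def log10_lebesgue_measure(alpha, d):
    # log10 of (2 * alpha)^d / Gamma(d + 1)
    return (d * np.log(2 * alpha) - gammaln(d + 1)) / np.log(10)

print(log10_lebesgue_measure(102.9, 28 * 28))  # ~ -116.8, matching the bound above
```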

## A.4 (Weak) Parameterisation Invariance of Our Method

Let $\Theta, \Phi$ be two parameter spaces of the same model $p$, linked by the smooth invertible reparameterisation $P: \Theta \to \Phi$, such that for $\phi = P(\theta)$ we have $p^{\theta} = p^{\phi}$. In this setting, one can derive that the Fisher information metric (Radhakrishna Rao, 1948) is invariant under $P$, i.e. that for all $\mathbf{x}_1, \mathbf{x}_2 \in \mathcal{X}$ we have $\nabla_{\theta}l(\mathbf{x}_1)F_{\theta}^{-1}\nabla_{\theta}l(\mathbf{x}_2)^{T} = \nabla_{\phi}l(\mathbf{x}_1)F_{\phi}^{-1}\nabla_{\phi}l(\mathbf{x}_2)^{T}$ (see (2) for our notation). As we merely approximate the FIM in our method, we cannot make the same guarantee for all $P$; we can, however, prove a similar result if $P$ linearly rescales the layers:

Proposition 4. As in §3.4, let $\boldsymbol{\theta}_1, \boldsymbol{\theta}_2, \ldots, \boldsymbol{\theta}_J$ be the layers of our model, and let $P: \Theta \to \Phi$ be a smooth invertible reparameterisation of our model which linearly rescales the layers, i.e. $P(\boldsymbol{\theta}_1, \boldsymbol{\theta}_2, \ldots, \boldsymbol{\theta}_J) = (d_1\boldsymbol{\theta}_1, d_2\boldsymbol{\theta}_2, \ldots, d_J\boldsymbol{\theta}_J)$ for some non-zero constants $d_1, d_2, \ldots, d_J \in \mathbb{R}$. Then, the resulting anomaly score of our method is invariant under $P$.

Proof. Using the same notation as in §3.4, let $f_1^{\Theta}, \ldots, f_J^{\Theta}$ and $f_1^{\Phi}, \ldots, f_J^{\Phi}$ be our layer-wise gradient $L^2$ norm features under $\Theta$ and $\Phi$ respectively (see equation 4). Then, for all datapoints $\mathbf{x}$ and layers $j$ we have:

$$f_{j}^{\Theta}(\mathbf{x})=\left\|\nabla_{\mathbf{\theta}_{j}}l(\mathbf{x})\right\|^{2}=\left\|d_{j}\nabla_{\mathbf{\phi}_{j}}l(\mathbf{x})\right\|^{2}=d_{j}^{2}f_{j}^{\Phi}(\mathbf{x}).\tag{5}$$

Taking the logarithm and writing in vectorized form gives that:

$$\log f^{\Theta}(\mathbf{x})=2\log\mathbf{d}+\log f^{\Phi}(\mathbf{x}).\tag{6}$$

In particular, if we let $\mu^{\Theta}, \mu^{\Phi}$ and $\sigma^{2\Theta}, \sigma^{2\Phi}$ be the corresponding sample means and variances for $\Theta$ and $\Phi$ in Algorithm 3, we see that $\mu^{\Theta} = 2\log\mathbf{d} + \mu^{\Phi}$ and $\sigma^{2\Theta} = \sigma^{2\Phi} = \sigma^{2}$. Hence, via translation invariance of the normal distribution, our metric will be invariant under $P$.
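
This translation-invariance argument can be checked empirically. Below is a small self-contained sketch of our own in which rescaling every layer leaves the z-scored log-features, and hence any density estimate fitted to them, unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer-wise squared gradient norms f_j(x) for a fit set and a test point.
f_fit = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 5))  # 1000 samples, 5 layers
f_test = rng.lognormal(mean=0.0, sigma=1.0, size=5)

def z_scores(f_fit, f_test):
    log_fit = np.log(f_fit)
    mu, sigma = log_fit.mean(axis=0), log_fit.std(axis=0)
    return (np.log(f_test) - mu) / sigma

d = rng.uniform(0.5, 2.0, size=5)            # layer rescaling constants d_j
z1 = z_scores(f_fit, f_test)
z2 = z_scores(f_fit * d**2, f_test * d**2)   # f_j -> d_j^2 f_j under reparameterisation
assert np.allclose(z1, z2)                   # the anomaly score is unchanged
```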

## A.5 Comparison to Classical Invariance Properties

In classical statistics, there is a separate notion of invariance that is incompatible with that proposed by Le Lan & Dinh (2021). The setup proposed by Lehmann et al. (1986) is one in which we consider a group of transformations from the input space to itself $g: \mathcal{X} \to \mathcal{X}$ which are sufficiently narrow such that there is a corresponding group of transformations of the parameter space $\bar{g}: \Theta \to \Theta$ that counteract the effect of the transformation, formally defined by:

$$\mathbb{P}_{\mathcal{X}}^{\bar{g}\theta}=\mathbb{P}_{\mathcal{X}}^{\theta}\circ g^{-1},\tag{7}$$

with the analogy to the change-of-variables formula being that:

$$p_{\mathcal{X}}^{\theta}(g\mathbf{x})=p_{\mathcal{X}}^{\bar{g}^{-1}\theta}(\mathbf{x})\,\left|\frac{\partial g\mathbf{x}}{\partial\mathbf{x}}\right|^{-1}.\tag{8}$$

One example of where this setup is applicable is applying dilations to the input space of a multivariate normal distribution, whereby any linear dilation of the input space can be counteracted by a dilation of the covariance matrix. This is not the case for the arbitrary transformations $f$ considered in Proposition 1 of Le Lan & Dinh (2021), which we cannot guarantee to be counteracted by some transformation of a generative model's parameters. Even the simple example of the non-linear RGB-HSV transformation we give in §2.3 can only approximately be counteracted by changing the generative model's parameters. In contrast, the setup proposed by Le Lan & Dinh (2021) implicitly considers transformations to arbitrary measure spaces $f: \mathcal{X} \to f(\mathcal{X})$, and considers the pullback:

$$\mathbb{P}_{f(\mathcal{X})}^{\theta}=\mathbb{P}_{\mathcal{X}}^{\theta}\circ f^{-1}.\tag{9}$$

The presence of the counteracting parameter transformation in equation (7) leads to a $\frac{\partial\bar{g}^{-1}\theta}{\partial\theta}$ term in the score vector (equation (6) of De Maio et al. (2010)):

$$\nabla_{\mathbf{\theta}}(\log p^{\theta}_{\mathcal{X}})(g\mathbf{x})=\nabla_{\mathbf{\theta}}(\log p^{\bar{g}^{-1}\theta}_{\mathcal{X}})(\mathbf{x})=\frac{\partial\bar{g}^{-1}\theta}{\partial\theta}\nabla_{\mathbf{\theta}}(\log p^{\theta}_{\mathcal{X}})(\mathbf{x}).\tag{10}$$

Nonetheless, De Maio et al. (2010) derive that the score test statistic is invariant under this setup too. This is a consequence of the score test statistic being *both* invariant in the setup of Le Lan & Dinh (2021) and satisfying strong parameterisation invariance, which is algebraically expressed by the $\frac{\partial\bar{g}^{-1}\theta}{\partial\theta}$ term in eq. (10) being cancelled in re-parameterisation of the FIM.

## B Additional Experimental Details and Results

## B.1 RGB-HSV Representation Dependence

![20_image_0.png](20_image_0.png)

Figure 8: *The log-likelihood heavily depends on data representation.* We extend Figure 3 to the first 20 examples of the CIFAR-10 dataset and their values of $\Delta_{BPD}^{RGB\to HSV}$ as defined in Eq. 11. We note values of $\Delta_{BPD}^{RGB\to HSV}$ between 0.18 and 1.76, indicating a large difference in the induced change in likelihoods.

Here we compute the change in Bits Per Dimension (BPD), $\Delta_{BPD}^{RGB\to HSV}$, for the first 20 samples of the CIFAR-10 test dataset, defined as:

$$\Delta_{BPD}^{RGB\to HSV}=\frac{1}{3\times32\times32}\log_{2}\frac{d\mu_{HSV}}{d\mu_{RGB}}=\frac{\log_{2}p_{RGB}(\mathbf{x})-\log_{2}p_{HSV}(\mathbf{x})}{3\times32\times32},\tag{11}$$

where $\mu_{HSV}$ is the Lebesgue measure in HSV-space, $\mu_{RGB}$ is the Lebesgue measure in RGB-space, and $p_{HSV}$ and $p_{RGB}$ are the corresponding probability density functions for any distribution defined over the set of images. We compute the Radon-Nikodym derivative $\frac{d\mu_{HSV}}{d\mu_{RGB}}$ in Eq. 11 by computing the pixel-wise Jacobian determinants of the RGB-HSV transformation $T^{RGB\to HSV}: \mathbb{R}^{3} \to \mathbb{R}^{3}$. In order to make the comparison fair, we dequantize each pixel $\mathbf{x}_{ij} \in \mathbb{R}^{3}$ by adding a small amount of normally distributed noise $\boldsymbol{\epsilon}_{ij} \sim \mathcal{N}(\mathbf{0}, I_{3\times3})$, i.e. we set $\tilde{\mathbf{x}}_{ij} = \mathbf{x}_{ij} + \frac{\boldsymbol{\epsilon}_{ij}}{255}$. We then note that the full RGB-HSV transformation factors as a pixel-wise RGB-HSV transformation, and thus its Jacobian determinant factors as:

$$\log\frac{d\mu_{HSV}}{d\mu_{RGB}}=\sum_{1\leq i,j\leq32}\log\left|\frac{\partial T}{\partial\mathbf{x}_{ij}}\right|_{\tilde{\mathbf{x}}_{ij}}.$$
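
For concreteness, the pixel-wise log-Jacobian-determinant can be accumulated with automatic differentiation. The sketch below is our own; the `rgb_to_hsv` helper is a hypothetical colorsys-style implementation, and we assume channels remain in $(0, 1)$ and distinct after dequantisation (true almost surely):

```python
import math
import torch
from torch.autograd.functional import jacobian

def rgb_to_hsv(p):
    """HSV values for a single RGB pixel p = (r, g, b), channels distinct and in (0, 1)."""
    r, g, b = p[0], p[1], p[2]
    v, minc = p.max(), p.min()
    s = (v - minc) / v
    if v == r:
        h = ((g - b) / (v - minc) / 6.0) % 1.0
    elif v == g:
        h = (2.0 + (b - r) / (v - minc)) / 6.0
    else:
        h = (4.0 + (r - g) / (v - minc)) / 6.0
    return torch.stack([h, s, v])

x = torch.rand(32, 32, 3) * 0.98 + 0.01          # a dummy RGB image in (0, 1)
x = x + torch.randn_like(x) / 255                # dequantisation noise, as above
log_det = sum(
    torch.log(torch.abs(torch.det(jacobian(rgb_to_hsv, x[i, j]))))
    for i in range(32) for j in range(32)
)
delta_bpd = log_det / math.log(2.0) / (3 * 32 * 32)  # Eq. 11, up to the model terms
```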

## B.2 Replications of Fig. 4

In Figures 9–11 we provide robust replications of Figure 4 using randomly chosen layers. The layers are sorted with the right-hand layer being the "deepest" (i.e. the closest to the latent variables). We observe that the gradients are more separated for models trained on the semantically distinct datasets SVHN, CelebA and GTSRB, mirroring the superior performance our method achieves in these cases.

Please note that large parts of the gradient distributions from the OOD datasets have been cropped out to keep the plots legible.
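
For reference, the layer-wise statistics underlying these histograms can be collected with a few lines of PyTorch. This is a minimal sketch of the general recipe (names are our own), not the paper's exact implementation:

```python
import torch

def layer_gradient_norms(model, nll):
    """Squared L2 gradient norm of the scalar loss `nll` (e.g. -log p_theta(x),
    or a likelihood proxy) for each parameter tensor ("layer") of `model`."""
    model.zero_grad()
    nll.backward()
    return {
        name: param.grad.pow(2).sum().item()
        for name, param in model.named_parameters()
        if param.grad is not None
    }
```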

## B.2.1 Glow Models

![21_image_0.png](21_image_0.png)

Figure 9: Replication of Fig. 4 for 4 randomly selected layers out of 1353 from Glow models trained on SVHN, CelebA, GTSRB, CIFAR-10 and ImageNet32 respectively.

## B.2.2 Diffusion Models

![22_image_0.png](22_image_0.png)

Figure 10: Replication of Fig. 4 for 4 randomly selected layers out of 276 from diffusion models trained on SVHN, CelebA, GTSRB, CIFAR-10 and ImageNet32 respectively.

## B.2.3 VAEs

We note generally less separation with our VAE models, mirroring the poorer performance we attain with them in Appendix E.

![23_image_0.png](23_image_0.png)

Figure 11: Replication of Fig. 4 for 4 randomly selected layers out of 48 from VAE models trained on SVHN, CelebA, GTSRB, CIFAR-10 and ImageNet32 respectively.

## B.3 Additional Plots of the FIM

## B.3.1 Windows into the FIM of a Glow Model

![24_image_0.png](24_image_0.png)

Figure 12: *The strong diagonal of the FIM for Glow models.* We replicate Fig. 5 using three more randomly selected layers from our Glow model trained on CelebA.

## B.3.2 Windows into the FIM of a Diffusion Model

![24_image_1.png](24_image_1.png)

Figure 13: *The strong diagonal of the FIM for diffusion models.* We replicate Fig. 5 with a diffusion model. As before, we randomly select two layers from a diffusion model trained on CelebA and plot an approximation of the FIM, this time using the gradients of the one-step log-likelihood. Note that the first layer selected has fewer than 50 weights, so we plot its entire layer-wise FIM. Again, we normalise the rows and columns by the diagonal values to enable cross-layer comparison.
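
The normalisation used in these plots is a simple diagonal rescaling of an empirical FIM estimate; a minimal sketch in our own notation:

```python
import numpy as np

def diagonal_normalised_fim(score_vectors):
    """Empirical FIM from per-example score vectors, rescaled to unit diagonal.

    score_vectors: array of shape (n_samples, n_weights), row i holding the
    gradient of log p_theta at sample i, restricted to the selected layer."""
    F = score_vectors.T @ score_vectors / score_vectors.shape[0]
    d = np.sqrt(np.diag(F))
    return F / np.outer(d, d)  # entries F_ij / sqrt(F_ii * F_jj)
```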

## B.3.3 Raw, Single-Layer FIMs of Glow Models

![25_image_0.png](25_image_0.png)

Figure 14: *Raw layer-wise FIMs for Glow models.* For each row of plots, we randomly select 4 layers $\boldsymbol{\theta}_\ell$ from Glow models trained on (going from top to bottom) SVHN, CelebA, GTSRB, CIFAR-10 and ImageNet32. We then plot the raw FIM values for $\min(50, |\boldsymbol{\theta}_\ell|)$ weights in these layers, using a separate colorbar per layer to account for the fact that the absolute sizes of the FIM elements vary by orders of magnitude from layer to layer.

## B.3.4 Raw, Single-Layer FIMs of Diffusion Models

![26_image_0.png](26_image_0.png)

Figure 15: We replicate Figure 14 with a diffusion model (again using the gradient of the 1-timestep variational lower bound as a stand-in for the gradient of the log-likelihood), noting a qualitative difference in the appearance of the layers.

## B.4 Ablation Study on Likelihood Proxies for Diffusion Models

In this section, we discuss the use of different parts of a diffusion variational lower bound for anomaly detection. Before doing so, for completeness we present a condensed version of the theory presented in Ho et al. (2020), using the same notation.

## B.4.1 Derivation of the Diffusion Process

Let our forward process be $q(\mathbf{x}_t|\mathbf{x}_{t-1})$, with prior of the true image distribution $q(\mathbf{x}_0)$, and our learned reverse process be $p^{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t)$ with Gaussian prior $p(\mathbf{x}_T) \sim \mathcal{N}(\mathbf{0}, I)$. A variational lower bound on the model log-likelihood of $\mathbf{x}_0$ can be derived (Ho et al., 2020) as:

$$\mathbb{E}[-\log p^{\theta}(\mathbf{x}_{0})]\leq\mathbb{E}\Big[L_{T}+\sum_{t>1}L_{t-1}+L_{0}\Big],\tag{12}$$

where

$$L_{t}=\begin{cases}-\log p^{\theta}(\mathbf{x}_{0}|\mathbf{x}_{1})&t=0,\\ KL\big(q(\mathbf{x}_{t}|\mathbf{x}_{t+1},\mathbf{x}_{0})\,\|\,p^{\theta}(\mathbf{x}_{t}|\mathbf{x}_{t+1})\big)&0<t<T,\\ KL\big(q(\mathbf{x}_{T}|\mathbf{x}_{0})\,\|\,p(\mathbf{x}_{T})\big)&t=T.\end{cases}\tag{13}$$

Ho et al. (2020) also derive that, if we parameterise our forward process as adding some normally distributed noise $\boldsymbol{\epsilon}$ to $\mathbf{x}_{t-1}$, and our reverse process as predicting this noise from $\mathbf{x}_t$ via a network $\boldsymbol{\epsilon}^{\theta}(\mathbf{x}_t, t)$, then for $0 \leq t < T$, $L_t$ can be computed as a squared error:

$$\mathbb{E}L_{t-1}=k_{t}\,\mathbb{E}_{\mathbf{x}_{t}\sim q(\mathbf{x}_{t}|\mathbf{x}_{0})}\left\|\boldsymbol{\epsilon}-\boldsymbol{\epsilon}^{\theta}(\mathbf{x}_{t},t)\right\|^{2}+C_{t}.\tag{14}$$

Here $k_t$ and $C_t$ are constants independent of $\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_T$ and $\theta$, which can thus be omitted from computations. To compute the expectation, we use one sample from the forward process $\mathbf{x}_t \sim q(\mathbf{x}_t|\mathbf{x}_0)$, motivated by our findings in section B.4.4, which show little to no performance gain from using five samples. Thus, we define our set of likelihood proxies as in (14) as $L_t(\mathbf{x}) = \|\boldsymbol{\epsilon} - \boldsymbol{\epsilon}^{\theta}(\mathbf{x}_{t+1}, t+1)\|^2$ for a single sample of noise $\boldsymbol{\epsilon}$ with $\mathbf{x}_{t+1} \sim q(\mathbf{x}_{t+1}|\mathbf{x})$. Note that computing $L_t(\mathbf{x})$ requires only one pass through the network, making it very efficient to compute. In section B.4.3 we do an ablation study on the value of $t$ used, motivating our choice of $L_0$ in the application of our method.
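
A sketch of this one-sample $L_0$ proxy in PyTorch is given below; the names `eps_model` (the noise-prediction network) and `alphas_cumprod` (the $\bar{\alpha}_t$ schedule), as well as the zero-based timestep convention, are assumptions on our part:

```python
import torch

def l0_proxy(eps_model, x0, alphas_cumprod):
    """One-sample estimate of the L0 likelihood proxy ||eps - eps_theta(x_1, 1)||^2."""
    eps = torch.randn_like(x0)
    a1 = alphas_cumprod[0]  # \bar{alpha}_1 (assumed zero-based indexing)
    x1 = a1.sqrt() * x0 + (1.0 - a1).sqrt() * eps  # x_1 ~ q(x_1 | x_0)
    t = torch.zeros(x0.shape[0], dtype=torch.long, device=x0.device)
    return (eps - eps_model(x1, t)).pow(2).flatten(start_dim=1).sum(dim=1)
```

For the gradient-based scores discussed above, this scalar would then be backpropagated through the parameters of `eps_model`.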

## B.4.2 On Representation Dependence in Diffusion Models

When considering Le Lan & Dinh (2021)'s results pertaining to representation dependence in the context of diffusion models, we arrive at the interesting question as to whether the choice of representation should be considered to affect the underlying distribution of the forward process $q$. Clearly the value we use in our method, $L_0 = -\log p^{\theta}(\mathbf{x}_0|\mathbf{x}_1)$, is representation-dependent. In the strict sense, the values $L_t$ for $t > 0$ are not representation-dependent, unless representation dependence is also considered to affect $q$, in which case this becomes more ambiguous. In Figure 16 we report the negative result that the values of $L_t$ for $t > 0$ also follow the pattern from Nalisnick et al. (2019a), whereby structured OOD data has higher values for $L_t$. We defer further debate on this issue to future work.

## B.4.3 Ablation Study on the Value of t Used for Anomaly Detection with Diffusion Models

In this section, we evaluate using different values of $t$ for the likelihood proxy $l(\mathbf{x}) = L_{t-1}(\mathbf{x})$, which we use as input for our anomaly detection method and for typicality (Nalisnick et al., 2019a). We summarise our results in Figure 17 by plotting the average AUROC achieved by each method across all 20 dataset pairings. In Table 3 we provide more granular results with the AUROC for each pairing individually. We note the intuitive result that the performance of our method gradually decays as $t$ increases, corresponding to more noise being added to the sample fed into the network. Overall, the average performance of our method at $t = 1$ is higher than that of typicality, which achieves its maximum performance at $t = 32$ (out of our model's maximum timestep of $T = 1000$). To ease compute requirements, we use batch size $B = 5$ for all experiments.

![28_image_0.png](28_image_0.png)

Figure 16: *$L_t$ follows the pattern from Nalisnick et al. (2019a) for a variety of $t$ values.* We replicate Figure 2 for $t = 64$ [Left] and $t = 512$ [Right] using batch size $B = 5$ (we use this batch size for reasons of limited compute).

![28_image_1.png](28_image_1.png)

Figure 17: *For diffusion models, $L_{t-1}$ is most informative for anomaly detection for low values of $t$.* We compute the AUROC values for all in/out distribution dataset pairings using $t = 2^n$ for $n = 0, 1, \ldots, 9$ and batch size $B = 5$ (for reasons of limited compute).

| | | SVHN | CelebA | GTSRB | CIFAR-10 | ImageNet32 |
|---|---|---|---|---|---|---|
| typicality (t = 8) | SVHN | - | 0.9937 | 0.7122 | 0.9975 | 0.9969 |
| | CelebA | 1.0000 | - | 0.8816 | 0.3443 | 0.5904 |
| | GTSRB | 0.9990 | 0.8108 | - | 0.6680 | 0.7214 |
| | CIFAR-10 | 1.0000 | 0.8684 | 0.9172 | - | 0.4852 |
| | ImageNet32 | 1.0000 | 0.9869 | 0.9845 | 0.8800 | - |
| ours (t = 8) | SVHN | - | 1.0000 | 0.9833 | 0.8214 | 0.9838 |
| | CelebA | 0.9932 | - | 0.9284 | 0.8250 | 0.3942 |
| | GTSRB | 0.9822 | 0.9998 | - | 0.7025 | 0.7485 |
| | CIFAR-10 | 0.9940 | 0.9998 | 0.8750 | - | 0.5071 |
| | ImageNet32 | 0.9934 | 1.0000 | 0.9624 | 0.9046 | - |
| typicality (t = 64) | SVHN | - | 0.9813 | 0.4779 | 0.9970 | 0.9982 |
| | CelebA | 1.0000 | - | 0.9746 | 0.3288 | 0.6622 |
| | GTSRB | 0.9966 | 0.7805 | - | 0.6914 | 0.8543 |
| | CIFAR-10 | 1.0000 | 0.9228 | 0.9798 | - | 0.5388 |
| | ImageNet32 | 1.0000 | 0.9907 | 0.9965 | 0.8059 | - |
| ours (t = 64) | SVHN | - | 1.0000 | 0.9801 | 0.9345 | 0.9603 |
| | CelebA | 0.9786 | - | 0.8856 | 0.6389 | 0.5063 |
| | GTSRB | 0.9762 | 0.9990 | - | 0.7842 | 0.7150 |
| | CIFAR-10 | 0.9825 | 0.9996 | 0.7911 | - | 0.5039 |
| | ImageNet32 | 0.9850 | 0.9999 | 0.9255 | 0.8001 | - |
| typicality (t = 512) | SVHN | - | 0.7125 | 0.5117 | 0.9574 | 0.9839 |
| | CelebA | 0.9997 | - | 0.9984 | 0.3592 | 0.4543 |
| | GTSRB | 0.9549 | 0.7854 | - | 0.7207 | 0.8194 |
| | CIFAR-10 | 0.9997 | 0.9815 | 0.9971 | - | 0.5154 |
| | ImageNet32 | 0.9998 | 0.9908 | 0.9984 | 0.6338 | - |
| ours (t = 512) | SVHN | - | 0.9962 | 0.9712 | 0.8089 | 0.7857 |
| | CelebA | 0.7654 | - | 0.9396 | 0.5205 | 0.5272 |
| | GTSRB | 0.6870 | 0.9496 | - | 0.7770 | 0.6256 |
| | CIFAR-10 | 0.6295 | 0.9844 | 0.9326 | - | 0.4660 |
| | ImageNet32 | 0.6635 | 0.9757 | 0.8904 | 0.5432 | - |

Table 3: AUROC values for the typicality test and our method at batch size $B = 5$, applied to diffusion models for varied timesteps $t = 8, 64, 512$.

## B.4.4 Ablation Study on Multiple q-Samples for Anomaly Detection with Diffusion Models

In this section, we investigate whether any performance improvement can be achieved by using multiple samples from $q$ to estimate $L_0$, i.e. $l(\mathbf{x}) = l(\mathbf{x}_0) = \mathbb{E}_{\mathbf{x}_1\sim q(\mathbf{x}_1|\mathbf{x}_0)}\log p^{\theta}(\mathbf{x}_0|\mathbf{x}_1)$, noting $\mathbb{E}L_0 \propto \mathbb{E}_{\mathbf{x}_1\sim q(\mathbf{x}_1|\mathbf{x}_0)}\|\boldsymbol{\epsilon}-\boldsymbol{\epsilon}^{\theta}(\mathbf{x}_1,1)\|$. Specifically, we take $n = 5$ $q$-samples $\boldsymbol{\epsilon}^{(1)},\ldots,\boldsymbol{\epsilon}^{(5)}$, $\mathbf{x}_1^{(1)},\ldots,\mathbf{x}_1^{(5)}$ to define our likelihood proxy as:

$$l(\mathbf{x})={\frac{1}{5}}\sum_{i=1}^{5}\left\|\boldsymbol{\epsilon}^{(i)}-\boldsymbol{\epsilon}^{\theta}(\mathbf{x}_{1}^{(i)},1)\right\|.$$

The AUROC values using batch size $B = 5$ for our method and typicality (Nalisnick et al., 2019b) are given in Table 4. We note little to no performance gain for our method or typicality, motivating our use of $n = 1$ $q$-sample in our implementation for efficiency.

| | | SVHN | CelebA | GTSRB | CIFAR-10 | ImageNet32 |
|---|---|---|---|---|---|---|
| typicality | SVHN | - | 0.9251 | 0.9668 | 0.9824 | 0.9926 |
| | CelebA | 0.9981 | - | 0.6328 | 0.5312 | 0.5500 |
| | GTSRB | 0.9961 | 0.6768 | - | 0.6291 | 0.4716 |
| | CIFAR-10 | 0.9685 | 0.7890 | 0.4130 | - | 0.7174 |
| | ImageNet32 | 0.9959 | 0.5954 | 0.6098 | 0.7923 | - |
| ours | SVHN | - | 0.9984 | 0.9930 | 0.9873 | 0.9756 |
| | CelebA | 0.8938 | - | 0.9746 | 0.8140 | 0.6952 |
| | GTSRB | 0.8222 | 0.9823 | - | 0.9367 | 0.8728 |
| | CIFAR-10 | 0.9683 | 0.9744 | 0.8922 | - | 0.5666 |
| | ImageNet32 | 0.9797 | 0.9793 | 0.9188 | 0.7485 | - |

Table 4: AUROC values for typicality [top] and our method [bottom] with batch size 5, applied to a diffusion model using 5 q-samples. Average performance for typicality: 0.7617 (25/50/75 quantiles: 0.6062 / 0.7532 / 0.9720); average performance for ours: 0.8987 (25/50/75 quantiles: 0.8601 / 0.9525 / 0.9794).

## B.5 Using Fisher's Method in the Place of Density Estimation

In this section, we briefly investigate the use of Fisher's method (Fisher, 1938) to compute the final anomaly score when using the gradient $L^2$-norm statistics $f^{\ell}$ which we define in §3.4. Specifically, we modify our method by defining $q^{\ell}(\mathbf{x}) = \min(\Phi(f^{\ell}(\mathbf{x})), 1 - \Phi(f^{\ell}(\mathbf{x})))$ to be the $\ell$-th p-value from a two-tailed z-test, and our final anomaly score as:

$$S=-\sum_{\ell=1}^{L}\log(q^{\ell}({\boldsymbol{x}})).$$
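
A direct transcription of this variant (assuming the features have already been standardised layer-wise, as in Algorithm 3) might look as follows; this is our own sketch rather than the paper's code:

```python
import numpy as np
from scipy.stats import norm

def fisher_anomaly_score(z):
    """Fisher's-method score from standardised layer features z = (z_1, ..., z_L)."""
    z = np.asarray(z, dtype=float)
    cdf = norm.cdf(z)
    q = np.minimum(cdf, 1.0 - cdf)  # two-tailed z-test p-values, as defined above
    return -np.log(q).sum()         # S = -sum_l log q_l
```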

In Table 5, for brevity, we only report our results for a Glow model with $B = 1$, but we note the same pattern across all models. We observe a small performance detriment across all dataset pairings from using Fisher's method, motivating our use of density estimation.

| | | SVHN | CelebA | GTSRB | CIFAR-10 | ImageNet32 |
|---|---|---|---|---|---|---|
| ours (Fisher) | SVHN | - | 0.9808 | 0.9494 | 0.8358 | 0.7514 |
| | CelebA | 0.9633 | - | 0.8125 | 0.5022 | 0.2686 |
| | GTSRB | 0.9321 | 0.9772 | - | 0.7016 | 0.4482 |
| | CIFAR-10 | 0.9398 | 0.9250 | 0.8119 | - | 0.4203 |
| | ImageNet32 | 0.9899 | 0.9734 | 0.9165 | 0.6969 | - |
| ours (density) | SVHN | - | 0.9880 | 0.9858 | 0.8747 | 0.8010 |
| | CelebA | 0.9823 | - | 0.9262 | 0.5155 | 0.2997 |
| | GTSRB | 0.9537 | 1.0000 | - | 0.7546 | 0.9967 |
| | CIFAR-10 | 0.9658 | 0.9462 | 0.9126 | - | 0.4377 |
| | ImageNet32 | 0.9976 | 0.9876 | 0.9683 | 0.7375 | - |

Table 5: AUROC values for ours (Fisher's method) [top] and ours (density estimation) [bottom], batch size 1, applied to Glow. Average performance for Fisher's method: 0.7898 (25/50/75 quantiles: 0.7004 / 0.8761 / 0.9529); average performance for density estimation: 0.8516 (25/50/75 quantiles: 0.7894 / 0.9500 / 0.9863).

| Method | MNIST | Omniglot |
|---|---|---|
| WAIC | 0.766 | 0.796 |
| S using PixelCNN++ and FLIF | 0.967 | 1.000 |
| PixelCNN gradient norms (OneClassSVM) (ours) | 0.979 | 1.000 |
| S using Glow and FLIF | 0.998 | 1.000 |
| Glow gradient norms (OneClassSVM) (ours) | 0.819 | 1.000 |

Table 6: Results comparing the performance of our method, for a model trained on FashionMNIST, at detecting OOD grayscale images to the performance of the S-score reported in Serrà et al. (2020) and the Watanabe-Akaike Information Criterion reported in Choi et al. (2018).

## C Supervised Gradient-Based Methodology for Classifiers

For completeness, we discuss classifier-based OOD detection methods using the gradient, noting that these methods are given label information at train time, and that our representation-invariance result does not directly translate over to this paradigm.

These methods require label information in order to compute gradients; this approach is hence supervised with respect to the in-distribution and OOD labels which it requires. Liang et al. (2018) propose a method called ODIN which uses the gradient with respect to the *data*: they backpropagate gradients to the input data to see how much an input perturbation can change the softmax output of a classifier, following the intuition that OOD inputs may be more sensitive and prone to a larger variation in the output distribution. Igoe et al. (2022) are critical of the use of classifier gradients, instead advocating that most information can be recovered from the layer representations. Behpour et al. (2023) propose projecting the gradient onto the space generated by in-distribution gradients, motivated, as in Kwon et al. (2020), by the informativity of the gradient angle for OOD detection.

![32_image_0.png](32_image_0.png)

![32_image_1.png](32_image_1.png)

Figure 18: *Samples from Glow models.* Samples from the Glow models used in our experiments, trained on SVHN, CelebA, GTSRB, CIFAR-10 and ImageNet32 respectively from top to bottom.

Figure 19: *Samples from diffusion models.* Samples from the denoising diffusion models used in our experiments, trained on SVHN, CelebA, GTSRB, CIFAR-10 and ImageNet32 respectively from top to bottom.

## D Code, Models

Our *Glow* implementation derives from the repository https://github.com/y0ast/Glow-PyTorch, which replicates the one used in Nalisnick et al. (2019a), with the only difference being that we use a batch size of 64 in training rather than 512. See Figure 18 for samples from our models.

Our diffusion model implementation derives from a PyTorch transcription at https://github.com/lucidrains/denoising-diffusion-pytorch of that described in Ho et al. (2020). We train using Adam with a learning rate of 3e-4 for 10 epochs. Our model has T = 1000 timesteps, which are uniformly sampled from in training, and the U-Net backbone has dimension multiplicities of (1, 2, 4, 8). See Figure 19 for samples from our models.

Our code is available with pre-trained models and pre-computed gradient $L^2$ norms at https://github.com/anonymous_authors.
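
Under the stated hyperparameters, a training setup in the style of the lucidrains repository might look roughly as follows; the base channel width `dim=64`, the exact constructor arguments, and the `train_loader` are assumptions on our part, not details fixed by the paper:

```python
import torch
from denoising_diffusion_pytorch import Unet, GaussianDiffusion

model = Unet(dim=64, dim_mults=(1, 2, 4, 8))                 # U-Net backbone
diffusion = GaussianDiffusion(model, image_size=32, timesteps=1000)
optimizer = torch.optim.Adam(diffusion.parameters(), lr=3e-4)

for epoch in range(10):
    for images, _ in train_loader:                           # hypothetical DataLoader
        loss = diffusion(images)  # VLB-style loss with uniformly sampled timesteps
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```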

## E Results for a Poorly Performant VAE

Our VAE (Kingma & Welling, 2014) implementation uses entirely convolutional layers. In Figure 20 we note that the samples produced approximate the colour palette of the training datasets well but have poor semantic coherence. In Table 7 we note poor performance for both methods when using this model as a backbone.

![33_image_0.png](33_image_0.png)

Figure 20: *Samples from VAE models.* Samples from the VAE models used in our experiments, trained on SVHN, CelebA, GTSRB, CIFAR-10 and ImageNet32 respectively from top to bottom.

| | test ↓ train → | SVHN | CelebA | GTSRB | CIFAR-10 | ImageNet32 |
|---|---|---|---|---|---|---|
| typicality (B = 1) | SVHN | - | 0.5680 | 0.4302 | 0.5664 | 0.5933 |
| | CelebA | 0.4569 | - | 0.3729 | 0.5085 | 0.4868 |
| | GTSRB | 0.5901 | 0.6290 | - | 0.6484 | 0.6061 |
| | CIFAR-10 | 0.4223 | 0.4742 | 0.3454 | - | 0.4922 |
| | ImageNet32 | 0.4429 | 0.4831 | 0.3736 | 0.5047 | - |
| ours (B = 1) | SVHN | - | 0.6693 | 0.5541 | 0.5904 | 0.5740 |
| | CelebA | 0.4329 | - | 0.4337 | 0.4685 | 0.4588 |
| | GTSRB | 0.5920 | 0.6581 | - | 0.6570 | 0.6629 |
| | CIFAR-10 | 0.4343 | 0.5826 | 0.5123 | - | 0.4864 |
| | ImageNet32 | 0.4582 | 0.5941 | 0.5048 | 0.5187 | - |
| typicality (B = 5) | SVHN | - | 0.9978 | 0.7943 | 0.9975 | 0.9961 |
| | CelebA | 1.0000 | - | 0.7642 | 0.3156 | 0.3621 |
| | GTSRB | 0.9998 | 0.8336 | - | 0.6809 | 0.5765 |
| | CIFAR-10 | 1.0000 | 0.7808 | 0.8332 | - | 0.4488 |
| | ImageNet32 | 1.0000 | 0.9866 | 0.9675 | 0.9266 | - |
| ours (B = 5) | SVHN | - | 1.0000 | 0.9970 | 0.8457 | 0.9561 |
| | CelebA | 0.9908 | - | 0.8552 | 0.7734 | 0.4202 |
| | GTSRB | 0.9716 | 0.9997 | - | 0.7325 | 0.9007 |
| | CIFAR-10 | 0.9895 | 0.9992 | 0.8104 | - | 0.5733 |
| | ImageNet32 | 0.9862 | 1.0000 | 0.9309 | 0.8532 | - |

Table 7: *VAE models.* Comparison of the AUROC values (larger values are better) of our method to the typicality test of Nalisnick et al. (2019b) for batch sizes B = 1, 5. We train VAE (Kingma & Welling, 2014) models on five natural image datasets (as columns) and evaluate the ability of the model-method combination to reject the other datasets (as rows).