RedTachyon committed Commit c679409 (1 parent: d625350)
Upload folder using huggingface_hub

Browse files:
- jFi4dXEOdN/11_image_0.png +3 -0
- jFi4dXEOdN/12_image_0.png +3 -0
- jFi4dXEOdN/12_image_1.png +3 -0
- jFi4dXEOdN/13_image_0.png +3 -0
- jFi4dXEOdN/14_image_0.png +3 -0
- jFi4dXEOdN/14_image_1.png +3 -0
- jFi4dXEOdN/jFi4dXEOdN.md +1236 -0
- jFi4dXEOdN/jFi4dXEOdN_meta.json +25 -0
# Variable Complexity Weighted-Tempered Gibbs Samplers For Bayesian Variable Selection

Anonymous authors
Paper under double-blind review

## Abstract

A subset weighted-tempered Gibbs sampler (subset-wTGS) has recently been introduced by Jankowiak to reduce the computational complexity per MCMC iteration in high-dimensional applications where the exact calculation of the posterior inclusion probabilities (PIPs) is not essential. However, the Rao-Blackwellized estimator associated with this sampler has a very high variance when the ratio between the signal dimension, P, and the number of conditional PIP estimations is large. In this paper, we design a new subset-wTGS where the expected number of computations of conditional PIPs per MCMC iteration can be much smaller than P. Different from the subset-wTGS and wTGS, our sampler has a variable complexity per MCMC iteration. We provide an upper bound on the variance of an associated Rao-Blackwellized estimator for this sampler at a finite number of iterations, T, and show that the variance is $O\big((P/S)^2 \frac{\log T}{T}\big)$ for any given dataset, where S is the expected number of conditional PIP computations per MCMC iteration.
## 1 Introduction

Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a known function. MCMC methods are primarily used for calculating numerical approximations of multi-dimensional integrals, for example in Bayesian statistics, computational physics (Kasim et al., 2019), computational biology (Gupta & Rawlings, 2014), and linear models (Truong, 2022). Monte Carlo algorithms have been very popular over the last decade (Hesterberg, 2002; Robert & Casella, 2005). Many practical problems in statistical signal processing, machine learning, and statistics demand fast and accurate procedures for drawing samples from probability distributions that exhibit arbitrary, non-standard forms (Andrieu et al., 2004; Fitzgerald, 2001; Read et al., 2012). Among the most popular Monte Carlo methods are the families of Markov chain Monte Carlo (MCMC) algorithms (Andrieu et al., 2004; Robert & Casella, 2005) and particle filters (Bugallo et al., 2007). Particle filters, or sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions to filtering problems for nonlinear state-space systems, arising for example in signal processing and Bayesian statistical inference (Wills & Schön, 2023). MCMC techniques generate a Markov chain with a pre-established target probability density function as invariant density (Liang et al., 2010).

The Gibbs sampler (GS) is an MCMC algorithm for obtaining a sequence of observations from a specific multivariate probability distribution. This sequence can be used to approximate the joint distribution, the marginal distribution of one of the variables, or some subset of the variables. It can also be used to compute the expected value (integral) of one of the variables (Bishop, 2006; Bolstad, 2010).

GS is applicable when the joint distribution is not known explicitly or is difficult to sample from directly, but the conditional distribution of each variable is known and is easy (or at least easier) to sample from.

The GS algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables. It can be shown that the sequence of samples constitutes a Markov chain, and the stationary distribution of that Markov chain is just the sought-after joint distribution.

GS is commonly used as a means of statistical inference, especially Bayesian inference. Pure Markov chain based schemes (i.e., ones which simulate from precisely the right target distribution with no need for subsequent importance sampling correction) have been far more successful than importance sampling in this setting, because MCMC methods are usually much more scalable to high-dimensional situations, whereas importance sampling weight variances tend to grow (often exponentially) with dimension. Zanella & Roberts (2019) proposed a natural way to combine the best of MCMC and importance sampling in a way that is robust in high-dimensional contexts and ameliorates the slow mixing which plagues many Markov chain based schemes. The proposed scheme, called the Tempered Gibbs Sampler (TGS), involves a component-wise updating rule like GS, with improved mixing properties and associated importance weights which remain stable as dimension increases. Through an appropriately designed tempering mechanism, TGS circumvents the main limitations of standard GS, such as the slow mixing introduced by strong posterior correlations. It also avoids the requirement to visit all coordinates sequentially, instead iteratively making state-informed decisions as to which coordinate should be updated next.

TGS has been applied to the Bayesian Variable Selection (BVS) problem, with multiple orders of magnitude improvements observed compared to alternative Monte Carlo schemes (Zanella & Roberts, 2019). Since TGS updates each coordinate with the same frequency, in a BVS context this may be inefficient, as the resulting sampler would spend most iterations updating variables that have low or negligible posterior inclusion probability, especially when the number of covariates, P, gets large. A better solution, called weighted Tempered Gibbs Sampling (wTGS) (Zanella & Roberts, 2019), updates components with a larger inclusion probability more often, thus focusing the computational effort. However, despite the intuitive appeal of this approach to the BVS problem, approximating the resulting posterior distribution can be computationally challenging. A principal reason for this is the astronomical size of the model space whenever there are more than a few dozen covariates. To scale to the high-dimensional regime, Jankowiak (2023) has recently introduced an efficient MCMC scheme whose cost per iteration can be significantly reduced compared to wTGS. The main idea is to introduce an auxiliary variable S ⊂ {1, 2, · · · , P} that controls which conditional posterior inclusion probabilities (PIPs) are computed in a given MCMC iteration. By choosing the size S of S to be much less than P, we can reduce the computational complexity significantly. However, this scheme has some weaknesses: the Rao-Blackwellized estimator associated with this sampler has a very high variance when P/S is large and the number of MCMC iterations, T, is small. In addition, generating the auxiliary random set, which is uniformly distributed over the $\binom{P}{S}$ subsets in the subset wTGS algorithm (Jankowiak, 2023), requires a very long running time. In this paper, we design a new subset wTGS called variable complexity wTGS (VC-wTGS) and apply this algorithm to BVS in the linear regression model. More specifically, we consider the linear regression Y = Xβ + Z, where β = (β_0, β_1, . . . , β_{P−1})^T is controlled by an inclusion vector (γ_0, γ_1, · · · , γ_{P−1}). We design a Rao-Blackwellized estimator associated with VC-wTGS for *posterior inclusion probabilities* or PIPs, where PIP(i) := p(γ_i = 1|D) ∈ [0, 1] and D = {X, Y} is the observed dataset. Experiments show that our scheme converges to the PIPs very fast for simulated datasets and that the variance of the Rao-Blackwellized estimator can be much smaller than that of the subset wTGS (Jankowiak, 2023) when P/S is very high on the MNIST dataset. More specifically, our contributions include:

- We propose a new subset wTGS, called VC-wTGS, where the expected number of conditional PIP computations per MCMC iteration can be much smaller than the signal dimension.

- We analyse the variance of an associated Rao-Blackwellized estimator at each finite number of MCMC iterations. We show that this variance is $O\big(\frac{\log T}{T}\big(\frac{P}{S}\big)^2\big)$ for any given dataset.

- We provide some experiments on a simulated dataset (multivariate Gaussian dataset) and a real dataset (MNIST). Experiments show that our estimator can have a better variance than the subset wTGS-based estimator (Jankowiak, 2023) at high P/S for the same number of MCMC iterations T.

Although we limit our application to the linear regression model for the simplicity of the computations of the conditional PIPs in experiments, our subset wTGS can be applied to other BVS models. However, we need to change the method used to estimate the conditional PIPs for each model. See (148) and Appendix E for the method used to estimate the conditional PIPs for the linear regression model.
## 2 Preliminaries

## 2.1 Mathematical Background

Let {X_n}_{n=1}^∞ be a Markov chain on a state space S with transition kernel Q(x, dy) and initial state X_1 ∼ ν, where S is a Polish space in R. In this paper, we consider Markov chains which are irreducible and positive-recurrent, so the existence of a stationary distribution π is guaranteed. An irreducible and recurrent Markov chain on an infinite state space is called a Harris chain (Tuominen & Tweedie, 1979). A Markov chain is called *reversible* if the following detailed balance condition is satisfied:

$$\pi(dx)Q(x,dy)=\pi(dy)Q(y,dx),\qquad\forall x,y\in{\mathcal{S}}.\tag{1}$$

Define

$$d(t):=\sup_{x\in{\mathcal{S}}}d_{\mathrm{TV}}(Q^{t}(x,\cdot),\pi),\tag{2}$$

$$t_{\mathrm{mix}}(\varepsilon):=\min\{t:d(t)\leq\varepsilon\},\tag{3}$$

and

$$\tau_{\mathrm{min}}:=\inf_{0\leq\varepsilon\leq1}t_{\mathrm{mix}}(\varepsilon)\bigg(\frac{2-\varepsilon}{1-\varepsilon}\bigg)^{2},\quad t_{\mathrm{mix}}:=t_{\mathrm{mix}}(1/4).\tag{4}$$

Let L_2(π) be the Hilbert space of complex-valued measurable functions on S that are square integrable w.r.t. π. We endow L_2(π) with the inner product ⟨f, g⟩ := ∫ f g^* dπ and norm ∥f∥_{2,π} := ⟨f, f⟩^{1/2}. Let E_π be the associated averaging operator defined by (E_π)(x, y) = π(y), ∀x, y ∈ S, and let

$$\lambda=\|Q-E_{\pi}\|_{L_{2}(\pi)\to L_{2}(\pi)},\tag{5}$$

where ∥B∥_{L_2(π)→L_2(π)} = max_{v:∥v∥_{2,π}=1} ∥Bv∥_{2,π}. Q can be viewed as a linear operator on L_2(π), denoted by **Q** and defined as (**Q**f)(x) := E_{Q(x,·)}(f); reversibility is equivalent to the self-adjointness of **Q**. The operator **Q** acts on measures on the left, creating a measure µQ, that is, for every measurable subset A of S, µQ(A) := ∫_{x∈S} Q(x, A)µ(dx). For a Markov chain with stationary distribution π, we define the *spectrum* of the chain as

$$S_{2}:=\big\{\xi\in\mathbb{C}:(\xi\mathbf{I}-\mathbf{Q})\ \mathrm{is\ not\ invertible\ on}\ L_{2}(\pi)\big\}.\tag{6}$$

It is known that λ = 1 − γ^* (Paulin, 2015), where

$$\gamma^{*}:=\begin{cases}1-\sup\{|\xi|:\xi\in{\mathcal{S}}_{2},\ \xi\neq1\},&\text{if eigenvalue }1\text{ has multiplicity }1,\\ 0,&\text{otherwise}\end{cases}\tag{7}$$

is the *absolute spectral gap* of the Markov chain. The absolute spectral gap can be related to the mixing time t_mix of the Markov chain by the following expression:

$$\left({\frac{1}{\gamma^{*}}}-1\right)\log2\leq t_{\mathrm{mix}}\leq{\frac{\log(4/\pi_{*})}{\gamma^{*}}},$$

where π_* = min_{x∈S} π_x is the *minimum stationary probability*, which is positive if Q^k > 0 (entry-wise positive) for some k ≥ 1. See (Wolfer & Kontorovich, 2019) for more detailed discussions. In (Combes & Touati, 2019; Wolfer & Kontorovich, 2019), the authors provided algorithms to estimate t_mix and γ^* from a single trajectory.

Let M(S) be a measurable space on S and define

$$\mathcal{M}_{2}:=\left\{\nu\ \ \text{defined on}\ \ \mathcal{M}(\mathcal{S}):\nu\ll\pi,\left\|\frac{d\nu}{d\pi}\right\|_{2}<\infty\right\},\tag{8}$$

where ∥ · ∥_2 is the standard L_2 norm in the Hilbert space of complex-valued measurable functions on S.
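The spectral-gap and mixing-time quantities above are easy to compute explicitly for small chains. The following short numerical sketch (ours, using numpy; the two-state chain is an arbitrary illustrative choice, not an example from the paper) computes γ^* and t_mix(1/4) for a reversible two-state chain and checks the two-sided bound above.

```python
import numpy as np

# A reversible 2-state chain with stationary distribution pi = (2/3, 1/3).
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2 / 3, 1 / 3])

# Eigenvalues of Q are 1 and 0.7, so gamma* = 1 - max_{xi != 1} |xi| = 0.3.
eigvals = np.linalg.eigvals(Q)
gamma_star = 1.0 - max(abs(x) for x in eigvals if not np.isclose(x, 1.0))

def total_variation(p, q):
    return 0.5 * np.abs(p - q).sum()

# t_mix(1/4): smallest t with sup_x TV(Q^t(x, .), pi) <= 1/4.
t, Qt = 0, np.eye(2)
while max(total_variation(Qt[x], pi) for x in range(2)) > 0.25:
    Qt = Qt @ Q
    t += 1

print(f"gamma* = {gamma_star:.3f}, t_mix(1/4) = {t}")
print("lower bound (1/gamma* - 1) log 2 =", (1 / gamma_star - 1) * np.log(2))
print("upper bound log(4/pi*) / gamma*  =", np.log(4 / pi.min()) / gamma_star)
```

For this chain the script reports γ^* = 0.3 and t_mix(1/4) = 3, which indeed lies between the two bounds.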
## 2.2 Problem Set-Up

Consider the linear regression Y = Xβ + Z ∈ R^N, where β = (β_0, β_1, . . . , β_{P−1})^T, Z = (Z_0, Z_1, . . . , Z_{N−1})^T, and X ∈ R^{N×P} is a design matrix. Denote by γ the vector (γ_0, γ_1, · · · , γ_{P−1}), where each γ_i ∈ {0, 1} controls whether the coefficient β_i and the i-th covariate are included (γ_i = 1) or excluded (γ_i = 0) from the model. Let β_γ be the restriction of β to the coordinates in γ and |γ| ∈ {0, 1, 2, · · · , P} be the total number of included covariates. In addition, the following are assumed:

- inclusion variables: γ_i ∼ Bern(h)
- noise variance: σ²_γ ∼ InvGamma(½ν_0, ½ν_0λ_0)
- coefficients: β_γ ∼ N(0, σ²_γ τ^{−1} I_{|γ|})
- noise distributions: Z_i ∼ N(0, σ²_γ)

for all i = 0, 1, · · · , P − 1. The hyperparameter h ∈ (0, 1) controls the overall level of sparsity; in particular, hP is the expected number of covariates included a priori. The |γ| coefficients β_γ ∈ R^{|γ|} are governed by a standard Gaussian prior with precision proportional to τ > 0.

An attractive feature of the model is that it explicitly reasons about variable inclusion and allows us to define *posterior inclusion probabilities* or PIPs, where

$$\mathrm{PIP}(i):=p(\gamma_{i}=1|{\mathcal{D}})\in[0,1],\tag{9}$$

and D = {X, Y} is the observed dataset.
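For concreteness, the following sketch (our own illustration, not code from the paper; the default hyperparameter values are arbitrary) draws one dataset D = (X, Y) from exactly this hierarchical prior.

```python
import numpy as np

def simulate_bvs_data(N=100, P=200, h=0.05, tau=1.0, nu0=2.0, lam0=1.0, seed=0):
    """Draw one dataset D = (X, Y) from the sparse linear model of Section 2.2.

    gamma_i ~ Bern(h), sigma^2 ~ InvGamma(nu0/2, nu0*lam0/2),
    beta_gamma ~ N(0, sigma^2 / tau * I), Z_i ~ N(0, sigma^2).
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((N, P))                     # Gaussian design matrix
    gamma = rng.random(P) < h                           # inclusion vector
    # 1/Gamma(shape, scale=1/rate) is an InvGamma(shape, rate) draw.
    sigma2 = 1.0 / rng.gamma(shape=nu0 / 2, scale=2.0 / (nu0 * lam0))
    beta = np.zeros(P)
    beta[gamma] = rng.normal(0.0, np.sqrt(sigma2 / tau), size=gamma.sum())
    Y = X @ beta + rng.normal(0.0, np.sqrt(sigma2), size=N)
    return X, Y, gamma, beta

X, Y, gamma_true, beta_true = simulate_bvs_data()
print("number of included covariates:", gamma_true.sum())
```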
## 3 Main Results

## 3.1 Introduction to Subset wTGS

In this subsection, we review the subset wTGS proposed by Jankowiak (2023). Let P = {1, 2, · · · , P} and P_S be the set of all subsets of P of cardinality S. Consider the sample space P × {0, 1}^P × P_S and define the following (unnormalized) target distribution on this sample space:

$$f(\gamma,i,{\mathcal{S}}):=p(\gamma|{\mathcal{D}})\,\frac{\frac{1}{2}\eta(\gamma_{-i})}{p(\gamma_{i}|\gamma_{-i},{\mathcal{D}})}\,{\mathcal{U}}({\mathcal{S}}|i,{\mathcal{A}}).\tag{10}$$

Here, S ranges over all the subsets of {1, 2, · · · , P} of some size S ∈ {0, 1, · · · , P} that also contain a fixed 'anchor' set A ⊂ {1, 2, · · · , P} of size A < S, and η(·) is some weighting function. Moreover, U(S|i, A) is the uniform distribution over all size-S subsets of {1, 2, · · · , P} that contain both i and A.

In practice, the set A can be chosen during burn-in. Subset wTGS proceeds by defining a sampling scheme for the target distribution (10) that utilizes Gibbs updates w.r.t. i and S and a Metropolized-Gibbs update w.r.t. γ_i.

- **i-updates:** Marginalizing i from (10) yields

$$f(\gamma,{\mathcal{S}})=p(\gamma|{\mathcal{D}})\phi(\gamma,{\mathcal{S}}),\tag{11}$$

  where we define

$$\phi(\gamma,{\mathcal{S}}):=\sum_{i\in{\mathcal{S}}}{\frac{{\frac{1}{2}}\eta(\gamma_{-i})}{p(\gamma_{i}|\gamma_{-i},{\mathcal{D}})}}{\mathcal{U}}({\mathcal{S}}|i,{\mathcal{A}})\tag{12}$$

  and have leveraged that U(S|i, A) = 0 if i ∉ S. Crucially, computing ϕ(γ, S) is Θ(S) instead of Θ(P). We can do Gibbs updates w.r.t. i using the distribution

$$f(i|\gamma,{\mathcal{S}})\sim\frac{\eta(\gamma_{-i})}{p(\gamma_{i}|\gamma_{-i},{\mathcal{D}})}{\mathcal{U}}({\mathcal{S}}|i,{\mathcal{A}}).\tag{13}$$

- **γ-updates:** Just as for wTGS, we utilize Metropolized-Gibbs updates w.r.t. γ_i that result in deterministic flips γ_i → 1 − γ_i. Likewise, the marginal f(i) is proportional to PIP(i) + εP, so that the sampler focuses its computational effort on large-PIP covariates (Jankowiak, 2023).

- **S-updates:** S is updated with Gibbs moves, S ∼ U(·|i, A). For the full algorithm, see Algorithm 1; a sketch of the i-update weight computation is given right after this list.
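The following minimal sketch (ours; `cond_pip(j, gamma)` is a placeholder for a model-specific routine such as the linear-model formula in (53), and the uniform factor U(S|i, A) is dropped here for simplicity) illustrates how ϕ(γ, S) and the i-update weights in (12)-(13) can be formed from only |S| conditional PIPs.

```python
import numpy as np

def tempered_weights(gamma, subset, cond_pip):
    """Compute phi(gamma, S) and the i-update distribution f(i | gamma, S).

    `cond_pip(j, gamma)` is assumed to return p(gamma_j = 1 | gamma_{-j}, D).
    With eta(gamma_{-j}) = p(gamma_j = 1 | gamma_{-j}, D), the j-th unnormalized
    weight is 0.5 * pip_j / p(gamma_j | gamma_{-j}, D).
    """
    weights = {}
    for j in subset:                       # only |S| conditional PIPs are evaluated
        pip_j = cond_pip(j, gamma)
        p_current = pip_j if gamma[j] == 1 else 1.0 - pip_j
        weights[j] = 0.5 * pip_j / p_current
    phi = sum(weights.values())            # phi(gamma, S), up to the dropped U(S|i,A) factor
    f = {j: w / phi for j, w in weights.items()}
    return phi, f

# Toy usage with a constant stand-in for the conditional PIPs (purely illustrative):
rng = np.random.default_rng(0)
gamma = rng.integers(0, 2, size=10)
phi, f = tempered_weights(gamma, subset=[0, 3, 7], cond_pip=lambda j, g: 0.3)
i_new = rng.choice(list(f.keys()), p=list(f.values()))   # Gibbs update for i
```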
**Algorithm 1: The Subset S-wTGS Algorithm**

Input: Dataset D = {X, Y} with P covariates; prior inclusion probability h; prior precision τ; subset size S; anchor set size A; total number of MCMC iterations T; number of burn-in iterations T_burn.
Output: Approximate weighted posterior samples {ρ^(t), γ^(t)}_{t=T_burn+1}^T.
Initialization: γ^(0) = (1, 1, · · · , 1); choose A to be the A covariates exhibiting the largest correlations with Y; choose i^(0) randomly from {1, 2, · · · , P} and S^(0) ∼ U(·|i^(0), A).

for t = 1, 2, · · · , T do
  Estimate the S conditional PIPs p(γ^(t−1)_j | γ^(t−1)_{−j}, D) for all j ∈ S^(t−1)
  ϕ(γ^(t−1), S^(t−1)) ← Σ_{j∈S^(t−1)} ½ η(γ^(t−1)_{−j}) / p(γ^(t−1)_j | γ^(t−1)_{−j}, D)
  Estimate f(j|γ^(t−1)) ← ϕ^{−1}(γ^(t−1), S^(t−1)) · ½ η(γ^(t−1)_{−j}) / p(γ^(t−1)_j | γ^(t−1)_{−j}, D) for all j ∈ [P]
  Sample i^(t) ∼ f(·|γ^(t−1))
  γ^(t) ← flip(γ^(t−1)|i^(t)), where flip(γ|i) flips the i-th coordinate of γ: γ_i ← 1 − γ_i
  Sample S^(t) ∼ U(·|i^(t), A)
  Compute the unnormalized weight ρ̃^(t) ← ϕ^{−1}(γ^(t), S^(t))
  if t ≤ T_burn then
    Adapt A using some adaptive scheme
  end if
end for
for t = 1, 2, · · · , T do
  ρ^(t) ← ρ̃^(t) / Σ_{s>T_burn} ρ̃^(s)
end for
Output: {ρ^(t), γ^(t)}_{t=1}^T.

The details of this algorithm are described in ALG 1. The associated estimator for this sampler is defined as (Jankowiak, 2023):

$$\mathrm{PIP}(i):=\sum_{t=1}^{T}\rho^{(t)}\big(\mathbf{1}\{i\in{\mathcal{S}}^{(t)}\}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},{\mathcal{D}})+\mathbf{1}\{i\notin{\mathcal{S}}^{(t)}\}\gamma_{i}^{(t)}\big).\tag{14}$$
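As a small illustration (ours), the estimator (14) can be assembled directly from quantities that Algorithm 1 already produces; the container names below are hypothetical and simply hold the per-iteration outputs.

```python
def subset_wtgs_pip_estimate(i, rho, subsets, cond_pips, gammas):
    """Rao-Blackwellized PIP estimate for covariate i, following Eq. (14).

    rho[t]       : normalized weight rho^(t)
    subsets[t]   : the set S^(t) of indices whose conditional PIPs were computed
    cond_pips[t] : dict j -> p(gamma_j^(t) = 1 | gamma_{-j}^(t), D) for j in S^(t)
    gammas[t]    : the binary vector gamma^(t)
    """
    total = 0.0
    for t in range(len(rho)):
        if i in subsets[t]:
            total += rho[t] * cond_pips[t][i]     # conditional PIP is available
        else:
            total += rho[t] * gammas[t][i]        # fall back to the raw indicator
    return total
```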
## 3.2 A Variable Complexity wTGS Scheme

In the subset wTGS of Subsection 3.1, the number of conditional PIP computations per MCMC iteration is fixed, i.e., it is equal to S. In the following, we propose a variable-complexity wTGS scheme (VC-wTGS), namely ALG 2, where the only requirement is that the expected number of conditional PIP computations per MCMC iteration is S. This means that E[S_t] = S, where S_t is the number of conditional PIP computations at the t-th MCMC iteration.

Compared with ALG 1, ALG 2 allows us to use different subset sizes across MCMC iterations. Under ALG 2, the expected number of conditional PIP computations in each MCMC iteration is P × (S/P) + 0 × (1 − S/P) = S. Since we aim to bound the variance at each finite iteration T, we do not include T_burn in ALG 2; in practice, we usually remove some initial samples. We also use the following new version of the Rao-Blackwellized estimator:

$$\mathrm{PIP}(i):=\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},{\mathcal{D}}).\tag{15}$$

In ALG 2, Bernoulli random variables {Q^(t)}_{t=1}^T are used to replace the random set S of ALG 1.

**Algorithm 2: A Variable-Complexity Based wTGS Algorithm**

Input: Dataset D = {X, Y} with P covariates; prior inclusion probability h; prior precision τ; total number of MCMC iterations T; subset size S.
Output: Approximate weighted posterior samples {ρ^(t), γ^(t)}_{t=1}^T.
Initialization: γ^(0) = (γ_1, γ_2, · · · , γ_P), where γ_j ∼ Bern(h) for all j ∈ [P].

for t = 1, 2, · · · , T do
  Set Q^(1) = 1. Sample a Bernoulli random variable Q^(t) ∼ Bern(S/P) if t ≥ 2.
  if Q^(t) = 1 then
    Estimate the P conditional PIPs p(γ^(t−1)_j | γ^(t−1)_{−j}, D) for all j ∈ [P]
    ϕ(γ^(t−1)) ← Σ_{j∈[P]} ½ η(γ^(t−1)_{−j}) / p(γ^(t−1)_j | γ^(t−1)_{−j}, D)
    Estimate f(j|γ^(t−1)) ← ϕ^{−1}(γ^(t−1)) · ½ η(γ^(t−1)_{−j}) / p(γ^(t−1)_j | γ^(t−1)_{−j}, D) for all j ∈ [P]
    Sample i^(t) ∼ f(·|γ^(t−1))
    γ^(t) ← flip(γ^(t−1)|i^(t)), where flip(γ|i) flips the i-th coordinate of γ: γ_i ← 1 − γ_i
    Compute the unnormalized weight ρ̃^(t) ← ϕ^{−1}(γ^(t))
  else
    γ^(t) ← γ^(t−1)
    ρ̃^(t) ← ϕ^{−1}(γ^(t))
  end if
end for
for t = 1, 2, · · · , T do
  ρ^(t) ← ρ̃^(t) Q^(t) / Σ_{s=1}^T ρ̃^(s) Q^(s)
end for
Output: {ρ^(t), γ^(t)}_{t=1}^T.

There are two main reasons for this replacement: (1) generating a random set S uniformly from the $\binom{P}{S}$ subsets of [P] takes a very long running time for most pairs (P, S); (2) the associated Rao-Blackwellized estimator usually has a smaller variance under ALG 2 than under ALG 1 at high P/S. See Section 4 for our simulation results.
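The following self-contained Python sketch (ours, not the authors' implementation) makes the control flow of ALG 2 and the estimator in (15) concrete. It assumes a user-supplied `cond_pip(j, gamma)` returning p(γ_j = 1 | γ_{−j}, D) and uses the choice η(γ_{−j}) = p(γ_j = 1 | γ_{−j}, D) from Section 4; a practical implementation would reuse conditional-PIP computations across steps instead of recomputing them.

```python
import numpy as np

def vc_wtgs(cond_pip, P, S, T, h=0.05, seed=0):
    """A minimal sketch of ALG 2 (VC-wTGS) with the Rao-Blackwellized estimator (15)."""
    rng = np.random.default_rng(seed)
    gamma = (rng.random(P) < h).astype(int)            # gamma^(0), gamma_j ~ Bern(h)
    rho_tilde = np.zeros(T)
    Q = np.zeros(T, dtype=int)
    cond = np.zeros((T, P))                            # conditional PIPs at gamma^(t)

    def pips_and_weights(g):
        pips = np.array([cond_pip(j, g) for j in range(P)])
        p_cur = np.where(g == 1, pips, 1.0 - pips)     # p(gamma_j | gamma_{-j}, D)
        weights = 0.5 * pips / p_cur                   # (1/2) eta / p(gamma_j | ., D)
        return pips, weights, weights.sum()

    for t in range(T):
        Q[t] = 1 if t == 0 else rng.binomial(1, S / P)     # Bernoulli gate: E[cost] = S
        if Q[t] == 1:
            _, weights, phi = pips_and_weights(gamma)      # P conditional PIPs at gamma^(t-1)
            i = rng.choice(P, p=weights / phi)             # i^(t) ~ f(. | gamma^(t-1))
            gamma = gamma.copy()
            gamma[i] = 1 - gamma[i]                        # deterministic flip
            pips, _, phi_new = pips_and_weights(gamma)     # re-evaluate at gamma^(t)
            rho_tilde[t] = 1.0 / phi_new                   # unnormalized weight phi^{-1}(gamma^(t))
            cond[t] = pips
        # if Q[t] == 0: gamma^(t) = gamma^(t-1); the iteration receives zero weight below

    rho = rho_tilde * Q
    rho /= rho.sum()
    return rho @ cond                                      # PIP estimates, Eq. (15)
```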
## 3.3 Theoretical Bounds for Algorithm 2

First, we prove the following result. The proof can be found in Appendix C.

**Lemma 1.** Let U and V be two positive random variables such that U/V ≤ M a.s. for some constant M. In addition, assume that on a set D with probability at least 1 − α, we have

$$|U-\mathbb{E}[U]|\leq\varepsilon\mathbb{E}[U],\tag{16}$$

$$|V-\mathbb{E}[V]|\leq\varepsilon\mathbb{E}[V],\tag{17}$$

for some 0 ≤ ε < 1. Then, it holds that

$$\mathbb{E}\left[\left|{\frac{U}{V}}-{\frac{\mathbb{E}[U]}{\mathbb{E}[V]}}\right|^{2}\right]\leq{\frac{4\varepsilon^{2}}{(1-\varepsilon)^{2}}}\left({\frac{\mathbb{E}[U]}{\mathbb{E}[V]}}\right)^{2}+\left[\max\left(M,{\frac{\mathbb{E}[U]}{\mathbb{E}[V]}}\right)\right]^{2}\alpha.\tag{18}$$

We also recall the following Hoeffding inequality for Markov chains:
**Lemma 2.** (Rao, 2018, Theorem 1.1) Let {Y_i}_{i=1}^∞ be a stationary Markov chain with state space [N], transition matrix A, stationary probability measure π, and averaging operator E_π, so that Y_1 is distributed according to π. Let λ = ∥A − E_π∥_{L_2(π)→L_2(π)} and let f_1, f_2, · · · , f_n : [N] → R be such that E[f_i(Y_i)] = 0 for all i and |f_i(ν)| ≤ a_i for all ν ∈ [N] and all i. Then for u ≥ 0,

$$\mathbb{P}\biggl[\biggl|\sum_{i=1}^{n}f_{i}(Y_{i})\biggr|\geq u\biggl(\sum_{i=1}^{n}a_{i}^{2}\biggr)^{1/2}\biggr]\leq2\exp\biggl(-\frac{u^{2}(1-\lambda)}{64e}\biggr).\tag{19}$$
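To see the role this inequality plays later (this is our own unpacking of the constants, for centered summands bounded in absolute value by 1, so that a_i = 1, as in Appendix D): taking n = T and u = \sqrt{64e\log T/(1-\lambda)} in (19) gives

$$\mathbb{P}\Bigg[\bigg|\sum_{i=1}^{T}f_{i}(Y_{i})\bigg|\geq\sqrt{\frac{64e\,T\log T}{1-\lambda}}\Bigg]\leq2\exp\bigg(-\frac{64e\log T}{1-\lambda}\cdot\frac{1-\lambda}{64e}\bigg)=\frac{2}{T},$$

so the empirical average $\frac{1}{T}\sum_{i=1}^{T}f_{i}(Y_{i})$ deviates from zero by at most $\sqrt{64e\log T/((1-\lambda)T)}$ except with probability 2/T. This is exactly the scale that appears in ε_0 in Lemma 5 and in the choice of ζ in Appendix D.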
Now, the following result can be shown.

**Lemma 3.** Let

$$\phi(\gamma):=\sum_{j\in[P]}{\frac{{\frac{1}{2}}\eta(\gamma_{-j})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})}}\tag{20}$$

and define

$$f(\gamma):=\phi(\gamma)p(\gamma|\mathcal{D}).\tag{21}$$

Then, under ALG 2, the sequence {γ^(t), Q^(t)}_{t=1}^T forms a reversible Markov chain with stationary distribution proportional to f(γ)q(Q), where q is the Bernoulli(S/P) distribution. This Markov chain has transition kernel K((γ, Q) → (γ′, Q′)) = K^*(γ → γ′)q(Q′), where

$$K^{*}(\gamma\to\gamma^{\prime})=\frac{S}{P}\sum_{j=1}^{P}f(j|\gamma)\delta(\gamma^{\prime}-\mathtt{flip}(\gamma|j))+\biggl(1-\frac{S}{P}\biggr)\delta(\gamma^{\prime}-\gamma).\tag{22}$$

In the classical wTGS (Zanella & Roberts, 2019), the sequence {γ^(t)}_{t=1}^T also forms a Markov chain. That Markov chain is different from the one in Lemma 3; however, the two Markov chains have the same stationary distribution, which is proportional to f(γ). See a detailed proof of Lemma 3 in Appendix B.
**Lemma 4.** For the Rao-Blackwellized estimator in (15), applied to the output sequence {ρ^(t), γ^(t)}_{t=1}^T of ALG 2, it holds that

$$E_{i,T}:=\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},{\mathcal{D}})\to \mathrm{PIP}(i)\tag{23}$$

as T → ∞.

Proof. By Lemma 3, {γ^(t), Q^(t)}_{t=1}^T forms a reversible Markov chain with stationary distribution f(γ)/Z_f · q(Q), where Z_f = Σ_γ f(γ). Hence, by the SLLN for Markov chains (Breiman, 1960), for any bounded function h, we have

$$\frac{1}{T}\sum_{t=1}^{T}\phi^{-1}(\gamma^{(t)})Q^{(t)}h(\gamma^{(t)})\to\mathbb{E}_{qf(\cdot)/Z_{f}}\big[\phi^{-1}(\gamma)h(\gamma)Q\big]\tag{24}$$

$$=\sum_{Q}q(Q)\sum_{\gamma}\frac{f(\gamma)}{Z_{f}}\phi^{-1}(\gamma)h(\gamma)Q\tag{25}$$

$$=\bigg(\sum_{Q}q(Q)Q\bigg)\bigg(\sum_{\gamma}\frac{f(\gamma)}{Z_{f}}\phi^{-1}(\gamma)h(\gamma)\bigg)\tag{26}$$

$$=\mathbb{E}_{q}[Q]\frac{1}{Z_{f}}\sum_{\gamma}p(\gamma|\mathcal{D})h(\gamma)\tag{27}$$

$$=\frac{S}{P}\frac{1}{Z_{f}}\sum_{\gamma}p(\gamma|\mathcal{D})h(\gamma),\tag{28}$$

where (27) follows from f(γ) = p(γ|D)ϕ(γ).

Similarly, we have

$$\frac{1}{T}\sum_{t=1}^T Q^{(t)}\phi^{-1}(\gamma^{(t)})\to\mathbb{E}_{qf(\cdot)/Z_f}\big[\phi^{-1}(\gamma)Q\big]\tag{29}$$

$$=\sum_Q q(Q)Q\sum_\gamma\frac{f(\gamma)}{Z_f}\phi^{-1}(\gamma)\tag{30}$$

$$=\mathbb{E}_q[Q]\sum_\gamma\frac{1}{Z_f}p(\gamma|\mathcal{D})\tag{31}$$

$$=\frac{S}{P}\frac{1}{Z_f},\tag{32}$$

where (31) also follows from f(γ) = p(γ|D)ϕ(γ).

From (28) and (32), we obtain

$$\frac{\frac{1}{T}\sum_{t=1}^{T}\phi^{-1}(\gamma^{(t)})Q^{(t)}h(\gamma^{(t)})}{\frac{1}{T}\sum_{t=1}^{T}Q^{(t)}\phi^{-1}(\gamma^{(t)})}\to\sum_{\gamma}p(\gamma|{\mathcal{D}})h(\gamma),\tag{33}$$

or equivalently

$$\sum_{t=1}^{T}\rho^{(t)}h(\gamma^{(t)})\to\sum_{\gamma}p(\gamma|{\mathcal D})h(\gamma)\tag{34}$$

as T → ∞.

Now, by setting h(γ) = p(γ_i = 1|γ_{−i}, D), from (34), we obtain

$$\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},{\mathcal{D}})\to{\mathrm{PIP}}(i)\tag{35}$$

for all i ∈ [P].

The following result bounds the variance of the PIP estimator at finite T.
**Lemma 5.** For any ε ∈ [0, 1], let ν and π be the initial and stationary distributions of the reversible Markov sequence {γ^(t), Q^(t)}. Define

$$\hat{\phi}(\gamma):=\frac{\phi^{-1}(\gamma)}{\max_{\gamma}\phi^{-1}(\gamma)}\tag{36}$$

and

$$\varepsilon_{0}=\frac{P}{\mathrm{PIP}(i)\,\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]\,S}\sqrt{\frac{64e\log T}{(1-\lambda_{\gamma,Q})T}}.\tag{37}$$

Then, we have

$$\mathbb{E}\Bigg[\Bigg|\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})-\mathrm{PIP}(i)\Bigg|^{2}\Bigg]\leq\frac{4\varepsilon_{0}^{2}}{(1-\varepsilon_{0})^{2}}\mathrm{PIP}^{2}(i)+\frac{4P}{S}\frac{1}{\min_{\gamma}\pi(\gamma)\,T}\to0\tag{38}$$

as T → ∞ for fixed P, S and the dataset. Here, π(γ) is the marginal distribution of π(γ, Q).

Proof. See Appendix D.

**Remark 6.** As in the proof of Lemma 3, we have π(γ) ∝ f(γ) = ϕ(γ)p(γ|D). Hence, it holds that

$$\min_{\gamma}\pi(\gamma)=\min_{\gamma}\frac{\phi(\gamma)p(\gamma|\mathcal{D})}{\sum_{\gamma}\phi(\gamma)p(\gamma|\mathcal{D})},\tag{39}$$

which does not depend on S.
Next, we provide a lower bound for 1 − λ_{γ,Q}. First, we recall the Dirichlet-form characterisation of the spectral gap.

**Definition 7.** Let f, g : Ω → R. The Dirichlet form associated with a reversible Markov chain Q on Ω is defined by

$$\mathcal{E}(f,g)=\langle(\mathbf{I}-\mathbf{Q})f,g\rangle_{\pi}\tag{40}$$

$$=\sum_{x\in\Omega}\pi(x)[f(x)-\mathbf{Q}f(x)]g(x)\tag{41}$$

$$=\sum_{x,y\in\Omega\times\Omega}\pi(x)Q(x,y)g(x)(f(x)-f(y)).\tag{42}$$

**Lemma 8.** (Diaconis & Saloff-Coste, 1993) (Variational characterisation) For a reversible Markov chain Q with state space Ω and stationary distribution π, it holds that

$$1-\lambda=\inf_{\mathbb{E}_{\pi}[g]=0,\ \mathbb{E}_{\pi}[g^{2}]=1}\mathcal{E}(g,g),\tag{43}$$

where E(g, g) := ⟨(I − Q)g, g⟩_π.

**Lemma 9.** The spectral gap 1 − λ_{γ,Q} of the reversible Markov chain {γ^(t), Q^(t)} satisfies

$$1-\lambda_{\gamma,Q}\geq{\frac{S}{P}}{\big(}1-\lambda_{P}{\big)}+1-{\frac{S}{P}}\geq1-{\frac{S}{P}},\tag{44}$$

where 1 − λ_P is the spectral gap of the reversible Markov chain {γ^(t)} of the wTGS algorithm (i.e., S = P).
See Appendix F for a proof of this lemma. By combining Lemma 4, Lemma 5, and Lemma 9, we arrive at the following theorem.

**Theorem 10.** For the variable-complexity subset wTGS-based estimator in (15) and a given dataset (X, Y), it holds that

$$E_{i,T}:=\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},{\cal D})\to \mathrm{PIP}(i)\tag{45}$$

as T → ∞, and

$$\mathbb{E}\Bigg[\Bigg|\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})-\mathrm{PIP}(i)\Bigg|^{2}\Bigg]=O\Bigg(\frac{\log T}{T}\Bigg(\frac{P}{S}\Bigg)^{2}\Bigg(\frac{\max_{\gamma}\phi(\gamma)}{\min_{\gamma}\phi(\gamma)}\Bigg)^{2}\Bigg),\tag{46}$$

where

$$\phi(\gamma)=\frac{1}{2}\sum_{j\in[P]}\frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})}.\tag{47}$$

Proof. First, (45) is shown in Lemma 4. Now, we show (46) by using Lemma 5 and Lemma 9.

Observe that

$$\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]=\mathbb{E}_{\pi}\left[\frac{\phi^{-1}(\gamma)}{\max_{\gamma}\phi^{-1}(\gamma)}\right]\geq\frac{\min_{\gamma}\phi(\gamma)}{\max_{\gamma}\phi(\gamma)}.\tag{48}$$

In addition, we have

$$\phi(\gamma)=\sum_{j\in[P]}\frac{\frac{1}{2}\eta(\gamma_{-j})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})}\tag{49}$$

$$=\frac{1}{2}\sum_{j\in[P]}\frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})}.\tag{50}$$

Now, note that

$$\frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})}=\begin{cases}1,&\gamma_{j}=1,\\ \frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}=0|\gamma_{-j},\mathcal{D})},&\gamma_{j}=0.\end{cases}\tag{51}$$
In Appendix E, we show how to estimate the conditional PIPs, i.e., p(γ_i|D, γ_{−i}), for the linear regression model. More specifically, we have

$$p(\gamma_{i}|\mathcal{D},\gamma_{-i})=\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\left(1+\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\right)^{-1}.\tag{52}$$

We can therefore estimate the odds p(γ_j = 1|γ_{−j}, D)/p(γ_j = 0|γ_{−j}, D) from the dataset. More specifically, let γ̃_1 be given by γ_{−i} with γ_i = 1 and γ̃_0 be given by γ_{−i} with γ_i = 0; then we can show that

$$\frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}=0|\gamma_{-j},\mathcal{D})}=\left(\frac{h}{1-h}\right)\sqrt{\tau\frac{\det(X_{i0}^{T}X_{i0}+\tau I)}{\det(X_{i1}^{T}X_{i1}+\tau I)}}\times\left(\frac{\|Y\|^{2}-\|\tilde{Y}_{i0}\|^{2}+\nu_{0}\lambda_{0}}{\|Y\|^{2}-\|\tilde{Y}_{i1}\|^{2}+\nu_{0}\lambda_{0}}\right)^{\frac{N+\nu_{0}}{2}}.\tag{53}$$

Here, $\|\tilde{Y}_{\gamma}\|^2=\tilde{Y}_{\gamma}^T\tilde{Y}_{\gamma}=Y^T X_{\gamma}(X_{\gamma}^T X_{\gamma}+\tau I)^{-1}X_{\gamma}^T Y$. Using this approach, if pre-computing X^T X is not possible, the computational complexity per conditional PIP is O(N|γ|² + |γ|³ + P|γ|²). Otherwise, if pre-computing X^T X is possible, the computational complexity per conditional PIP is O(|γ|³ + P|γ|²).
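As an illustration of (52)-(53) (our sketch; variable names and the helper structure are ours, and numerical safeguards are omitted), the log-odds and the resulting conditional PIP for the linear model can be computed as follows. A routine of this form, with the dataset bound in (e.g., via functools.partial), is what the `cond_pip` placeholder in the earlier sketches stands for.

```python
import numpy as np

def log_odds_include(i, gamma, X, Y, h, tau, nu0, lam0):
    """log of p(gamma_i = 1 | gamma_{-i}, D) / p(gamma_i = 0 | gamma_{-i}, D), per Eq. (53)."""
    N = X.shape[0]

    def fit_stats(g):
        """Return (log det(X_g^T X_g + tau I), ||Y_tilde_g||^2) for the active columns g."""
        Xg = X[:, g.astype(bool)]
        A = Xg.T @ Xg + tau * np.eye(Xg.shape[1])
        _, logdet = np.linalg.slogdet(A)
        b = Xg.T @ Y
        y_tilde_sq = b @ np.linalg.solve(A, b)   # Y^T X_g (X_g^T X_g + tau I)^{-1} X_g^T Y
        return logdet, y_tilde_sq

    g1 = gamma.copy(); g1[i] = 1                 # gamma_tilde_1
    g0 = gamma.copy(); g0[i] = 0                 # gamma_tilde_0
    logdet1, yt1 = fit_stats(g1)
    logdet0, yt0 = fit_stats(g0)
    yy = Y @ Y

    return (np.log(h / (1 - h))
            + 0.5 * (np.log(tau) + logdet0 - logdet1)
            + 0.5 * (N + nu0) * (np.log(yy - yt0 + nu0 * lam0)
                                 - np.log(yy - yt1 + nu0 * lam0)))

def cond_pip(i, gamma, X, Y, h=0.05, tau=1.0, nu0=2.0, lam0=1.0):
    """p(gamma_i = 1 | gamma_{-i}, D) via Eq. (52), computed from the log-odds."""
    lo = log_odds_include(i, gamma, X, Y, h, tau, nu0, lam0)
    return 1.0 / (1.0 + np.exp(-lo))
```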
**Remark 11.** As we can see in Appendix E, for the linear regression model in Section 2.2, if pre-computing X^T X is not possible, the computational complexity for a conditional PIP is O(N|γ|² + |γ|³ + P|γ|²). Otherwise, if pre-computing X^T X is possible, the computational complexity for a conditional PIP is O(|γ|³ + P|γ|²). Here, |γ| ≈ hP. Hence, the average computational complexity per MCMC iteration of our algorithm is O(S(N|γ|² + |γ|³ + P|γ|²)) or O(S(|γ|³ + P|γ|²)), depending on whether pre-computing X^T X is possible. To reduce the computational complexity, we can reduce S; equivalently, we are mainly interested in the case where P/S is large. This reduction in computational complexity is more meaningful if |γ| ≈ hP ≪ P, i.e., in sparse linear regression regimes. However, the variance of the associated Rao-Blackwellized estimator increases as S becomes small. Hence, there is a trade-off between the computational complexity per MCMC iteration and the variance of the Rao-Blackwellized estimator. Importantly, the newly-designed Rao-Blackwellized estimator converges to the PIPs for any value of S. In practice, the choice of S depends on the application and the available computational resources. We can choose S very small (e.g., S = 2) to obtain a low-complexity estimator with a low convergence rate, or choose S ≈ P for a high-complexity estimator with a high convergence rate. Furthermore, both our algorithm and Jankowiak's degenerate to wTGS (Zanella & Roberts, 2019) at S ≈ P.
## 4 Experiments

In this section, we show by simulation that the PIP estimator converges as T → ∞. In addition, we compare the variance of the associated Rao-Blackwellized estimators for VC-wTGS and subset wTGS on simulated and real datasets. To compute p(γ_i|γ_{−i}, Y), we use the same trick as (Zanella & Roberts, 2019, Appendix B.1) for the new setting; see our derivation of this posterior distribution in Appendix E. As in (Jankowiak, 2023), in ALG 1 and ALG 2 we choose

$$\eta(\gamma_{-i})=\mathbb{P}(\gamma_{i}=1|\gamma_{-i},{\mathcal{D}}).\tag{54}$$
## 4.1 Simulated Datasets

First, we perform a simulated experiment. Let X ∈ R^{N×P} be a realization of a multivariate (random) Gaussian matrix. We consider the case N = 100 and P = 200, and run T = 20000 iterations. Fig. 1 shows the number of conditional PIP computations per MCMC iteration over the T iterations. As we can see, our algorithm (Algorithm 2) has variable complexity: the number of conditional PIP computations per MCMC iteration is a random variable Y which takes values in {0, P}, with P(Y = P) = S/P. For Jankowiak's algorithm, the number of conditional PIP computations per MCMC iteration is always fixed and equal to S.

Fig. 2 shows that the Rao-Blackwellized estimator in (15) converges to the value of the PIP as T → ∞ for different values of S. Since the number of PIPs, P, is very large, we only run simulations for PIP(0) and PIP(1); their behavior is representative of the other PIPs. Since VC-wTGS converges very fast once T is large enough, the variance of VC-wTGS is very small in the long run. In Fig. 3, we plot the estimators of VC-wTGS, subset wTGS, and wTGS for estimating PIP(0). It can be seen that our estimator converges to the wTGS estimator faster than subset wTGS does. This also means that the variance of VC-wTGS is smaller than the variance of subset wTGS for the same sample complexity S.

![11_image_0.png](11_image_0.png)

Figure 1: Computational Complexity Evolution

![12_image_0.png](12_image_0.png)

Figure 2: VC-wTGS Rao-Blackwellized Estimators (ALG 2)

![12_image_1.png](12_image_1.png)

Figure 3: Convergence of Rao-Blackwellized Estimators
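To convey the flavour of this experiment (a sketch only: it reuses the hypothetical helpers `simulate_bvs_data`, `cond_pip`, and `vc_wtgs` sketched earlier, and uses far fewer iterations than the T = 20000 reported above), one could run:

```python
import numpy as np
from functools import partial

# Simulated design with N = 100, P = 200, as in Section 4.1 (fewer iterations for speed).
X, Y, gamma_true, _ = simulate_bvs_data(N=100, P=200, h=0.05, seed=1)

# Bind the dataset and hyperparameters into the conditional-PIP routine sketched above.
pip_fn = partial(cond_pip, X=X, Y=Y, h=0.05, tau=1.0, nu0=2.0, lam0=1.0)

pip_hat = vc_wtgs(pip_fn, P=200, S=20, T=2000, h=0.05, seed=1)
print("estimated PIP(0), PIP(1):", pip_hat[0], pip_hat[1])
print("truly included covariates:", np.flatnonzero(gamma_true))
```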
## 4.2 Real Datasets

In this simulation, we run ALG 2 on the MNIST dataset.

As in Fig. 1, Fig. 4 shows the number of conditional PIP computations per MCMC iteration over the T iterations. It shows that our algorithm has variable computational complexity per MCMC iteration, which differs from Jankowiak's algorithm. Fig. 5 plots PIP(0) and PIP(1) and the estimated variances of the Rao-Blackwellized estimator in (15) at different values of S. Here, PIP(0) and PIP(1) are defined in (9); they are the posterior inclusion probabilities that the components β_0 and β_1 affect the output. These plots show a trade-off between the computational complexity and the estimated variance for estimating PIP(0) and PIP(1).

![13_image_0.png](13_image_0.png)

Figure 4: Computational Complexity Evolution

The expected number of PIP computations is only ST in ALG 2, compared with TP in wTGS, if we run T MCMC iterations. However, we suffer an increase in variance: by Theorem 10, the variance is $O\big((P/S)^2 \frac{\log T}{T}\big)$ for a given dataset, i.e., it increases by at most a factor of (P/S)². For many applications, we do not need to estimate the PIPs exactly, so VC-wTGS can be used to reduce the computational complexity, especially when P is very large (millions of covariates). Fig. 6 shows that VC-wTGS outperforms subset wTGS (Jankowiak, 2023) at high values of P/S, which shows that our newly-designed Rao-Blackwellized estimator converges to the PIP faster than Jankowiak's estimator at high P/S.
## 5 Conclusion

This paper proposed a variable complexity wTGS for Bayesian Variable Selection which can improve the computational complexity of the well-known wTGS. Experiments show that our Rao-Blackwellized estimator can give a smaller variance than its counterpart associated with the subset-wTGS at high P/S.

![14_image_0.png](14_image_0.png)

Figure 5: The variance of VC-wTGS Rao-Blackwellized Estimators (ALG 2)

![14_image_1.png](14_image_1.png)

Figure 6: Comparing the variance between subset wTGS and VC-wTGS at S = 2.
## References

Christophe Andrieu, Nando de Freitas, Arnaud Doucet, and Michael I. Jordan. An introduction to MCMC for machine learning. *Machine Learning*, 50:5-43, 2004.

C. M. Bishop. *Pattern Recognition and Machine Learning*. Springer, 2006.

William M. Bolstad. *Understanding Computational Bayesian Statistics*. John Wiley, 2010.

L. Breiman. The strong law of large numbers for a class of Markov chains. *Annals of Mathematical Statistics*, 31:801-803, 1960.

Mónica F. Bugallo, Shanshan Xu, and Petar M. Djurić. Performance comparison of EKF and particle filtering methods for maneuvering targets. *Digit. Signal Process.*, 17:774-786, 2007.

R. Combes and M. Touati. Computationally efficient estimation of the spectral gap of a Markov chain. *Proceedings of the ACM on Measurement and Analysis of Computing Systems*, 3:1-21, 2019.

Persi Diaconis and Laurent Saloff-Coste. Comparison theorems for reversible Markov chains. *Annals of Applied Probability*, 3:696-730, 1993.

William J. Fitzgerald. Markov chain Monte Carlo methods with applications to signal processing. *Signal Process.*, 81:3-18, 2001.

Ankur Gupta and James B. Rawlings. Comparison of parameter estimation methods in stochastic chemical kinetic models: Examples in systems biology. *AIChE Journal*, 60(4):1253-1268, 2014.

Tim Hesterberg. Monte Carlo strategies in scientific computing. *Technometrics*, 44:403-404, 2002.

Martin Jankowiak. Bayesian variable selection in a million dimensions. In *International Conference on Artificial Intelligence and Statistics*, 2023.

Muhammad F. Kasim, A. F. A. Bott, Petros Tzeferacos, Donald Q. Lamb, Gianluca Gregori, and Sam M. Vinko. Retrieving fields from proton radiography without source profiles. *Physical Review E*, 100(3):033208, 2019.

Faming Liang, Chuanhai Liu, and Raymond J. Carroll. *Advanced Markov Chain Monte Carlo Methods: Learning from Past Samples*. 2010.

Daniel Paulin. Concentration inequalities for Markov chains by Marton couplings and spectral methods. *Electronic Journal of Probability*, 20(79):1-32, 2015.

Shravas Rao. A Hoeffding inequality for Markov chains. *Electronic Communications in Probability*, 2018.

Jesse Read, Luca Martino, and David Luengo. Efficient Monte Carlo methods for multi-dimensional learning with classifier chains. *Pattern Recognit.*, 47:1535-1546, 2012.

Christian P. Robert and George Casella. Monte Carlo statistical methods. *Technometrics*, 47:243, 2005.

Lan V. Truong. On linear model with Markov signal priors. In *AISTATS*, 2022.

Pekka Tuominen and Richard L. Tweedie. Markov chains with continuous components. *Proceedings of the London Mathematical Society*, s3-38(1):89-114, 1979.

Adrian G. Wills and Thomas Bo Schön. Sequential Monte Carlo: A unified review. *Annu. Rev. Control. Robotics Auton. Syst.*, 6:159-182, 2023.

G. Wolfer and A. Kontorovich. Estimating the mixing time of ergodic Markov chains. In *32nd Annual Conference on Learning Theory*, 2019.

Giacomo Zanella and Gareth O. Roberts. Scalable importance tempering and Bayesian variable selection. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 81, 2019.
## A Appendix

## B Proof of Lemma 3

The transition kernel for the sequence {γ^(t)} can be written as

$$K^{*}(\gamma\to\gamma^{\prime})=\frac{S}{P}\sum_{j=1}^{P}f(j|\gamma)\delta(\gamma^{\prime}-\mathtt{flip}(\gamma|j))+\bigg(1-\frac{S}{P}\bigg)\delta(\gamma^{\prime}-\gamma).\tag{55}$$

This implies that, for any pair (γ, γ′) such that γ′ = flip(γ|i) for some i ∈ [P], we have

$$K^{*}(\gamma\to\gamma^{\prime})=\frac{S}{P}\sum_{j=1}^{P}f(j|\gamma)\delta(\gamma^{\prime}-\mathtt{flip}(\gamma|j))\tag{56}$$

$$=\frac{S}{P}f(i|\gamma),\tag{57}$$

and, similarly, K^*(γ′ → γ) = (S/P) f(i|γ′).

Now, by ALG 2, we also have

$$f(i|\gamma)=\phi^{-1}(\gamma)\frac{\frac{1}{2}\eta(\gamma_{-i})}{p(\gamma_{i}|\gamma_{-i},\mathcal{D})}\tag{58}$$

and

$$f(i|\gamma^{\prime})=\phi^{-1}(\gamma^{\prime})\frac{\frac{1}{2}\eta(\gamma_{-i}^{\prime})}{p(\gamma_{i}^{\prime}|\gamma_{-i}^{\prime},\mathcal{D})}.\tag{59}$$

From (58) and (59) and γ_{−i} = γ′_{−i}, we obtain

$$\frac{K^{*}(\gamma\rightarrow\gamma^{\prime})}{K^{*}(\gamma^{\prime}\rightarrow\gamma)}=\frac{\frac{S}{P}f(i|\gamma)}{\frac{S}{P}f(i|\gamma^{\prime})}\tag{60}$$

$$=\frac{f(i|\gamma)}{f(i|\gamma^{\prime})}\tag{61}$$

$$=\frac{\phi(\gamma^{\prime})p(\gamma^{\prime}|\mathcal{D})}{\phi(\gamma)p(\gamma|\mathcal{D})}\tag{62}$$

$$=\frac{f(\gamma^{\prime})}{f(\gamma)}.\tag{63}$$

In addition, we also have K^*(γ → γ′) = K^*(γ′ → γ) = 0 if γ′ ≠ γ and γ′ ≠ flip(γ|i) for any i ∈ [P]. Furthermore, K^*(γ → γ′) = K^*(γ′ → γ) = 1 − S/P if γ = γ′.

By combining all these cases, it holds that

$$f(\gamma)K^{*}(\gamma\to\gamma^{\prime})=f(\gamma^{\prime})K^{*}(\gamma^{\prime}\to\gamma)\tag{64}$$

for all γ′, γ. This means that {γ^(t)}_{t=1}^T forms a reversible Markov chain with stationary distribution f(γ)/Z_f, where

$$Z_{f}=\sum_{\gamma}f(\gamma).\tag{65}$$

Since {Q^(t)}_{t=1}^T is an i.i.d. Bernoulli sequence with q(1) = S/P and independent of {γ^(t)}_{t=1}^T, {γ^(t), Q^(t)}_{t=1}^T forms a Markov chain with a transition kernel satisfying

$$K((\gamma,Q)\to(\gamma^{\prime},Q^{\prime}))=q(Q^{\prime})K^{*}(\gamma\to\gamma^{\prime}).\tag{66}$$

It follows from (66) that

$$q(Q)f(\gamma)/Z_{f}\,K((\gamma,Q)\to(\gamma^{\prime},Q^{\prime}))=[K^{*}(\gamma\to\gamma^{\prime})f(\gamma)/Z_{f}]\,q(Q)q(Q^{\prime})\tag{67}$$

for any pair (γ, Q) and (γ′, Q′).

Finally, from (64) and (67), we have

$$q(Q)f(\gamma)/Z_{f}\,K((\gamma,Q)\to(\gamma^{\prime},Q^{\prime}))=q(Q^{\prime})f(\gamma^{\prime})/Z_{f}\,K((\gamma^{\prime},Q^{\prime})\to(\gamma,Q)).\tag{68}$$

This means that {γ^(t), Q^(t)}_{t=1}^T forms a reversible Markov chain with stationary distribution q(Q)f(γ)/Z_f.
## C Proof of Lemma 1

Observe that, with probability at least 1 − α, we have

$$(1-\varepsilon)\mathbb{E}[U]\leq U\leq(1+\varepsilon)\mathbb{E}[U],\tag{69}$$

$$(1-\varepsilon)\mathbb{E}[V]\leq V\leq(1+\varepsilon)\mathbb{E}[V].\tag{70}$$

Hence, we have

$$\left({\frac{1-\varepsilon}{1+\varepsilon}}\right){\frac{\mathbb{E}[U]}{\mathbb{E}[V]}}\leq{\frac{U}{V}}\leq\left({\frac{1+\varepsilon}{1-\varepsilon}}\right){\frac{\mathbb{E}[U]}{\mathbb{E}[V]}}.\tag{71}$$

From (71), with probability at least 1 − α, we have

$$\left|{\frac{U}{V}}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|\leq\frac{2\varepsilon}{1-\varepsilon}\bigg(\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\bigg).\tag{72}$$

It follows from (72) that

$$\mathbb{E}\left[\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|^{2}\right]=\mathbb{E}\left[\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|^{2}\Bigg|\,D\right]\mathbb{P}(D)+\mathbb{E}\left[\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|^{2}\Bigg|\,D^{c}\right]\mathbb{P}(D^{c})\tag{73}$$

$$\leq\frac{4\varepsilon^{2}}{(1-\varepsilon)^{2}}\left(\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right)^{2}+\left[\max\left(M,\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right)\right]^{2}\alpha.\tag{74}$$
## D Proof Of Lemma 5
|
710 |
+
|
711 |
+
First, by definition of ϕˆ(γ) in (36) we have
|
712 |
+
|
713 |
+
$$\rho^{(t)}=\frac{\hat{\phi}(\gamma^{(t)})Q^{(t)}}{\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}}.\tag{1}$$
|
714 |
+
|
715 |
+
In addition, observe that
|
716 |
+
|
717 |
+
$$0\leq{\hat{\phi}}(\gamma)\leq1.$$
|
718 |
+
0 ≤ ϕˆ(γ) ≤ 1. (76)
|
719 |
+
Now, let $g:\{0,1\}^{P}\rightarrow\mathbb{R}_{+}$ such that $g(\gamma)\leq1$ for all $\gamma$. Then, by applying Lemma 2 and a change of measure, with probability $1-2\frac{d\nu}{d\pi}\exp\big(-\frac{\zeta^{2}T(1-\lambda)}{64e}\big)$, we have

$$\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\bigg]\bigg|\leq\zeta\tag{77}$$

for any ζ > 0.

Similarly, by using Lemma 2, with probability at least $1-2\frac{d\nu}{d\pi}\exp\big(-\frac{\zeta^{2}T(1-\lambda)}{64e}\big)$, it holds that

$$\frac{1}{T}\left|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\left[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\right]\right|\leq\zeta.\tag{78}$$

By using the union bound, with probability at least $1-4\frac{d\nu}{d\pi}\exp\big(-\frac{\zeta^{2}T(1-\lambda)}{64e}\big)$, it holds that

$$\begin{aligned}\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\bigg]\bigg|&\leq\zeta,\tag{79}\\ \frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})\bigg]\bigg|&\leq\zeta.\tag{80}\end{aligned}$$
Now, by setting $\zeta=\zeta_{0}:=\frac{\varepsilon}{T}\min\big\{\mathbb{E}_{\pi}\big[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\big],\ \mathbb{E}_{\pi}\big[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})\big]\big\}$ for some ε > 0 (to be chosen later), with probability at least $1-4\frac{d\nu}{d\pi}\exp\big(-\frac{\zeta_{0}^{2}T(1-\lambda)}{64e}\big)$, it holds that

$$\begin{aligned}\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\bigg]\bigg|&\leq\frac{\varepsilon}{T}\,\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\bigg],\tag{81}\\ \frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\bigg]\bigg|&\leq\frac{\varepsilon}{T}\,\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\bigg].\tag{82}\end{aligned}$$

Furthermore, by setting

$$\begin{aligned}U&:=\frac{1}{T}\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)},\tag{83}\\ V&:=\frac{1}{T}\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)},\tag{84}\end{aligned}$$

we have

$$\begin{aligned}\frac{U}{V}&=\frac{\sum_{t=1}^{T}\phi^{-1}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}}{\sum_{t=1}^{T}\phi^{-1}(\gamma^{(t)})Q^{(t)}}\tag{85}\\ &=\sum_{t=1}^{T}\rho^{(t)}g(\gamma^{(t)})\tag{86}\end{aligned}$$

and

$$M:=\sup(U/V)\leq1\tag{87}$$

since $\sum_{t=1}^{T}\rho^{(t)}=1$ and $g(\gamma^{(t)})\leq1$ for all $\gamma^{(t)}$.

From (80)-(87), by Lemma 1, we have

$$\mathbb{E}\bigg[\bigg|\sum_{t=1}^{T}\rho^{(t)}g(\gamma^{(t)})-\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg|^{2}\bigg]\leq\frac{4\varepsilon^{2}}{(1-\varepsilon)^{2}}\bigg(\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg)^{2}+\bigg[\max\bigg(1,\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg)\bigg]^{2}\alpha,\tag{88}$$

where $\alpha:=4\frac{d\nu}{d\pi}\exp\Big(-\frac{\varepsilon^{2}T(1-\lambda_{\gamma,Q})\min\{\mathbb{E}_{\pi}[U],\mathbb{E}_{\pi}[V]\}^{2}}{64e}\Big)$ and $\lambda_{\gamma,Q}$ is the second-largest eigenvalue of the reversible Markov chain $\{\gamma^{(t)},Q^{(t)}\}$ (whose stationary distribution is $\pi$).

Now, by setting

$$\varepsilon=\varepsilon_{0}=\frac{1}{\min\{\mathbb{E}_{\pi}[U],\mathbb{E}_{\pi}[V]\}}\sqrt{\frac{64e\log T}{(1-\lambda_{\gamma,Q})T}},\tag{89}$$

we have $\alpha=4\frac{d\nu}{d\pi}\frac{1}{T}$. Then, we obtain

$$\mathbb{E}\bigg[\bigg|\sum_{t=1}^{T}\rho^{(t)}g(\gamma^{(t)})-\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg|^{2}\bigg]\leq\frac{4\varepsilon_{0}^{2}}{(1-\varepsilon_{0})^{2}}\bigg(\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg)^{2}+\bigg[\max\bigg(1,\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg)\bigg]^{2}\alpha.\tag{90}$$
Now, observe that

$$\begin{aligned}\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}&=\frac{\mathbb{E}_{\pi}\big[g(\gamma)Q\hat{\phi}(\gamma)\big]}{\mathbb{E}_{\pi}\big[\hat{\phi}(\gamma)Q\big]}\tag{91}\\ &=\frac{\mathbb{E}_{\pi}\big[g(\gamma)Q\phi^{-1}(\gamma)\big]}{\mathbb{E}_{\pi}\big[\phi^{-1}(\gamma)Q\big]}.\tag{92}\end{aligned}$$

On the other hand, by Lemma 3, we have $\pi(\gamma,Q)=\frac{q(Q)f(\gamma)}{Z_{f}}$, where $Z_{f}:=\sum_{\gamma}f(\gamma)$ and $f(\gamma)=p(\gamma|\mathcal{D})\phi(\gamma)$.

It follows that

$$\begin{aligned}\mathbb{E}_{\pi}\left[g(\gamma)Q\phi^{-1}(\gamma)\right]&=\mathbb{E}_{q(Q)f(\gamma)/Z_{f}}\left[g(\gamma)Q\phi^{-1}(\gamma)\right]\tag{93}\\ &=\sum_{\gamma}\sum_{Q}g(\gamma)Q\phi^{-1}(\gamma)\frac{f(\gamma)}{Z_{f}}q(Q)\tag{94}\\ &=\frac{1}{Z_{f}}\sum_{\gamma}\sum_{Q}g(\gamma)q(Q)Qp(\gamma|\mathcal{D})\tag{95}\\ &=\frac{1}{Z_{f}}\mathbb{E}_{p(\gamma|\mathcal{D})}\left[g(\gamma)\right]\mathbb{E}_{q}[Q].\tag{96}\end{aligned}$$

Similarly, we have

$$\begin{aligned}\mathbb{E}_{\pi}\left[\phi^{-1}(\gamma)Q\right]&=\mathbb{E}_{q(Q)f(\gamma)/Z_{f}}\left[\phi^{-1}(\gamma)Q\right]\tag{97}\\ &=\sum_{Q}\sum_{\gamma}\phi^{-1}(\gamma)Q\frac{f(\gamma)}{Z_{f}}q(Q)\tag{98}\\ &=\frac{1}{Z_{f}}\bigg(\sum_{\gamma}p(\gamma|\mathcal{D})\bigg)\mathbb{E}_{q}[Q].\tag{99}\end{aligned}$$
From (92), (96) and (99), we obtain

$$\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}=\mathbb{E}_{p(\gamma|\mathcal{D})}\left[g(\gamma)\right].\tag{100}$$

For the given problem, by setting g(γ) = p(γi = 1|γ−i, D), from (100), we have

$$\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}=\mathsf{PIP}(i).\tag{101}$$

In addition, we have

$$\begin{aligned}\mathbb{E}_{\pi}[V]&=\mathbb{E}_{\pi}\big[\hat{\phi}(\gamma)Q\big]\tag{102}\\ &=\sum_{\gamma,Q}\hat{\phi}(\gamma)Q\frac{f(\gamma)}{Z_{f}}q(Q)\tag{103}\\ &=\bigg(\sum_{\gamma}\hat{\phi}(\gamma)\frac{f(\gamma)}{Z_{f}}\bigg)\bigg(\sum_{Q}Qq(Q)\bigg)\tag{104}\\ &=\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]\mathbb{E}_{Q}[Q]\tag{105}\\ &=\frac{S}{P}\mathbb{E}_{\pi}[\hat{\phi}(\gamma)].\tag{106}\end{aligned}$$

Hence, we obtain

$$\begin{aligned}\min\{\mathbb{E}_{\pi}[U],\mathbb{E}_{\pi}[V]\}&=\mathbb{E}_{\pi}[V]\min\left\{1,\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\right\}\tag{107}\\ &=\mathbb{E}_{\pi}[V]\min\left\{1,\mathsf{PIP}(i)\right\}\tag{108}\\ &=\mathbb{E}_{\pi}[V]\,\mathsf{PIP}(i)\tag{109}\\ &=\frac{S}{P}\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]\,\mathsf{PIP}(i).\tag{110}\end{aligned}$$

From (90), (101), and (110), we have

$$\mathbb{E}\left[\left|\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})-\mathsf{PIP}(i)\right|^{2}\right]\leq\frac{4\varepsilon_{0}^{2}}{(1-\varepsilon_{0})^{2}}\mathsf{PIP}^{2}(i)+4\frac{d\nu}{d\pi}\frac{1}{T},\tag{111}$$
and

$$\varepsilon_{0}=\frac{P}{\mathsf{PIP}(i)\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]S}\sqrt{\frac{64e\log T}{(1-\lambda_{\gamma,Q})T}}.\tag{112}$$

Now, observe that

$$\begin{aligned}\frac{d\nu}{d\pi}(\gamma,Q)&=\frac{p_{\gamma_{1},Q_{1}}(\gamma,Q)}{\pi(\gamma,Q)}\tag{113}\\ &\leq\frac{1}{\pi(\gamma,Q)}\tag{114}\\ &=\frac{1}{\pi(\gamma)q(Q)}\tag{115}\\ &\leq\frac{P}{S}\frac{1}{\min_{\gamma}\pi(\gamma)}.\tag{116}\end{aligned}$$

By combining (111) and (116), we have

$$\mathbb{E}\left[\left|\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})-\mathsf{PIP}(i)\right|^{2}\right]\leq\frac{4\varepsilon_{0}^{2}}{(1-\varepsilon_{0})^{2}}\mathsf{PIP}^{2}(i)+\frac{4P}{S}\frac{1}{\min_{\gamma}\pi(\gamma)T}.\tag{117}$$
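Putting (75), (101), and (117) together, the quantity whose error is bounded in (117) is a weighted Rao-Blackwellized estimate of PIP(i). The following is a minimal sketch (not from the paper) of how that estimate could be assembled from stored sampler output; `rho` are the weights from (75) and `cond_prob` holds the conditional probabilities p(γ_i^(t) = 1 | γ_{-i}^(t), D), both hypothetical arrays.

```python
import numpy as np

def weighted_pip_estimate(rho, cond_prob):
    """Estimate PIP(i) by sum_t rho^(t) * p(gamma_i^(t) = 1 | gamma_{-i}^(t), D), cf. (117)."""
    rho = np.asarray(rho, dtype=float)
    cond_prob = np.asarray(cond_prob, dtype=float)
    assert np.isclose(rho.sum(), 1.0)   # the weights from (75) are self-normalized
    return float(rho @ cond_prob)

# Hypothetical usage with T = 4 iterates.
print(weighted_pip_estimate(rho=[0.1, 0.4, 0.3, 0.2], cond_prob=[0.8, 0.9, 0.7, 0.95]))
```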
## E Derive P(Γi|D, Γ−i)
Observe that

$$p(\gamma_{i}|\mathcal{D},\gamma_{-i})=\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\bigg(1+\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\bigg)^{-1}.\tag{118}$$

In addition, we have

$$\begin{aligned}\frac{p(\gamma_{i}=1|\mathcal{D},\gamma_{-i})}{p(\gamma_{i}=0|\mathcal{D},\gamma_{-i})}&=\frac{p(\gamma_{i}=1,\mathcal{D}|\gamma_{-i})}{p(\gamma_{i}=0,\mathcal{D}|\gamma_{-i})}\tag{119}\\ &=\frac{p(\gamma_{i}=1|\gamma_{-i},X)}{p(\gamma_{i}=0|\gamma_{-i},X)}\,\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)}\tag{120}\\ &=\frac{p(\gamma_{i}=1)}{p(\gamma_{i}=0)}\,\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)}\tag{121}\\ &=\frac{h}{1-h}\,\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)}.\tag{122}\end{aligned}$$
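In practice, (118) and (122) are usually evaluated on the log scale for numerical stability; a minimal sketch (not from the paper; the function and variable names are hypothetical) is:

```python
import math

def inclusion_prob_from_log_odds(log_likelihood_ratio, h):
    """Combine the prior odds h/(1-h) with the likelihood ratio as in (122),
    then map odds to a probability as in (118), working in log space."""
    log_odds = math.log(h) - math.log(1.0 - h) + log_likelihood_ratio
    # p = odds / (1 + odds), which equals the logistic sigmoid of log_odds
    return 1.0 / (1.0 + math.exp(-log_odds))

print(inclusion_prob_from_log_odds(log_likelihood_ratio=2.3, h=0.05))
```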
On the other hand, for any tuple $\gamma=(\gamma_{1},\gamma_{2},\cdots,\gamma_{P})$ such that $\gamma_{i}=1$ (so $|\gamma|\geq1$), we have

$$p(Y|\gamma_{i}=1,\gamma_{-i},\beta_{\gamma},\sigma_{\gamma}^{2},X)=\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg).\tag{123}$$

It follows that

$$\begin{aligned}p(Y|\gamma_{i}=1,\gamma_{-i},X)&=\int_{\beta_{\gamma}}\int_{\sigma_{\gamma}^{2}=0}^{\infty}\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)p(\beta_{\gamma}|\gamma_{i}=1,\gamma_{-i})p(\sigma_{\gamma}^{2}|\gamma_{i}=1,\gamma_{-i})\,d\beta_{\gamma}\,d\sigma_{\gamma}^{2}\tag{124}\\ &=\int_{\sigma_{\gamma}^{2}=0}^{\infty}\mathrm{InvGamma}\Big(\tfrac{1}{2}\nu_{0},\tfrac{1}{2}\nu_{0}\lambda_{0}\Big)\int_{\beta_{\gamma}}\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi\tau^{-1}}\right)^{|\gamma|}}\exp\bigg(-\frac{\|\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}\tau^{-1}}\bigg)\,d\beta_{\gamma}\,d\sigma_{\gamma}^{2}.\tag{125}\end{aligned}$$

Now, observe that
$$\begin{aligned}\|Y-X_{\gamma}\beta_{\gamma}\|^{2}+\tau\|\beta_{\gamma}\|^{2}&=(Y-X_{\gamma}\beta_{\gamma})^{T}(Y-X_{\gamma}\beta_{\gamma})+\tau\beta_{\gamma}^{T}\beta_{\gamma}\tag{126}\\ &=Y^{T}Y-2Y^{T}X_{\gamma}\beta_{\gamma}+\beta_{\gamma}^{T}X_{\gamma}^{T}X_{\gamma}\beta_{\gamma}+\tau\beta_{\gamma}^{T}\beta_{\gamma}\tag{127}\\ &=Y^{T}Y-2Y^{T}X_{\gamma}\beta_{\gamma}+\beta_{\gamma}^{T}(X_{\gamma}^{T}X_{\gamma}+\tau I)\beta_{\gamma}.\tag{128}\end{aligned}$$

Now, consider the eigenvalue decomposition (EVD) of the positive definite matrix $X_{\gamma}^{T}X_{\gamma}+\tau I$ (note that τ > 0):

$$X_{\gamma}^{T}X_{\gamma}+\tau I=U^{T}\Lambda U,\tag{129}$$

where Λ is the diagonal matrix consisting of all positive eigenvalues of $X_{\gamma}^{T}X_{\gamma}+\tau I$. Let

$$\begin{aligned}\tilde{\beta}_{\gamma}&:=\sqrt{\Lambda}U\beta_{\gamma},\tag{130}\\ \tilde{Y}_{\gamma}&:=\sqrt{\Lambda^{-1}}UX_{\gamma}^{T}Y.\tag{131}\end{aligned}$$

Then, we have

$$\begin{aligned}\|Y-X_{\gamma}\beta_{\gamma}\|^{2}+\tau\|\beta_{\gamma}\|^{2}&=Y^{T}Y-2Y^{T}X_{\gamma}\beta_{\gamma}+\beta_{\gamma}^{T}(X_{\gamma}^{T}X_{\gamma}+\tau I)\beta_{\gamma}\tag{132}\\ &=Y^{T}Y-2Y^{T}X_{\gamma}\sqrt{\Lambda^{-1}}U^{T}\tilde{\beta}_{\gamma}+\tilde{\beta}_{\gamma}^{T}\tilde{\beta}_{\gamma}\tag{133}\\ &=Y^{T}Y-2\tilde{Y}_{\gamma}^{T}\tilde{\beta}_{\gamma}+\tilde{\beta}_{\gamma}^{T}\tilde{\beta}_{\gamma}\tag{134}\\ &=\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\tilde{Y}_{\gamma}^{T}\tilde{Y}_{\gamma}-2\tilde{Y}_{\gamma}^{T}\tilde{\beta}_{\gamma}+\tilde{\beta}_{\gamma}^{T}\tilde{\beta}_{\gamma}\tag{135}\\ &=\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\|\tilde{Y}_{\gamma}-\tilde{\beta}_{\gamma}\|^{2}.\tag{136}\end{aligned}$$
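The completing-the-square identity (132)-(136) is easy to sanity-check numerically; below is a small NumPy verification (not from the paper) using `numpy.linalg.eigh`, where the eigenvector matrix returned by NumPy is the transpose of the U used above.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, tau = 20, 3, 2.0
X = rng.normal(size=(N, k))        # plays the role of X_gamma
Y = rng.normal(size=N)
beta = rng.normal(size=k)          # plays the role of beta_gamma

M = X.T @ X + tau * np.eye(k)      # X_gamma^T X_gamma + tau I, cf. (129)
w, V = np.linalg.eigh(M)           # M = V diag(w) V^T, so U = V^T in the notation of (129)
U = V.T
beta_t = np.sqrt(w) * (U @ beta)            # tilde beta_gamma, (130)
Y_t = (1.0 / np.sqrt(w)) * (U @ (X.T @ Y))  # tilde Y_gamma, (131)

lhs = np.sum((Y - X @ beta) ** 2) + tau * np.sum(beta ** 2)
rhs = Y @ Y - Y_t @ Y_t + np.sum((Y_t - beta_t) ** 2)   # right-hand side of (136)
assert np.isclose(lhs, rhs)
```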
Hence, we have

$$\begin{aligned}d\beta_{\gamma}&=\det(U^{T}\Lambda^{-1/2})\,d\tilde{\beta}_{\gamma}\tag{137}\\ &=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,d\tilde{\beta}_{\gamma}.\tag{138}\end{aligned}$$

Hence, we have

$$\begin{aligned}&\int_{\beta_{\gamma}}\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi\tau^{-1}}\right)^{|\gamma|}}\exp\bigg(-\frac{\|\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}\tau^{-1}}\bigg)\,d\beta_{\gamma}\tag{139}\\ &\quad=\int_{\tilde{\beta}_{\gamma}}\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\exp\bigg(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\|\tilde{Y}_{\gamma}-\tilde{\beta}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi\tau^{-1}}\right)^{|\gamma|}}\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,d\tilde{\beta}_{\gamma}\tag{140}\\ &\quad=\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\tau^{|\gamma|/2}\exp\bigg(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}.\tag{141}\end{aligned}$$
By combining (125) and (141), we obtain

$$\begin{aligned}p(Y|\gamma_{i}=1,\gamma_{-i},X)&=\int_{\beta_{\gamma}}\int_{\sigma_{\gamma}^{2}=0}^{\infty}\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)p(\beta_{\gamma}|\gamma_{i}=1,\gamma_{-i})p(\sigma_{\gamma}^{2}|\gamma_{i}=1,\gamma_{-i})\,d\beta_{\gamma}\,d\sigma_{\gamma}^{2}\tag{142}\\ &=\int_{\sigma_{\gamma}^{2}=0}^{\infty}\mathrm{InvGamma}\Big(\tfrac{1}{2}\nu_{0},\tfrac{1}{2}\nu_{0}\lambda_{0}\Big)\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\tau^{|\gamma|/2}\exp\bigg(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,d\sigma_{\gamma}^{2}\tag{143}\\ &=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\tau^{|\gamma|/2}(2\pi)^{-N/2}\int_{\sigma_{\gamma}^{2}=0}^{\infty}\mathrm{InvGamma}\Big(\tfrac{1}{2}\nu_{0},\tfrac{1}{2}\nu_{0}\lambda_{0}\Big)(\sigma_{\gamma}^{2})^{-N/2}\exp\bigg(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\,d\sigma_{\gamma}^{2}\tag{144}\\ &=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\tau^{|\gamma|/2}(2\pi)^{-N/2}\int_{\sigma_{\gamma}^{2}=0}^{\infty}\frac{(\tfrac{1}{2}\lambda_{0}\nu_{0})^{\tfrac{1}{2}\nu_{0}}}{\Gamma(\tfrac{1}{2}\nu_{0})}(1/\sigma_{\gamma}^{2})^{\tfrac{1}{2}\nu_{0}+1}\exp\big(-\tfrac{1}{2}\nu_{0}\lambda_{0}/\sigma_{\gamma}^{2}\big)(\sigma_{\gamma}^{2})^{-N/2}\exp\bigg(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\,d\sigma_{\gamma}^{2}\tag{145}\\ &=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\tau^{|\gamma|/2}(2\pi)^{-N/2}\frac{(\tfrac{1}{2}\lambda_{0}\nu_{0})^{\tfrac{1}{2}\nu_{0}}}{\Gamma(\tfrac{1}{2}\nu_{0})}\int_{\sigma_{\gamma}^{2}=0}^{\infty}(1/\sigma_{\gamma}^{2})^{\tfrac{1}{2}\nu_{0}+1+N/2}\exp\bigg(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\nu_{0}\lambda_{0}}{2\sigma_{\gamma}^{2}}\bigg)\,d\sigma_{\gamma}^{2}\tag{146}\\ &=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\tau^{|\gamma|/2}(2\pi)^{-N/2}\frac{(\tfrac{1}{2}\lambda_{0}\nu_{0})^{\tfrac{1}{2}\nu_{0}}}{\Gamma(\tfrac{1}{2}\nu_{0})}\Gamma\Big(\frac{N+\nu_{0}}{2}\Big)\bigg(\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\nu_{0}\lambda_{0}}{2}\bigg)^{-\frac{N+\nu_{0}}{2}}.\tag{147}\end{aligned}$$
Let $\tilde{\gamma}_{1}$ be given by $\gamma_{-i}$ with $\gamma_{i}=1$, and let $\tilde{\gamma}_{0}$ be given by $\gamma_{-i}$ with $\gamma_{i}=0$. It follows that

$$\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)}=\sqrt{\tau}\sqrt{\frac{\det(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I)}{\det(X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I)}}\left(\frac{\|Y\|^{2}-\|\tilde{Y}_{\tilde{\gamma}_{0}}\|^{2}+\nu_{0}\lambda_{0}}{\|Y\|^{2}-\|\tilde{Y}_{\tilde{\gamma}_{1}}\|^{2}+\nu_{0}\lambda_{0}}\right)^{\frac{N+\nu_{0}}{2}}.\tag{148}$$

On the other hand, we have

$$\begin{aligned}\|\tilde{Y}_{\gamma}\|^{2}&=\tilde{Y}_{\gamma}^{T}\tilde{Y}_{\gamma}\tag{149}\\ &=Y^{T}X_{\gamma}(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1}X_{\gamma}^{T}Y.\tag{150}\end{aligned}$$

Hence, we finally have

$$\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)}=\sqrt{\tau\,\frac{\det(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I)}{\det(X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I)}\Big(\frac{S_{\tilde{\gamma}_{0}}}{S_{\tilde{\gamma}_{1}}}\Big)^{N+\nu_{0}}},\tag{151}$$

where

$$S_{\gamma}:=Y^{T}Y-Y^{T}X_{\gamma}(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1}X_{\gamma}^{T}Y+\nu_{0}\lambda_{0}.\tag{152}$$

Based on this, we can estimate

$$p(\gamma_{i}|\mathcal{D},\gamma_{-i})=\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\bigg(1+\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\bigg)^{-1}.\tag{153}$$
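As an illustration, here is a small NumPy sketch (not from the paper; names are hypothetical) that evaluates the ratio (151) directly from (150) and (152), and then converts it into p(γ_i = 1 | D, γ_{-i}) via (122) and (153).

```python
import numpy as np

def S_value(X_g, Y, tau, nu0, lam0):
    """S_gamma of (152) for the design matrix restricted to the included columns."""
    k = X_g.shape[1]
    G = X_g.T @ X_g + tau * np.eye(k)
    b = X_g.T @ Y
    return Y @ Y - b @ np.linalg.solve(G, b) + nu0 * lam0

def likelihood_ratio(X0, X1, Y, tau, nu0, lam0, N):
    """The ratio (151); X0 and X1 are the designs under gamma_i = 0 and gamma_i = 1."""
    det0 = np.linalg.det(X0.T @ X0 + tau * np.eye(X0.shape[1]))
    det1 = np.linalg.det(X1.T @ X1 + tau * np.eye(X1.shape[1]))
    return np.sqrt(tau * det0 / det1 * (S_value(X0, Y, tau, nu0, lam0) /
                                        S_value(X1, Y, tau, nu0, lam0)) ** (N + nu0))

def conditional_prob(X0, X1, Y, tau, nu0, lam0, h):
    """p(gamma_i = 1 | D, gamma_{-i}) via (122) and (153)."""
    N = len(Y)
    odds = h / (1.0 - h) * likelihood_ratio(X0, X1, Y, tau, nu0, lam0, N)
    return odds / (1.0 + odds)

# Hypothetical usage: variable 2 is the candidate, variables 0 and 1 are already included.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
Y = rng.normal(size=50)
print(conditional_prob(X0=X[:, :2], X1=X[:, :3], Y=Y, tau=1.0, nu0=1.0, lam0=1.0, h=0.1))
```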
Denote the set of included variables in $\tilde{\gamma}_{0}$ as $I=\{j:\tilde{\gamma}_{0,j}=1\}$. Define $F=\big(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I\big)^{-1}$, $\nu=X^{T}Y$ and $\nu_{\tilde{\gamma}_{0}}=(\nu_{j})_{j\in I}$. Also define $A=X^{T}X$ and $a_{i}=(A_{ji})_{j\in I}$. Then, by using the same arguments as (Zanella & Roberts, 2019, Appendix B1), we can show that
$$S(\tilde{\gamma}_{1})=S(\tilde{\gamma}_{0})-d_{i}\big(\nu_{\tilde{\gamma}_{0}}^{T}Fa_{i}-\nu_{i}\big)^{2},\tag{154}$$

where $d_{i}=(A_{ii}+\tau-a_{i}^{T}Fa_{i})^{-1}$. In addition, we can compute $a_{i}^{T}Fa_{i}$ by using the Cholesky decomposition of $F=LL^{T}$ and

$$\begin{aligned}a_{i}^{T}Fa_{i}&=\|a_{i}^{T}L\|^{2}\tag{155}\\ &=\sum_{j\in I}(BL)_{ij}^{2},\tag{156}\end{aligned}$$
where B is the p × |γ| matrix made of the columns of A corresponding to variables included in γ.
In addition, we have

$$X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I=\begin{pmatrix}X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I&a_{i}\\ a_{i}^{T}&A_{ii}+\tau\end{pmatrix}.\tag{157}$$

Hence, by using Schur's formula for the determinant of a block matrix, it is easy to see that

$$\frac{\det(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I)}{\det(X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I)}=d_{i}.\tag{158}$$
Using this algorithm, if pre-computing $X^{T}X$ is not possible, the computational complexity per conditional PIP is $O(N|\gamma|^{2}+|\gamma|^{3}+P|\gamma|^{2})$. Otherwise, if pre-computing $X^{T}X$ is possible, the computational complexity per conditional PIP is $O(|\gamma|^{3}+P|\gamma|^{2})$.
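To illustrate how (152), (154) and (158) combine into a per-variable computation, here is a minimal NumPy sketch (not from the paper; variable names are hypothetical). For clarity it forms F by a direct inverse rather than the Cholesky route of (155)-(156), so it is not the optimized version whose complexity is quoted above.

```python
import numpy as np

def conditional_inclusion_prob(A, nu, yTy, I, i, tau, nu0, lam0, h, N):
    """p(gamma_i = 1 | gamma_{-i}, D) from precomputed A = X^T X, nu = X^T Y, yTy = Y^T Y.

    I is the list of variables included in gamma_tilde_0 (gamma with gamma_i = 0),
    and i (not in I) is the candidate variable."""
    I = list(I)
    F = np.linalg.inv(A[np.ix_(I, I)] + tau * np.eye(len(I)))  # F of the text
    a_i = A[I, i]                                              # a_i = (A_{ji})_{j in I}
    d_i = 1.0 / (A[i, i] + tau - a_i @ F @ a_i)                # Schur complement, cf. (158)
    nu_I = nu[I]
    S0 = yTy - nu_I @ F @ nu_I + nu0 * lam0                    # S(gamma_tilde_0), cf. (152)
    S1 = S0 - d_i * (nu_I @ F @ a_i - nu[i]) ** 2              # rank-one update (154)
    lr = np.sqrt(tau * d_i) * (S0 / S1) ** ((N + nu0) / 2.0)   # ratio (151), det ratio = d_i by (158)
    odds = h / (1.0 - h) * lr                                  # prior odds times likelihood ratio, cf. (122)
    return odds / (1.0 + odds)                                 # (118)/(153)

# Hypothetical usage: candidate variable 0, variables 1 and 2 currently included.
X = np.random.default_rng(0).normal(size=(100, 6))
Y = X[:, 0] + 0.1 * np.random.default_rng(1).normal(size=100)
print(conditional_inclusion_prob(A=X.T @ X, nu=X.T @ Y, yTy=Y @ Y, I=[1, 2], i=0,
                                 tau=1.0, nu0=1.0, lam0=1.0, h=0.1, N=100))
```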
## F Proof Of Lemma 9
From Lemma 8 and the fact that $\{\gamma^{(t)},Q^{(t)}\}$ forms a reversible Markov chain with transition kernel $K((\gamma,Q)\rightarrow(\gamma^{\prime},Q^{\prime}))=K^{*}(\gamma\rightarrow\gamma^{\prime})q(Q^{\prime})$, we have

$$\begin{aligned}1-\lambda_{\gamma,Q}&=\inf_{g(\gamma,Q):\mathbb{E}_{\pi}[g]=0,\mathbb{E}_{\pi}[g^{2}]=1}\langle g,g\rangle_{\pi}-\langle Kg,g\rangle\tag{159}\\ &=1-\sup_{g(\gamma,Q):\mathbb{E}_{\pi}[g]=0,\mathbb{E}_{\pi}[g^{2}]=1}\langle Kg,g\rangle\tag{160}\\ &=1-\sup_{g(\gamma,Q):\mathbb{E}_{\pi}[g]=0,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,Q}Kg(\gamma,Q)g(\gamma,Q)\pi(\gamma,Q)\tag{161}\\ &=1-\sup_{g(\gamma,Q):\mathbb{E}_{\pi}[g]=0,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,Q}\sum_{\gamma^{\prime},Q^{\prime}}K((\gamma,Q)\rightarrow(\gamma^{\prime},Q^{\prime}))g(\gamma^{\prime},Q^{\prime})g(\gamma,Q)\pi(\gamma,Q)\tag{162}\\ &=1-\frac{S}{P}\sup_{g(\gamma,Q):\mathbb{E}_{\pi}[g]=0,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,Q}\sum_{\gamma^{\prime},Q^{\prime}}K^{*}(\gamma\rightarrow\gamma^{\prime})q(Q^{\prime})g(\gamma^{\prime},Q^{\prime})g(\gamma,Q)\pi(\gamma,Q)\tag{163}\\ &=1-\frac{S}{P}\sup_{g(\gamma,Q):\mathbb{E}_{\pi}[g]=0,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,Q}\sum_{\gamma^{\prime},Q^{\prime}}K^{*}(\gamma\rightarrow\gamma^{\prime})\frac{f(\gamma)}{Z_{f}}q(Q)g(\gamma^{\prime},Q^{\prime})g(\gamma,Q)q(Q^{\prime})\tag{164}\\ &=1-\frac{S}{P}\sup_{g(\gamma,Q):\mathbb{E}_{\pi}[g]=0,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,\gamma^{\prime}}K^{*}(\gamma\rightarrow\gamma^{\prime})\frac{f(\gamma)}{Z_{f}}\sum_{Q,Q^{\prime}}g(\gamma^{\prime},Q^{\prime})g(\gamma,Q)q(Q)q(Q^{\prime})\tag{165}\\ &=1-\frac{S}{P}\sup_{g(\gamma,Q):\mathbb{E}_{\pi}[g]=0,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,\gamma^{\prime}}K^{*}(\gamma\rightarrow\gamma^{\prime})\pi(\gamma)\bigg(\sum_{Q}g(\gamma,Q)q(Q)\bigg)\bigg(\sum_{Q^{\prime}}g(\gamma^{\prime},Q^{\prime})q(Q^{\prime})\bigg)\tag{166}\\ &=1-\frac{S}{P}\sup_{g(\gamma,Q):\mathbb{E}_{\pi}[g]=0,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,\gamma^{\prime}}K^{*}(\gamma\rightarrow\gamma^{\prime})\pi(\gamma)h(\gamma)h(\gamma^{\prime}),\tag{167}\end{aligned}$$

where

$$\begin{aligned}\pi(\gamma)&=\frac{f(\gamma)}{Z_{f}},\tag{168}\\ Z_{f}&=\sum_{\gamma}f(\gamma),\tag{169}\\ h(\gamma)&:=\sum_{Q}g(\gamma,Q)q(Q).\tag{170}\end{aligned}$$

Observe that

$$\begin{aligned}\mathbb{E}_{\pi}[h(\gamma)]&=\sum_{\gamma}h(\gamma)\pi(\gamma)\tag{171}\\ &=\sum_{\gamma}\sum_{Q}g(\gamma,Q)q(Q)\pi(\gamma)\tag{172}\\ &=\sum_{\gamma,Q}g(\gamma,Q)\pi(\gamma,Q)\tag{173}\\ &=\mathbb{E}_{\pi}[g(\gamma,Q)]\tag{174}\\ &=0.\tag{175}\end{aligned}$$

On the other hand, we also have

$$\begin{aligned}\mathbb{E}_{\pi}\big[h^{2}(\gamma)\big]&=\sum_{\gamma}\bigg(\sum_{Q}g(\gamma,Q)q(Q)\bigg)^{2}\pi(\gamma)\tag{176}\\ &\leq\sum_{\gamma}\bigg(\sum_{Q}g(\gamma,Q)^{2}q(Q)\bigg)\pi(\gamma)\tag{177}\\ &=\sum_{\gamma,Q}g(\gamma,Q)^{2}\pi(\gamma,Q)\tag{178}\\ &=\mathbb{E}_{\pi}\big[g(\gamma,Q)^{2}\big]\tag{179}\\ &=1,\tag{180}\end{aligned}$$

where (177) follows from the convexity of the function $x^{2}$ on $[0,\infty)$.

From (175), (180), and (167), we obtain

$$1-\lambda_{\gamma,Q}\geq1-\frac{S}{P}\sup_{h(\gamma):\mathbb{E}_{\pi}[h]=0,\mathbb{E}_{\pi}[h^{2}]\leq1}\sum_{\gamma,\gamma^{\prime}}K^{*}(\gamma\to\gamma^{\prime})\pi(\gamma)h(\gamma)h(\gamma^{\prime}).\tag{181}$$

Now, note that $\mathbb{E}_{\pi}[h]=0$ is equivalent to $h\perp_{\pi}1$. Let $|\Omega|=2^{P+1}=:n$ and let $h_{1},h_{2},\cdots,h_{n}$ be eigenfunctions of $K^{*}$ corresponding to the eigenvalues ordered decreasingly, $\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}$; they are orthogonal since $K^{*}$ is self-adjoint. Set $h_{1}=1$. Since $\|h\|_{2,\pi}=1$ and $h\perp_{\pi}1$, we have $h=\sum_{j=2}^{n}a_{j}h_{j}$, because $h$ is perpendicular to $h_{1}$ and hence can be represented by the remaining eigenvectors. By taking the $\ell_{2}$-norm on both sides, we have $\sum_{j=2}^{n}a_{j}^{2}\leq1$ since $\langle h_{i},h_{j}\rangle_{\pi}=0$ for $i\neq j$ and $\langle h_{i},h_{i}\rangle_{\pi}=\|h_{i}\|_{2,\pi}^{2}=1$. Thus,

$$\begin{aligned}\sup_{h:\mathbb{E}_{\pi}[h]=0,\mathbb{E}_{\pi}[h^{2}]\leq1}\sum_{\gamma,\gamma^{\prime}}K^{*}(\gamma\to\gamma^{\prime})\pi(\gamma)h(\gamma)h(\gamma^{\prime})&\leq\max_{a_{2},a_{3},\cdots,a_{n}:\sum_{j=2}^{n}a_{j}^{2}\leq1}\sum_{j=2}^{n}a_{j}^{2}\lambda_{j}\tag{182}\\ &\leq\lambda_{2}\sum_{j=2}^{n}a_{j}^{2}\tag{183}\\ &=\lambda_{2},\tag{184}\end{aligned}$$

where $\sum_{j=2}^{n}a_{j}^{2}\leq1$ and $\lambda_{j}\in\mathrm{spec}(K^{*})$ with $\lambda_{2}\geq\lambda_{3}\geq\cdots\geq\lambda_{n}$. Hence, from (184), we obtain

$$\begin{aligned}1-\lambda_{\gamma,Q}&\geq1-\frac{S}{P}\lambda_{2}\tag{185}\\ &=\frac{S}{P}(1-\lambda_{2})+1-\frac{S}{P}\tag{186}\\ &\geq1-\frac{S}{P}.\tag{187}\end{aligned}$$
jFi4dXEOdN/jFi4dXEOdN_meta.json
ADDED
@@ -0,0 +1,25 @@
{
    "languages": null,
    "filetype": "pdf",
    "toc": [],
    "pages": 26,
    "ocr_stats": {
        "ocr_pages": 3,
        "ocr_failed": 0,
        "ocr_success": 3,
        "ocr_engine": "surya"
    },
    "block_stats": {
        "header_footer": 24,
        "code": 0,
        "table": 0,
        "equations": {
            "successful_ocr": 227,
            "unsuccessful_ocr": 22,
            "equations": 249
        }
    },
    "postprocess_stats": {
        "edit": {}
    }
}