Matrix Denoising with Doubly Heteroscedastic Noise:
Fundamental Limits and Optimal Spectral Methods
Yihan Zhang
Institute of Science and Technology Austria
zephyr.z798@gmail.com
Marco Mondelli
Institute of Science and Technology Austria
marco.mondelli@ist.ac.at
Abstract
We study the matrix denoising problem of estimating the singular vectors of a
rank-1 signal corrupted by noise with both column and row correlations. Existing
works are either unable to pinpoint the exact asymptotic estimation error or, when
they do so, the resulting approaches (e.g., based on whitening or singular value
shrinkage) remain vastly suboptimal. On top of this, most of the literature has
focused on the special case of estimating the left singular vector of the signal
when the noise only possesses row correlation (one-sided heteroscedasticity). In
contrast, our work establishes the information-theoretic and algorithmic limits of
matrix denoising with doubly heteroscedastic noise. We characterize the exact
asymptotic minimum mean square error, and design a novel spectral estimator
with rigorous optimality guarantees: under a technical condition, it attains positive
correlation with the signals whenever information-theoretically possible and, for
one-sided heteroscedasticity, it also achieves the Bayes-optimal error. Numerical
experiments demonstrate the significant advantage of our theoretically principled
method over the state of the art. The proofs draw connections with statistical
physics and approximate message passing, departing drastically from standard
random matrix theory techniques.
1 Introduction
Matrix denoising is a central primitive in statistics and machine learning, and the problem is to
recover a signal X ∈ Rn×d from an observation A = X + W corrupted by additive noise W. This
finds applications across multiple domains of sciences, e.g., imaging [21, 60], biology [13, 42] and
astronomy [67, 5]. When X has low rank and W has i.i.d. entries, A is the standard model for principal
component analysis, typically referred to as the Johnstone spiked covariance model [38]. When
n, d are both large and proportional, which corresponds to the most sample-efficient regime, its
Bayes-optimal limits are well understood [48], and it has been established how to achieve them
efficiently [53]. Minimax/non-asymptotic guarantees are also available in special cases, such as
sparse PCA [17], Gaussian mixtures [69] and certain joint scalings of (n, d) [54].
However, in most applications, noise is highly structured and correlated, thereby calling for more
realistic assumptions on W than having i.i.d. entries. A recent line of work addresses this concern
by studying matrix denoising with heteroscedastic noise [1, 66, 29, 40, 23], resting on two basic
ideas: whitening and singular value shrinkage. Whitening refers to multiplying the data matrix by
the square root of the inverse covariance, in order to reduce the model to one with i.i.d. noise; and
singular value shrinkage retains the singular vectors of the data while deflating the singular values to
correct for the noise. Though the exact asymptotic performance of these algorithms has been derived
[66, 29, 40, 23], their optimality is yet to be determined from a Bayesian standpoint. In fact, we will
prove that whitening and shrinkage are not the correct way to approach Bayes optimality.
Preprint. Under review.
arXiv:2405.13912v1 [math.ST] 22 May 2024
Main contributions.
We focus on the prototypical model A = X + W, where X = (λ/n) u∗v∗⊤ is a
rank-1 signal, λ is the signal-to-noise ratio (SNR), and W = Ξ^{1/2} W̃ Σ^{1/2} is doubly heteroscedastic
noise. Here u∗, v∗ follow i.i.d. priors; W̃ contains i.i.d. Gaussian entries; the covariance matrices
Ξ, Σ capture column and row correlations; and we consider the typical high-dimensional regime in
which n, d are both large and proportional. Our main results are summarized below.
1. We design an efficient spectral estimator to recover u∗, v∗, and we provide a precise asymp-
totic analysis of its performance, see Theorem 5.1. This estimator is given by the top singular
vectors of a matrix obtained by carefully pre-processing A, see (5.3).
2. When the priors of u∗, v∗ are standard Gaussian, we show in Corollary 5.2 that the spectral
estimator above is optimal in the following sense: (i) under a technical condition, it achieves
the optimal weak recovery threshold, namely its mean square error is non-trivial as soon
as this is information-theoretically possible; (ii) it achieves the Bayes-optimal error for
u∗ (resp. v∗) when Ξ (resp. Σ) is the identity. These optimality guarantees follow from
rigorously obtaining the asymptotic minimum mean square error (MMSE) for the estimation
of the whitened signals Ξ^{-1/2}u∗ and Σ^{-1/2}v∗, see Theorem 4.2.
Our spectral estimator only involves matrix multiplication and computing principal singular vectors.
Practically, this can be efficiently done using standard SVD algorithms or power iteration [44].
For both one-sided and double heteroscedasticity, numerical experiments in Figures 2 and 3 show
significant advantage of our spectral estimator for moderate SNRs over HeteroPCA [72] and shrinkage-
based methods, i.e., Whiten-Shrink-reColor [40, 41], OptShrink [56], and ScreeNOT [24].
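As a concrete illustration of the power-iteration route mentioned above, the following is our own minimal sketch (not the method of [44] nor the authors' code); the matrix and its dimensions are arbitrary.

```python
import numpy as np

# Power iteration for the top singular-vector pair of a matrix M -- the only
# nontrivial primitive the spectral estimator needs.
def top_singular_pair(M, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = M @ v
        u /= np.linalg.norm(u)
        v = M.T @ u
        v /= np.linalg.norm(v)
    return u, v, u @ M @ v          # left/right vectors and sigma_1 estimate

M = np.random.default_rng(1).standard_normal((50, 30))
u, v, s1 = top_singular_pair(M)
print(abs(s1 - np.linalg.svd(M, compute_uv=False)[0]))  # tiny
```

The convergence rate is governed by the ratio of the top two singular values, i.e., exactly the spectral gap established in Theorem 5.1 below.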
Proof techniques.
We take a completely different route from classical approaches in statistics
and random matrix theory (e.g., whitening and shrinkage), and instead exploit tools from statistical
physics and the theory of approximate message passing. In particular, the MMSE for the whitened
signals Ξ^{-1/2}u∗, Σ^{-1/2}v∗ is obtained via an interpolation argument [9, 48, 49]. This result allows us
to derive the weak recovery threshold for estimating the true signals u∗, v∗. Moreover, for one-sided
heteroscedasticity, this MMSE coincides with that for estimating the true signal on the homoscedastic
side. Evaluating the Bayes-optimal estimators requires solving high-dimensional integrals that are
computationally intractable. To circumvent this issue, we propose an efficient spectral method that
still enjoys optimality guarantees. Its design and analysis draw connections with a family of iterative
algorithms called approximate message passing (AMP) [10, 28].
2 Related work
Research on matrix denoising in the homoscedastic case (Ξ = In, Σ = Id) has a rich history, and in
random matrix theory properties of the spectrum and eigenspaces of A have been studied exhaustively.
Most prominently, the BBP phase transition phenomenon [4] (and its finite-sample counterpart [57])
unveils a threshold of the SNR λ above which a pair of outlier singular value and singular vector
emerge. Under i.i.d. priors, the asymptotic Bayes-optimal estimation error has been derived [48, 49],
rigorously justifying predictions from statistical physics [43]. The proof uses the interpolation method
due to Guerra [31], originally developed in the context of mean-field spin glasses. Besides low-rank
matrix estimation, this method (including its adaptive variant [9] and the Aizenman–Sims–Starr
scheme [2]) has also been applied to a range of problems, including spiked tensor estimation [45],
generalized linear models [8], stochastic block models [71] and group synchronization [70].
Moving to the heteroscedastic case, an active line of work concerns optimal singular value shrinkage
methods [40, 29, 41, 66, 56, 23]. These methods can be regarded as a special family of rotationally
invariant estimators, which apply a univariate function η: R≥0 → R to each empirical singular value.
An example widely employed by practitioners is the thresholding function ηθ(y) = y1{y > θ}
[24]. In the presence of noise heteroscedasticity, most of these results are based on whitening [39].
Another model of noise heterogeneity common in the literature takes W = W̃ ∘ Δ^{∘1/2}, where W̃
has i.i.d. Gaussian entries, Δ is a deterministic block matrix with fixed (i.e., constant with respect to
n, d) number of blocks, and ◦ denotes the element-wise product. This means that the entries of the
noise are independent but non-identically distributed, and they follow the variance profile ∆. The
corresponding low-rank perturbation A, known as a spiked inhomogeneous matrix, has attracted
attention from both the information-theoretic [11, 63, 32] and the algorithmic sides [34, 46, 58].
Spiked inhomogeneous matrices have some connections with the model considered in this paper: if
∆ has rank 1, such A can be realized by taking Ξ, Σ to be diagonal with suitable block structures.
Finally, non-asymptotic results for the heteroscedastic and the inhomogeneous models have been
derived in varying generality in [72, 78, 20, 1, 16]. We highlight that our paper is the first to establish
information-theoretic and algorithmic limits for doubly heteroscedastic noise.
Our characterization of the spectral estimator relies on an AMP algorithm that converges to it by
performing power iteration. AMP refers to a family of iterative procedures, whose performance in
the high-dimensional limit is precisely characterized by a low-dimensional deterministic recursion
called state evolution [10, 14]. Originally introduced for compressed sensing [25], AMP algorithms
have been developed for various settings, including low-rank estimation [53, 27, 6] and inference in
generalized linear models [61, 62, 68]. Beyond statistical estimation, AMP proves its versatility as
both an efficient algorithm and a proof technique for studying e.g. posterior sampling [55], spectral
universality [26], first order methods with random data [19], mismatched estimation [7], spectral
estimators for generalized linear models [75, 76] and their combination with linear estimators [50].
3 Problem setup
Consider the following rank-1 rectangular matrix estimation problem with doubly heteroscedastic
noise where we observe
A = (λ/n) u∗v∗⊤ + W ∈ R^{n×d},    (3.1)
and aim to estimate u∗, v∗. The following assumptions are imposed throughout the paper. The
dimensions n, d → ∞ obey the proportional scaling n/d → δ ∈ (0, ∞), where δ is the aspect ratio.
The SNR λ ∈ [0, ∞) is a known constant (relative to n, d). The signals (u∗, v∗) ∼ P^{⊗n} ⊗ Q^{⊗d}
have i.i.d. priors, where P, Q are distributions on R with mean 0 and variance 1. The unknown noise
matrix has the form W = Ξ^{1/2} W̃ Σ^{1/2} ∈ R^{n×d}, with W̃_{i,j} i.i.d. ∼ N(0, 1/n) independent of (u∗, v∗).
The covariances Ξ ∈ R^{n×n}, Σ ∈ R^{d×d} are known, deterministic,¹ strictly positive definite and satisfy
lim_{n→∞} (1/n) Tr(Ξ) = lim_{d→∞} (1/d) Tr(Σ) = 1.    (3.2)
Their empirical spectral distributions (ESD) converge (as n, d → ∞ s.t. n/d → δ) weakly to the
laws of the random variables Ξ and Σ. Furthermore, ∥Ξ∥2, ∥Σ∥2 are uniformly bounded over d. The
supports of Ξ, Σ are compact subsets of (0, ∞). For all ε > 0, there exists d0 ∈ N s.t. for all d ≥ d0,
supp(ESD(Ξ)) ⊂ supp(Ξ) + [−ε, ε],    supp(ESD(Σ)) ⊂ supp(Σ) + [−ε, ε].    (3.3)
The trace assumption (3.2) on the covariances is for normalization purposes since the values of the
traces, if not 1, can be absorbed into λ. The support assumption (3.3) excludes outliers in the spectra
of covariances which may contribute to undesirable spikes in A [66].
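To make the setup concrete, here is a minimal sampler for this observation model (our own sketch, not the authors' code); the Toeplitz Ξ and the two-point diagonal Σ are illustrative choices satisfying the trace normalization (3.2).

```python
import numpy as np

# Sample one instance of A = (lambda/n) u* v*^T + Xi^{1/2} W~ Sigma^{1/2},
# with W~_{ij} i.i.d. N(0, 1/n), as in Section 3.
rng = np.random.default_rng(0)
n, d, lam = 400, 200, 2.0                       # aspect ratio delta = n/d = 2

u_star = rng.standard_normal(n)                 # prior P = N(0, 1)
v_star = rng.standard_normal(d)                 # prior Q = N(0, 1)

idx = np.arange(n)
Xi = 0.5 ** np.abs(idx[:, None] - idx[None, :])             # Toeplitz, Tr(Xi)/n = 1
Sigma = np.diag(np.where(np.arange(d) < d // 2, 0.5, 1.5))  # Tr(Sigma)/d = 1

W_tilde = rng.standard_normal((n, d)) / np.sqrt(n)
L = np.linalg.cholesky(Xi)         # L L^T = Xi, so L W~ has the law of Xi^{1/2} W~
W = L @ W_tilde @ np.sqrt(Sigma)   # Sigma is diagonal, so its sqrt is entry-wise

A = (lam / n) * np.outer(u_star, v_star) + W
print(A.shape)                     # (400, 200)
```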
4 Information-theoretic limits
In this section, we switch to an equivalent rescaled model
Y := √n A = √(γ/n) u∗v∗⊤ + Ξ^{1/2} Z Σ^{1/2} ∈ R^{n×d},    (4.1)
where γ := λ^2 and Z = √n W̃ contains i.i.d. elements Z_{i,j} ∼ N(0, 1). Abusing terminology, we
refer to γ as the SNR of Y. Define also α := 1/δ ∈ (0, ∞) so that d/n → α. The scaling of the
parameters in (4.1) turns out to be more convenient for presenting the results in this section. Results
for Y can be easily translated to A by a change of variables.
Let û∗ := Ξ^{-1/2}u∗ and v̂∗ := Σ^{-1/2}v∗ denote the whitened signals. The main result of this section
is Theorem 4.2, which characterizes the matrix minimum mean square error (MMSE) associated to
the estimation of û∗(v̂∗)⊤, û∗(û∗)⊤ and v̂∗(v̂∗)⊤, via the corresponding Bayes-optimal estimators:
MMSE_n(γ) := (1/(nd)) E[ ‖û∗(v̂∗)⊤ − E[û∗(v̂∗)⊤ | Y]‖_F^2 ],    (4.2)
¹All our results hold verbatim if Ξ, Σ are random matrices independent of each other and of u∗, v∗, W̃.
MMSE^u_n(γ) := (1/n^2) E[ ‖û∗(û∗)⊤ − E[û∗(û∗)⊤ | Y]‖_F^2 ],    (4.3)
MMSE^v_n(γ) := (1/d^2) E[ ‖v̂∗(v̂∗)⊤ − E[v̂∗(v̂∗)⊤ | Y]‖_F^2 ].    (4.4)
Our characterization involves a pair of parameters (q∗_u, q∗_v) ∈ R^2_{≥0} defined as the largest solution to
q_u = E[ αγq_v Ξ^{-2} / (1 + αγq_v Ξ^{-1}) ],    q_v = E[ γq_u Σ^{-2} / (1 + γq_u Σ^{-1}) ].    (4.5)
The proposition below, proved in Appendix A, justifies the existence of the solution to (4.5) and
identifies when a non-trivial solution emerges.
Proposition 4.1. The fixed point equation (4.5) always has a trivial solution (0, 0). There exists a
non-trivial solution (q∗_u, q∗_v) ∈ R^2_{>0} if and only if
αγ^2 E[Σ^{-2}] E[Ξ^{-2}] > 1,    (4.6)
in which case the non-trivial solution is unique.
We are now ready to state our main result on the MMSE.
Theorem 4.2. Assume P = Q = N(0, 1). For almost every γ > 0,
lim_{n→∞} MMSE_n(γ) = E[Ξ^{-1}] E[Σ^{-1}] − q∗_u q∗_v,    (4.7)
lim_{n→∞} MMSE^u_n(γ) = E[Ξ^{-1}]^2 − (q∗_u)^2,    lim_{n→∞} MMSE^v_n(γ) = E[Σ^{-1}]^2 − (q∗_v)^2.    (4.8)
We note that
lim_{n→∞} (1/(nd)) E[ ‖û∗(v̂∗)⊤‖_F^2 ] = lim_{n→∞} (1/(nd)) E[ ‖û∗‖_2^2 ] E[ ‖v̂∗‖_2^2 ] = E[Ξ^{-1}] E[Σ^{-1}],    (4.9)
where the last step follows from Proposition G.2. This quantity represents the trivial error in the
estimation of û∗(v̂∗)⊤, which is achieved by the all-0 estimator. Analogous considerations hold for
û∗(û∗)⊤ and v̂∗(v̂∗)⊤, for which the trivial estimation error is E[Ξ^{-1}]^2 and E[Σ^{-1}]^2, respectively.
Thus, Proposition 4.1 and Theorem 4.2 identify (4.6) as the condition for non-trivial estimation, and
the smallest γ that satisfies (4.6) gives the weak recovery threshold.
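Concretely, rearranging (4.6) gives the threshold in closed form, γ_weak = (α E[Σ^{-2}] E[Ξ^{-2}])^{-1/2}. The sketch below (ours) evaluates it under illustrative two-point spectral laws.

```python
import numpy as np

# Weak recovery threshold from (4.6): the smallest gamma with
# alpha * gamma^2 * E[Sigma^{-2}] * E[Xi^{-2}] > 1.
xi, sigma, alpha = np.array([0.5, 1.5]), np.array([0.5, 1.5]), 0.5

gamma_weak = (alpha * np.mean(sigma**-2) * np.mean(xi**-2)) ** -0.5
lam_weak = np.sqrt(gamma_weak)     # threshold in terms of the SNR lambda
print(gamma_weak, lam_weak)
```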
We show below that the weak recovery threshold is the same for the estimation of the true signals
u∗v∗⊤, u∗u∗⊤ and v∗v∗⊤. In this case, since the signal priors are Gaussian, using the same passages
as in (4.9) one has that the trivial estimation error for u∗v∗⊤, u∗u∗⊤ and v∗v∗⊤ is always equal to 1.
Corollary 4.3. Assume P = Q = N(0, 1). The MMSE associated to the estimation of u∗v∗⊤ is
non-trivial, i.e.,
lim_{n→∞} (1/(nd)) E[ ‖u∗v∗⊤ − E[u∗v∗⊤ | Y]‖_F^2 ] < 1    (4.10)
if and only if (4.6) holds. The same result holds for the MMSE of u∗u∗⊤ and v∗v∗⊤.
Proof strategy.
To derive the characterizations in Theorem 4.2, we write the posterior distribution
of u∗, v∗ given Y in a Gibbs form, i.e., its density is the exponential of a Hamiltonian normalized by
a partition function. The interpolation argument relates the log-partition function (also referred to as
the ‘free energy’) of the posterior to that of the posteriors of two Gaussian location models. Since
i.i.d. Gaussianity is key to this approach, the challenge is to handle noise covariances. Our idea is
to incorporate the covariances into the priors. In terms of the Hamiltonian, the model is equivalent
to the estimation of the whitened signals Ξ^{-1/2}u∗, Σ^{-1/2}v∗, whose priors have covariances, in the
presence of i.i.d. Gaussian noise. We then manage to carry out the interpolation argument for the
equivalent model and evaluate the free energy of the corresponding Gaussian location models.
Specifically, let us start by writing down the expression of the posterior distribution after setting
up some notation. For u ∈ R^n, v ∈ R^d, let û := Ξ^{-1/2}u, v̂ := Σ^{-1/2}v. Define the densities
dP̂(û) := √det(Ξ) dP^{⊗n}(Ξ^{1/2}û),    dQ̂(v̂) := √det(Σ) dQ^{⊗d}(Σ^{1/2}v̂),
where the determinant factors ensure that the integrals equal 1. With P = Q = N(0, 1), we have
P̂ = N(0_n, Ξ^{-1}), Q̂ = N(0_d, Σ^{-1}), and from Bayes’ rule the posterior of (u∗, v∗) given Y is
dP(u, v | Y ) = (1/Z_n(γ)) exp( H_n(Ξ^{-1/2}u, Σ^{-1/2}v) ) dP^{⊗n}(u) dQ^{⊗d}(v),    (4.11)
where the Hamiltonian and the partition function are given respectively by
H_n(û, v̂) := √(γ/n) û⊤Z v̂ + (γ/n) û⊤û∗ v̂⊤v̂∗ − (γ/(2n)) ‖û‖_2^2 ‖v̂‖_2^2,    (4.12)
Z_n(γ) := ∫∫ exp( H_n(Ξ^{-1/2}u, Σ^{-1/2}v) ) dP^{⊗n}(u) dQ^{⊗d}(v) = ∫∫ exp(H_n(û, v̂)) dP̂(û) dQ̂(v̂).    (4.13)
Define the free energy as
F_n(γ) := (1/n) E[log Z_n(γ)].    (4.14)
The major technical step is to characterize Fn(γ) in the large n limit in terms of a bivariate functional
F introduced below. This is the core component to derive the MMSE characterization.
For a positive random variable Σ subject to the conditions in Section 3, let
ψ_Σ(γ) := (1/2) ( γ E[Σ^{-1}] − E[log(1 + γΣ^{-1})] ).    (4.15)
As shown in Appendix B, ψ_Σ(γ) is the limiting free energy of a Gaussian channel, in which one
wishes to estimate x∗ ∈ R^n from the observation Y = √γ x∗ + Σ^{1/2}Z corrupted by anisotropic
Gaussian noise with covariance Σ. Using (4.15), let us define the replica symmetric potential F:
F(q_u, q_v) := ψ_Ξ(αγq_v) + α ψ_Σ(γq_u) − (αγ/2) q_u q_v,
and the set of critical points of F:
C(γ, α) := { (q_u, q_v) ∈ R^2_{≥0} : ∂_1F(q_u, q_v) = 0, ∂_2F(q_u, q_v) = 0 }
         = { (q_u, q_v) ∈ R^2_{≥0} : q_u = 2ψ′_Ξ(αγq_v), q_v = 2ψ′_Σ(γq_u) }    (4.16)
         = { (q_u, q_v) ∈ R^2_{≥0} : (q_u, q_v) solves (4.5) },
where the last equality is a direct calculation of ψ′_Ξ, ψ′_Σ. The following result, proved in Appendix C,
shows that the limit of F_n(γ) is given by a dimension-free variational problem involving F(q_u, q_v).
Theorem 4.4 (Free energy). Assume P = Q = N(0, 1). Then, we have
lim_{n→∞} F_n(γ) = sup_{q_v≥0} inf_{q_u≥0} F(q_u, q_v) = sup_{(q_u,q_v)∈C(γ,α)} F(q_u, q_v),
and the sup-inf and the sup over C(γ, α) are achieved by the same (q∗_u, q∗_v) as in Proposition 4.1.
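As a sanity check on (4.16), one can verify numerically that iterating (4.5) lands on a critical point of F. The sketch below is ours; the two-point spectral laws and the SNR are illustrative assumptions.

```python
import numpy as np

# Check that the non-trivial solution of (4.5) is a critical point of the
# replica-symmetric potential F defined via psi in (4.15).
xi, sigma = np.array([0.5, 1.5]), np.array([0.5, 1.5])
alpha, gamma = 0.5, 1.5 ** 2                   # alpha = 1/delta, gamma = lambda^2

def psi(s, g):                                 # psi_S(g) from (4.15), for law s
    return 0.5 * (g * np.mean(1 / s) - np.mean(np.log(1 + g / s)))

def F(qu, qv):                                 # replica-symmetric potential
    return psi(xi, alpha * gamma * qv) + alpha * psi(sigma, gamma * qu) \
        - 0.5 * alpha * gamma * qu * qv

qu, qv = 1.0, 1.0                              # iterate (4.5) to the fixed point
for _ in range(1000):
    qu = np.mean(alpha * gamma * qv * xi**-2 / (1 + alpha * gamma * qv / xi))
    qv = np.mean(gamma * qu * sigma**-2 / (1 + gamma * qu / sigma))

eps = 1e-6                                     # central differences of F
dFu = (F(qu + eps, qv) - F(qu - eps, qv)) / (2 * eps)
dFv = (F(qu, qv + eps) - F(qu, qv - eps)) / (2 * eps)
print(dFu, dFv)                                # both near 0
```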
Remark 4.1 (Equivalent models). Informally, the above result says that the matrix model (4.1) is
equivalent at the level of Hamiltonian to the following two statistically uncorrelated vector models:
Y^u := √(αγq∗_v) u∗ + Ξ^{1/2}Z_u ∈ R^n,    Y^v := √(γq∗_u) v∗ + Σ^{1/2}Z_v ∈ R^d,    (4.17)
with (q∗_u, q∗_v) the largest solution to (4.5) and (u∗, v∗, Z_u, Z_v) ∼ P^{⊗n} ⊗ Q^{⊗d} ⊗ N(0_n, I_n) ⊗ N(0_d, I_d).
Remark 4.2 (Gaussian priors). Theorem 4.4 crucially relies on having Gaussian priors P, Q. This
assumption is mainly used to derive single-letter (i.e., dimension-free) expressions of the free energy
of the vector models in (4.17) which, under Gaussian priors, are nothing but Gaussian integrals. The
free energy, and hence the MMSE, are expected to be sensitive to the priors. Indeed, this is already
the case in the homoscedastic setting Ξ = I_n, Σ = I_d [48]. An extension towards general i.i.d. priors
is a challenging open problem and, in fact, without imposing additional assumptions on Ξ, Σ, it is
unclear whether a single-letter expression for the free energy and MMSE is possible.
At this point, the MMSE can be derived from the above characterization of free energy. Indeed, let
D(α) := { γ > 0 : F has a unique maximizer (q∗_u, q∗_v) over C(γ, α) }.
The envelope theorem [47, Corollary 4] ensures that D(α) is equal to R_{>0} up to a countable set.
Using algebraic relations between free energy and MMSE, we prove (4.7) and (4.8) for all γ ∈ D(α)
(and, thus, for almost every γ > 0). Then, using the Nishimori identity and the fact that the ESDs of
Ξ, Σ are upper and lower bounded by constants independent of n and d, Corollary 4.3 also follows.
The formal arguments are contained in Appendix D.
5 Spectral estimator
This section introduces a spectral estimator that meets the weak recovery threshold and, for one-sided
heteroscedasticity, attains the Bayes-optimal error. Suppose that the following condition holds:
(λ^4/δ) E[Σ^{-2}] E[Ξ^{-2}] > 1,    (5.1)
which is equivalent to (4.6). Under this condition, the fixed point equations (4.5) have a unique pair of
positive solutions (q∗_u, q∗_v). For convenience, we also define the rescalings µ∗ := λq∗_v/δ, ν∗ := λq∗_u,
and the auxiliary quantities
b∗ := (1/δ) E[ λ/(λν∗ + Σ) ],    c∗ := E[ λ/(λµ∗ + Ξ) ].    (5.2)
Now, we pre-process the data matrix A as
A∗ := λ (λ(µ∗ + b∗)I_n + Ξ)^{-1/2} Ξ^{-1/2} A Σ^{-1/2} (λ(ν∗ + c∗)I_d + Σ)^{-1/2},    (5.3)
from which we obtain the spectral estimators
û := η_u √n · Ξ^{1/2}(λ(µ∗ + b∗)I_n + Ξ)^{-1/2}(λµ∗I_n + Ξ) u_1(A∗) / ‖Ξ^{1/2}(λ(µ∗ + b∗)I_n + Ξ)^{-1/2}(λµ∗I_n + Ξ) u_1(A∗)‖_2,    (5.4a)
v̂ := η_v √d · Σ^{1/2}(λ(ν∗ + c∗)I_d + Σ)^{-1/2}(λν∗I_d + Σ) v_1(A∗) / ‖Σ^{1/2}(λ(ν∗ + c∗)I_d + Σ)^{-1/2}(λν∗I_d + Σ) v_1(A∗)‖_2,    (5.4b)
where u_1(·)/v_1(·) denote the top left/right singular vectors and
η_u := √( λµ∗/(λµ∗ + 1) ),    η_v := √( λν∗/(λν∗ + 1) ).    (5.5)
Note that η_u, η_v > 0, provided that (5.1) holds. The pre-processing of A in (5.3) and the form of
the spectral estimators in (5.4) come from the derivation of a suitable AMP algorithm, and they are
discussed at the end of the section. We finally defer to Appendix E.3 the definition of the scalar
quantity σ∗_2, obtained via a fixed point equation depending only on Ξ, Σ, λ, δ, see (E.26) for details.
Our main result, Theorem 5.1, shows that, under the criticality condition (5.1), the matrix A∗ exhibits
a spectral gap between the top two singular values, and it characterizes the performance of the spectral
estimators in (5.4), proving that they achieve weak recovery of u∗ and v∗, respectively.
Theorem 5.1. Suppose that (5.1) holds and that, for any c > 0,
lim_{β↓s} E[ Σ∗/(β − cΣ∗) ] = lim_{β↓s} E[ (Σ∗/(β − cΣ∗))^2 ] = ∞,    lim_{α↓sup supp(Ξ∗)} E[ Ξ∗/(α − Ξ∗) ] = ∞,    (5.6)
where Ξ∗ := λ/(λ(µ∗ + b∗) + Ξ), Σ∗ := λ/(λ(ν∗ + c∗) + Σ) and s := c · sup supp(Σ∗). Let A∗, û, v̂, σ∗_2 be defined
in (5.3), (5.4) and (E.26), and let σ_i(A∗) denote the i-th largest singular value of A∗. Then, if σ∗_2 < 1,
the following limits hold in probability:
lim_{n→∞} σ_1(A∗) = 1 > σ∗_2 = lim_{n→∞} σ_2(A∗),    (5.7)
lim_{n→∞} |⟨û, u∗⟩| / (‖û‖_2 ‖u∗‖_2) = η_u,    lim_{d→∞} |⟨v̂, v∗⟩| / (‖v̂‖_2 ‖v∗‖_2) = η_v,    (5.8)
lim_{n→∞} (1/n^2) ‖u∗u∗⊤ − ûû⊤‖_F^2 = 1 − η_u^4,    lim_{d→∞} (1/d^2) ‖v∗v∗⊤ − v̂v̂⊤‖_F^2 = 1 − η_v^4,    (5.9)
lim_{n→∞} (1/(nd)) ‖u∗v∗⊤ − ûv̂⊤‖_F^2 = 1 − η_u^2 η_v^2.    (5.10)
Remark 5.1 (Assumptions). To guarantee a spectral gap for A∗ and the weak recoverability of u∗, v∗
via the proposed spectral method, we also require the algebraic condition σ∗_2 < 1. We conjecture
that this condition is implied by (5.1), and we have verified that this is the case in all our numerical
experiments (see Figure 1 for two concrete examples). The additional assumption (5.6) is a mild
regularity condition on the covariances. It ensures that the densities of Ξ∗, Σ∗ decay sufficiently
slowly at the edges of the support, so that σ∗_2 is well-posed [75].
(a) Ξ = In and Σ a Toeplitz matrix with ρ = 0.9.
(b) Ξ a circulant matrix with c = 0.1, ℓ = 5 and Σ a
Toeplitz matrix with ρ = 0.5.
Figure 1: Top two singular values of A∗ in (5.3), where d = 4000, δ = 4 and each simulation is
averaged over 10 i.i.d. trials. The singular values computed experimentally (‘sim’ in the legends and
× in the plots) closely match our theoretical prediction in (5.7) (‘thy’ in the legends and solid curves
with the same color in the plots). The threshold λ∗ is such that equality holds in (5.1). We note that
the green curve corresponding to σ∗_2 is smaller than 1 for λ > λ∗, i.e., when (5.1) holds.
(a) Normalized correlation with u∗ (b) Normalized correlation with v∗
(c) Matrix MSE for u∗v∗⊤
Figure 2: Performance comparison when Ξ = In and Σ is a circulant matrix. The numerical results
closely follow the predictions of Theorem 5.1, and our spectral estimators in (5.4) outperform all
other methods (Leeb–Romanov, OptShrink, ScreeNOT, and HeteroPCA), especially at low SNR.
Remark 5.2 (Signal priors). Theorem 5.1 does not require the prior distributions P, Q to be Gaussian,
and it is valid for any i.i.d. prior with mean 0 and variance 1.
On the one hand, Corollary 4.3 shows that, if (5.1) is violated, the problem is information-theoretically
impossible, i.e., no estimator achieves non-trivial error. On the other hand, Theorem 5.1 exhibits a pair
of estimators that achieves non-trivial error as soon as (5.1) holds – under the additional assumption
σ∗_2 < 1, which we conjecture to be equivalent. Thus, the spectral method in (5.4) is optimal in terms
of the weak recovery threshold. Though such estimators do not attain the optimal error in general, when
both priors are Gaussian and Ξ = I_n, ûû⊤ is the Bayes-optimal estimate for u∗u∗⊤.
Corollary 5.2. Assume P = Q = N(0, 1), and consider the setting of Theorem 5.1 with the
additional assumption Ξ = I_n. Then, η_u = √(q∗_u), i.e., ûû⊤ achieves the MMSE for u∗u∗⊤.
The claim readily follows by noting that, when Ξ = I_n, the first equation in (4.5) becomes
q∗_u = αγq∗_v / (1 + αγq∗_v) = (λ^2/δ)(δµ∗/λ) / (1 + (λ^2/δ)(δµ∗/λ)) = λµ∗ / (1 + λµ∗) = η_u^2,
where the last equality is by the definition (5.5) of η_u. Let us highlight that, even if Ξ = I_n, û still
makes non-trivial use of the other covariance Σ. At the information-theoretic level, this is reflected
by the fact that Σ enters q∗_u through the fixed point equations (4.5). Therefore, even though the
matrix model in (4.1) is equivalent to a pair of uncorrelated vector models in (4.17) in the sense of
the free energy, the tasks of estimating u∗ and v∗ cannot be decoupled.
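The algebra above is easy to check numerically: solve (4.5) with the law of Ξ a point mass at 1 (i.e., Ξ = I_n) and compare q∗_u with η_u^2 from (5.5). The sketch below is ours; the two-point law for Σ and the parameter values are illustrative.

```python
import numpy as np

# With Xi = I_n, the fixed point of (4.5) satisfies q*_u = eta_u^2.
xi_law, sig_law = np.array([1.0]), np.array([0.5, 1.5])
delta, lam = 2.0, 1.5
alpha, gamma = 1 / delta, lam ** 2

qu, qv = 1.0, 1.0                          # largest solution of (4.5)
for _ in range(1000):
    qu = np.mean(alpha * gamma * qv * xi_law**-2 / (1 + alpha * gamma * qv / xi_law))
    qv = np.mean(gamma * qu * sig_law**-2 / (1 + gamma * qu / sig_law))

mu = lam * qv / delta                      # mu* = lambda q*_v / delta
eta_u_sq = lam * mu / (lam * mu + 1)       # eta_u^2 from (5.5)
print(qu, eta_u_sq)                        # the two coincide
```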
Numerical experiments.
Figures 2 and 3 demonstrate the advantage of our method over existing
approaches, and they display an accurate agreement between simulations (‘sim’ in the legends and ×
in the plots) and the theoretical predictions of Theorem 5.1 (‘thy’ in the legends and solid curves with
the same color in the plots), both plotted as a function of λ. In both figures, n = 4000, d = 2000 (so
(a) Normalized correlation with u∗ (b) Normalized correlation with v∗
(c) Matrix MSE for u∗v∗⊤
Figure 3: Performance comparison when Ξ is a Toeplitz matrix and Σ is circulant. The numerical
results closely follow the predictions of Theorem 5.1, and our spectral estimators in (5.4) outperform
all other methods (Leeb, OptShrink, and ScreeNOT), especially at low SNR.
δ = 2), and P = Q = N(0, 1). Each data point is computed from 20 i.i.d. trials and error bars are
reported at 1 standard deviation. We let Ξ be either the identity or a Toeplitz matrix [73, 37, 18], i.e.,
Ξ_{i,j} = ρ^{|i−j|} with ρ = 0.9. We let Σ be a circulant matrix [36, 35]: the first row has 1 in the first
position, c = 0.0078 in the second through (ℓ + 1)-st position and in the last ℓ positions (ℓ = 300),
with the remaining entries being 0; for 2 ≤ i ≤ d, the i-th row is a cyclic shift of the (i − 1)-st row to
the right by 1 position. Both matrices satisfy (5.6) and the conditions of Section 3.
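For reproducibility, the two covariance families can be built as follows. This is our reading of the construction, not the authors' code, and we use a smaller d here to keep the example light (the paper uses d = 4000, for which these parameters give a valid covariance).

```python
import numpy as np

# Toeplitz Xi_{ij} = rho^{|i-j|}, and a symmetric circulant Sigma whose first
# row has 1 on the diagonal and c in the ell nearest positions on each side.
def toeplitz_cov(n, rho=0.9):
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def circulant_cov(d, c=0.0078, ell=300):
    row = np.zeros(d)
    row[0] = 1.0
    row[1:ell + 1] = c                 # second through (ell+1)-st entries
    row[-ell:] = c                     # last ell entries
    idx = np.arange(d)
    return row[(idx[None, :] - idx[:, None]) % d]   # row i = row 0 shifted by i

Xi = toeplitz_cov(8, rho=0.9)
Sigma = circulant_cov(1000)
print(np.allclose(Sigma, Sigma.T), np.trace(Sigma) / 1000)   # True 1.0
```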
Our spectral estimator outperforms all other approaches: Leeb–Romanov [40], OptShrink [56],
ScreeNOT [24], and HeteroPCA [72] in the one-sided heteroscedastic case (Figure 2); Leeb [41],
OptShrink, and ScreeNOT in the doubly heteroscedastic case (Figure 3). When computing the
normalized correlation with the signals (left/right overlap), the performance of Leeb–Romanov and
Leeb is the same as the estimators Ξ^{1/2}u_1(Ξ^{-1/2}AΣ^{-1/2}), Σ^{1/2}v_1(Ξ^{-1/2}AΣ^{-1/2}), referred to as
‘whiten’ in Figures 2a and 2b; the performance of OptShrink and ScreeNOT is the same as the
estimators u1(A), v1(A) referred to as ‘vanilla’ in Figures 3a and 3b. The advantage of our approach
(in black) is especially significant at low SNR; as SNR increases, Leeb-Romanov and Leeb (in red)
achieve similar performance; a much larger SNR (> 2 and > 3 in Figures 2 and 3) is required by
HeteroPCA, OptShrink and ScreeNOT (in magenta, blue and green) to perform comparably.
Proof strategy.
The design and analysis of the spectral estimator in (5.4) comprise two steps,
detailed in Appendix E. The first step is to present an AMP algorithm dubbed Bayes-AMP for matrix
denoising with doubly heteroscedastic noise. Specifically, its iterates are updated as
u_t = Ξ^{−1}AΣ^{−1} v̂_t − b_t Ξ^{−1} û_{t−1},   û_t = g∗_t(u_t),   c_t = (1/n) Tr((∇g∗_t(u_t)) Ξ^{−1}),
v_{t+1} = Σ^{−1}A^⊤Ξ^{−1} û_t − c_t Σ^{−1} v̂_t,   v̂_{t+1} = f∗_{t+1}(v_{t+1}),   b_{t+1} = (1/n) Tr((∇f∗_{t+1}(v_{t+1})) Σ^{−1}),   (5.11)
where ∇ denotes the Jacobian matrix, and the functions g∗_t, f∗_{t+1} are specified below in (5.12).
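As a minimal sketch (not the paper's implementation), the recursion (5.11) can be written as follows. The denoisers g and f are assumed to be supplied as callables returning both the denoised vector and its Jacobian, and all names are ours:

```python
import numpy as np

def bayes_amp(A, Xi, Sigma, g, f, v0, iters=10):
    """Run the iteration (5.11). g(t, u) and f(t, v) return the denoised vector
    and its Jacobian (n x n, resp. d x d), from which the Onsager coefficients
    c_t and b_{t+1} are normalized traces against Xi^{-1}, Sigma^{-1}."""
    n, d = A.shape
    Xi_inv, Sigma_inv = np.linalg.inv(Xi), np.linalg.inv(Sigma)
    v_hat, u_hat, b = v0, np.zeros(n), 0.0
    for t in range(iters):
        u = Xi_inv @ A @ Sigma_inv @ v_hat - b * (Xi_inv @ u_hat)
        u_hat, Jg = g(t, u)
        c = np.trace(Jg @ Xi_inv) / n            # c_t in (5.11)
        v = Sigma_inv @ A.T @ Xi_inv @ u_hat - c * (Sigma_inv @ v_hat)
        v_hat, Jf = f(t, v)
        b = np.trace(Jf @ Sigma_inv) / n         # b_{t+1} in (5.11)
    return u_hat, v_hat
```

With the Gaussian-prior denoisers of (5.12), the Jacobians are fixed matrices, so the two trace terms reduce to deterministic scalars.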
As common in AMP algorithms, the iterates (5.11) are accompanied with a state evolution which
accurately tracks their behavior via a simple deterministic recursion: the joint empirical distribution
of (u∗, v∗, ut, vt+1) converges to the random variables (U∗, V∗, Ut, Vt+1); see Proposition E.1 for a
formal statement and the recursive description of the laws of such random variables. Then, the name
‘Bayes-AMP’ is motivated by the fact that g∗_t, f∗_{t+1} are the posterior-mean denoisers given by
g∗_t(u) := E[U∗ | U_t = u],   f∗_{t+1}(v) := E[V∗ | V_{t+1} = v].   (5.12)
Remarkably, Bayes-AMP operates on Ξ−1AΣ−1, as opposed to the widely adopted ansatz of
considering the whitened matrix Ξ−1/2AΣ−1/2. The advantage of operating on Ξ−1AΣ−1 is that
the fixed point of the corresponding state evolution matches the extremizers of the free energy in
(4.5). This would not be the case if Bayes-AMP used the whitening Ξ−1/2AΣ−1/2.
The design of Bayes-AMP and the proof of its state evolution follow a two-step reduction detailed in
Appendix F. Using a change of variables, we show in Appendix F.2 that Bayes-AMP can be realized
by an auxiliary AMP with non-separable denoising functions (meaning that g_t, f_{t+1} cannot be written
as univariate functions applied component-wise) operating on Ξ^{−1/2}AΣ^{−1/2} = (λ/n) û∗(v̂∗)^⊤ + Ŵ.
Then, in Appendix F.1 we simulate the auxiliary AMP using a standard AMP operating on the i.i.d.
Gaussian matrix Ŵ, whose state evolution has been established in [12, 30].
However, Bayes-AMP by itself is not a practical algorithm since it needs a warm start, i.e., an
initialization that achieves non-trivial error. Thus, the second step is to design a spectral estimator
that solves the fixed point equation of Bayes-AMP, which turns out to be an eigen-equation for A∗.
We now heuristically derive the form (5.3) of A∗ and the expression (5.4) of the spectral estimator. To
do so, we note that the large-n limits of ct, bt+1 coincide with the auxiliary quantities c∗, b∗ defined
in (5.2). Furthermore, when the priors of u∗, v∗ are Gaussian, (5.12) reduces to
g∗_t(u) = λ(λµ∗Ξ^{−1} + I_n)^{−1}u,   f∗_{t+1}(v) = λ(λν∗Σ^{−1} + I_d)^{−1}v,
where we recall that µ∗ = λq∗_v/δ and ν∗ = λq∗_u are rescalings of the non-trivial solution (q∗_u, q∗_v) of
(4.5). Denoting by u, v the fixed points of the iteration (5.11), after some manipulations we have
g(Ξ)u = A∗f(Σ)v,   f(Σ)v = (A∗)^⊤g(Ξ)u,
where A∗ is given in (5.3) and
g(Ξ) := √λ (λ(µ∗ + b∗)I_n + Ξ)^{1/2}(λµ∗I_n + Ξ)^{−1}Ξ^{1/2},
f(Σ) := √λ (λ(ν∗ + c∗)I_d + Σ)^{1/2}(λν∗I_d + Σ)^{−1}Σ^{1/2}.
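Since g(Ξ) is a product of matrix functions of the symmetric positive-definite Ξ, it can be evaluated through a single eigendecomposition. The sketch below (our own helpers, with λ, µ∗, b∗ treated as known scalar inputs) does this; f(Σ) is obtained the same way with (ν∗, c∗, Σ) in place of (µ∗, b∗, Ξ):

```python
import numpy as np

def psd_power(M, p):
    """M**p for a symmetric positive-definite matrix, via eigendecomposition."""
    w, Q = np.linalg.eigh(M)
    return (Q * w ** p) @ Q.T

def g_of(Xi, lam, mu_star, b_star):
    """g(Xi) = sqrt(lam) (lam (mu* + b*) I + Xi)^{1/2} (lam mu* I + Xi)^{-1} Xi^{1/2}."""
    I = np.eye(Xi.shape[0])
    return (np.sqrt(lam)
            * psd_power(lam * (mu_star + b_star) * I + Xi, 0.5)
            @ np.linalg.inv(lam * mu_star * I + Xi)
            @ psd_power(Xi, 0.5))
```

All three factors are functions of Ξ, hence they commute and g(Ξ) is itself symmetric.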
This suggests that A∗ has top singular value equal to 1 and (g(Ξ)u, f(Σ)v) are aligned with the corre-
sponding singular vectors (u1(A∗), v1(A∗)). Moreover, state evolution implies that the distribution
of the fixed point (u, v) is close to that of
(µ∗Ξ^{−1}u∗ + √(µ∗/λ) w_u,  ν∗Σ^{−1}v∗ + √(ν∗/λ) w_v),
with (wu, wv) ∼ N(0n, Ξ−1) ⊗ N(0d, Σ−1) independent of u∗, v∗. Thus, to obtain estimates of
(u∗, v∗), we take (Ξg(Ξ)−1u1(A∗), Σf(Σ)−1v1(A∗)) and suitably rescale their norm, which leads
to the expressions in (5.4). More details on the above heuristics are discussed in Appendix E.2.
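Assuming A∗ from (5.3) and the transforms g(Ξ), f(Σ) have been computed, the assembly of the estimates in (5.4) amounts to a top SVD followed by the inverse transforms. In this sketch (names are ours) the final rescaling is a unit-norm placeholder for the one prescribed in (5.4):

```python
import numpy as np

def spectral_estimates(A_star, Xi, Sigma, g_Xi, f_Sigma):
    """Top singular vectors of A_star, mapped back through Xi g(Xi)^{-1} and
    Sigma f(Sigma)^{-1}, then rescaled (here: to unit norm, as a placeholder
    for the norm rescaling in (5.4))."""
    U, _, Vt = np.linalg.svd(A_star)
    u_est = Xi @ np.linalg.solve(g_Xi, U[:, 0])
    v_est = Sigma @ np.linalg.solve(f_Sigma, Vt[0])
    return u_est / np.linalg.norm(u_est), v_est / np.linalg.norm(v_est)
```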
The remaining step is to make these heuristics rigorous. This involves proving that
Ξut, Σvt+1 are aligned with the proposed spectral estimator, which allows for a performance charac-
terization via state evolution. The formal argument is carried out in Appendix E.4.
6 Concluding remarks
In this work, we establish information-theoretic limits and propose an efficient spectral method with
optimality guarantees, for matrix estimation with doubly heteroscedastic noise. On the one hand,
under Gaussian priors, we give a rigorous characterization of the MMSE; on the other hand, we
present a spectral estimator that (i) achieves the information-theoretic weak recovery threshold, and
(ii) is Bayes-optimal for the estimation of one of the signals, when the noise is heteroscedastic only
on the other side. While our analysis focuses on rank-1 estimation, we expect that all results admit
proper extensions to rank-r signals, where r is a constant independent of n and d.
The design and analysis of the spectral estimator draw connections with approximate message
passing and, along the way, we introduce a Bayes-AMP algorithm which could be of independent
interest. In this paper, we employ Bayes-AMP solely as a proof technique. However, one could use
the spectral method designed here as an initialization of Bayes-AMP itself, after suitably correcting
its iterates. This strategy has been successfully carried out for i.i.d. Gaussian noise in [53] and for
rotationally invariant noise in [51, 77]. Bayes-AMP is well equipped to exploit signal priors more
informative than the Gaussian one, and AMP algorithms are known to achieve the information-
theoretically optimal estimation error for low-rank matrix inference [53, 6]. Nevertheless, we point
out two obstacles towards doing so in the presence of doubly heteroscedastic noise. First, for general
priors, establishing the information-theoretic limits remains a challenging open problem, and it is
unclear whether a low-dimensional characterization of the free energy (and, hence, of the MMSE) is
possible. Second, even for Gaussian priors, Bayes-AMP reduces to the proposed spectral estimator,
which is not Bayes-optimal for the general case of doubly heteroscedastic noise.
Finally, the proposed spectral estimator makes non-trivial use of the covariances Ξ, Σ, which are
assumed to be known. However, when n and d grow proportionally, such matrices – if unknown –
cannot be consistently estimated from the data. Thus, a challenging open problem is to construct
estimators that retain comparable performance without knowing the noise covariances. The paper
[52] takes a step in this direction by developing methods that achieve the minimax risk when the
noise is i.i.d. with an unknown non-Gaussian distribution.
Acknowledgments and Disclosure of Funding
YZ thanks Shashank Vatedka for discussions at the early stage of this project. MM thanks Jean
Barbier for sharing his insights into the interpolation argument.
This research is partially supported by the 2019 Lopez-Loreta Prize and by the Interdisciplinary
Projects Committee (IPC) at the Institute of Science and Technology Austria (ISTA).
References
[1] Joshua Agterberg, Zachary Lubberts, and Carey E. Priebe. Entrywise estimation of singular
vectors of low-rank matrices with heteroskedasticity and dependence. IEEE Trans. Inform.
Theory, 68(7):4618–4650, 2022. 1, 3
[2] Michael Aizenman, Robert Sims, and Shannon L. Starr. Extended variational principle for the
Sherrington–Kirkpatrick spin-glass model. Phys. Rev. B, 68:214403, Dec 2003. 2
[3] Z. D. Bai and Y. Q. Yin. Limit of the smallest eigenvalue of a large-dimensional sample
covariance matrix. Ann. Probab., 21(3):1275–1294, 1993. 40, 46, 47
[4] Jinho Baik, Gérard Ben Arous, and Sandrine Péché. Phase transition of the largest eigenvalue
for nonnull complex sample covariance matrices. Ann. Probab., 33(5):1643–1697, 2005. 2
[5] Stephen Bailey. Principal component analysis with noisy and/or missing data. Publications of
the Astronomical Society of the Pacific, 124(919):1015, sep 2012. 1
[6] Jean Barbier, Francesco Camilli, Marco Mondelli, and Manuel Sáenz. Fundamental limits in
structured principal component analysis and how to reach them. Proc. Natl. Acad. Sci. USA,
120(30):Paper No. e2302028120, 7, 2023. 3, 9
[7] Jean Barbier, TianQi Hou, Marco Mondelli, and Manuel Saenz. The price of ignorance: how
much does it cost to forget noise structure in low-rank matrix estimation? In Advances in Neural
Information Processing Systems, volume 35, pages 36733–36747, 2022. 3
[8] Jean Barbier, Florent Krzakala, Nicolas Macris, Léo Miolane, and Lenka Zdeborová. Optimal
errors and phase transitions in high-dimensional generalized linear models. Proc. Natl. Acad.
Sci. USA, 116(12):5451–5460, 2019. 2
[9] Jean Barbier and Nicolas Macris. The adaptive interpolation method: a simple scheme to prove
replica formulas in Bayesian inference. Probab. Theory Related Fields, 174(3-4):1133–1185,
2019. 2, 15
[10] Mohsen Bayati and Andrea Montanari. The dynamics of message passing on dense graphs, with
applications to compressed sensing. IEEE Trans. Inform. Theory, 57(2):764–785, 2011. 2, 3
[11] Joshua K. Behne and Galen Reeves. Fundamental limits for rank-one matrix estimation
with groupwise heteroskedasticity. In International Conference on Artificial Intelligence and
Statistics, pages 8650–8672, 2022. 2
[12] Raphaël Berthier, Andrea Montanari, and Phan-Minh Nguyen. State evolution for approximate
message passing with non-separable functions. Inf. Inference, 9(1):33–79, 2020. 8, 44
[13] Tejal Bhamre, Teng Zhang, and Amit Singer. Denoising and covariance estimation of single
particle cryo-em images. Journal of Structural Biology, 195(1):72–81, 2016. 1
[14] Erwin Bolthausen. An iterative construction of solutions of the TAP equations for the
Sherrington-Kirkpatrick model. Comm. Math. Phys., 325(1):333–366, 2014. 3
[15] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities. Oxford
University Press, Oxford, 2013. 50
[16] T. Tony Cai, Rungang Han, and Anru R. Zhang. On the non-asymptotic concentration of
heteroskedastic Wishart-type matrix. Electron. J. Probab., 27:Paper No. 29, 40, 2022. 3
[17] T. Tony Cai, Zongming Ma, and Yihong Wu. Sparse PCA: optimal rates and adaptive estimation.
Ann. Statist., 41(6):3074–3110, 2013. 1
[18] T. Tony Cai, Zhao Ren, and Harrison H. Zhou. Optimal rates of convergence for estimating
Toeplitz covariance matrices. Probab. Theory Related Fields, 156(1-2):101–143, 2013. 8
[19] Michael Celentano, Chen Cheng, and Andrea Montanari. The high-dimensional asymptotics of
first order methods with random data. arXiv preprint arXiv:2112.07572, 2021. 3
[20] Chen Cheng, Yuting Wei, and Yuxin Chen. Tackling small eigen-gaps: fine-grained eigenvector
estimation and inference under heteroscedastic noise. IEEE Trans. Inform. Theory, 67(11):7380–
7419, 2021. 3
[21] Lucilio Cordero-Grande, Daan Christiaens, Jana Hutter, Anthony N. Price, and Jo V. Hajnal.
Complex diffusion-weighted image estimation via matrix recovery under general noise models.
NeuroImage, 200:391–404, 2019. 1
[22] Romain Couillet and Walid Hachem. Analysis of the limiting spectral measure of large random
matrices of the separable covariance type. Random Matrices Theory Appl., 3(4):1450016, 23,
2014. 37
[23] Xiucai Ding, Yun Li, and Fan Yang. Eigenvector distributions and optimal shrinkage estimators
for large covariance and precision matrices. arXiv preprint arXiv:2404.14751, 2024. 1, 2
[24] David Donoho, Matan Gavish, and Elad Romanov. ScreeNOT: exact MSE-optimal singular
value thresholding in correlated noise. Ann. Statist., 51(1):122–148, 2023. 2, 8
[25] David L. Donoho, Arian Maleki, and Andrea Montanari. Message passing algorithms for
compressed sensing. Proceedings of the National Academy of Sciences, 106:18914–18919,
2009. 3
[26] Rishabh Dudeja, Subhabrata Sen, and Yue M Lu. Spectral universality of regularized linear
regression with nearly deterministic sensing matrices. IEEE Transactions on Information
Theory, 2024. 3
[27] Zhou Fan. Approximate message passing algorithms for rotationally invariant matrices. The
Annals of Statistics, 50(1):197–224, 2022. 3
[28] Oliver Y Feng, Ramji Venkataramanan, Cynthia Rush, Richard J Samworth, et al. A unifying
tutorial on approximate message passing. Foundations and Trends® in Machine Learning,
15(4):335–536, 2022. 2
[29] Matan Gavish, William Leeb, and Elad Romanov. Matrix denoising with partial noise statistics:
optimal singular value shrinkage of spiked F-matrices. Inf. Inference, 12(3):Paper No. iaad028,
46, 2023. 1, 2
[30] Cédric Gerbelot and Raphaël Berthier. Graph-based approximate message passing iterations.
Inf. Inference, 12(4):Paper No. iaad020, 67, 2023. 8, 44
[31] Francesco Guerra. Broken replica symmetry bounds in the mean field spin glass model. Comm.
Math. Phys., 233(1):1–12, 2003. 2
[32] Alice Guionnet, Justin Ko, Florent Krzakala, and Lenka Zdeborová. Low-rank matrix estimation
with inhomogeneous noise. arXiv preprint arXiv:2208.05918, 2022. 2
[33] Philip Hartman. Ordinary differential equations, volume 38. Society for Industrial and Applied
Mathematics (SIAM), Philadelphia, PA, 2002. 25
[34] David Hong, Fan Yang, Jeffrey A. Fessler, and Laura Balzano. Optimally weighted PCA for
high-dimensional heteroscedastic data. SIAM J. Math. Data Sci., 5(1):222–250, 2023. 2
[35] Adel Javanmard and Andrea Montanari. Confidence intervals and hypothesis testing for high-
dimensional regression. J. Mach. Learn. Res., 15:2869–2909, 2014. 8
[36] Adel Javanmard and Andrea Montanari. Hypothesis testing in high-dimensional regression
under the Gaussian random design model: asymptotic theory. IEEE Trans. Inform. Theory,
60(10):6522–6554, 2014. 8
[37] Adel Javanmard and Andrea Montanari. Debiasing the Lasso: optimal sample size for Gaussian
designs. Ann. Statist., 46(6A):2593–2622, 2018. 8
[38] Iain M. Johnstone. On the distribution of the largest eigenvalue in principal components analysis.
Ann. Statist., 29(2):295–327, 2001. 1
[39] Boris Landa, Thomas T. C. K. Zhang, and Yuval Kluger. Biwhitening reveals the rank of a
count matrix. SIAM J. Math. Data Sci., 4(4):1420–1446, 2022. 2
[40] William Leeb and Elad Romanov. Optimal spectral shrinkage and PCA with heteroscedastic
noise. IEEE Trans. Inform. Theory, 67(5):3009–3037, 2021. 1, 2, 8
[41] William E. Leeb. Matrix denoising for weighted loss functions and heterogeneous signals.
SIAM J. Math. Data Sci., 3(3):987–1012, 2021. 2, 8