Stochastic Mirror Descent for Large-Scale Sparse Recovery

Abstract

In this paper we discuss an application of Stochastic Approximation to statistical estimation of high-dimensional sparse parameters. The proposed solution reduces to solving a penalized stochastic optimization problem at each stage of a multistage algorithm, each problem being solved to a prescribed accuracy by the non-Euclidean Composite Stochastic Mirror Descent (CSMD) algorithm. Assuming that the problem objective is smooth and quadratically minorated and that the stochastic perturbations are sub-Gaussian, our analysis prescribes the method parameters which ensure fast convergence of the estimation error (the radius of a confidence ball of a given norm around the approximate solution). This convergence is linear during the first “preliminary” phase of the routine and sublinear during the second “asymptotic” phase.

We consider an application of the proposed approach to the sparse Generalized Linear Regression problem. In this setting, we show that the proposed algorithm attains the optimal convergence rate of the estimation error under weak assumptions on the regressor distribution. We also present a numerical study illustrating the performance of the algorithm on high-dimensional simulation data.
1 Introduction

Our original motivation is the well-known problem of (generalized) linear high-dimensional regression with random design. Formally, consider a dataset of $N$ points $(\phi_i,\eta_i)$, $i\in\{1,\ldots,N\}$, where $\phi_i\in\mathbf{R}^n$ are (random) features and $\eta_i\in\mathbf{R}$ are observations, linked by the equation
$$
\eta_i=\mathfrak{r}(\phi_i^Tx_*)+\sigma\xi_i,\quad i\in[N]:=\{1,\ldots,N\},
\tag{1}
$$
where $\xi_i\in\mathbf{R}$ are i.i.d. observation noises. The standard objective is to recover from the dataset the unknown parameter $x_*\in\mathbf{R}^n$ of the Generalized Linear Regression (1), which is assumed to belong to a given closed convex set $X$ and to be $s$-sparse, i.e., to have at most $s\ll n$ non-vanishing entries.

As mentioned before, we consider random design, where the $\phi_i$ are i.i.d. random variables, so that the estimation of $x_*$ can be recast as the following generic Stochastic Optimization problem:
$$
g_*=\min_{x\in X}g(x),\quad\text{where}\quad g(x)=\mathbf{E}\big\{G\big(x,(\phi,\eta)\big)\big\},\quad G(x,(\phi,\eta))=\mathfrak{s}(\phi^Tx)-\phi^Tx\,\eta,
\tag{2}
$$
with $\mathfrak{s}(\cdot)$ any primitive of $\mathfrak{r}(\cdot)$, i.e., $\mathfrak{r}(t)=\mathfrak{s}'(t)$. The equivalence between the original and the stochastic optimization problems comes from the fact that $x_*$ is a critical point of $g(\cdot)$, i.e., $\nabla g(x_*)=0$, since, under mild assumptions, $\nabla g(x)=\mathbf{E}\{\phi[\mathfrak{r}(\phi^Tx)-\mathfrak{r}(\phi^Tx_*)]\}$. Hence, as soon as $g$ has a unique minimizer (say, $g$ is strongly convex over $X$), the solutions of the two problems coincide.

As a consequence, we focus on the generic problem (2), which has already been widely studied. For instance, given an observation sample $(\phi_i,\eta_i)$, $i\in[N]$, one may build the Sample Average Approximation (SAA) of the objective $g(x)$,
$$
\widehat{g}_N(x)=\frac{1}{N}\sum_{i=1}^{N}G(x,(\phi_i,\eta_i))=\frac{1}{N}\sum_{i=1}^{N}\big[\mathfrak{s}(\phi_i^Tx)-\phi_i^Tx\,\eta_i\big],
\tag{3}
$$
and then solve the resulting problem of minimizing $\widehat{g}_N(x)$ over sparse $x$'s. The celebrated $\ell_1$-norm minimization approach allows one to reduce this problem to convex optimization. We provide a new algorithm adapted to this high-dimensional setting and instantiate it on the original problem (1).
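To make model (1) and the SAA objective (3) concrete, here is a minimal sketch that generates synthetic data and evaluates $\widehat{g}_N$. The linear link $\mathfrak{r}(t)=t$ (hence $\mathfrak{s}(t)=t^2/2$), the dimensions and the sparsity pattern are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def make_glr_data(n=1000, N=500, s=10, sigma=0.1, rng=None):
    """Synthetic data from model (1) with the linear link r(t) = t (illustrative choice)."""
    rng = np.random.default_rng(rng)
    x_star = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x_star[support] = rng.standard_normal(s)              # s-sparse ground truth
    Phi = rng.standard_normal((N, n))                      # random design
    eta = Phi @ x_star + sigma * rng.standard_normal(N)    # eta_i = r(phi_i^T x_*) + sigma xi_i
    return Phi, eta, x_star

def saa_objective(x, Phi, eta):
    """SAA objective (3) with s(t) = t^2/2, a primitive of r(t) = t."""
    z = Phi @ x
    return float(np.mean(0.5 * z**2 - z * eta))

Phi, eta, x_star = make_glr_data(rng=0)
print(saa_objective(x_star, Phi, eta), saa_objective(np.zeros(Phi.shape[1]), Phi, eta))
```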
Existing approaches and related works.

Sparse recovery by Lasso and Dantzig Selector has been extensively studied [11, 8, 5, 46, 10, 9]. The Lasso computes a solution $\widehat{x}_N$ to the $\ell_1$-penalized problem $\min_x \widehat{g}_N(x)+\lambda\|x\|_1$, where $\lambda\geq 0$ is the algorithm parameter [35]. This delivers “good solutions” with high probability for sparsity levels $s$ as large as $O\left(\frac{N\kappa_\Sigma}{\ln n}\right)$, as soon as the random regressors $\phi_i$ are drawn independently from a normal distribution with a covariance matrix $\Sigma$ such that $\kappa_\Sigma I\preceq\Sigma\preceq\rho\kappa_\Sigma I$ for some $\kappa_\Sigma>0$, $\rho\geq 1$.¹

¹ We write $A\preceq B$ for two symmetric matrices $A$ and $B$ if $B-A\succeq 0$, i.e., $B-A$ is positive semidefinite.

However, computing this solution may be challenging in a very high-dimensional setting: even popular iterative algorithms, like coordinate descent, loop over a large number of variables. To mitigate this, randomized algorithms [3, 22] and screening rules and working sets [19, 30, 34] may be used to diminish the size of the optimization problem at hand, while iterative thresholding [1, 7, 20, 16, 33] is a “direct” approach to enhance sparsity of the solution.

Another approach relies on Stochastic Approximation (SA). Since $\nabla G(x,(\phi_i,\eta_i))=\phi_i(\mathfrak{r}(\phi_i^Tx)-\eta_i)$ is an unbiased estimate of $\nabla g(x)$, the iterative Stochastic Gradient Descent (SGD) algorithm may be used to build approximate solutions. Unfortunately, unless the regressors $\phi$ are sparse or possess a special structure, standard SA leads to accuracy bounds for sparse recovery proportional to the dimension $n$, which are essentially useless in the high-dimensional setting.

This motivates non-Euclidean SA procedures, such as Stochastic Mirror Descent (SMD) [37], whose application to sparse recovery enjoys almost dimension-free convergence and has been well studied in the literature. For instance, under bounded regressors and with sub-Gaussian noise, SMD reaches a “slow rate” of sparse recovery of the type $g(\widehat{x}_N)-g_*=O\left(\sigma\sqrt{s\ln(n)/N}\right)$, where $\widehat{x}_N$ is the approximate solution after $N$ iterations [44, 45]. Multistage routines may be used to improve the error estimates of SA under strong or uniform convexity assumptions [27, 29, 18]. However, these assumptions do not always hold, e.g., in sparse Generalized Linear Regression, where they are replaced by Restricted Strong Convexity conditions. Then multistage procedures [2, 17] based on standard SMD algorithms [24, 38] control the $\ell_2$-error $\|\widehat{x}_N-x_*\|_2$ at the rate $O\Big(\frac{\sigma}{\kappa_\Sigma}\sqrt{\frac{s\ln n}{N}}\Big)$ with high probability. This is the best “asymptotic” rate attainable when solving (2).

However, those algorithms have two major limitations: the number of iterations they need to reach a given accuracy is proportional to the initial error $R=\|x_*-x_0\|_1$, and, for sparse linear regression, the sparsity level $s$ must be of order $O\Big(\kappa_\Sigma\sqrt{\tfrac{N}{\ln n}}\Big)$. These limitations may be seen as a consequence of dealing with a non-smooth objective $g(x)$. Although it slightly restricts the scope of the corresponding algorithms, we shall consider smooth objectives and algorithms for minimizing composite objectives (cf. [25, 32, 39]) to mitigate the aforementioned drawbacks of the multistage algorithms from [2, 17].
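For the linear link $\mathfrak{r}(t)=t$, the $\ell_1$-penalized SAA problem above coincides, up to an additive constant, with the classical Lasso objective in scikit-learn's convention $\frac{1}{2N}\|\eta-\Phi x\|_2^2+\lambda\|x\|_1$. The sketch below illustrates this; the availability of scikit-learn and the order-of-magnitude choice $\lambda\asymp\sigma\sqrt{\ln(n)/N}$ are assumptions for the illustration, not prescriptions from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso   # assumes scikit-learn is available

rng = np.random.default_rng(0)
n, N, s, sigma = 500, 400, 8, 0.1
x_star = np.zeros(n); x_star[rng.choice(n, s, replace=False)] = 1.0
Phi = rng.standard_normal((N, n))
eta = Phi @ x_star + sigma * rng.standard_normal(N)

# For r(t) = t, g_hat_N(x) + lam*||x||_1 equals (1/(2N))||eta - Phi x||^2 + lam*||x||_1
# up to an additive constant, i.e. the Lasso objective in sklearn's convention.
lam = 2 * sigma * np.sqrt(np.log(n) / N)     # illustrative, theoretical-order choice
x_hat = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(Phi, eta).coef_
print(np.linalg.norm(x_hat - x_star), np.count_nonzero(x_hat))
```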
Principal contributions.

We provide a refined analysis of Composite Stochastic Mirror Descent (CSMD) algorithms for computing sparse solutions to the Stochastic Optimization problem, leveraging smoothness of the objective. This leads to a new “aggressive” choice of parameters in a multistage algorithm with significantly improved performance compared to [2]. We summarize below some properties of the proposed procedure for problem (2).

Each stage of the algorithm is a specific CSMD recursion; the stages fall into two phases. During the first (preliminary) phase, the estimation error decreases linearly, with exponent proportional to $\frac{\kappa_\Sigma}{s\ln n}$. When it reaches the value $O\Big(\frac{\sigma s}{\sqrt{\kappa_\Sigma}}\Big)$, the second (asymptotic) phase begins; its stages contain exponentially increasing numbers of iterations, so the estimation error decreases as $O\Big(\frac{\sigma s}{\kappa_\Sigma}\sqrt{\frac{\ln n}{N}}\Big)$, where $N$ is the total iteration count.
Organization and notation.

The remainder of the paper is organized as follows. In Section 2, the general problem is set up, and the multistage optimization routine and its basic properties are presented. Then, in Section 3, we discuss the properties of the method and the conditions under which it leads to “small error” solutions to sparse GLR estimation problems. Finally, a small simulation study illustrating the numerical performance of the proposed routines on a high-dimensional GLR estimation problem is presented in Section 3.3.

In the following, $E$ is a Euclidean space and $\|\cdot\|$ is a norm on $E$; we denote by $\|\cdot\|_*$ the conjugate norm (i.e., $\|x\|_*=\sup_{\|y\|\leq 1}\langle y,x\rangle$). Given a positive semidefinite matrix $\Sigma\in\mathbf{S}_n$, for $x\in\mathbf{R}^n$ we denote $\|x\|_\Sigma=\sqrt{x^T\Sigma x}$, and for any matrix $Q$ we denote $\|Q\|_\infty=\max_{ij}|[Q]_{ij}|$. We use the generic notation $c$ and $C$ for absolute constants; the shortcut notation $a\lesssim b$ ($a\gtrsim b$) means that the ratio $a/b$ (ratio $b/a$) is bounded by an absolute constant; the symbols $\bigvee$, $\bigwedge$ and the notation $(\cdot)_+$ refer to “maximum”, “minimum” and “positive part”, respectively.
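For the reader's convenience, here is a tiny sketch of the norms just introduced (the functions and variable names are ours, purely illustrative; only numpy is assumed). In particular, the conjugate of the $\ell_1$-norm is the $\ell_\infty$-norm.

```python
import numpy as np

def sigma_norm(x, Sigma):
    """||x||_Sigma = sqrt(x^T Sigma x) for a positive semidefinite Sigma."""
    return float(np.sqrt(x @ Sigma @ x))

def entrywise_max(Q):
    """||Q||_inf = max_{ij} |Q_ij| (entrywise maximum, as in the notation above)."""
    return float(np.max(np.abs(Q)))

def l1_conjugate_norm(x):
    """Conjugate of ||.||_1: sup_{||y||_1 <= 1} <y, x> = max_i |x_i| = ||x||_inf."""
    return float(np.max(np.abs(x)))

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); Sigma = A @ A.T        # a PSD matrix
x = rng.standard_normal(5)
print(sigma_norm(x, Sigma), entrywise_max(Sigma), l1_conjugate_norm(x))
```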
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization

This section is dedicated to the formulation of the generic stochastic optimization problem and to the description and analysis of the generic algorithm.
2.1 Problem statement

Let $X$ be a closed convex subset of a Euclidean space $E$ and $(\Omega,P)$ a probability space. We consider a mapping $G:X\times\Omega\to\mathbf{R}$ such that, for all $\omega\in\Omega$, $G(\cdot,\omega)$ is convex on $X$ and smooth, meaning that $\nabla G(\cdot,\omega)$ is Lipschitz continuous on $X$ with a.s. bounded Lipschitz constant,
$$
\forall x,x'\in X,\quad \|\nabla G(x,\omega)-\nabla G(x',\omega)\|\leq\mathcal{L}(\omega)\|x-x'\|,\qquad \mathcal{L}(\omega)\leq\nu\quad a.s.
\tag{4}
$$
We define $g(x):=\mathbf{E}\{G(x,\omega)\}$, where $\mathbf{E}\{\cdot\}$ stands for the expectation with respect to $\omega$ drawn from $P$. We assume that the mapping $g(\cdot)$ is finite, convex and differentiable on $X$, and we aim at solving the stochastic optimization problem
$$
\min_{x\in X}\big[g(x)=\mathbf{E}\{G(x,\omega)\}\big],
\tag{5}
$$
assuming it admits an $s$-sparse optimal solution $x_*$ for some sparsity structure.

To solve this problem, a stochastic oracle can be queried: given an input point $x\in X$, it generates an $\omega\in\Omega$ from $P$ and outputs $G(x,\omega)$ and $\nabla G(x,\omega):=\nabla_x G(x,\omega)$ (with a slight abuse of notation). We assume that the oracle is unbiased, i.e.,
$$
\mathbf{E}\{\nabla G(x,\omega)\}=\nabla g(x),\qquad\forall x\in X.
$$
To streamline the presentation, we assume, as is often the case in applications of the stochastic optimization problem (5), that $x_*$ is unconditional, i.e., $\nabla g(x_*)=0$, or, stated otherwise, $\mathbf{E}\{\nabla G(x_*,\omega)\}=0$. We also suppose that $\nabla G(x_*,\omega)$ is sub-Gaussian, namely that, for some $\sigma_*<\infty$,
$$
\mathbf{E}\Big\{\exp\Big(\|\nabla G(x_*,\omega)\|_*^2/\sigma_*^2\Big)\Big\}\leq\exp(1).
\tag{6}
$$
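A minimal sketch of such a stochastic oracle for the GLR instance of (5) is given below. The class name, the linear link and the Gaussian design are illustrative assumptions; the value and gradient formulas follow those of Section 1, and averaging queries at $x_*$ numerically reflects $\mathbf{E}\{\nabla G(x_*,\omega)\}=0$.

```python
import numpy as np

class GLROracle:
    """Illustrative unbiased oracle for the GLR instance of (5):
    omega = (phi, eta), G(x, omega) = s(phi^T x) - phi^T x * eta, with the
    linear link r(t) = t and s(t) = t^2/2 (illustrative choices)."""

    def __init__(self, x_star, sigma=0.1, rng=None):
        self.x_star, self.sigma = x_star, sigma
        self.rng = np.random.default_rng(rng)

    def query(self, x):
        n = self.x_star.size
        phi = self.rng.standard_normal(n)                          # random regressor
        eta = phi @ self.x_star + self.sigma * self.rng.standard_normal()
        u = phi @ x
        value = 0.5 * u**2 - u * eta                               # G(x, omega)
        grad = phi * (u - eta)                                     # unbiased for grad g(x)
        return value, grad

x_star = np.zeros(100); x_star[:3] = 1.0
oracle = GLROracle(x_star, rng=0)
grads = [oracle.query(x_star)[1] for _ in range(5000)]
print(np.linalg.norm(np.mean(grads, axis=0), ord=np.inf))          # approx 0 at x_*
```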
2.2 Composite Stochastic Mirror Descent algorithm

As mentioned in the introduction, (stochastic) optimization over the set of sparse solutions can be done through “composite” techniques. We take a similar approach here, transforming the generic problem (5) into the following composite Stochastic Optimization problem, adapted to some norm $\|\cdot\|$ and parameterized by $\kappa\geq 0$:
$$
\min_{x\in X}\Big[F_\kappa(x):=\tfrac{1}{2}g(x)+\kappa\|x\|=\tfrac{1}{2}\mathbf{E}\{G(x,\omega)\}+\kappa\|x\|\Big].
\tag{7}
$$
The purpose of this section is to derive a new (proximal) algorithm. We first provide the necessary background and notation.
Proximal setup, Bregman divergences and Proximal mapping.

Let $B$ be the unit ball of the norm $\|\cdot\|$ and let $\theta:B\to\mathbf{R}$ be a distance-generating function (d.-g.f.) of $B$, i.e., a continuously differentiable convex function which is strongly convex with respect to the norm $\|\cdot\|$:
$$
\langle\nabla\theta(x)-\nabla\theta(x'),x-x'\rangle\geq\|x-x'\|^2,\quad\forall x,x'\in X.
$$
We assume w.l.o.g. that $\theta(x)\geq\theta(0)=0$ and denote $\Theta=\max_{\|z\|\leq 1}\theta(z)$. We now introduce a local and renormalized version of the d.-g.f. $\theta$.
Definition 2.1

For any $x_0\in X$, let $X_R(x_0):=\{z\in X:\|z-x_0\|\leq R\}$ be the ball of radius $R$ around $x_0$. It is equipped with the d.-g.f. $\vartheta^R_{x_0}(z):=R^2\theta\left((z-x_0)/R\right)$.

Note that $\vartheta^R_{x_0}(z)$ is strongly convex on $X_R(x_0)$ with modulus 1, $\vartheta^R_{x_0}(x_0)=0$, and $\vartheta^R_{x_0}(z)\leq\Theta R^2$.
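For concreteness, one standard proximal setup for the sparsity norm $\|\cdot\|=\|\cdot\|_1$ on $\mathbf{R}^n$, common in the mirror-descent literature (given here as an illustration, not as a prescription of this paper), is
$$
p=1+\frac{1}{\ln n},\qquad \theta(x)=\frac{e^{2}\ln n}{2}\,\|x\|_p^2 .
$$
Since $\|x\|_p\le\|x\|_1\le e\,\|x\|_p$ for this $p$, and $\tfrac12\|x\|_p^2$ is strongly convex with respect to $\|\cdot\|_p$ with modulus $p-1=1/\ln n$, this $\theta$ is strongly convex with respect to $\|\cdot\|_1$ with modulus $1$, while $\Theta=\max_{\|z\|_1\le 1}\theta(z)\le e^{2}\ln(n)/2=O(\ln n)$. A logarithmic $\Theta$ is what keeps the bounds below nearly dimension-free.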
Definition 2.2

Given $x_0\in X$ and $R>0$, the Bregman divergence $V$ associated to $\vartheta$ is defined by
$$
V_{x_0}(x,z)=\vartheta^R_{x_0}(z)-\vartheta^R_{x_0}(x)-\langle\nabla\vartheta^R_{x_0}(x),z-x\rangle,\quad x,z\in X.
$$
We can now define the composite proximal mapping on $X_R(x_0)$ [39, 40] with respect to some convex and continuous mapping $h:X\to\mathbf{R}$.
Definition 2.3

The composite proximal mapping with respect to $h$ and $x$ is defined by
$$
\mathrm{Prox}_{h,x_0}(\zeta,x):=\operatorname*{arg\,min}_{z\in X_R(x_0)}\big\{\langle\zeta,z\rangle+h(z)+V_{x_0}(x,z)\big\}
=\operatorname*{arg\,min}_{z\in X_R(x_0)}\big\{\langle\zeta-\nabla\vartheta^R_{x_0}(x),z\rangle+h(z)+\vartheta^R_{x_0}(z)\big\}.
\tag{8}
$$
When (8) can be efficiently solved to high accuracy and $\Theta$ is “not too large” (we refer to [27, 36, 40]), the setup is called “prox-friendly”. We now introduce the main building block of our algorithm, the Composite Stochastic Mirror Descent.
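Before moving on, here is a minimal sketch of (8) under simplifying assumptions of our own: Euclidean d.-g.f. $\theta(x)=\tfrac12\|x\|_2^2$ (so $\vartheta^R_{x_0}(z)=\tfrac12\|z-x_0\|_2^2$), composite term $h(z)=\kappa\|z\|_1$, $X=\mathbf{R}^n$, and a radius $R$ large enough that the constraint $\|z-x_0\|\le R$ is inactive. Under these assumptions the prox reduces to soft-thresholding; the paper's non-Euclidean, constrained setup is more involved.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Componentwise argmin_z 0.5*||z - v||_2^2 + kappa*||z||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def prox_euclidean_l1(zeta, x, x0, kappa, R=None):
    """Composite prox (8) under the stated simplifications.
    Completing the square in the second form of (8) with theta = 0.5*||.||_2^2
    gives Prox_{h,x0}(zeta, x) = soft_threshold(x - zeta, kappa). The localization
    data (x0, R) do not enter because the radius constraint is assumed inactive."""
    return soft_threshold(x - zeta, kappa)

# Tiny usage example: one proximal step from x with a "gradient-like" input zeta.
x0 = np.zeros(5)
x = np.array([0.5, -0.2, 0.0, 1.0, -1.5])
zeta = np.array([0.3, 0.1, -0.05, 0.2, 0.4])   # e.g. gamma * stochastic gradient
print(prox_euclidean_l1(zeta, x, x0, kappa=0.1))
```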
Composite Stochastic Mirror Descent algorithm.

Given a sequence of positive step sizes $\gamma_i>0$, the Composite Stochastic Mirror Descent (CSMD) is defined by the recursion
$$
x_i=\mathrm{Prox}_{\gamma_ih,x_0}(\gamma_{i-1}\nabla G(x_{i-1},\omega_i),x_{i-1}),\quad x_0\in X.
\tag{9}
$$
After $m$ steps of CSMD, the final output is the approximate solution $\widehat{x}_m$ defined by
$$
\widehat{x}_m=\frac{\sum_{i=0}^{m-1}\gamma_ix_i}{\sum_{i=0}^{m-1}\gamma_i}.
\tag{10}
$$
For any integer $L\in\mathbf{N}$, we can also define the $L$-minibatch CSMD. Let $\omega_i^{(L)}=[\omega_i^1,\ldots,\omega_i^L]$ be i.i.d. realizations of $\omega_i$. The associated (averaged) stochastic gradient is then
$$
H\left(x_{i-1},\omega^{(L)}_i\right)=\frac{1}{L}\sum_{\ell=1}^{L}\nabla G(x_{i-1},\omega^\ell_i),
$$
which yields the $L$-minibatch CSMD recursion
$$
x_i^{(L)}=\mathrm{Prox}_{\gamma_ih,x_0}\left(\gamma_{i-1}H\left(x_{i-1},\omega^{(L)}_i\right),x_{i-1}^{(L)}\right),\quad x_0\in X,
\tag{11}
$$
with approximate solution $\widehat{x}_m^{(L)}=\sum_{i=0}^{m-1}\gamma_ix_i^{(L)}/\sum_{i=0}^{m-1}\gamma_i$ after $m$ iterations. From now on, we set $h(x)=\kappa\|x\|$.
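A schematic minibatch CSMD loop for the GLR instance is sketched below, under the same simplifying assumptions as the prox sketch above (Euclidean d.-g.f., $h=\kappa\|\cdot\|_1$, inactive radius constraint, constant step size). It is meant to show the structure of recursions (9)-(11) and the weighted averaging (10); it does not reproduce the paper's non-Euclidean setup or parameter choices.

```python
import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def csmd(oracle_grad, x0, gamma, kappa, m, L=1, rng=None):
    """Minibatch CSMD under the simplifying assumptions stated above.
    oracle_grad(x, rng) must return one stochastic gradient grad G(x, omega)."""
    rng = np.random.default_rng(rng)
    x = x0.copy()
    num, den = np.zeros_like(x0), 0.0
    for _ in range(m):
        H = np.mean([oracle_grad(x, rng) for _ in range(L)], axis=0)  # minibatch average
        x = soft_threshold(x - gamma * H, gamma * kappa)              # prox step (9)/(11)
        num, den = num + gamma * x, den + gamma
    return num / den                                                  # weighted average (10)

# Illustrative GLR oracle with linear link: grad G(x,(phi,eta)) = phi*(phi^T x - eta).
rng_data = np.random.default_rng(0)
n, s, sigma = 200, 5, 0.1
x_star = np.zeros(n); x_star[:s] = 1.0

def oracle_grad(x, rng):
    phi = rng.standard_normal(n)
    eta = phi @ x_star + sigma * rng.standard_normal()
    return phi * (phi @ x - eta)

x_hat = csmd(oracle_grad, x0=np.zeros(n), gamma=0.002, kappa=0.05, m=5000, L=5, rng=1)
print(np.linalg.norm(x_hat - x_star))
```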
Proposition 2.1

If the step sizes are constant, i.e., $\gamma_i\equiv\gamma\leq(4\nu)^{-1}$, $i=0,1,\ldots$, and the initial point $x_0\in X$ is such that $x_*\in X_R(x_0)$, then for any $t\gtrsim\sqrt{1+\ln m}$, with probability at least $1-4e^{-t}$,
$$
F_\kappa(\widehat{x}_m)-F_\kappa(x_*)\lesssim m^{-1}\big[\gamma^{-1}R^2(\Theta+t)+\kappa R+\gamma\sigma^2_*(m+t)\big],
\tag{12}
$$
and the approximate solution $\widehat{x}_m^{(L)}$ of the $L$-minibatch CSMD satisfies
$$
F_\kappa(\widehat{x}_m^{(L)})-F_\kappa(x_*)\lesssim m^{-1}\big[\gamma^{-1}R^2(\Theta+t)+\kappa R+\gamma\sigma^2_*\Theta L^{-1}(m+t)\big].
\tag{13}
$$
For the sake of clarity and conciseness, we denote by CSMD($x_0,\gamma,\kappa,R,m,L$) the approximate solution $\widehat{x}^{(L)}_m$ computed after $m$ iterations of the $L$-minibatch CSMD algorithm with initial point $x_0$, step size $\gamma$, and radius $R$, using recursion (11).
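To read (12): balancing the first and last terms over $\gamma$, a standard optimization of the bound spelled out here for the reader (it is not claimed to be the paper's exact tuning), one may take
$$
\gamma=\min\Big\{\frac{1}{4\nu},\;\frac{R\sqrt{\Theta+t}}{\sigma_*\sqrt{m+t}}\Big\}
\quad\Longrightarrow\quad
F_\kappa(\widehat{x}_m)-F_\kappa(x_*)\;\lesssim\;
R\sigma_*\sqrt{\frac{\Theta+t}{m}}+\frac{\nu R^2(\Theta+t)}{m}+\frac{\kappa R}{m}
\qquad(\text{for }m\gtrsim t),
$$
i.e., up to absolute constants, an $O(m^{-1/2})$ accuracy with an $O(m^{-1})$ term depending on the initial radius $R$.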
2.3 Main contribution: a multistage adaptive algorithm

Our approach to finding a sparse solution to the original stochastic optimization problem (5) consists in solving a sequence of auxiliary composite problems (7), with their sequence of parameters ($\kappa$, $x_0$, $R$) defined recursively. For the latter, we need to infer the quality of an approximate solution to (5). To this end, we introduce the following Reduced Strong Convexity (RSC) assumption, which is satisfied in the motivating example (it is discussed in the appendix for the sake of fluency):
Assumption [RSC]

There exist some $\delta>0$ and $\rho<\infty$ such that, for any feasible solution $\widehat{x}\in X$ to the composite problem (7) satisfying, with probability at least $1-\varepsilon$,
$$
F_\kappa(\widehat{x})-F_\kappa(x_*)\leq\upsilon,
$$
it holds, with probability at least $1-\varepsilon$, that
$$
\|\widehat{x}-x_*\|\leq\delta\left[\rho s\kappa+\upsilon\kappa^{-1}\right].
\tag{14}
$$

Given the problem parameters $s,\nu,\delta,\rho,\kappa,R$ and some initial point $x_0\in X$ such that $x_*\in X_R(x_0)$, Algorithm 1 works in stages. Each stage is a run of the CSMD algorithm with a properly set penalty parameter $\kappa$. More precisely, at stage $k+1$, given the approximate solution $\widehat{x}^k_m$ of stage $k$, a new instance of CSMD is initialized on $X_{R_{k+1}}(x^{k+1}_0)$ with $x^{k+1}_0=\widehat{x}^k_m$ and $R_{k+1}=R_k/2$. Furthermore, those stages are divided into two phases, which we refer to as preliminary and asymptotic:

Preliminary phase. During this phase, the step size $\gamma$ and the number of CSMD iterations per stage are fixed; the error of the approximate solutions converges linearly with the total number of calls to the stochastic oracle. This phase terminates when the error of the approximate solution becomes independent of the initial error of the algorithm; then the asymptotic phase begins.

Asymptotic phase. In this phase, the step size decreases and the length of the stages increases; the solution converges sublinearly, with the “standard” rate $O\big(N^{-1/2}\big)$, where $N$ is the total number of oracle calls. When the expensive proximal computation (8) results in a high numerical cost of the iterative algorithm, minibatches are used to keep the number of iterations per stage fixed.

In the algorithm description, $\overline{K}_1$ and $\overline{K}_2\asymp 1+\log(\frac{N}{m_0})$ stand for the respective maximal numbers of stages of the two phases of the method; here, $m_0\asymp s\rho\nu\delta^2(\Theta+t)$ is the length of the stages of the first (preliminary) phase. The pseudo-code for the variant of the asymptotic phase with minibatches is given in Algorithm 2 (a schematic sketch of the multistage structure appears after Remark 2.1 below).

The following theorem states the main result of this paper, an upper bound on the precision of the estimator computed by our multistage method.
16 | Assume that the total sample budget satisfies N≥m0𝑁subscript𝑚0N\geq m_{0}, so that at least one stage of the preliminary phase of Algorithm 1 is completed, then for t≳lnNgreater-than-or-equivalent-to𝑡𝑁t\gtrsim\sqrt{\ln N} the approximate solution x^Nsubscript^𝑥𝑁\widehat{x}_{N} of Algorithm 1 satisfies, with probability at least 1−C(K¯1+K¯2)e−t1𝐶subscript¯𝐾1subscript¯𝐾2superscript𝑒𝑡1-C(\overline{K}_{1}+\overline{K}_{2})e^{-t},
, 1 = ‖x^N−x∗‖≲Rexp{−cδ2ρνNs(Θ+t)}+δ2ρσ∗sΘ+tN.less-than-or-similar-tonormsubscript^𝑥𝑁subscript𝑥𝑅𝑐superscript𝛿2𝜌𝜈𝑁𝑠Θ𝑡superscript𝛿2𝜌subscript𝜎𝑠Θ𝑡𝑁\|\widehat{x}_{N}-x_{*}\|\lesssim R\exp\left\{-\frac{c}{{\delta^{2}}\rho\nu}{N\over s(\Theta+t)}\right\}+{{\delta^{2}}\rho\sigma_{*}s}\sqrt{\frac{\Theta+t}{N}}.. , 2 =
The corresponding solution x^N(b)subscriptsuperscript^𝑥𝑏𝑁\widehat{x}^{(b)}_{N} of the minibatch Algorithm 2 satisfies with probability ≥1−C(K¯1+K~2)e−tabsent1𝐶subscript¯𝐾1subscript~𝐾2superscript𝑒𝑡\geq 1-C(\overline{K}_{1}+\widetilde{K}_{2})e^{-t}
, 1 = ‖x^N(b)−x∗‖≲Rexp{−cδ2ρνNs(Θ+t)}+δ2ρσ∗sΘ(Θ+t)N.less-than-or-similar-tonormsubscriptsuperscript^𝑥𝑏𝑁subscript𝑥𝑅𝑐superscript𝛿2𝜌𝜈𝑁𝑠Θ𝑡superscript𝛿2𝜌subscript𝜎𝑠ΘΘ𝑡𝑁\|\widehat{x}^{(b)}_{N}-x_{*}\|\lesssim R\exp\left\{-\frac{c}{{\delta^{2}}\rho\nu}{N\over s\left(\Theta+t\right)}\right\}+{{\delta^{2}}\rho\sigma_{*}s}{}\sqrt{\frac{\Theta\left(\Theta+t\right)}{N}}.. , 2 =
where K~2≍1+ln(NΘm0)asymptotically-equalssubscript~𝐾21𝑁Θsubscript𝑚0\widetilde{K}_{2}\asymp 1+\ln\big{(}\frac{N}{\Theta m_{0}}\big{)} is the bound for the number of stages of the asymptotic phase of the minibatch algorithm. | 838 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.3 Main contribution: a multistage adaptive algorithm
Assumption [RSC]
Theorem 2.1
Assume that the total sample budget satisfies $N\geq m_0$, so that at least one stage of the preliminary phase of Algorithm 1 is completed. Then for $t\gtrsim\sqrt{\ln N}$ the approximate solution $\widehat{x}_N$ of Algorithm 1 satisfies, with probability at least $1-C(\overline{K}_1+\overline{K}_2)e^{-t}$,
\[
\|\widehat{x}_N-x_*\|\lesssim R\exp\left\{-\frac{c}{\delta^{2}\rho\nu}\,\frac{N}{s(\Theta+t)}\right\}+\delta^{2}\rho\sigma_*s\sqrt{\frac{\Theta+t}{N}}.
\]
The corresponding solution $\widehat{x}^{(b)}_N$ of the minibatch Algorithm 2 satisfies, with probability at least $1-C(\overline{K}_1+\widetilde{K}_2)e^{-t}$,
\[
\|\widehat{x}^{(b)}_N-x_*\|\lesssim R\exp\left\{-\frac{c}{\delta^{2}\rho\nu}\,\frac{N}{s(\Theta+t)}\right\}+\delta^{2}\rho\sigma_*s\sqrt{\frac{\Theta(\Theta+t)}{N}},
\]
where $\widetilde{K}_2\asymp 1+\ln\big(\frac{N}{\Theta m_0}\big)$ is the bound on the number of stages of the asymptotic phase of the minibatch algorithm.
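For intuition, the snippet below evaluates the two terms of the bound for a few sample sizes; all problem parameters and the absolute constant $c$ are placeholder assumptions chosen only to make the linear (first term dominates) and sublinear (second term dominates) regimes visible.

```python
import math

# placeholder values (assumptions): R, c, delta, rho, nu, s, Theta, t, sigma_star
R, c, delta, rho, nu, s, Theta, t, sigma_star = 10.0, 1.0, 1.0, 1.0, 1.0, 200, math.log(10_000), 4.0, 0.1
for N in (2_000, 10_000, 50_000, 250_000):
    linear_term = R * math.exp(-c / (delta ** 2 * rho * nu) * N / (s * (Theta + t)))
    asymptotic_term = delta ** 2 * rho * sigma_star * s * math.sqrt((Theta + t) / N)
    print(f"N={N:>7d}  linear={linear_term:.3e}  asymptotic={asymptotic_term:.3e}")
```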
Remark 2.1
Along with the oracle computation, the proximal computation implemented at each iteration of the algorithm is an important part of the computational cost of the method. It becomes even more important during the asymptotic phase, when the number of iterations per stage increases exponentially fast with the stage count, and may result in poor real-time convergence. The interest of the minibatch implementation of the second phase of the algorithm lies in drastically reducing the number of iterations per asymptotic stage. The price to be paid is an extra $\sqrt{\Theta}$ factor which could, in theory, hinder convergence. However, in the problems of interest (sparse and group-sparse recovery, low-rank matrix recovery) $\Theta$ is logarithmic in the problem dimension. Furthermore, in our numerical experiments we did not observe any accuracy degradation when using the minibatch variant of the method.
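The effect can be seen numerically: averaging a minibatch of $b$ i.i.d. noise vectors divides their $\ell_\infty$ (dual) norm roughly by $\sqrt{b}$, so far fewer prox steps are needed per asymptotic stage, at the price of the logarithmic factor discussed above. The sketch below is illustrative only; the Gaussian noise model and the dimension are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10_000, 100                      # dimension and Monte-Carlo repetitions (assumptions)
for b in (1, 4, 16, 64):
    sup_norms = [np.abs(rng.standard_normal((b, n)).mean(axis=0)).max() for _ in range(reps)]
    print(f"batch={b:>3d}  mean sup-norm of averaged noise: {np.mean(sup_norms):.3f}")
```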
3 Sparse generalized linear regression by stochastic approximation
3.1 Problem setting
We now consider again the original problem of recovery of an $s$-sparse signal $x_*\in X\subset{\mathbf{R}}^n$ from random observations defined by
\[
\eta_i=\mathfrak{r}(\phi_i^Tx_*)+\sigma\xi_i,\qquad i=1,2,\dots,N, \tag{15}
\]
where $\mathfrak{r}:{\mathbf{R}}\to{\mathbf{R}}$ is some non-decreasing and continuous “activation function,” and $\phi_i\in{\mathbf{R}}^n$ and $\xi_i\in{\mathbf{R}}$ are mutually independent.
We assume that the $\xi_i$ are sub-Gaussian, i.e., ${\mathbf{E}}\{e^{\xi_i^2}\}\leq\exp(1)$, while the regressors $\phi_i$ are bounded, i.e., $\|\phi_i\|_\infty\leq\overline{\nu}$. We also denote $\Sigma={\mathbf{E}}\{\phi_i\phi_i^T\}$, with $\Sigma\succeq\kappa_\Sigma I$ for some $\kappa_\Sigma>0$, and $\|\Sigma_j\|_\infty\leq\upsilon<\infty$.
We will apply the machinery developed in Section 2 to
\[
g(x)={\mathbf{E}}\big\{\mathfrak{s}(\phi^Tx)-x^T\phi\,\eta\big\},
\]
where $\mathfrak{r}(t)=\nabla\mathfrak{s}(t)$ for some convex and continuously differentiable $\mathfrak{s}$, with the norm $\|\cdot\|=\|\cdot\|_1$ (hence $\|\cdot\|_*=\|\cdot\|_\infty$), starting from an initial point $x_0\in X$ such that $\|x_*-x_0\|_1\leq R$. It remains to prove that the different assumptions of Section 2 are satisfied.
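For concreteness, here is a minimal sketch (not the authors' implementation) of the observation model (15) together with the stochastic gradient oracle $\nabla G(x,\omega)=\phi\big(\mathfrak{r}(\phi^Tx)-\eta\big)$ that the mirror-descent machinery consumes; the dimensions, the noise level, and the choice $\mathfrak{r}(t)=t$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, sigma, nu_bar = 1_000, 10, 0.1, 1.0          # illustrative problem sizes (assumptions)

def r(t):                                          # activation; identity = plain linear regression
    return t

x_star = np.zeros(n)                               # s-sparse ground truth
x_star[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

def stochastic_gradient(x):
    """Draw one observation (phi, eta) from model (15) and return nabla G(x, omega)."""
    phi = rng.uniform(-nu_bar, nu_bar, size=n)     # bounded regressors: ||phi||_inf <= nu_bar
    eta = r(phi @ x_star) + sigma * rng.standard_normal()
    return phi * (r(phi @ x) - eta)

g0 = stochastic_gradient(np.zeros(n))
print(g0.shape, float(np.abs(g0).max()))
```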
Proposition 3.1
Assume that $\mathfrak{r}$ is $\overline{r}$-Lipschitz continuous and $\underline{r}$-strongly monotone (i.e., $|\mathfrak{r}(t)-\mathfrak{r}(t')|\geq\underline{r}|t-t'|$, which implies that $\mathfrak{s}$ is $\underline{r}$-strongly convex). Then
1. [Smoothness] $G(\cdot,\omega)$ is $\mathcal{L}(\omega)$-smooth with $\mathcal{L}(\omega)\leq\overline{r}\,\overline{\nu}^2$.
2. [Quadratic minoration] $g$ satisfies
\[
g(x)-g(x_*)\geq\tfrac{1}{2}\,\underline{r}\,\|x-x_*\|^2_{\Sigma}. \tag{16}
\]
3. [Reduced Strong Convexity] Assumption [RSC] holds with $\delta=1$ and $\rho=(\kappa_\Sigma\underline{r})^{-1}$.
4. [Sub-Gaussianity] $\nabla G(x_*,\omega_i)$ is $\sigma^2\overline{\nu}^2$-sub-Gaussian.
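As a quick numerical sanity check of the smoothness claim above, take the simplest instance $\mathfrak{r}(t)=t$ (so $\overline{r}=1$): since $\nabla G(x,\omega)=\phi(\mathfrak{r}(\phi^Tx)-\eta)$, one should observe $\|\nabla G(x,\omega)-\nabla G(x',\omega)\|_\infty\leq\overline{r}\,\overline{\nu}^2\|x-x'\|_1$. The sketch below is illustrative only; the dimension and the sampling of test points are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nu_bar = 200, 1.0
phi = rng.uniform(-nu_bar, nu_bar, size=n)         # one fixed realization of the regressor
eta = 0.3                                          # the observation cancels in the difference

def grad_G(x):
    return phi * (phi @ x - eta)                   # r(t) = t, so r_bar = 1

worst_ratio = 0.0
for _ in range(1_000):
    x, x_prime = rng.standard_normal(n), rng.standard_normal(n)
    ratio = np.abs(grad_G(x) - grad_G(x_prime)).max() / np.abs(x - x_prime).sum()
    worst_ratio = max(worst_ratio, float(ratio))
print(worst_ratio, "<=", nu_bar ** 2)
```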
The proof is postponed to the appendix. The last point is a consequence of a generalization of the Restricted Eigenvalue property [5], which we detail below (as it gives insight on why Proposition 3.1 holds).
This condition, which we state and call $\mathbf{Q}(\lambda,\psi)$ in Lemma 3.1 below, is reminiscent of [26] and of the corresponding assumptions of [41, 14].
Lemma 3.1
Let $\lambda>0$ and $0<\psi\leq 1$, and suppose that for all subsets $I\subset\{1,\dots,n\}$ of cardinality smaller than $s$ the following property is verified:
\[
\forall z\in{\mathbf{R}}^n\quad \|z_I\|_1\leq\sqrt{\frac{s}{\lambda}}\,\|z\|_\Sigma+\tfrac{1}{2}(1-\psi)\|z\|_1, \tag{$\mathbf{Q}(\lambda,\psi)$}
\]
where $z_I$ is obtained from $z$ by zeroing all components with indices $i\notin I$.
If $g(\cdot)$ satisfies the quadratic minoration condition, i.e., for some $\mu>0$,
\[
g(x)-g(x_*)\geq\tfrac{1}{2}\,\mu\,\|x-x_*\|_\Sigma^2, \tag{17}
\]
and if $\widehat{x}$ is an admissible solution to (7) satisfying, with probability at least $1-\varepsilon$,
\[
F_\kappa(\widehat{x})\leq F_\kappa(x_*)+\upsilon,
\]
then, with probability at least $1-\varepsilon$,
\[
\|\widehat{x}-x_*\|_1\leq\frac{s\kappa}{\lambda\mu\psi}+\frac{\upsilon}{\kappa\psi}. \tag{18}
\]
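Because the worst index set $I$ in $\mathbf{Q}(\lambda,\psi)$ always collects the $s$ largest entries of $|z|$, the condition is easy to probe numerically. The following Monte-Carlo falsification check is a sketch only; the matrix $\Sigma$ and the candidate pair $(\lambda,\psi)$ are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s = 50, 5
Sigma = np.eye(n)                                  # illustrative covariance (assumption)
lam, psi = 1.0, 0.5                                # candidate parameters (assumptions)

def q_condition_holds(z):
    idx = np.argsort(-np.abs(z))[:s]               # worst-case I: the s largest coordinates
    lhs = np.abs(z[idx]).sum()
    rhs = np.sqrt(s / lam) * np.sqrt(z @ Sigma @ z) + 0.5 * (1.0 - psi) * np.abs(z).sum()
    return lhs <= rhs + 1e-12

print(all(q_condition_holds(rng.standard_normal(n)) for _ in range(10_000)))
```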
Remark 3.1
Condition $\mathbf{Q}(\lambda,\psi)$ generalizes the classical Restricted Eigenvalue (RE) property [5] and the Compatibility Condition [46], and is the most relaxed condition under which classical bounds for the error of $\ell_1$-recovery routines were established. Validity of $\mathbf{Q}(\lambda,\psi)$ with some $\lambda>0$ is necessary for $\Sigma$ to possess the celebrated null-space property [13]
\[
\exists\psi>0:\;\max_{I,\,|I|\leq s}\|z_I\|_1\leq\tfrac{1}{2}(1-\psi)\|z\|_1\quad\forall z\in\mathrm{Ker}(\Sigma),
\]
which is necessary and sufficient for the $s$-goodness of $\Sigma$ (i.e., $\widehat{x}\in\mathrm{Argmin}_u\{\|u\|:\;\Sigma u=\Sigma x_*\}$ reproduces exactly every $s$-sparse signal $x_*$ in the noiseless case).
When $\Sigma$ possesses the null-space property, $\mathbf{Q}(\lambda,\psi)$ may hold for $\Sigma$ with nontrivial kernel; this is typically the case for random matrices [41, 42], such as rank-deficient Wishart matrices, etc. When $\Sigma$ is a regular matrix, condition $\mathbf{Q}(\lambda,\psi)$ may also hold with a constant $\lambda$ much larger than the minimal eigenvalue of $\Sigma$, namely when the eigenspace corresponding to the small eigenvalues of $\Sigma$ does not contain vectors $z$ with $\|z_I\|_1>\tfrac{1}{2}(1-\psi)\|z\|_1$.
Remarks.
In the case of linear regression, where $\mathfrak{r}(t)=t$, it holds
\begin{align*}
g(x)&={\mathbf{E}}\big\{\tfrac{1}{2}(\phi^Tx)^2-x^T\phi\,\eta\big\}=\tfrac{1}{2}{\mathbf{E}}\big\{(\phi^T(x_*-x))^2-(\phi^Tx_*)^2\big\}\\
&=\tfrac{1}{2}(x-x_*)^T\Sigma(x-x_*)-\tfrac{1}{2}x_*^T\Sigma x_*=\tfrac{1}{2}\|x-x_*\|_\Sigma^2-\tfrac{1}{2}\|x_*\|_\Sigma^2
\end{align*}
and $\nabla G(x,\omega)=\phi\phi^T(x-x_*)-\sigma\xi\phi$. In this case $\mathcal{L}(\omega)\leq\|\phi\phi^T\|_\infty\leq\overline{\nu}^2$.
Note that the quadratic minoration bound (16) for $g(x)-g(x_*)$ is often overly pessimistic. Indeed, consider for instance a Gaussian regressor $\phi\sim\mathcal{N}(0,\Sigma)$ (such regressors are not a.s. bounded; we consider this example for illustration purposes only) and the activation $\mathfrak{r}$ defined, for some $0\leq\alpha\leq 1$ (with the convention $0/0=0$), by
\[
\mathfrak{r}(t)=\begin{cases} t, & |t|\leq 1,\\[2pt] \mathrm{sign}(t)\,\big[\alpha^{-1}(|t|^\alpha-1)+1\big], & |t|>1.\end{cases} \tag{21}
\]
When passing from $\phi$ to $\varphi=\Sigma^{-1/2}\phi$ and from $x$ to $z=\Sigma^{1/2}x$, and using the fact that
\[
\varphi=\frac{zz^T}{\|z\|_2^2}\,\varphi+\underbrace{\left(I-\frac{zz^T}{\|z\|_2^2}\right)\varphi}_{=:\,\chi}
\]
with $\frac{zz^T}{\|z\|_2^2}\varphi$ and $\chi$ independent, we obtain
\[
H(x)={\mathbf{E}}\{\phi\,\mathfrak{r}(\phi^Tx)\}={\mathbf{E}}\left\{\frac{zz^T}{\|z\|_2^2}\,\varphi\,\mathfrak{r}(\varphi^Tz)\right\}=\frac{z}{\|z\|_2}\,{\mathbf{E}}\big\{\varsigma\,\mathfrak{r}(\varsigma\|z\|_2)\big\}=\frac{\Sigma^{1/2}x}{\|x\|_\Sigma}\,{\mathbf{E}}\big\{\varsigma\,\mathfrak{r}(\varsigma\|x\|_\Sigma)\big\},
\]
where $\varsigma\sim\mathcal{N}(0,1)$. Thus, $H(x)$ is proportional to $\frac{\Sigma^{1/2}x}{\|x\|_\Sigma}$ with coefficient
\[
h\big(\|x\|_\Sigma\big)={\mathbf{E}}\big\{\varsigma\,\mathfrak{r}(\varsigma\|x\|_\Sigma)\big\}.
\]
Figure 1 represents the mapping $h$ for different values of $\alpha$ (on the left), along with the corresponding mapping $H$ on a $\|\cdot\|_\Sigma$-ball of radius $r$ centered at the origin (on the right).
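The coefficient $h$ is straightforward to estimate by Monte-Carlo, which is essentially how one reproduces the left panel of Figure 1; the sketch below uses the activation (21) and a few illustrative values of $\alpha$ and $r$ (all assumptions; the degenerate limit $\alpha\to 0$ is avoided).

```python
import numpy as np

def r_alpha(t, alpha):
    t = np.asarray(t, dtype=float)
    tail = np.sign(t) * ((np.abs(t) ** alpha - 1.0) / alpha + 1.0)   # branch of (21) for |t| > 1
    return np.where(np.abs(t) <= 1.0, t, tail)

def h(r, alpha, n_mc=200_000, seed=0):
    varsigma = np.random.default_rng(seed).standard_normal(n_mc)
    return float(np.mean(varsigma * r_alpha(varsigma * r, alpha)))

for alpha in (1.0, 0.5, 0.25):
    print(alpha, [round(h(r, alpha), 3) for r in (0.5, 1.0, 2.0, 5.0)])
```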
3.2 Stochastic Mirror Descent algorithm
In this section, we describe the statistical properties of approximate solutions of Algorithm 1 when applied to the sparse recovery problem. We shall use the following distance-generating function of the $\ell_1$-ball of ${\mathbf{R}}^n$ (cf. [27, Section 5.7.1]):
\[
\theta(x)=\frac{c}{p}\,\|x\|_p^p,\qquad p=\begin{cases}2, & n=2,\\[2pt] 1+\frac{1}{\ln n}, & n\geq 3,\end{cases}\qquad c=\begin{cases}2, & n=2,\\[2pt] e\ln n, & n\geq 3.\end{cases} \tag{26}
\]
It immediately follows that $\theta$ is strongly convex with modulus 1 w.r.t. the norm $\|\cdot\|_1$ on its unit ball, and that $\Theta\leq e\ln n$. In particular, Theorem 2.1 entails the following statement.
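For reference, the sketch below implements (26), its gradient, and the associated Bregman divergence $V(x,z)=\theta(z)-\theta(x)-\langle\nabla\theta(x),z-x\rangle$, which is the proximal geometry underlying the CSMD iterations on the $\ell_1$-ball; the recentring/radius normalization $\vartheta_{x_0}^R$ used by the algorithm is omitted, and the sample points are arbitrary.

```python
import numpy as np

def theta_params(n):
    if n == 2:
        return 2.0, 2.0                             # (p, c) for n = 2
    return 1.0 + 1.0 / np.log(n), np.e * np.log(n)  # (p, c) for n >= 3

def theta(x):
    p, c = theta_params(x.size)
    return (c / p) * np.sum(np.abs(x) ** p)

def grad_theta(x):
    p, c = theta_params(x.size)
    return c * np.sign(x) * np.abs(x) ** (p - 1.0)

def bregman(x, z):                                  # V(x, z) built from theta
    return theta(z) - theta(x) - grad_theta(x) @ (z - x)

x = np.array([0.2, -0.1, 0.05, 0.0, 0.3])
z = np.array([0.1, 0.0, 0.1, -0.2, 0.25])
# strong convexity w.r.t. ||.||_1 on the unit ball: V(x, z) >= 0.5 * ||z - x||_1^2
print(round(bregman(x, z), 6), round(0.5 * np.sum(np.abs(z - x)) ** 2, 6))
```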
Proposition 3.2
For $t\gtrsim\sqrt{\ln N}$, and assuming the sample budget is large enough, i.e., $N\geq m_0$ (so that at least one stage of the preliminary phase of Algorithm 1 is completed), the approximate solution $\widehat{x}_N$ output by Algorithm 1 satisfies, with probability at least $1-Ce^{-t}\ln N$,
\[
\|\widehat{x}_N-x_*\|_1\lesssim R\exp\left\{-c\,\frac{\underline{r}\,\kappa_\Sigma}{\overline{r}\,\overline{\nu}^2}\,\frac{N}{s(\ln n+t)}\right\}+\frac{\sigma\overline{\nu}s}{\underline{r}\,\kappa_\Sigma}\sqrt{\frac{\ln n+t}{N}}. \tag{27}
\]
The corresponding solution $\widehat{x}^{(b)}_N$ of the minibatch variant of the algorithm satisfies, with probability at least $1-Ce^{-t}\ln N$,
\[
\|\widehat{x}^{(b)}_N-x_*\|_1\lesssim R\exp\left\{-c\,\frac{\underline{r}\,\kappa_\Sigma}{\overline{r}\,\overline{\nu}^2}\,\frac{N}{s(\ln n+t)}\right\}+\frac{\sigma\overline{\nu}s}{\underline{r}\,\kappa_\Sigma}\sqrt{\frac{\ln n\,(\ln n+t)}{N}}.
\]
Remark 3.2
The bounds for the $\ell_1$-norm of the error $\widehat{x}_N-x_*$ (or $\widehat{x}^{(b)}_N-x_*$) established in Proposition 3.2 allow us to quantify the prediction error $g(\widehat{x}_N)-g(x_*)$ (and $g(\widehat{x}^{(b)}_N)-g(x_*)$), and also lead to bounds for $\|\widehat{x}_N-x_*\|_\Sigma$ and $\|\widehat{x}_N-x_*\|_2$ (respectively, for $\|\widehat{x}^{(b)}_N-x_*\|_\Sigma$ and $\|\widehat{x}^{(b)}_N-x_*\|_2$).
For instance, in the present setting Proposition 2.1 implies the following bound on the prediction error after $N$ steps of the algorithm:
\[
g(\widehat{x}_N)-g(x_*)\lesssim\frac{R^2\kappa_\Sigma\underline{r}}{s}\exp\left\{-\frac{c\,\kappa_\Sigma\underline{r}}{\delta^2\overline{r}\,\overline{\nu}^2}\,\frac{N}{s(\Theta+t)}\right\}+\frac{\sigma^2\overline{\nu}^2s(\Theta+t)}{\kappa_\Sigma\underline{r}\,N}
\]
with probability at least $1-C\ln N\,e^{-t}$. We conclude by (16) that
\begin{align*}
\|\widehat{x}_N-x_*\|_2^2&\leq\kappa_\Sigma^{-1}\|\widehat{x}_N-x_*\|^2_\Sigma\leq 2\kappa_\Sigma^{-1}\underline{r}^{-1}\,[g(\widehat{x}_N)-g(x_*)]\\
&\lesssim\frac{R^2}{s}\exp\left\{-\frac{c\,\kappa_\Sigma\underline{r}}{\delta^2\overline{r}\,\overline{\nu}^2}\,\frac{N}{s(\Theta+t)}\right\}+\frac{\sigma^2\overline{\nu}^2s(\Theta+t)}{\kappa^2_\Sigma\underline{r}^2N}.
\end{align*}
In other words, the error $\|\widehat{x}_N-x_*\|_2$ converges geometrically to the “asymptotic rate” $\frac{\sigma\overline{\nu}}{\kappa_\Sigma\underline{r}}\sqrt{\frac{s(\Theta+t)}{N}}$, which is the “standard” rate established in this setting (cf. [1, 5, 35], etc.).
Remark 3.3
The proposed approach also allows one to address the situation in which the regressors are not a.s. bounded. For instance, consider the case of random regressors with i.i.d. sub-Gaussian entries such that
\[
\forall j\leq n,\qquad {\mathbf{E}}\left[\exp\left(\frac{[\phi_i]_j^2}{\varkappa^2}\right)\right]\leq \exp(1).
\]
Using the fact that the maximum of the uniform norms $\|\phi_i\|_\infty$, $1\leq i\leq m$, concentrates around $\varkappa\sqrt{\ln(mn)}$, along with the independence of the noises $\xi_i$ and the regressors $\phi_i$, the “smoothness” and “sub-Gaussianity” assumptions of Proposition 3.2 can be stated “conditionally” to the event $\big\{\omega:\;\max_{i\leq m}\|\phi_i\|_\infty^2\lesssim\varkappa^2(\ln[mn]+t)\big\}$, which has probability at least $1-e^{-t}$. When replacing the bound on the uniform norm of the regressors with $\varkappa^2(\ln[mn]+t)$ in the definition of the algorithm parameters, and combining with an appropriate deviation inequality for martingales (cf., e.g., [4]), one arrives at a bound for the error $\|\widehat{x}_N-x_*\|_1$ of Algorithm 1 similar to (27) of Proposition 3.2, in which $\overline{\nu}$ is replaced with $\varkappa\sqrt{\ln[mn]+t}$.
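The concentration fact invoked here is easy to visualize: for $m\times n$ i.i.d. sub-Gaussian entries, the maximum of the uniform norms grows like $\varkappa\sqrt{\ln(mn)}$ up to an absolute constant. The sketch below uses standard normal entries ($\varkappa$ of order one); the sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
for m, n in [(100, 1_000), (1_000, 10_000)]:
    max_sup_norm = float(np.abs(rng.standard_normal((m, n))).max())
    print(m, n, round(max_sup_norm, 2), "reference:", round(float(np.sqrt(2 * np.log(m * n))), 2))
```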
3.3 Numerical experiments
In this section, we present the results of a small simulation study illustrating the theoretical developments of the previous section (the reader is invited to check Section C of the supplementary material for more experimental results). We consider the GLR model (15) with the activation function (21) where $\alpha=1/2$.
In our simulations, $x_*$ is an $s$-sparse vector whose $s$ non-vanishing components are sampled independently from the standard Gaussian distribution; the regressors $\phi_i$ are sampled from a multivariate Gaussian distribution $\phi\sim\mathcal{N}(0,\Sigma)$, where $\Sigma$ is a diagonal covariance matrix with diagonal entries $\sigma_1\leq\dots\leq\sigma_n$. In Figure 2 we report on the experiment in which we compare the performance of the CSMD-SR algorithm from Section 2.3 to that of four other methods. The contenders are (1) the “vanilla” non-Euclidean SMD algorithm constrained to the $\ell_1$-ball and equipped with the distance-generating function (26), (2) the composite non-Euclidean dual averaging algorithm ($p$-Norm RDA) of [47], (3) the multistage SMD-SR of [23], and (4) “vanilla” Euclidean SGD.
The regularization parameter of the $\ell_1$ penalty in (2) is set to the theoretically optimal value $\lambda=2\sigma\sqrt{2\log(n)/T}$. The dimension of the parameter space is $n=500\,000$, the sparsity level of the optimal point $x_*$ is $s=200$, and the total budget of oracle calls is $N=250\,000$; we use the identity regressor covariance matrix ($\Sigma=I_n$) and $\sigma\in\{0.001,0.1\}$.
To reduce computation time we use the minibatch versions of the multistage algorithms (CSMD-SR and algorithm (3)); the data used to compute the stochastic gradient realizations $\nabla G(x_i,\omega)=\phi(\mathfrak{r}(\phi^Tx_i)-\eta)$ at the current search point $x_i$ are generated “on the fly.”
We repeat each simulation 20 times and plot the median value, along with the first and last deciles, of the error $\|\widehat{x}_i-x_*\|_1$ at each iteration of the algorithm against the number of oracle calls.
The proposed method outperforms the other algorithms, which struggle to reach the regime where the stochastic noise is dominant.
In the second experiment reported here, we study the behavior of the multistage algorithm derived from Algorithm 2 in which, instead of using independent data samples, we reuse the same data at each stage of the method. In Figure 3 we present the results of a comparison of the CSMD-SR algorithm with its variant with data recycling. This version is of interest as it quickly attains the noise-dominated regime while using a limited amount of samples.
In our first experiment, we consider a linear regression problem with parameter dimension $n=100\,000$ and sparsity level $s=75$ of the optimal solution; in the second experiment we consider the GLR model (15) with activation function $\mathfrak{r}_{1/10}(t)$. We choose $\Sigma=I_n$ and $\sigma=0.001$; we run 14 (preliminary) stages of the algorithm with $m_0=3500$ in the first simulation and $m_0=4500$ in the second. We believe that the results speak for themselves.
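A sketch of this synthetic setup (not the authors' experiment code) is given below: $n$, $s$, $N$, $\sigma$ and the formula for $\lambda$ follow the text, the activation is (21) with $\alpha=1/2$, and the identification $T=N$ in the $\lambda$ formula is our assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, N, sigma = 500_000, 200, 250_000, 0.1

x_star = np.zeros(n)                                # s-sparse optimum with Gaussian entries
x_star[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

def r_half(t):                                      # activation (21) with alpha = 1/2
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= 1.0, t, np.sign(t) * (2.0 * (np.sqrt(np.abs(t)) - 1.0) + 1.0))

def minibatch_gradient(x, batch=8):
    """Average of `batch` stochastic gradients phi*(r(phi^T x) - eta), data drawn on the fly."""
    Phi = rng.standard_normal((batch, n))           # Sigma = I_n regressors
    eta = r_half(Phi @ x_star) + sigma * rng.standard_normal(batch)
    return Phi.T @ (r_half(Phi @ x) - eta) / batch

lam = 2.0 * sigma * np.sqrt(2.0 * np.log(n) / N)    # l1 penalty level from the text (with T = N)
print(round(float(lam), 6), minibatch_gradient(np.zeros(n)).shape)
```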
Acknowledgements
This work was supported by Multidisciplinary Institute in Artificial intelligence MIAI @ Grenoble Alpes (ANR-19-P3IA-0003), “Investissements d’avenir” program (ANR20-CE23-0007-01), FAIRPLAY project, LabEx Ecodec (ANR11-LABX-0047), and ANR-19-CE23-0026. The authors would also like to acknowledge CRITEO AI Lab for supporting this work.
Appendix A Proofs
We use notation ${\mathbf{E}}_i$ for the conditional expectation given $x_0$ and $\omega_1,\dots,\omega_i$.
A.1 Proof of Proposition 2.1
The result of Proposition 2.1 is an immediate consequence of the following statement.
Proposition A.1
Let
\[
f(x)=\tfrac{1}{2}g(x)+h(x),\qquad x\in X.
\]
In the situation of Section 2.2, let $\gamma_i\leq(4\nu)^{-1}$ for all $i=0,1,\dots$, and let $\widehat{x}_m$ be defined in (10), where the $x_i$ are the iterations (9). Then for any $t\geq 2\sqrt{2+\ln m}$ there is $\overline{\Omega}_m\subset\Omega$ such that $\mathrm{Prob}(\overline{\Omega}_m)\geq 1-4e^{-t}$ and for all $\omega^m=[\omega_1,\dots,\omega_m]\in\overline{\Omega}_m$,
\begin{align}
\left(\sum_{i=0}^{m-1}\gamma_i\right)[f(\widehat{x}_m)-f(x_*)]
&\leq\sum_{i=0}^{m-1}\Big[\tfrac{1}{2}\gamma_i\langle\nabla g(x_i),x_i-x_*\rangle+\gamma_{i+1}\big(h(x_{i+1})-h(x_*)\big)\Big]\nonumber\\
&\leq V(x_0,x_*)+\gamma_0[h(x_0)-h(x_*)]-\gamma_m[h(x_m)-h(x_*)]\nonumber\\
&\quad+V(x_0,x_*)+15tR^2+\sigma^2_*\Big[7\sum_{i=0}^{m-1}\gamma_i^2+24t\,\overline{\gamma}^2\Big].\tag{28}
\end{align}
In particular, when using the constant stepsize strategy with $\gamma_i\equiv\gamma$, $0<\gamma\leq(4\nu)^{-1}$, one has
\[
\tfrac{1}{2}[g(\widehat{x}_m)-g(x_*)]+[h(\widehat{x}_m)-h(x_*)]
\leq\frac{V(x_0,x_*)+15tR^2}{\gamma m}+\frac{h(x_0)-h(x_m)}{m}+\gamma\sigma^2_*\Big(7+\frac{24t}{m}\Big).\tag{29}
\]
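The iterations (9) referenced above are composite prox-mapping steps. For intuition only, here is a minimal sketch of the Euclidean special case, $\vartheta(x)=\tfrac12\|x\|_2^2$ with $h(z)=\kappa\|z\|_1$ and no constraint, where the step has a closed form (soft-thresholding); the algorithm of the paper uses the non-Euclidean prox associated with (26) instead.

```python
import numpy as np

def euclidean_composite_prox(x, H, gamma, kappa):
    """argmin_z  <gamma*H, z> + gamma*kappa*||z||_1 + 0.5*||z - x||_2^2,
    i.e. soft-thresholding of the gradient step x - gamma*H at level gamma*kappa."""
    u = x - gamma * H
    return np.sign(u) * np.maximum(np.abs(u) - gamma * kappa, 0.0)

x = np.array([0.9, -0.2, 0.05, 0.0])
H = np.array([1.0, -1.0, 0.5, 0.1])
print(euclidean_composite_prox(x, H, gamma=0.1, kappa=0.5))
```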
Proof.
Denote $H_i=\nabla G(x_{i-1},\omega_i)$. In the sequel, we use the shortcut notation $\vartheta(z)$ and $V(x,z)$ for $\vartheta_{x_0}^R(z)$ and $V_{x_0}(x,z)$ when the exact values of $x_0$ and $R$ are clear from the context.
1o.
From the definition of $x_i$ and of the composite prox-mapping (8) (cf. Lemma A.1 of [40]), we conclude that there is $\eta_i\in\partial h(x_i)$ such that
\[
\langle\gamma_{i-1}H_i+\gamma_i\eta_i+\nabla\vartheta(x_i)-\nabla\vartheta(x_{i-1}),\,z-x_i\rangle\geq 0\qquad\forall z\in\mathcal{X},
\]
implying, as usual [12], that for all $z\in\mathcal{X}$
\[
\langle\gamma_{i-1}H_i+\gamma_i\eta_i,\,x_i-z\rangle\leq V(x_{i-1},z)-V(x_i,z)-V(x_{i-1},x_i).
\]
In particular,
\begin{align*}
\gamma_{i-1}\langle H_i,x_{i-1}-x_*\rangle+\gamma_i\langle\eta_i,x_i-x_*\rangle
&\leq V(x_{i-1},x_*)-V(x_i,x_*)-V(x_{i-1},x_i)+\gamma_{i-1}\langle H_i,x_{i-1}-x_i\rangle\\
&\leq V(x_{i-1},x_*)-V(x_i,x_*)+\tfrac{1}{2}\gamma^2_{i-1}\|H_i\|_*^2.
\end{align*}
Observe that due to the Lipschitz continuity of $\nabla G(\cdot,\omega)$ one has
\[
\nu\,\langle\nabla G(x,\omega)-\nabla G(x',\omega),\,x-x'\rangle\geq\|\nabla G(x,\omega)-\nabla G(x',\omega)\|_*^2\qquad\forall x,x'\in\mathcal{X}, \tag{30}
\]
so that
\[
\begin{aligned}
\|\nabla G(x,\omega)\|_*^2
&\le 2\|\nabla G(x,\omega)-\nabla G(x_*,\omega)\|_*^2+2\|\nabla G(x_*,\omega)\|_*^2\\
&\le 2\nu\langle \nabla G(x,\omega)-\nabla G(x_*,\omega),\,x-x_*\rangle+2\|\nabla G(x_*,\omega)\|_*^2\\
&= 2\nu\langle \nabla G(x,\omega),x-x_*\rangle-2\nu\langle \nabla G(x_*,\omega),x-x_*\rangle+2\|\nabla G(x_*,\omega)\|_*^2,
\end{aligned}
\]
so that
\[
\gamma_{i-1}\langle H_i,x_{i-1}-x_*\rangle+\gamma_i\langle \eta_i,x_i-x_*\rangle
\le V(x_{i-1},x_*)-V(x_i,x_*)+\gamma_{i-1}^2\big[\nu\langle H_i,x_{i-1}-x_*\rangle-\nu\zeta_i+\tau_i\big],
\]
where $\zeta_i=\langle \nabla G(x_*,\omega_i),x_{i-1}-x_*\rangle$ and $\tau_i=\|\nabla G(x_*,\omega_i)\|_*^2$.
As a result, by convexity of $h$ we have for $\gamma_i\le(4\nu)^{-1}$
\[
\begin{aligned}
\tfrac{3}{4}\gamma_{i-1}\langle \nabla g(x_{i-1}),x_{i-1}-x_*\rangle+\gamma_i[h(x_i)-h(x_*)]
&\le (\gamma_{i-1}-\gamma_{i-1}^2\nu)\langle \nabla g(x_{i-1}),x_{i-1}-x_*\rangle+\gamma_i\langle \eta_i,x_i-x_*\rangle\\
&\le V(x_{i-1},x_*)-V(x_i,x_*)+(\gamma_{i-1}-\gamma_{i-1}^2\nu)\langle \xi_i,x_{i-1}-x_*\rangle+\gamma_{i-1}^2[\tau_i-\nu\zeta_i],
\end{aligned}
\]
where we put $\xi_i=H_i-\nabla g(x_{i-1})$.
When summing from $i=1$ to $m$ we obtain
\[
\begin{aligned}
\sum_{i=1}^m\gamma_{i-1}\Big(\tfrac{3}{4}\langle \nabla g(x_{i-1}),x_{i-1}-x_*\rangle+[h(x_{i-1})-h(x_*)]\Big)
&\le V(x_0,x_*)+\underbrace{\sum_{i=1}^m\big[\gamma_{i-1}^2(\tau_i-\nu\zeta_i)+\gamma_{i-1}(1-\gamma_{i-1}\nu)\langle \xi_i,x_{i-1}-x_*\rangle\big]}_{=:R_m}\\
&\quad+\gamma_0[h(x_0)-h(x_*)]-\gamma_m[h(x_m)-h(x_*)].
\end{aligned}
\tag{31}
\]
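For readability, the two bookkeeping facts behind the summation leading to (31) are the telescoping of the Bregman terms and an index shift in the composite terms; a minimal check:
\[
\sum_{i=1}^m\big[V(x_{i-1},x_*)-V(x_i,x_*)\big]=V(x_0,x_*)-V(x_m,x_*)\le V(x_0,x_*),
\]
\[
\sum_{i=1}^m\gamma_i[h(x_i)-h(x_*)]
=\sum_{i=1}^m\gamma_{i-1}[h(x_{i-1})-h(x_*)]-\gamma_0[h(x_0)-h(x_*)]+\gamma_m[h(x_m)-h(x_*)];
\]
rearranging the second identity produces the boundary terms $\gamma_0[h(x_0)-h(x_*)]-\gamma_m[h(x_m)-h(x_*)]$ in (31).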
2o.
We have
\[
\begin{aligned}
\gamma_{i-1}\langle \xi_i,x_{i-1}-x_*\rangle
&=\gamma_{i-1}\overbrace{\langle [\nabla G(x_{i-1},\omega_i)-\nabla G(x_*,\omega_i)]-\nabla g(x_{i-1}),\,x_{i-1}-x_*\rangle}^{\upsilon_i}
+\gamma_{i-1}\langle \nabla G(x_*,\omega_i),x_{i-1}-x_*\rangle\\
&=\gamma_{i-1}[\upsilon_i+\zeta_i],
\end{aligned}
\]
so that
\[
R_m=\sum_{i=1}^m\gamma_{i-1}^2\tau_i+\sum_{i=1}^m(\gamma_{i-1}-\gamma_{i-1}^2\nu)\upsilon_i+\sum_{i=1}^m(\gamma_{i-1}-2\nu\gamma_{i-1}^2)\zeta_i=:r_m^{(1)}+r_m^{(2)}+r_m^{(3)}.
\tag{32}
\]
Note that $r_m^{(3)}$ is a sub-Gaussian martingale. Indeed, one has $\mathbf{E}_{i-1}\{\zeta_i\}=0$ a.s. (here, as above, $\mathbf{E}_{i-1}$ denotes the conditional expectation given $x_0,\omega_1,\dots,\omega_{i-1}$), and
\[
|\zeta_i|\le\|x_{i-1}-x_*\|\,\|\nabla G(x_*,\omega_i)\|_*,
\]
so that by the sub-Gaussian hypothesis (6), $\mathbf{E}_{i-1}\big\{\exp\big(\zeta_i^2/\nu_*^2\big)\big\}\le\exp(1)$ with $\nu_*^2:=4R^2\sigma_*^2$.
As a result (cf. the proof of Proposition 4.2 in [28]),
\[
\forall t\qquad \mathbf{E}_{i-1}\big\{e^{t\zeta_i}\big\}\le\exp\Big(t\,\mathbf{E}_{i-1}\{\zeta_i\}+\tfrac{3}{4}t^2\nu_*^2\Big)=\exp\big(3t^2R^2\sigma_*^2\big),
\]
and applying (37a) to $S_m=r_m^{(3)}$ with
\[
r_m=6R^2\sigma_*^2\sum_{i=0}^{m-1}(\gamma_i-2\nu\gamma_i^2)^2\le 6R^2\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2
\]
we conclude that for some $\Omega_m^{(3)}$ such that $\mathrm{Prob}(\Omega_m^{(3)})\ge 1-e^{-t}$ and all $\omega^m\in\Omega_m^{(3)}$
\[
r_m^{(3)}\le 2\sqrt{3tR^2\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2}\le 3tR^2+3\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2.
\tag{33}
\]
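The second inequality in (33) is just the elementary bound $2\sqrt{uv}\le u+v$, written with slightly loose constants; spelling it out:
\[
2\sqrt{3tR^2\,\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2}
=2\sqrt{u\,v}\;\le\;u+v
=3tR^2+\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2,
\qquad u:=3tR^2,\;\; v:=\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2,
\]
which is dominated by the right-hand side of (33).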
Next, again by (6), due to the Jensen inequality, $\mathbf{E}_{i-1}\{\tau_i\}\le\sigma_*^2$, and
\[
\mathbf{E}_{i-1}\big\{\exp\big(t\|\nabla G(x_*,\omega_i)\|_*\big)\big\}\le\exp\Big(t\,\mathbf{E}_{i-1}\{\|\nabla G(x_*,\omega_i)\|_*\}+\tfrac{3}{4}t^2\sigma_*^2\Big)\le\exp\Big(t\sigma_*+\tfrac{3}{4}t^2\sigma_*^2\Big).
\]
Thus, when setting
\[
\mu_i=\gamma_{i-1}\sigma_*,\qquad s_i^2=\tfrac{3}{2}\gamma_{i-1}\sigma_*^2,\qquad \overline{s}=\max_i\gamma_is_i,
\]
$M_m=r_m^{(1)}$, $v_m+h_m=\tfrac{21}{4}\sigma_*^4\sum_{i=0}^{m-1}\gamma_i^4$,
and applying the bound (37b) of Lemma A.1 we obtain
\[
r_m^{(1)}\le 3\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2+\underbrace{\sqrt{21t\sigma_*^4\sum_{i=0}^{m-1}\gamma_i^4}}_{=:\Delta_m^{(1)}}+3t\overline{\gamma}^2\sigma_*^2
\]
for $\overline{\gamma}=\max_i\gamma_i$ and $\omega^m\in\Omega_m^{(1)}$, where $\Omega_m^{(1)}$ is of probability at least $1-e^{-t}$. Because
\[
\overline{\gamma}^2\sum_{i=0}^{m-1}\gamma_i^2\ge\sum_{i=0}^{m-1}\gamma_i^4,
\]
whenever $\sqrt{21t\sigma_*^4\sum_{i=0}^{m-1}\gamma_i^4}\ge\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2$ one has $21t\overline{\gamma}^2\ge\sum_{i=0}^{m-1}\gamma_i^2$ and
\[
21t\sum_{i=0}^{m-1}\gamma_i^4\le 21t\overline{\gamma}^2\sum_{i=0}^{m-1}\gamma_i^2\le(21t\overline{\gamma}^2)^2.
\]
Thus,
\[
\Delta_m^{(1)}\le\min\Big[21t\sigma_*^2\overline{\gamma}^2,\;\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2\Big]\le 21t\sigma_*^2\overline{\gamma}^2+\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2,
\]
and
\[
r_m^{(1)}\le\sigma_*^2\Big[4\sum_{i=0}^{m-1}\gamma_i^2+24t\overline{\gamma}^2\Big]
\tag{34}
\]
for $\omega^m\in\Omega_m^{(1)}$.
Finally, by the Lipschitz continuity of $\nabla G$ (cf. (30)), when taking expectation w.r.t. the distribution of $\omega_i$, we get
\[
\begin{aligned}
\mathbf{E}_{i-1}\{\upsilon_i^2\}
&\le 4R^2\,\mathbf{E}_{i-1}\{\|\nabla G(x_{i-1},\omega_i)-\nabla G(x_*,\omega_i)\|_*^2\}\\
&\le 4R^2\nu\,\mathbf{E}_{i-1}\{\langle \nabla G(x_{i-1},\omega_i)-\nabla G(x_*,\omega_i),\,x_{i-1}-x_*\rangle\}=4R^2\nu\langle \nabla g(x_{i-1}),x_{i-1}-x_*\rangle.
\end{aligned}
\]
On the other hand, one also has $|\upsilon_i|\le 2\nu\|x_{i-1}-x_*\|^2\le 8\nu R^2$. We can now apply Lemma A.2 with
$\sigma_i^2=4\gamma_{i-1}^2R^2\nu\langle \nabla g(x_{i-1}),x_{i-1}-x_*\rangle$ to conclude that for $t\ge 2\sqrt{2+\ln m}$
\[
r_m^{(2)}\le 4\underbrace{\sqrt{tR^2\nu\sum_{i=0}^{m-1}\gamma_i^2\langle \nabla g(x_i),x_i-x_*\rangle}}_{=:\Delta_m^{(2)}}+16t\nu R^2\overline{\gamma}
\]
for all $\omega^m\in\Omega_m^{(2)}$ such that $\mathrm{Prob}(\Omega_m^{(2)})\ge 1-2e^{-t}$. Note that
\[
\Delta_m^{(2)}\le 2tR^2+\tfrac{1}{4}\nu\sum_{i=0}^{m-1}\gamma_i^2\langle \nabla g(x_i),x_i-x_*\rangle,
\]
and $\gamma_i\le(4\nu)^{-1}$, so that
\[
r_m^{(2)}\le\nu\sum_{i=0}^{m-1}\gamma_i^2\langle \nabla g(x_i),x_i-x_*\rangle+12tR^2\le\tfrac{1}{4}\sum_{i=0}^{m-1}\gamma_i\langle \nabla g(x_i),x_i-x_*\rangle+12tR^2
\tag{35}
\]
for $\omega^m\in\Omega_m^{(2)}$.
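The bound on $\Delta_m^{(2)}$ used above is an instance of the weighted arithmetic–geometric mean inequality; a minimal check with $u=tR^2$ and $v=\nu\sum_{i=0}^{m-1}\gamma_i^2\langle\nabla g(x_i),x_i-x_*\rangle$:
\[
\Delta_m^{(2)}=\sqrt{u\,v}=2\sqrt{u\cdot\tfrac{v}{4}}\;\le\;u+\tfrac{v}{4}\;\le\;2tR^2+\tfrac{1}{4}\nu\sum_{i=0}^{m-1}\gamma_i^2\langle \nabla g(x_i),x_i-x_*\rangle.
\]
Plugging this into $r_m^{(2)}\le 4\Delta_m^{(2)}+16t\nu R^2\overline{\gamma}$ and using $\overline{\gamma}\le(4\nu)^{-1}$, i.e. $16t\nu R^2\overline{\gamma}\le 4tR^2$, yields the first inequality in (35).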
3o.
When substituting bounds (33)–(35) into (32) we obtain
\[
\begin{aligned}
R_m&\le\tfrac{1}{4}\sum_{i=0}^{m-1}\gamma_i\langle \nabla g(x_i),x_i-x_*\rangle+12tR^2+\sigma_*^2\Big[4\sum_{i=0}^{m-1}\gamma_i^2+24t\overline{\gamma}^2\Big]+2\sqrt{3tR^2\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2}\\
&\le\tfrac{1}{4}\sum_{i=0}^{m-1}\gamma_i\langle \nabla g(x_i),x_i-x_*\rangle+15tR^2+\sigma_*^2\Big[7\sum_{i=0}^{m-1}\gamma_i^2+24t\overline{\gamma}^2\Big]
\end{aligned}
\]
for all $\omega^m\in\overline{\Omega}_m=\bigcap_{i=1}^3\Omega_m^{(i)}$ with
$\mathrm{Prob}(\overline{\Omega}_m)\ge 1-4e^{-t}$ and $t\ge 2\sqrt{2+\ln m}$.
When substituting the latter bound into (31) and utilizing the convexity of $g$ and $h$ we arrive at
\[
\begin{aligned}
\Big(\sum_{i=0}^{m-1}\gamma_i\Big)\Big(\tfrac{1}{2}[g(\widehat{x}_m)-g(x_*)]+[h(\widehat{x}_m)-h(x_*)]\Big)
&\le\sum_{i=0}^{m-1}\gamma_i\Big(\tfrac{1}{2}[g(x_i)-g(x_*)]+[h(x_i)-h(x_*)]\Big)\\
&\le\sum_{i=1}^m\gamma_{i-1}\Big(\tfrac{1}{2}\langle \nabla g(x_{i-1}),x_{i-1}-x_*\rangle+[h(x_{i-1})-h(x_*)]\Big)\\
&\le V(x_0,x_*)+15tR^2+\sigma_*^2\Big[7\sum_{i=0}^{m-1}\gamma_i^2+24t\overline{\gamma}^2\Big]\\
&\quad+\gamma_0[h(x_0)-h(x_*)]-\gamma_m[h(x_m)-h(x_*)].
\end{aligned}
\]
In particular, for constant stepsizes $\gamma_i\equiv\gamma$ we get
\[
\tfrac{1}{2}[g(\widehat{x}_m)-g(x_*)]+[h(\widehat{x}_m)-h(x_*)]
\le\frac{V(x_0,x_*)+15tR^2}{\gamma m}+\frac{h(x_0)-h(x_m)}{m}+\gamma\sigma_*^2\Big(7+\frac{24t}{m}\Big).
\]
This implies the first statement of the proposition.
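As a purely illustrative sanity check (not part of the proof), one can evaluate the right-hand side of the constant-stepsize bound numerically; the parameter values below are arbitrary placeholders, and the stepsize is chosen by roughly balancing the two $\gamma$-dependent terms while respecting the cap $\gamma\le(4\nu)^{-1}$ used in the proof.

```python
import math

def csmd_constant_step_bound(V0, R, sigma, m, t, gamma, h_gap=0.0):
    """Right-hand side of the constant-stepsize bound:
    (V0 + 15 t R^2)/(gamma m) + h_gap/m + gamma sigma^2 (7 + 24 t / m)."""
    return (V0 + 15 * t * R**2) / (gamma * m) + h_gap / m \
        + gamma * sigma**2 * (7 + 24 * t / m)

# Illustrative (made-up) parameters.
V0, R, nu, sigma, m, t = 1.0, 1.0, 10.0, 0.5, 10_000, 5.0

# Stepsize roughly balancing (V0 + 15 t R^2)/(gamma m) against 7 gamma sigma^2,
# capped at 1/(4 nu) as required in the proof.
gamma_bal = math.sqrt((V0 + 15 * t * R**2) / (7 * sigma**2 * m))
gamma = min(1.0 / (4 * nu), gamma_bal)

print(f"gamma = {gamma:.4g}, "
      f"bound = {csmd_constant_step_bound(V0, R, sigma, m, t, gamma):.4g}")
```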
5o.
To prove the bound for the minibatch solution $\widehat{x}_m^{(L)}=\big(\sum_{i=0}^{m-1}\gamma_i\big)^{-1}\sum_{i=0}^{m-1}\gamma_ix_i^{(L)}$,
it suffices to note that the minibatch gradient observation $H(x,\omega^{(L)})$ is Lipschitz-continuous with Lipschitz constant $\nu$, and that
$H(x_*,\omega^{(L)})$ is sub-Gaussian with parameter $\sigma_*^2$ replaced with $\overline{\sigma}_{*,L}^2\lesssim\Theta\sigma_*^2/L$, see Lemma A.3. $\Box$
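As an aside (this is not the paper's Lemma A.3), the $1/L$ reduction of the sub-Gaussian parameter under minibatching can be eyeballed with a toy simulation: for Gaussian gradient noise and the $\ell_\infty$ dual norm, the mean of the squared dual norm of the averaged noise scales roughly like $1/L$; the $\Theta$ factor, which reflects the norm geometry, is ignored here, and all sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 500, 200            # dimension and Monte-Carlo repetitions (toy sizes)

def mean_sq_dual_norm(L):
    """E ||(1/L) sum_{l<L} g_l||_inf^2 estimated over `trials` draws, g_l ~ N(0, I_n).
    Mimics the squared dual norm of minibatch-averaged gradient noise."""
    avg = rng.standard_normal((trials, L, n)).mean(axis=1)
    return float((np.abs(avg).max(axis=1) ** 2).mean())

base = mean_sq_dual_norm(1)
for L in (1, 4, 16, 32):
    print(f"L={L:2d}: E||avg||_inf^2 ≈ {mean_sq_dual_norm(L):.4f} "
          f"(1/L prediction: {base / L:.4f})")
```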
A.2 Deviation inequalities
Let us assume that $(\xi_i,\mathcal{F}_i)_{i=1,2,\dots}$ is a sequence of sub-Gaussian random variables satisfying (here, same as above, we denote by $\mathbf{E}_{i-1}$ the expectation conditional on $\mathcal{F}_{i-1}$)
\[
\mathbf{E}_{i-1}\big\{e^{t\xi_i}\big\}\le e^{t\mu_i+\frac{t^2s_i^2}{2}},\qquad a.s.
\tag{36}
\]
for some nonrandom $\mu_i,s_i$, $s_i\le\overline{s}$.
We denote $S_n=\sum_{i=1}^n(\xi_i-\mu_i)$, $r_n=\sum_{i=1}^ns_i^2$, $v_n=\sum_{i=1}^ns_i^4$, $M_n=\sum_{i=1}^n\big[\xi_i^2-(s_i^2+\mu_i^2)\big]$, and $h_n=\sum_{i=1}^n2\mu_i^2s_i^2$. The following well known result is provided for the reader's convenience.
Lemma A.1
For all $x>0$ one has
\[
\mathrm{Prob}\Big\{S_n\ge\sqrt{2xr_n}\Big\}\le e^{-x},
\tag{37a}
\]
\[
\mathrm{Prob}\Big\{M_n\ge 2\sqrt{x(v_n+h_n)}+2x\overline{s}^2\Big\}\le e^{-x}.
\tag{37b}
\]
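A quick numerical illustration of (37a), not needed for the proof: Gaussian increments satisfy the mgf bound (36) with equality, and the empirical tail probability stays below $e^{-x}$; the sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 50, 100_000

# Increments xi_i ~ N(mu_i, s_i^2) satisfy the mgf bound (36) with equality.
mu = rng.uniform(-1.0, 1.0, size=n)
s = rng.uniform(0.5, 2.0, size=n)
r_n = float(np.sum(s**2))

xi = mu + s * rng.standard_normal((trials, n))
S_n = (xi - mu).sum(axis=1)

for x in (1.0, 2.0, 4.0):
    emp = float((S_n >= np.sqrt(2 * x * r_n)).mean())
    print(f"x={x}: empirical P(S_n >= sqrt(2 x r_n)) = {emp:.5f} <= e^-x = {np.exp(-x):.5f}")
```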