https://arxiv.org/abs/2105.10615
Convergence directions of the randomized Gauss--Seidel method and its extension
The randomized Gauss--Seidel method and its extension have attracted much attention recently and their convergence rates have been considered extensively. However, the convergence rates are usually determined by upper bounds, which cannot fully reflect the actual convergence. In this paper, we make a detailed analysis of their convergence behaviors. The analysis shows that the larger the singular value of $A$ is, the faster the error decays in the corresponding singular vector space, and the convergence directions are mainly driven by the large singular values at the beginning, then gradually driven by the small singular values, and finally by the smallest nonzero singular value. These results explain the phenomenon found in the extensive numerical experiments appearing in the literature that these two methods seem to converge faster at the beginning. Numerical examples are provided to confirm the above findings.
\section{Introduction} The linear least squares problem is ubiquitous, arising frequently in data analysis and scientific computing. Specifically, given a data matrix $A\in R^{m\times n}$ and a data vector $b\in R^{m}$, a linear least squares problem can be written as follows \begin{equation} \label{ls} \min \limits _{ x \in R^{n}}\|b-Ax\|^2_{2}. \end{equation} In the literature, several direct methods have been proposed for solving its normal equations $A^TAx=A^Tb$ through either the QR factorization or the singular value decomposition (SVD) of $A^TA$ \cite{bjorck1996numerical, Higham2002}, which can be prohibitive when the matrix is large-scale. Hence, iterative methods are considered for solving large linear least squares problems, such as the famous Gauss--Seidel method \cite{Saad2003}. In \cite{Leventhal2010}, Leventhal and Lewis proved that the randomized Gauss--Seidel (RGS) method, also known as the randomized coordinate descent method, converges to the solution at a linear rate in expectation. This method works on the columns of the matrix $A$ at random with probability proportional to their norms. Later, Ma, Needell and Ramdas \cite{Ma2015} provided a unified theory of the RGS method and the randomized Kaczmarz (RK) method \cite{Strohmer2009}, where the latter method works on the rows of $A$, and showed that the RGS method converges to the minimum Euclidean norm least squares solution $x_{\star}$ of (\ref{ls}) only when the matrix $A$ is of full column rank. To further develop the RGS method for more general matrices, inspired by the randomized extended Kaczmarz (REK) method \cite{Completion2013}, Ma et al. \cite{Ma2015} presented a variant of the RGS method, i.e., the randomized extended Gauss--Seidel (REGS) method, and proved that the REGS method converges to $x_{\star}$ regardless of whether the matrix $A$ has full column rank. After that, many variants of the RGS (or REGS) method were developed and studied extensively; see for example \cite{gower2015randomized, nutini2015coordinate, Hefny2017,tu2017breaking, xu2018hybrid,Dukui2019,razaviyayn2019linearly} and references therein. To the best of our knowledge, when studying the convergence properties of the RGS and REGS methods, existing works mainly pay attention to their convergence rates and usually give corresponding upper bounds; no work focuses on what determines their convergence rates, what drives their convergence directions, and what their ultimate directions are. As is well known, an upper bound on the convergence rate can only serve as a reference and cannot truly reflect the empirical convergence of the method. So it is interesting to consider the above three questions. In 2017, Jiao, Jin and Lu \cite{jiao2017preasymptotic} analyzed the preasymptotic convergence of the RK method. Recently, Steinerberger \cite{steinerberger2021randomized} made a more detailed analysis of the convergence property of the RK method for overdetermined full-rank linear systems. The author showed that the right singular vectors of the matrix $A$ describe the directions of distinguished dynamics and that the RK method converges along small right singular vectors. 
After that, Zhang and Li \cite{zhang2021preconvergence} considered the convergence property of the REK method for all types of linear systems (consistent or inconsistent, overdetermined or underdetermined, full-rank or rank-deficient) and showed that the REK method converges to the minimum Euclidean norm least squares solution $x_{\star}$ with different decay rates in different right singular vector spaces. In this paper, we analyze the convergence properties of the RGS and REGS methods for the linear least squares problem and show that the decay rates of the sequences $\{Ax_{k}\}_{k=1}^{\infty}$ and $\{ x_{k}\}_{k=1}^{\infty}$ (resp., the sequences $\{Az_{k}\}_{k=1}^{\infty}$ and $\{ z_{k}\}_{k=1}^{\infty}$) generated by the RGS method (resp., the REGS method) depend on the size of the singular values of $A$. Specifically, the larger the singular value of $A$ is, the faster the error decays in the corresponding singular vector space, and the convergence directions are mainly driven by the large singular values at the beginning, then gradually driven by the small singular values, and finally by the smallest nonzero singular value. The rest of this paper is organized as follows. We first introduce some notations and preliminaries in Section \ref{sec2} and then present our main results about the RGS and REGS methods in Section \ref{sec3} and Section \ref{sec4}, respectively. Numerical experiments are given in Section \ref{sec5}. \section{Notations and preliminaries }\label{sec2} Throughout the paper, for a matrix $A$, $A^T$, $A^{(i)}$, $A_{(j)}$, $\sigma_i(A)$, $\sigma_r(A)$, $\|A\|_F$, and $\mathcal{R}(A)$ denote its transpose, $i$th row (or $i$th entry in the case of a vector), $j$th column, $i$th singular value, smallest nonzero singular value, Frobenius norm, and column space, respectively. For any integer $m\geq1$, let $[m]:=\{1, 2, 3, ..., m\}$. If the matrix $G\in R^{n\times n}$ is positive definite, we define the energy norm of any vector $x\in R^{n}$ as $\| x\|_G:=\sqrt{x^TGx}$. In addition, we denote the identity matrix by $I$, its $j$th column by $e_{(j)}$ and the expectation of any random variable $\xi$ by $\mathbb{E} [\xi]$. In the following, we use $x_{\star}=A^{\dag}b$ to denote the minimum Euclidean norm least squares solution of (\ref{ls}), where $A^{\dag}$ denotes the Moore--Penrose pseudoinverse of the matrix $A$. Because the SVD is the basic tool for the convergence analysis in the next two sections, we denote the SVD \cite{golub2013matrix} of $A\in R^{m\times n}$ by \begin{align} A=U\Sigma V^{T}, \notag \end{align} where $U=[u_1, u_2, \ldots, u_m]\in R^{m\times m}$ and $V=[v_1, v_2, \ldots, v_n]\in R^{n\times n}$ are orthogonal matrices whose column vectors are known as the left and right singular vectors, respectively, and $\Sigma\in R^{m\times n}$ is diagonal with the diagonal elements ordered nonincreasingly, i.e., $\sigma_1(A)\geq \sigma_2(A)\geq \ldots \geq \sigma_r(A)>0$, where $r\leq \min\{m, n\}$ is the rank of $A$. \section{Convergence directions of the RGS method}\label{sec3} We first list the RGS method \cite{Leventhal2010, Ma2015} in Algorithm \ref{alg1} and restate its convergence bound in Theorem \ref{theorem0}. 
\begin{alg} \label{alg1} The RGS method \begin{enumerate} \item \mbox{INPUT:} ~$A$, $b$, $\ell$, $x_{0}\in R^{n }$ \item For $k=1, 2, \ldots, \ell-1$ do \item ~~~~Select $j\in [n]$ with probability $\frac{\|A_{(j)}\|^2_2}{\|A\|^2_F}$ \item ~~~~Set $x_{k}=x_{k-1}-\frac{A_{(j)}^T ( Ax_{k-1}-b)}{ \| A_{(j)} \|_{2}^{2}}e_{(j)}$ \item End for \end{enumerate} \end{alg} \begin{thm} \label{theorem0}\cite{Leventhal2010, Ma2015} Let $A\in R^{m\times n}$, $b\in R^{m}$, $x_{\star}=A^{\dag}b$ be the minimum Euclidean norm least squares solution, and $x_k$ be the $k$th approximation of the RGS method generated by Algorithm \ref{alg1} with initial guess $x_{0}\in R^{n }$. Then \begin{align} \mathbb{E}[\| Ax_{k}- Ax_{\star}\|_2^2]\leq(1-\frac{\sigma_r^2(A)}{\|A\|^2_F})^k \|Ax_{0}- Ax_{\star}\|_2^2.\label{th0} \end{align} \end{thm} \begin{rmk} \label{rmk1} Theorem \ref{theorem0} shows that $Ax_{k}$ converges linearly in expectation to $Ax_{\star}$ regardless of whether the matrix $A$ has full rank. Since $\| Ax_{k}- Ax_{\star}\|_2^2=\| x_{k}- x_{\star}\|_{A^TA}^2$, it follows from (\ref{th0}) that \begin{align} \mathbb{E}[\| x_{k}- x_{\star}\|_{A^TA}^2]\leq(1-\frac{\sigma_r^2(A)}{\|A\|^2_F})^k \| x_{0}- x_{\star}\|_{A^TA}^2,\notag \end{align} which implies that $ x_{k}$ converges linearly in expectation to the minimum Euclidean norm least squares solution $ x_{\star}$ when the matrix $A$ is overdetermined and of full column rank, but cannot converge to $ x_{\star}$ when $A$ is not of full column rank. So, we assume that the matrix $A$ is of full column rank in this section. \end{rmk} Now, we give our three main results on the RGS method. \begin{thm} \label{theorem1} Let $A\in R^{m\times n}$, $b\in R^{m}$, $x_{\star}=A^{\dag}b$ be the minimum Euclidean norm least squares solution, and $x_k$ be the $k$th approximation of the RGS method generated by Algorithm \ref{alg1} with initial guess $x_{0}\in R^{n }$. Then \begin{align} \mathbb{E}[\langle Ax_{k}- Ax_{\star}, u_{\ell} \rangle]= (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle Ax_{0}- Ax_{\star}, u_{\ell} \rangle.\label{th1} \end{align} \end{thm} \begin{pf} Let $\mathbb{E}_{k-1}[\cdot]$ be the conditional expectation conditioned on the first $k-1$ iterations of the RGS method. 
Then, from Algorithm \ref{alg1}, we have \begin{align} & \mathbb{E}_{k-1}[\langle Ax_{k}- Ax_{\star}, u_{\ell} \rangle]\notag \\ &= \sum\limits_{j=1}^{n}\frac{ \|A_{ (j )} \|_{2}^{2}}{\|A\|_{F}^{2}} \langle Ax_{k-1}-\frac{A_{ (j )}^T(Ax_{k-1}-b)}{\|A_{ (j )} \|_{2}^{2}}A_{ (j )}- Ax_{\star}, u_{\ell} \rangle \notag \\ &= \langle Ax_{k-1}- Ax_{\star}, u_{\ell} \rangle - \frac{ 1}{\|A\|_{F}^{2}} \sum\limits_{j=1}^{n} \langle A_{ (j )}^T(Ax_{k-1}-b) A_{ (j )} , u_{\ell} \rangle \notag \\ &= \langle Ax_{k-1}- Ax_{\star}, u_{\ell} \rangle - \frac{ 1}{\|A\|_{F}^{2}} \sum\limits_{j=1}^{n} \langle A_{ (j )},Ax_{k-1}-b \rangle \langle A_{ (j )}, u_{\ell} \rangle \notag \\ &= \langle Ax_{k-1}- Ax_{\star}, u_{\ell} \rangle - \frac{ 1}{\|A\|_{F}^{2}} \langle A^T(Ax_{k-1}-b) , A^Tu_{\ell} \rangle, \notag \end{align} which together with the facts $A^T(b-Ax_{\star})=0$ and $A^Tu_{\ell}=\sigma_{\ell}(A) v_{\ell} $ yields \begin{align} & \mathbb{E}_{k-1}[\langle Ax_{k}- Ax_{\star}, u_{\ell} \rangle]\notag \\ &= \langle Ax_{k-1}- Ax_{\star}, u_{\ell} \rangle - \frac{ 1}{\|A\|_{F}^{2}} \langle A^T(Ax_{k-1}-Ax_{\star}) , A^Tu_{\ell} \rangle \notag \\ &= \langle Ax_{k-1}- Ax_{\star}, u_{\ell} \rangle - \frac{ 1}{\|A\|_{F}^{2}} \langle A^T(\sum\limits_{i=1}^{m} \langle Ax_{k-1}-Ax_{\star}, u_i\rangle u_i) , A^Tu_{\ell} \rangle \notag \\ &= \langle Ax_{k-1}- Ax_{\star}, u_{\ell} \rangle - \frac{ 1}{\|A\|_{F}^{2}} \langle (\sum\limits_{i=1}^{m} \langle Ax_{k-1}-Ax_{\star}, u_i\rangle \sigma_{i}(A) v_{i}) , \sigma_{\ell}(A) v_{\ell} \rangle \notag \\ &= \langle Ax_{k-1}- Ax_{\star}, u_{\ell} \rangle - \frac{\sigma_{\ell}^2(A)}{\|A\|_{F}^{2}} \langle Ax_{k-1}-Ax_{\star}, u_{\ell}\rangle \notag \\ &= (1- \frac{\sigma_{\ell}^2(A)}{\|A\|_{F}^{2}} ) \langle Ax_{k-1}-Ax_{\star}, u_{\ell}\rangle. \notag \end{align} Thus, by taking the full expectation on both sides, we have \begin{align} \mathbb{E}[\langle Ax_{k}- Ax_{\star}, u_{\ell} \rangle] = (1- \frac{\sigma_{\ell}^2(A)}{\|A\|_{F}^{2}} ) \mathbb{E}[ \langle Ax_{k-1}-Ax_{\star}, u_{\ell}\rangle ] = \ldots= (1- \frac{\sigma_{\ell}^2(A)}{\|A\|_{F}^{2}} )^k \langle Ax_{0}-Ax_{\star}, u_{\ell}\rangle, \notag \end{align} which is the estimate (\ref{th1}). \end{pf} \begin{rmk} \label{rmk2} Theorem \ref{theorem1} shows that the decay rates of $\|Ax_k-Ax_{\star}\|_2$ are different in different left singular vectors spaces. Specifically, the decay rates are dependent on the singular values: the larger the singular value of $A$ is, the faster the error decays in the corresponding left singular vector space. This implies that the smallest singular value will lead to the slowest rate of convergence, which is the one in (\ref{th0}). So, the convergence bound presented in \cite{Leventhal2010, Ma2015} is optimal. \end{rmk} \begin{rmk} \label{rmk3} Let $r_k=b-Ax_k$ be the residual vector with respect to the $k$-th approximation $x_k$, and $r_{\star}=b-Ax_{\star}$ be the true residual vector with respect to the minimum Euclidean norm least squares solution $x_{\star}$. It follows from (\ref{th1}) and $Ax_k-Ax_{\star}=-( r_{k}- r_{\star})$ that \begin{align} \mathbb{E}[\langle r_{k}- r_{\star}, u_{\ell} \rangle]= (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle r_{0}- r_{\star}, u_{\ell} \rangle.\notag \end{align} Hence, Theorem \ref{theorem1} also implies that the decay rates of $ \| r_{k}- r_{\star}\|_2 $ of the RGS method depend on the singular values. 
\end{rmk} \begin{rmk} \label{rmk4} Using the facts $\langle Ax_{k}- Ax_{\star}, u_{\ell} \rangle=\langle x_{k}- x_{\star},A^Tu_{\ell} \rangle$ and $A^Tu_{\ell}=\sigma_{\ell}(A) v_{\ell} $, from (\ref{th1}), we have \begin{align} \mathbb{E}[\langle x_{k}- x_{\star}, v_{\ell} \rangle]= (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle x_{0}- x_{\star}, v_{\ell} \rangle,\notag \end{align} which recovers the decay rates of the RK method in different right singular vectors spaces \cite{steinerberger2021randomized}. In this view, both RGS and RK methods are essentially equivalent. \end{rmk} \begin{thm} \label{theorem2} Let $A\in R^{m\times n}$, $b\in R^{m}$, $x_{\star}=A^{\dag}b$ be the minimum Euclidean norm least squares solution, and $x_k$ be the $k$th approximation of the RGS method generated by Algorithm \ref{alg1} with initial guess $x_{0}\in R^{n }$. Then \begin{align} \mathbb{E}[\| Ax_{k}- Ax_{\star}\|_2^2]= \mathbb{E}[(1-\frac{1}{\|A\|^2_F}\|A^T\frac{Ax_{k-1}- Ax_{\star}}{\|Ax_{k-1}- Ax_{\star}\|_2}\|_2^2)\| Ax_{k-1}- Ax_{\star}\|_2^2]. \notag \end{align} \end{thm} \begin{pf} Similar to the proof of \cite{Ma2015}, we can derive the desired result. \end{pf} \begin{rmk} \label{rmk5} Since $\|A^T\frac{Ax_{k-1}- Ax_{\star}}{\|Ax_{k-1}- Ax_{\star}\|_2}\|_2^2\geq\sigma_r^2(A)$, Theorem \ref{theorem2} implies that the RGS method actually converges faster if $Ax_{k-1}-Ax_{\star}$ is not close to left singular vectors corresponding to the small singular values of $A$ . \end{rmk} \begin{thm} \label{theorem3} Let $A\in R^{m\times n}$, $b\in R^{m}$, $x_{\star}=A^{\dag}b$ be the minimum Euclidean norm least squares solution, and $x_k$ be the $k$th approximation of the RGS method generated by Algorithm \ref{alg1} with initial guess $x_{0}\in R^{n }$. Then \begin{align} \mathbb{E}[\langle \frac{Ax_{k}- Ax_{\star}}{\|Ax_{k}- Ax_{\star}\|_2}, \frac{Ax_{k+1}- Ax_{\star}}{\|Ax_{k+1}- Ax_{\star}\|_2} \rangle^2]= 1 -\frac{1}{\|A\|^2_F} \mathbb{E}[ \|A^T\frac{Ax_{k }- Ax_{\star}}{\|Ax_{k }- Ax_{\star}\|_2}\|_2^2 ]. \label{th3} \end{align} \end{thm} \begin{pf} From Algorithm \ref{alg1}, we have \begin{align} & \mathbb{E}_{k}[\langle \frac{Ax_{k}- Ax_{\star}}{\|Ax_{k}- Ax_{\star}\|_2}, \frac{Ax_{k+1}- Ax_{\star}}{\|Ax_{k+1}- Ax_{\star}\|_2} \rangle^2]\notag \\ &= \mathbb{E}_{k}[\langle \frac{Ax_{k}- Ax_{\star}}{\|Ax_{k}- Ax_{\star}\|_2}, \frac{Ax_{k}-\frac{A_{ (j )}^T(Ax_{k}-b)}{\|A_{ (j )} \|_{2}^{2}}A_{ (j )}- Ax_{\star}}{\|Ax_{k}-\frac{A_{ (j )}^T(Ax_{k}-b)}{\|A_{ (j )} \|_{2}^{2}}A_{ (j )}- Ax_{\star}\|_2} \rangle^2]\notag \\ &= \mathbb{E}_{k}[\frac{1}{\|Ax_{k}- Ax_{\star}\|_2^2\cdot \|Ax_{k}-\frac{A_{ (j )}^T(Ax_{k}-b)}{\|A_{ (j )} \|_{2}^{2}}A_{ (j )}- Ax_{\star}\|_2^2}\langle Ax_{k}- Ax_{\star} , Ax_{k}-\frac{A_{ (j )}^T(Ax_{k}-b)}{\|A_{ (j )} \|_{2}^{2}}A_{ (j )}- Ax_{\star} \rangle^2]. 
\notag \end{align} Since $\langle Ax_{k}- Ax_{\star} , Ax_{k}-\frac{A_{ (j )}^T(Ax_{k}-b)}{\|A_{ (j )} \|_{2}^{2}}A_{ (j )}- Ax_{\star} \rangle=\|Ax_{k}-\frac{A_{ (j )}^T(Ax_{k}-b)}{\|A_{ (j )} \|_{2}^{2}}A_{ (j )}- Ax_{\star}\|_2^2$, we have \begin{align} & \mathbb{E}_{k}[\langle \frac{Ax_{k}- Ax_{\star}}{\|Ax_{k}- Ax_{\star}\|_2}, \frac{Ax_{k+1}- Ax_{\star}}{\|Ax_{k+1}- Ax_{\star}\|_2} \rangle^2]\notag \\ &= \mathbb{E}_{k}[\frac{ 1}{\|Ax_{k}- Ax_{\star}\|_2^2 }\|Ax_{k}-\frac{A_{ (j )}^T(Ax_{k}-b)}{\|A_{ (j )} \|_{2}^{2}}A_{ (j )}- Ax_{\star}\|_2^2 ] \notag \\ &= \mathbb{E}_{k}[\frac{ 1}{\|Ax_{k}- Ax_{\star}\|_2^2 }(\|Ax_{k} - Ax_{\star}\|_2^2-2 \langle Ax_{k}- Ax_{\star}, \frac{A_{ (j )}^T(Ax_{k}-b)}{\|A_{ (j )} \|_{2}^{2}}A_{ (j )} \rangle +\frac{(A_{ (j )}^T(Ax_{k}-b))^2}{\|A_{ (j )} \|_{2}^{2}}) ] \notag \\ &= \mathbb{E}_{k}[\frac{ 1}{\|Ax_{k}- Ax_{\star}\|_2^2 }(\|Ax_{k} - Ax_{\star}\|_2^2-\frac{(A_{ (j )}^T(Ax_{k}-Ax_{\star}))^2}{\|A_{ (j )} \|_{2}^{2}}) ] \notag \\ &= \mathbb{E}_{k}[1-\frac{(A_{ (j )}^T \frac{Ax_{k}-Ax_{\star}}{\|Ax_{k}- Ax_{\star}\|_2 } )^2}{\|A_{ (j )} \|_{2}^{2}} ] \notag \\ &= \sum\limits_{j=1}^{n}\frac{ \|A_{ (j )} \|_{2}^{2}}{\|A\|_{F}^{2}} (1-\frac{(A_{ (j )}^T \frac{Ax_{k}-Ax_{\star}}{\|Ax_{k}- Ax_{\star}\|_2 } )^2}{\|A_{ (j )} \|_{2}^{2}}) \notag \\ &= 1-\frac{ 1}{\|A\|_{F}^{2}} \| A^T \frac{Ax_{k}-Ax_{\star}}{\|Ax_{k}- Ax_{\star}\|_2 } \|_2^2. \notag \end{align} Thus, by taking the full expectation on both sides, we obtain the desired result (\ref{th3}). \end{pf} \begin{rmk} \label{rmk6} Let $u$ and $v$ be two unit vectors, i.e., $\|u\|_2=1$ and $\|v\|_2=1$. We use the quantity $\langle u, v\rangle^2$ to measure the angle between $u$ and $v$: the smaller $\langle u, v\rangle^2$ is, the bigger the angle is, and the bigger the fluctuation becomes from $u$ to $v$. Theorem \ref{theorem3} shows the fluctuation between two adjacent iterations. Specifically, when $\|A^T\frac{Ax_{k }- Ax_{\star}}{\|Ax_{k }- Ax_{\star}\|_2}\|_2^2$ is large, the angle between $\frac{Ax_{k}- Ax_{\star}}{\|Ax_{k}- Ax_{\star}\|_2}$ and $\frac{Ax_{k+1}- Ax_{\star}}{\|Ax_{k+1}- Ax_{\star}\|_2}$ is large, which implies that $\frac{Ax_{k}- Ax_{\star}}{\|Ax_{k}- Ax_{\star}\|_2}$ has a large fluctuation; when $\|A^T\frac{Ax_{k }- Ax_{\star}}{\|Ax_{k }- Ax_{\star}\|_2}\|_2^2$ is small, the angle between $\frac{Ax_{k}- Ax_{\star}}{\|Ax_{k}- Ax_{\star}\|_2}$ and $\frac{Ax_{k+1}- Ax_{\star}}{\|Ax_{k+1}- Ax_{\star}\|_2}$ is small, which implies that $\frac{Ax_{k}- Ax_{\star}}{\|Ax_{k}- Ax_{\star}\|_2}$ has very little fluctuation. Since $\|A^T\frac{Ax_{k}- Ax_{\star}}{\|Ax_{k}- Ax_{\star}\|_2}\|_2^2\geq\sigma_r^2(A)$, Theorem \ref{theorem3} implies that if $Ax_{k}-Ax_{\star}$ is mainly composed of left singular vectors corresponding to the small singular values of $A$, its direction hardly changes, which means that the RGS method finally converges along the left singular vectors corresponding to the small singular values of $A$. \end{rmk} \section{Convergence directions of the REGS method}\label{sec4} Recalling Remark \ref{rmk1}, when the matrix $A$ does not have full column rank, the sequence $\{x_{k}\}_{k=1}^{\infty}$ generated by the RGS method does not converge to the minimum Euclidean norm least squares solution $x_{\star}$, even though $Ax_{k}$ does converge to $Ax_{\star}$. In \cite{Ma2015}, Ma et al. proposed an extended variant of the RGS method, i.e., the REGS method, to allow for convergence to $x_{\star}$ regardless of whether $A$ has full column rank or not. 
Now, we list the REGS method presented in \cite{Dukui2019} in Algorithm \ref{alg2}, which is an equivalent variant of the original REGS method \cite{Ma2015}, and restate its convergence bound from \cite{Dukui2019} in Theorem \ref{theorem5}. From the algorithm we find that, in each iteration, $x_k$ is the $k$th approximation of the RGS method and $z_k$ is a one-step RK update for the linear system $Az=Ax_{k}$ from $z_{k-1}$. \begin{alg} \label{alg2} The REGS method \begin{enumerate} \item \mbox{INPUT:} ~$A$, $b$, $\ell$, $x_{0}\in R^{n }$, $z_{0}\in \mathcal{R}(A^T)$ \item For $k=1, 2, \ldots, \ell-1$ do \item ~~~~Select $j\in [n]$ with probability $\frac{\|A_{(j)}\|^2_2}{\|A\|^2_F}$ \item ~~~~Set $x_{k}=x_{k-1}-\frac{A_{(j)}^T ( Ax_{k-1}-b)}{ \| A_{(j)} \|_{2}^{2}}e_{(j)}$ \item ~~~~Select $i\in [m]$ with probability $\frac{\|A^{(i)}\|^2_2}{\|A\|^2_F}$ \item ~~~~Set $z_{k}=z_{k-1}-\frac{A^{(i)} ( z_{k-1}-x_{k})}{ \| A^{(i)} \|_{2}^{2}}(A^{(i)})^T$ \item End for \end{enumerate} \end{alg} \begin{thm} \label{theorem5}\cite{Dukui2019} Let $A\in R^{m\times n}$, $b\in R^{m}$, $x_{\star}=A^{\dag}b$ be the minimum Euclidean norm least squares solution, and $z_k$ be the $k$th approximation of the REGS method generated by Algorithm \ref{alg2} with initial guesses $x_{0}\in R^{n }$ and $z_{0}\in \mathcal{R}(A^T)$. Then \begin{align} \mathbb{E}[\| z_{k}- x_{\star}\|_2^2]\leq(1-\frac{\sigma_r^2(A)}{\|A\|^2_F})^k \|z_{0}- x_{\star}\|_2^2+\frac{k}{\|A\|_F^2}(1-\frac{\sigma_r^2(A)}{\|A\|^2_F})^k \|Ax_0-Ax_{\star}\|_2^2.\label{th5} \end{align} \end{thm} For the REGS method, we first discuss the convergence behavior of $z_{k}- x_{\star}$ in Theorem \ref{theorem6} and Theorem \ref{theorem7}, and then consider the convergence behavior of $Az_{k}- Ax_{\star}$ in Theorem \ref{theorem8}. \begin{thm} \label{theorem6} Let $A\in R^{m\times n}$, $b\in R^{m}$, $x_{\star}=A^{\dag}b$ be the minimum Euclidean norm least squares solution, and $z_k$ be the $k$th approximation of the REGS method generated by Algorithm \ref{alg2} with initial guesses $x_{0}\in R^{n }$ and $z_{0}\in \mathcal{R}(A^T)$. Then \begin{align} \mathbb{E}[\langle z_{k}- x_{\star}, v_{\ell} \rangle]= (1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} )^k \langle z_{0}-x_{\star}, v_{\ell}\rangle +\frac{k}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle A^T(Ax_{0}- Ax_{\star}), v_{\ell} \rangle .\label{th6} \end{align} \end{thm} \begin{pf} From Algorithm \ref{alg2}, we have \begin{align} &\mathbb{E}[\langle z_{k}- x_{\star}, v_{\ell} \rangle] \notag \\ &= \mathbb{E}[\langle z_{k-1}-\frac{A^{(i)} ( z_{k-1}-x_{k})}{ \| A^{(i)} \|_{2}^{2}}(A^{(i)})^T- x_{\star}, v_{\ell} \rangle] \notag \\ &= \mathbb{E}[\langle (I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), v_{\ell} \rangle] +\mathbb{E}[\langle \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), v_{\ell} \rangle], \label{th65} \end{align} so we next consider $\mathbb{E}[\langle (I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), v_{\ell} \rangle]$ and $\mathbb{E}[\langle \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), v_{\ell} \rangle]$ separately. We first consider $\mathbb{E}[\langle (I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), v_{\ell} \rangle]$. Let $\mathbb{E}_{k-1}[\cdot]$ be the conditional expectation conditioned on the first $k-1$ iterations of the REGS method. 
That is, \begin{align} \mathbb{E}_{k-1}[\cdot]= \mathbb{E}[\cdot|j_1, i_1, j_2, i_2, \ldots, j_{k-1}, i_{k-1}], \notag \end{align} where $j_{t^*}$ is the ${t^*}$th column chosen and $i_{t^*}$ is the ${t^*}$th row chosen. We denote the conditional expectation conditioned on the first $k-1$ iterations and the $k$th column chosen as \begin{align} \mathbb{E}_{k-1}^{i}[\cdot]= \mathbb{E}[\cdot|j_1, i_1, j_2, i_2, \ldots, j_{k-1}, i_{k-1}, j_k]. \notag \end{align} Similarly, we denote the conditional expectation conditioned on the first $k-1$ iterations and the $k$th row chosen as \begin{align} \mathbb{E}_{k-1}^{j}[\cdot]= \mathbb{E}[\cdot|j_1, i_1, j_2, i_2, \ldots, j_{k-1}, i_{k-1}, i_k]. \notag \end{align} Then, by the law of total expectation, we have \begin{align} \mathbb{E}_{k-1}[\cdot]= \mathbb{E}_{k-1}^{j}[ \mathbb{E}_{k-1}^{i}[\cdot] ]. \notag \end{align} Thus, we obtain \begin{align} &\mathbb{E}_{k-1}[\langle (I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), v_{\ell} \rangle] \notag \\ &=\mathbb{E}_{k-1}[\langle z_{k-1}-x_{\star}, v_{\ell} \rangle -\langle \frac{A^{(i)} ( z_{k-1}-x_{\star})}{\| A^{(i)} \|_{2}^{2}}(A^{(i)})^T, v_{\ell} \rangle] \notag \\ &=\langle z_{k-1}-x_{\star}, v_{\ell} \rangle -\frac{1}{\|A\|^2_F} \sum\limits_{i=1}^{m}\langle A^{(i)} ( z_{k-1}-x_{\star})(A^{(i)})^T, v_{\ell} \rangle \notag \\ &=\langle z_{k-1}-x_{\star}, v_{\ell} \rangle -\frac{1}{\|A\|^2_F} \sum\limits_{i=1}^{m}\langle(A^{(i)})^T, z_{k-1}-x_{\star} \rangle\langle (A^{(i)})^T, v_{\ell} \rangle \notag \\ &=\langle z_{k-1}-x_{\star}, v_{\ell} \rangle -\frac{1}{\|A\|^2_F} \langle A ( z_{k-1}-x_{\star}),A v_{\ell} \rangle. \notag \end{align} Further, by making use of $z_{k-1}-x_{\star}=\sum\limits_{i=1}^{n}\langle z_{k-1}-x_{\star}, v_i\rangle v_i$ and $Av_i=\sigma_i(A) u_i $, we get \begin{align} &\mathbb{E}_{k-1}[\langle (I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), v_{\ell} \rangle] \notag \\ &=\langle z_{k-1}-x_{\star}, v_{\ell} \rangle -\frac{1}{\|A\|^2_F} \langle A \sum\limits_{i=1}^{n}\langle z_{k-1}-x_{\star}, v_i\rangle v_i,\sigma_{\ell}(A) u_{\ell} \rangle \notag \\ &=\langle z_{k-1}-x_{\star}, v_{\ell} \rangle -\frac{1}{\|A\|^2_F} \langle \sum\limits_{i=1}^{n}\langle z_{k-1}-x_{\star}, v_i\rangle \sigma_i(A) u_i,\sigma_{\ell}(A) u_{\ell} \rangle, \notag \end{align} which together with the orthogonality of the left singular vectors $u_i$ yields \begin{align} &\mathbb{E}_{k-1}[\langle (I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), v_{\ell} \rangle] \notag \\ &=\langle z_{k-1}-x_{\star}, v_{\ell} \rangle -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} \langle z_{k-1}-x_{\star}, v_{\ell}\rangle \notag \\ &=(1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} ) \langle z_{k-1}-x_{\star}, v_{\ell}\rangle . \notag \end{align} As a result, by taking the full expectation on both sides, we have \begin{align} \mathbb{E}[\langle (I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), v_{\ell} \rangle]=(1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} ) \mathbb{E}[ \langle z_{k-1}-x_{\star}, v_{\ell}\rangle ]. \label{th62} \end{align} We now consider $\mathbb{E}[\langle \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), v_{\ell} \rangle]$. 
It follows from \begin{align} &\mathbb{E}_{k-1}[\langle \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), v_{\ell} \rangle] \notag \\ &= \mathbb{E}_{k-1}^{j}[ \mathbb{E}_{k-1}^{i} [ \langle \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), v_{\ell} \rangle]]\notag \\ &= \mathbb{E}_{k-1}^{j}[\frac{1}{\|A\|^2_F} \sum\limits_{i=1}^{m} \langle (A^{(i)})^TA^{(i)}( x_{k}-x_{\star}), v_{\ell} \rangle] \notag \\ &= \mathbb{E}_{k-1}^{j}[\frac{1}{\|A\|^2_F} \sum\limits_{i=1}^{m} \langle A^{(i)}( x_{k}-x_{\star}), A^{(i)} v_{\ell} \rangle] \notag \\ &= \mathbb{E}_{k-1} [\frac{1}{\|A\|^2_F} \langle A ( x_{k}-x_{\star}), A v_{\ell} \rangle], \notag \end{align} that \begin{align} \mathbb{E} [\langle \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), v_{\ell} \rangle]=\mathbb{E} [\frac{1}{\|A\|^2_F} \langle A ( x_{k}-x_{\star}), A v_{\ell} \rangle].\label{th63} \end{align} Since \begin{align} \mathbb{E} [\frac{1}{\|A\|^2_F} \langle A ( x_{k}-x_{\star}), A v_{\ell} \rangle]=\frac{\sigma_{\ell} (A)}{\|A\|^2_F} \mathbb{E} [ \langle A ( x_{k}-x_{\star}), u_{\ell} \rangle], \notag \end{align} by exploiting (\ref{th1}) in Theorem \ref{theorem1}, we get \begin{align} &\mathbb{E} [\frac{1}{\|A\|^2_F} \langle A ( x_{k}-x_{\star}), A v_{\ell} \rangle] \notag \\ &=\frac{\sigma_{\ell} (A)}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle Ax_{0}- Ax_{\star}, u_{\ell} \rangle \notag \\ &=\frac{1}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle Ax_{0}- Ax_{\star}, A v_{\ell} \rangle .\notag \end{align} Thus, substituting the above equality into (\ref{th63}), we have \begin{align} \mathbb{E} [\langle \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), v_{\ell} \rangle] &=\frac{1}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle Ax_{0}- Ax_{\star}, A v_{\ell} \rangle \notag \\ &=\frac{1}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle A^T(Ax_{0}- Ax_{\star}), v_{\ell} \rangle . \label{th64} \end{align} Combining (\ref{th65}), (\ref{th62}) and (\ref{th64}) yields \begin{align} &\mathbb{E}[\langle z_{k}- x_{\star}, v_{\ell} \rangle] \notag \\ &= \mathbb{E}[\langle (I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), v_{\ell} \rangle] +\mathbb{E}[\langle \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), v_{\ell} \rangle] \notag \\ &=(1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} ) \mathbb{E}[ \langle z_{k-1}-x_{\star}, v_{\ell}\rangle ]+\frac{1}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle A^T(Ax_{0}- Ax_{\star}), v_{\ell} \rangle \notag \\ &=(1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} )^2 \mathbb{E}[ \langle z_{k-2}-x_{\star}, v_{\ell}\rangle ]+\frac{2}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle A^T(Ax_{0}- Ax_{\star}), v_{\ell} \rangle \notag \\ &=\ldots=(1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} )^k \langle z_{0}-x_{\star}, v_{\ell}\rangle +\frac{k}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle A^T(Ax_{0}- Ax_{\star}), v_{\ell} \rangle, \notag \end{align} which is the desired result (\ref{th6}). \end{pf} \begin{rmk} \label{rmk61} Theorem \ref{theorem6} shows that the decay rates of $\|z_k-x_{\star}\|_2$ are different in different right singular vectors spaces and the smallest singular value will lead to the slowest rate of convergence, which is the one in (\ref{th5}). So, the convergence bound presented by Du \cite{ Dukui2019} is optimal. 
\end{rmk} \begin{thm} \label{theorem7} Let $A\in R^{m\times n}$, $b\in R^{m}$, $x_{\star}=A^{\dag}b$ be the minimum Euclidean norm least squares solution, and $z_k$ be the $k$th approximation of the REGS method generated by Algorithm \ref{alg2} with initial guesses $x_{0}\in R^{n }$ and $z_{0}\in \mathcal{R}(A^T)$. Then \begin{align} \mathbb{E}[ \|z_{k}- x_{\star}\|^2_2]\leq \mathbb{E}[(1-\frac{1}{\|A\|^2_F}\|A\frac{z_{k-1}-x_{\star}}{\|z_{k-1}-x_{\star}\|_2}\|_2^2)\|z_{k-1}-x_{\star}\|_2^2]+ \frac{1}{\|A\|^2_F}(1-\frac{\sigma_r^2(A)}{\|A\|^2_F})^k \| Ax_{0}-Ax_{\star} \|^2_2. \label{th7} \end{align} \end{thm} \begin{pf} Following an argument analogous to that of Theorem 4 in \cite{Dukui2019}, we get \begin{align} \mathbb{E} [\|z_{k}- x_{\star}\|_2^2 ]&= \mathbb{E}[ \|(I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star})\|_2^2]+\mathbb{E} [\| \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star})\|_2^2], \notag \end{align} \begin{align} \mathbb{E}[ \|(I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star})\|_2^2] &=\mathbb{E} [(z_{k-1}-x_{\star})^T(I-\frac{A^TA}{\|A\|_F^2})(z_{k-1}-x_{\star})]\notag \\ &=\mathbb{E}[(\|z_{k-1}-x_{\star}\|_2^2- \frac{1}{\|A\|^2_F}\|A(z_{k-1}-x_{\star})\|_2^2)]\notag \\ &=\mathbb{E}[(1-\frac{1}{\|A\|^2_F}\|A\frac{z_{k-1}-x_{\star}}{\|z_{k-1}-x_{\star}\|_2}\|_2^2)\|z_{k-1}-x_{\star}\|_2^2],\notag \end{align} and \begin{align} \mathbb{E} [\| \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star})\|_2^2]\leq\frac{1}{\|A\|^2_F}(1-\frac{\sigma_r^2(A)}{\|A\|^2_F})^k \| Ax_{0}-Ax_{\star} \|^2_2. \notag \end{align} Combining the above three relations, we have \begin{align} \mathbb{E} [ \|z_{k}- x_{\star}\|_2^2] \leq\mathbb{E}[(1-\frac{1}{\|A\|^2_F}\|A\frac{z_{k-1}-x_{\star}}{\|z_{k-1}-x_{\star}\|_2}\|_2^2)\|z_{k-1}-x_{\star}\|_2^2]+ \frac{1}{\|A\|^2_F}(1-\frac{\sigma_r^2(A)}{\|A\|^2_F})^k \| Ax_{0}-Ax_{\star} \|^2_2, \notag \end{align} which implies the desired result (\ref{th7}). \end{pf} \begin{rmk} \label{rmk71} Since $\|A\frac{z_{k-1}-x_{\star}}{\|z_{k-1}-x_{\star}\|_2}\|_2^2\geq\sigma_r^2(A)$, Theorem \ref{theorem7} implies that $z_{k}$ of the REGS method actually converges faster if $z_{k-1}-x_{\star}$ is not close to right singular vectors corresponding to the small singular values of $A$. \end{rmk} \begin{thm} \label{theorem8} Let $A\in R^{m\times n}$, $b\in R^{m}$, $x_{\star}=A^{\dag}b$ be the minimum Euclidean norm least squares solution, and $z_k$ be the $k$th approximation of the REGS method generated by Algorithm \ref{alg2} with initial guesses $x_{0}\in R^{n }$ and $z_{0}\in \mathcal{R}(A^T)$. Then \begin{align} \mathbb{E}[\langle A z_{k}- Ax_{\star}, u_{\ell} \rangle]=(1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} )^k \langle Az_{0}-Ax_{\star}, u_{\ell}\rangle +\frac{k}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle AA^T(Ax_{0}- Ax_{\star}), u_{\ell} \rangle .\label{th8} \end{align} \end{thm} \begin{pf} Similar to the proof of (\ref{th65}) in Theorem \ref{theorem6}, we obtain \begin{align} \mathbb{E}[\langle Az_{k}- Ax_{\star}, u_{\ell} \rangle] = \mathbb{E}[\langle A(I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), u_{\ell} \rangle] +\mathbb{E}[\langle A \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), u_{\ell} \rangle]. 
\label{th80} \end{align} Then, we consider $\mathbb{E}[\langle A(I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), u_{\ell} \rangle]$ and $\mathbb{E}[\langle A \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), u_{\ell} \rangle]$ separately. We first consider $\mathbb{E}[\langle A(I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), u_{\ell} \rangle]$. It follows from $$ \langle A(I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), u_{\ell} \rangle = \langle (I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), A^Tu_{\ell} \rangle $$ and $A^Tu_{\ell}=\sigma_{\ell}(A)v_{\ell}$, that \begin{align} \mathbb{E}[\langle A(I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), u_{\ell} \rangle] = \sigma_{\ell}(A)\mathbb{E}[\langle (I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), v_{\ell} \rangle], \notag \end{align} which together with (\ref{th62}) yields \begin{align} \mathbb{E}[\langle A(I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), u_{\ell} \rangle] &= \sigma_{\ell}(A)(1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} ) \mathbb{E}[ \langle z_{k-1}-x_{\star}, v_{\ell}\rangle ] \notag \\ &=(1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} ) \mathbb{E}[ \langle z_{k-1}-x_{\star},\sigma_{\ell}(A) v_{\ell}\rangle ]\notag \\ &=(1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} ) \mathbb{E}[ \langle Az_{k-1}-Ax_{\star},u_{\ell}\rangle ]. \label{th81} \end{align} We now consider $\mathbb{E}[\langle A \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), u_{\ell} \rangle]$. Exploiting (\ref{th64}), we have \begin{align} \mathbb{E}[\langle A \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), u_{\ell} \rangle] &=\mathbb{E}[\langle \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), A^T u_{\ell} \rangle]\notag \\ &=\sigma_\ell (A)\mathbb{E}[\langle \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), v_{\ell} \rangle]\notag \\ &=\frac{\sigma_\ell (A)}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle A^T(Ax_{0}- Ax_{\star}), v_{\ell} \rangle \notag \\ &=\frac{1}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle AA^T(Ax_{0}- Ax_{\star}), u_{\ell} \rangle. \label{th82} \end{align} Thus, combining (\ref{th80}), (\ref{th81}) and (\ref{th82}) yields \begin{align} &\mathbb{E}[\langle Az_{k}- Ax_{\star}, u_{\ell} \rangle] \notag \\ &= \mathbb{E}[\langle A(I-\frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}})( z_{k-1}-x_{\star}), u_{\ell} \rangle] +\mathbb{E}[\langle A \frac{(A^{(i)})^TA^{(i)}}{\| A^{(i)} \|_{2}^{2}}( x_{k}-x_{\star}), u_{\ell} \rangle] \notag \\ &=(1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} ) \mathbb{E}[ \langle Az_{k-1}-Ax_{\star},u_{\ell}\rangle ]+\frac{1}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle AA^T(Ax_{0}- Ax_{\star}), u_{\ell} \rangle \notag \\ &=(1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} )^2 \mathbb{E}[ \langle Az_{k-2}-Ax_{\star},u_{\ell}\rangle ]+\frac{2}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle AA^T(Ax_{0}- Ax_{\star}), u_{\ell} \rangle \notag \\ &=\ldots=(1 -\frac{\sigma_{\ell}^2(A)}{\|A\|^2_F} )^k \langle Az_{0}-Ax_{\star}, u_{\ell}\rangle +\frac{k}{\|A\|^2_F} (1-\frac{\sigma_\ell^2(A)}{\|A\|^2_F})^k \langle AA^T(Ax_{0}- Ax_{\star}), u_{\ell} \rangle, \notag \end{align} which is the desired result (\ref{th8}). 
\end{pf} \begin{rmk} \label{rmk81} Theorem \ref{theorem8} shows the decay rates of $\|Az_{k}- Ax_{\star}\|_2$ of the REGS method and suggests that small singular values lead to poor convergence rates and vice versa. We note that similar issues arise for the RK, REK, and RGS methods discussed in \cite{steinerberger2021randomized}, \cite{zhang2021preconvergence}, and Theorem \ref{theorem1}, respectively. \end{rmk} \section{Numerical experiments }\label{sec5} Now we present two simple examples to illustrate the convergence directions of the RGS and REGS methods. To this end, let $G_0\in R^{500\times 500}$ be a Gaussian matrix with i.i.d. $N(0, 1)$ entries and $D\in R^{500\times 500}$ be a diagonal matrix whose diagonal elements are all 100. Further, we set $G_1=G_0+D$ and replace its last row $G_1^{(500)}$ by a tiny perturbation of $G_1^{(499)}$, i.e., we add 0.01 to each entry of $G_1^{(499)}$. Then, we normalize all rows of $G_1$, i.e., set $\|G_1^{(i)}\|_2=1$, $i=1, 2, \ldots, 500$. After that, we set $A_1=\begin{bmatrix} G_1\\ G_2 \end{bmatrix} \in R^{600\times 500}$ and $A_2=\begin{bmatrix} G_1, G_3 \end{bmatrix} \in R^{500\times 600}$, where $G_2\in R^{100\times 500}$ and $G_3\in R^{500\times 100}$ are zero matrices. So, the first 499 singular values of the matrices $A_1$ and $A_2$ are between $\sim 0.6$ and $\sim 1.5$, and the smallest nonzero singular value is $\sim 10^{-4}$. We first consider the convergence directions of $Ax_k-Ax_{\star}$ and $x_k-x_{\star}$ of the RGS method. We generate a vector $x \in R^{500}$ using the MATLAB function \texttt{randn}, set the full column rank coefficient matrix $A=A_1$, and set the right-hand side $b=A x +z$, where $z$ is a nonzero vector belonging to the null space of $A^{T}$, which is generated by the MATLAB function \texttt{null}. With $x_0=0$, we plot $|\langle (Ax_k-Ax_{\star})/\|Ax_k-Ax_{\star}\|_2, u_{500} \rangle|$ and $\frac{\|A(x_k-x_{\star})\|_2 }{\|x_k-x_{\star}\|_2}$ in Figure \ref{fig1} and Figure \ref{fig2}, respectively. \begin{figure}[ht] \begin{center} \includegraphics [height=5.5cm,width=8.5cm ]{RGS-Ax-u-600-500.eps} \end{center} \caption{A sample evolution of $ |\langle (Ax_k-Ax_{\star})/\|Ax_k-Ax_{\star}\|_2, u_{500} \rangle|$ of the RGS method. }\label{fig1} \end{figure} From Figure \ref{fig1}, we find that $|\langle (Ax_k-Ax_{\star})/\|Ax_k-Ax_{\star}\|_2, u_{500} \rangle|$ is initially very small, almost 0, which indicates that $Ax_k-Ax_{\star} $ is not close to the left singular vector $u_{500}$. In view of the analysis in Remark \ref{rmk5}, this phenomenon reflects the `preconvergence' behavior of the RGS method, that is, the RGS method seems to converge quickly at the beginning. In addition, as $k\rightarrow\infty$, $|\langle (Ax_k-Ax_{\star})/\|Ax_k-Ax_{\star}\|_2, u_{500} \rangle|\rightarrow 1$. This phenomenon implies that $Ax_{k}-Ax_{\star}$ tends to the left singular vector corresponding to the smallest singular value of $A$. \begin{figure}[ht] \begin{center} \includegraphics [height=5.5cm,width=8.5cm ]{RGS-x-600-500.eps} \end{center} \caption{A sample evolution of $\frac{\|A(x_k-x_{\star})\|_2 }{\|x_k-x_{\star}\|_2}$ of the RGS method. }\label{fig2} \end{figure} From Figure \ref{fig2}, we observe that the value of $\frac{\|A(x_k-x_{\star})\|_2 }{\|x_k-x_{\star}\|_2}$ decreases with $k$ and finally approaches the smallest singular value. 
This phenomenon implies that the direction of $x_k-x_{\star}$ is mainly determined by the right singular vectors corresponding to the large singular values of $A$ at the beginning. As $k$ increases, the direction is gradually determined by the right singular vectors corresponding to the small singular values. Finally, $x_k-x_{\star}$ tends to the right singular vector space corresponding to the smallest singular value. Furthermore, this phenomenon also allows for an interesting application, i.e., finding nonzero vectors $x$ such that $\frac{\|Ax\|_2}{\|x\|_2}$ is small. We now consider the convergence directions of $Az_k-Ax_{\star}$ and $z_k-x_{\star}$ of the REGS method. We generate a vector $x \in R^{600}$ using the MATLAB function \texttt{randn}, set the coefficient matrix $A=A_2$, which does not have full column rank, and set the right-hand side $b=Ax$. With $x_0=0$ and $z_0=0$, we plot $|\langle (Az_k-Ax_{\star})/\|Az_k-Ax_{\star}\|_2, u_{500} \rangle|$ and $\frac{\|A(z_k-x_{\star})\|_2 }{\|z_k-x_{\star}\|_2}$ in Figure \ref{fig3} and Figure \ref{fig4}, respectively. \begin{figure}[ht] \begin{center} \includegraphics [height=5.5cm,width=8.5cm ]{REGS-Az-u-500-600.eps} \end{center} \caption{A sample evolution of $ |\langle (Az_k-Ax_{\star})/\|Az_k-Ax_{\star}\|_2, u_{500} \rangle|$ of the REGS method. }\label{fig3} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics [height=5.5cm,width=8.5cm ]{REGS-z-500-600.eps} \end{center} \caption{A sample evolution of $\frac{\|A(z_k-x_{\star})\|_2 }{\|z_k-x_{\star}\|_2}$ of the REGS method. }\label{fig4} \end{figure} Figure \ref{fig3} and Figure \ref{fig4} show results similar to those obtained for the RGS method. That is, the convergence directions of $Az_k-Ax_{\star}$ and $z_k-x_{\star}$ of the REGS method initially depend on the large singular values, then mainly on the small singular values, and finally on the smallest nonzero singular value of $A$.
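To make these observations easy to reproduce, the following short Python/NumPy sketch (ours, not the authors' MATLAB code; the dimensions are reduced and the right-hand side is taken consistent for simplicity, so the details are illustrative assumptions) implements the RGS update of Algorithm \ref{alg1} on a matrix built in the spirit of $A_1$ and prints the alignment $|\langle (Ax_k-Ax_{\star})/\|Ax_k-Ax_{\star}\|_2, u_{\min} \rangle|$, which grows toward 1 as in Figure \ref{fig1}.
\begin{verbatim}
import numpy as np

# Reduced-size analogue of A_1: Gaussian + 100*I, a nearly duplicated
# last row, normalized rows, and zero rows appended.
rng = np.random.default_rng(0)
m, n = 100, 80
G = rng.standard_normal((n, n)) + 100.0 * np.eye(n)
G[-1, :] = G[-2, :] + 0.01                     # near-duplicate row => tiny sigma_min
G /= np.linalg.norm(G, axis=1, keepdims=True)  # normalize the rows
A = np.vstack([G, np.zeros((m - n, n))])       # full column rank

x_true = rng.standard_normal(n)
b = A @ x_true                                 # consistent right-hand side (simplification)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
U, s, Vt = np.linalg.svd(A)
u_min = U[:, n - 1]                            # left singular vector of the smallest singular value

col_norms2 = np.sum(A * A, axis=0)
probs = col_norms2 / col_norms2.sum()          # ||A_(j)||_2^2 / ||A||_F^2

x = np.zeros(n)
for k in range(1, 2001):
    j = rng.choice(n, p=probs)                 # RGS: pick column j
    x[j] -= A[:, j] @ (A @ x - b) / col_norms2[j]
    if k % 200 == 0:
        r = A @ x - A @ x_star
        print(k, abs(r @ u_min) / np.linalg.norm(r))   # grows toward 1 (cf. Figure 1)
\end{verbatim}
The REGS experiment of Figures \ref{fig3} and \ref{fig4} can be sketched analogously by adding, after the $x$-update, the row step of Algorithm \ref{alg2}, with row $i$ sampled with probability $\|A^{(i)}\|_2^2/\|A\|_F^2$.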
{ "timestamp": "2021-05-25T02:05:27", "yymm": "2105", "arxiv_id": "2105.10615", "language": "en", "url": "https://arxiv.org/abs/2105.10615", "abstract": "The randomized Gauss--Seidel method and its extension have attracted much attention recently and their convergence rates have been considered extensively. However, the convergence rates are usually determined by upper bounds, which cannot fully reflect the actual convergence. In this paper, we make a detailed analysis of their convergence behaviors. The analysis shows that the larger the singular value of $A$ is, the faster the error decays in the corresponding singular vector space, and the convergence directions are mainly driven by the large singular values at the beginning, then gradually driven by the small singular values, and finally by the smallest nonzero singular value. These results explain the phenomenon found in the extensive numerical experiments appearing in the literature that these two methods seem to converge faster at the beginning. Numerical examples are provided to confirm the above findings.", "subjects": "Numerical Analysis (math.NA)", "title": "Convergence directions of the randomized Gauss--Seidel method and its extension", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9907319863454372, "lm_q2_score": 0.8175744761936437, "lm_q1q2_score": 0.809997184784659 }
https://arxiv.org/abs/1912.01763
A note on semi-infinite program bounding methods
Semi-infinite programs are a class of mathematical optimization problems with a finite number of decision variables and infinite constraints. As shown by Blankenship and Falk (Blankenship and Falk. "Infinitely constrained optimization problems." Journal of Optimization Theory and Applications 19.2 (1976): 261-281.), a sequence of lower bounds which converges to the optimal objective value may be obtained with specially constructed finite approximations of the constraint set. In (Mitsos. "Global optimization of semi-infinite programs via restriction of the right-hand side." Optimization 60.10-11 (2011): 1291-1308.), it is claimed that a modification of this lower bounding method involving approximate solution of the lower-level program yields convergent lower bounds. We show with a counterexample that this claim is false, and discuss what kind of approximate solution of the lower-level program is sufficient for correct behavior.
\section{Introduction} This note discusses methods for the global solution of semi-infinite programs (SIP). Specifically, the method from \cite{mitsos11} is considered, and it is shown with a counterexample that the lower bounds do not always converge. Throughout we use notation as close as possible to that used in \cite{mitsos11}, embellishing it only as necessary with, for instance, iteration counters. Consider a SIP in the general form \begin{alignat}{2} \tag{SIP} \label{eq:sip} f^* = \inf_{x}\; & f(x) \\ \st \notag & x \in X, \\ \notag & g(x,y) \le 0,\; \forall y \in Y, \end{alignat} for subsets $X$, $Y$ of finite dimensional real vector spaces and $f : X \to \mbb{R}$, $g : X \times Y \to \mbb{R}$. We may view $Y$ as an index set, with potentially uncountably infinite cardinality. Important to validating the feasibility of a point $x$ is the lower-level program: \begin{equation} \label{eq:llp} \tag{LLP} \sup_y \set{ g(x,y) : y \in Y}. \end{equation} Global solution of \eqref{eq:sip} often involves the construction of convergent upper and lower bounds. The approach in \cite{mitsos11} to obtain a lower bound is a modification of the constraint-generation/discretization method of \cite{blankenshipEA76}. The claim is that the lower-level program may be solved approximately; the exact nature of the approximation is important to the convergence of the lower bounds and this is the subject of the present note. \section{Sketch of the lower bounding procedure and claim} The setting of the method is the following. The method is iterative and at iteration $k$, for a given finite subset $Y^{LBD,k} \subset Y$, a lower bound of $f^*$ is obtained from the finite program \begin{alignat}{2} \label{eq:sip_lower} f^{LBD,k} = \inf_{x}\; & f(x) \\ \st \notag & x \in X, \\ \notag & g(x,y) \le 0, \;\forall y \in Y^{LBD,k}. \end{alignat} This is indeed a lower bound since fewer constraints are enforced, and thus \eqref{eq:sip_lower} is a relaxation of \eqref{eq:sip}. Assume that the lower bounding problem \eqref{eq:sip_lower} is feasible (otherwise we can conclude that \eqref{eq:sip} is infeasible). Let $\bar{x}^k$ be a (global) minimizer of the lower bounding problem \eqref{eq:sip_lower}. In \cite{mitsos11}, Lemma~2.2 states that we either verify $\sup_y \set{g(\bar{x}^k,y) : y \in Y} \le 0$, \textbf{or else} find $\bar{y}^k \in Y$ such that $g(\bar{x}^k,\bar{y}^k) > 0$. If $\sup_y \set{g(\bar{x}^k,y) : y \in Y} \le 0$, then $\bar{x}^k$ is feasible in \eqref{eq:sip} and thus optimal (since it also solves a relaxation). Otherwise, set $Y^{LBD,k+1} = Y^{LBD,k} \cup \set{\bar{y}^k}$ and we iterate. The precise statement of the claim is repeated here (again, with only minor embellishments to the notation to help keep track of iterations). \begin{lemma}[Lemma~2.2 in \cite{mitsos11}] \label{lem:claim} Take any $Y^{LBD,0} \subset Y$. Assume that $X$ and $Y$ are compact and that $g$ is continuous on $X \times Y$. Suppose that at each iteration of the lower bounding procedure the lower-level program is solved approximately for the solution of the lower bounding problem $\bar{x}^k$ either establishing $\max_{y \in Y} g(\bar{x}^k,y) \le 0$, or furnishing a point $\bar{y}^k$ such that $g(\bar{x}^k,\bar{y}^k) > 0$. Then, the lower bounding procedure converges to the optimal objective value, i.e. $f^{LBD,k} \to f^*$. \end{lemma} \section{Correction} \subsection{Counterexample} We now present a counterexample to the claim in Lemma~\ref{lem:claim}. 
Consider \begin{alignat}{2} \tag{CEx} \label{eq:counter_ex} \inf_{x}\; & -x \\ \st \notag & x \in [-1,1], \\ \notag & 2x - y \le 0,\; \forall y \in [-1,1], \end{alignat} thus we define $X = Y = [-1,1]$, $f : x \mapsto -x$, $g : (x,y) \mapsto 2x - y$. The behavior to note is this: We are trying to maximize $x$; The feasible set is \[ \set{x \in [-1,1] : x \le (\sfrac{1}{2})y, \forall y \in[-1,1] } = [-1,-\sfrac{1}{2}]; \] The infimum, consequently, is $\sfrac{1}{2}$. See Figure~\ref{fig:cex1}. \begin{figure} \begin{center} \begin{tikzpicture}[xscale=1.5,yscale=1.5] \draw[fill=gray] (-0.5,-1) -- (0.5,1) -- (1,1) -- (1,-1) -- cycle; \draw[latex-latex] (0,-1.3) -- (0,1.3) node[above]{$y$}; \draw[latex-latex] (-1.3,0) -- (1.3,0) node[right]{$x$}; \draw (-1,-1) rectangle (1,1); \draw[dashed,domain=0:1] plot (\x, \x); \end{tikzpicture} \end{center} \caption{Visualization of counterexample~\eqref{eq:counter_ex}. The box represents $[-1,1] \times [-1,1]$. The shaded grey area is the subset of $(x,y)$ such that $2x - y > 0$. The dashed line represents the approximate minimizers used in the counterexample.} \label{fig:cex1} \end{figure} Beginning with $Y^{LBD,1} = \emptyset$, the minimizer of the lower bounding problem is $\bar{x}^1 = 1$. Now, assume that solving the resulting \eqref{eq:llp} approximately, we get $\bar{y}^1 = 1$ which we note satisfies \[ 2\bar{x}^1 - \bar{y}^1 = 1 > 0 \] as required by Lemma~\ref{lem:claim}. The next iteration, with $Y^{LBD,2} = \set{1}$, adds the constraint $2x - 1 \le 0$ to the lower bounding problem; the feasible set is $[-1,\sfrac{1}{2}]$ so the minimizer is $\bar{x}^2 = \sfrac{1}{2}$. Again, assume that solving the lower-level program approximately yields $\bar{y}^2 = \sfrac{1}{2}$; again we get \[ 2\bar{x}^2 - \bar{y}^2 = \sfrac{1}{2} > 0 \] as required by Lemma~\ref{lem:claim}. The third iteration, with $Y^{LBD,3} = \set{1, \sfrac{1}{2}}$, adds the constraint $2x - \sfrac{1}{2} \le 0$ to the lower bounding problem; the feasible set is $[-1,\sfrac{1}{4}]$ so the minimizer is $\bar{x}^3 = \sfrac{1}{4}$. Again, assume that solving the lower-level program approximately yields $\bar{y}^3 = \sfrac{1}{4}$; again we get \[ 2\bar{x}^3 - \bar{y}^3 = \sfrac{1}{4} > 0 \] as required by Lemma~\ref{lem:claim}. Proceeding in this way, we construct $\bar{x}^k$ and $\bar{y}^k$ so that $g(\bar{x}^k,\bar{y}^k) > 0$ and the lower bounds satisfy $f^{LBD,k} = -\bar{x}^k = -\frac{1}{2^{k-1}}$, for all $k$. Consequently, they converge to $0$, which we note is strictly less than the infimum of $\sfrac{1}{2}$. \subsection{Modified claim} We now present a modification of the claim in order to demonstrate what kind of approximate solution of the lower-level program suffices to establish convergence of the lower bounds. To state the result, let the optimal objective value of \eqref{eq:llp} as a function of $x$ be \[ g^*(x) = \sup_y\set{g(x,y) : y \in Y}. \] The proof of the following result has a similar structure to the original proof of \cite[Lemma~2.2]{mitsos11}. \begin{lemma} \label{lem:claim_mod} Choose any finite $Y^{LBD,0} \subset Y$, and $\alpha \in (0,1)$. Assume that $X$ and $Y$ are compact and that $f$ and $g$ are continuous. Suppose that at each iteration $k$ of the lower bounding procedure \eqref{eq:llp} is solved approximately for the solution $\bar{x}^k$ of the lower bounding problem~\eqref{eq:sip_lower}, either establishing that $g^*(\bar{x}^k) \le 0$ or furnishing a point $\bar{y}^k$ such that \[ g(\bar{x}^k,\bar{y}^k) \ge \alpha g^*(\bar{x}^k) > 0. 
\] Then, the lower bounding procedure converges to the optimal objective value, i.e. $f^{LBD,k} \to f^*$. \end{lemma} \begin{proof} First, if the lower bounding problem~\eqref{eq:sip_lower} is ever infeasible for some iteration $k$, then \eqref{eq:sip} is infeasible and we can set $f^{LBD,k} = +\infty = f^*$. Otherwise, since $X$ is compact, $Y^{LBD,k}$ is finite, and $f$ and $g$ are continuous, for every iteration the lower bounding problem has a solution by Weierstrass' (extreme value) theorem. If at some iteration $k$ the lower bounding problem furnishes a point $\bar{x}^k$ for which $g^*(\bar{x}^k) \le 0$, then $\bar{x}^k$ is feasible for \eqref{eq:sip}, and thus optimal. The corresponding lower bound $f^{LBD,k}$, and all subsequent lower bounds, equal $f^*$. Otherwise, we have an infinite sequence of solutions to the lower bounding problems. Since $X$ is compact we can move to a subsequence $\seq[k \in \mbb{N}]{\bar{x}^{k}} \subset X$ which converges to $x^* \in X$. By construction of the lower bounding problem we have \[ g(\bar{x}^{\ell},\bar{y}^k) \le 0, \quad \forall \ell,k : \ell > k. \] By continuity and compactness of $X \times Y$ we have uniform continuity of $g$, and so for any $\epsilon > 0$, there exists a $\delta > 0$ such that \begin{equation} \label{eq:gee} g(x,\bar{y}^k) < \epsilon, \quad\forall x : \norm{x - \bar{x}^{\ell}} < \delta, \quad\forall \ell,k : \ell > k. \end{equation} Since the (sub)sequence $\seq[k \in \mbb{N}]{\bar{x}^k}$ converges, there is an index $K$ sufficiently large that \begin{equation} \label{eq:tails} \norm{\bar{x}^{\ell} - \bar{x}^k} < \delta, \quad \forall \ell,k : \ell > k \ge K. \end{equation} Using \eqref{eq:tails}, we can substitute $x = \bar{x}^k$ in \eqref{eq:gee} to get that for any $\epsilon > 0$, there exists $K$ such that \[ g(\bar{x}^k,\bar{y}^k) < \epsilon, \quad \forall k \ge K. \] By assumption $g(\bar{x}^k,\bar{y}^k) > 0$ for all $k$, and so combined with the above we have that $g(\bar{x}^k,\bar{y}^k) \to 0$. Combining $g(\bar{x}^k,\bar{y}^k) \to 0$ with $ g(\bar{x}^k,\bar{y}^k) \ge \alpha g^*(\bar{x}^k) > 0, $ for all $k$, we see $ g^*(\bar{x}^k) \to 0. $ Meanwhile $g^* : X \to \mbb{R}$ is a continuous function, by classic parametric optimization results like \cite[Theorem~1.4.16]{aubin_frankowska} (using continuity of $g$ and compactness of $Y$). Thus \[ g^*(x^*) = \lim_{k \to \infty} g^*(\bar{x}^k) = 0. \] Thus $x^*$ is feasible in \eqref{eq:sip} and so $f^* \le f(x^*)$. But since the lower bounding problem is a relaxation, $f^{LBD,k} = f(\bar{x}^k) \le f^*$ for all $k$, and so by continuity of $f$, $f(x^*) \le f^*$. Combining these inequalities we see $f^{LBD,k} \to f(x^*) = f^*$. Since the entire sequence of lower bounds is an increasing sequence, we see that the entire sequence converges to $f^*$ (without moving to a subsequence). \end{proof} \section{Remarks} The main contribution of \cite{mitsos11} is a novel \emph{upper} bounding procedure, which still stands, and combined with the modified lower bounding procedure from Lemma~\ref{lem:claim_mod} or the original procedure from \cite{blankenshipEA76}, the overall global solution method for \eqref{eq:sip} is still effective. The counterexample that has been presented may seem contrived. 
However, as the lower bounding method for SIP from \cite{mitsos11} is adapted to give a lower bounding method for \emph{generalized} semi-infinite programs (GSIP) in \cite{mitsosEA15}, a modification of the counterexample reveals that similar behavior may occur (and in a more natural way) when constructing the lower bounds for a GSIP. Consequently, the lower bounds fail to converge to the infimum. See \cite{Harwood19_GSIP}.
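For the reader who wishes to experiment with the counterexample, the following short Python sketch (ours; it hard-codes the closed-form solutions of the one-dimensional subproblems of \eqref{eq:counter_ex} rather than calling an optimization solver) runs the lower bounding iteration with two lower-level oracles: the "bad" oracle $\bar y^k = \bar x^k$ used above, which satisfies the hypothesis of Lemma~\ref{lem:claim} yet lets the lower bounds stall at $0$, and the exact oracle $\bar y^k = -1$, which satisfies the condition of Lemma~\ref{lem:claim_mod} for any $\alpha \in (0,1)$ and yields the optimal value $\sfrac{1}{2}$.
\begin{verbatim}
# Data of (CEx): X = Y = [-1, 1], f(x) = -x, g(x, y) = 2x - y.
def g(x, y):
    return 2.0 * x - y

def g_star(x):
    # sup over y in [-1, 1] of 2x - y, attained at y = -1
    return 2.0 * x + 1.0

def solve_lower_bounding(Y_disc):
    # maximize x over [-1, 1] subject to 2x - y <= 0 for all y in Y_disc;
    # in one dimension this is min(1, min_y y/2), clipped to X
    x = 1.0
    for y in Y_disc:
        x = min(x, y / 2.0)
    return max(x, -1.0)

def lower_bounding_loop(oracle, max_iter=40):
    Y_disc = []
    for _ in range(max_iter):
        x_bar = solve_lower_bounding(Y_disc)
        if g_star(x_bar) <= 0.0:          # x_bar feasible for (SIP), hence optimal
            return -x_bar
        y_bar = oracle(x_bar)
        assert g(x_bar, y_bar) > 0.0      # hypothesis of Lemma 2.2 holds
        Y_disc.append(y_bar)
    return -x_bar                         # lower bound after max_iter iterations

bad_oracle = lambda x: x                  # violated point, but far from the maximizer
exact_oracle = lambda x: -1.0             # the true maximizer of g(x, .)

print(lower_bounding_loop(bad_oracle))    # about -2e-12: bounds stall near 0 < 1/2
print(lower_bounding_loop(exact_oracle))  # 0.5: the correct optimal value
\end{verbatim}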
{ "timestamp": "2019-12-05T02:06:35", "yymm": "1912", "arxiv_id": "1912.01763", "language": "en", "url": "https://arxiv.org/abs/1912.01763", "abstract": "Semi-infinite programs are a class of mathematical optimization problems with a finite number of decision variables and infinite constraints. As shown by Blankenship and Falk (Blankenship and Falk. \"Infinitely constrained optimization problems.\" Journal of Optimization Theory and Applications 19.2 (1976): 261-281.), a sequence of lower bounds which converges to the optimal objective value may be obtained with specially constructed finite approximations of the constraint set. In (Mitsos. \"Global optimization of semi-infinite programs via restriction of the right-hand side.\" Optimization 60.10-11 (2011): 1291-1308.), it is claimed that a modification of this lower bounding method involving approximate solution of the lower-level program yields convergent lower bounds. We show with a counterexample that this claim is false, and discuss what kind of approximate solution of the lower-level program is sufficient for correct behavior.", "subjects": "Optimization and Control (math.OC)", "title": "A note on semi-infinite program bounding methods", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9744347860767305, "lm_q2_score": 0.831143054132195, "lm_q1q2_score": 0.8098947041524659 }
https://arxiv.org/abs/math/0610707
A fixed point theorem for the infinite-dimensional simplex
We define the infinite dimensional simplex to be the closure of the convex hull of the standard basis vectors in R^infinity, and prove that this space has the 'fixed point property': any continuous function from the space into itself has a fixed point. Our proof is constructive, in the sense that it can be used to find an approximate fixed point; the proof relies on elementary analysis and Sperner's lemma. The fixed point theorem is shown to imply Schauder's fixed point theorem on infinite-dimensional compact convex subsets of normed spaces.
\section{Introduction} In finite dimensions, one of the simplest methods for proving the Brouwer fixed point theorem is via a combinatorial result known as Sperner's lemma \cite{Sper28}, which is a statement about labelled triangulations of a simplex in $\ensuremath{\mathbb{R}} ^n$. In this paper, we use Sperner's lemma to prove a fixed point theorem on an infinite-dimensional simplex in $\ensuremath{\mathbb{R}} ^\infty$. We also show that this theorem implies the infinite-dimensional case of Schauder's fixed point theorem on normed spaces. Since $\ensuremath{\mathbb{R}} ^\infty$ is locally convex, our theorem is a consequence of Tychonoff's fixed point theorem \cite{Smar74}. However, some notable advantages of our approach are: (1) the constructive nature of Sperner's lemma provides a method for producing approximate fixed points for functions on the infinite-dimensional simplex, (2) the proof is based on elementary methods in topology and analysis, and (3) our proof provides another route to Schauder's theorem. Fixed point theorems and their constructive proofs have found many important applications, ranging from proofs of the Inverse Function Theorem \cite{Lang97}, to proofs of the existence of equilibria in economics \cite{Todd76, Yang99}, to the existence of solutions of differential equations \cite{Brow93, Smar74}. \section{Working in $\ensuremath{\mathbb{R}} ^\infty$} Let $\ensuremath{\mathbb{R}} ^\infty$ and $I^\infty = \prod [0,1]$ be the products of countably many copies of $\ensuremath{\mathbb{R}} $ and of $I=[0,1]$, respectively. We equip $\ensuremath{\mathbb{R}} ^\infty$ with the standard product topology, which is metrizable \cite{BePe75} by the complete metric \[ \bar d(x,y) = \sum_{i=1}^\infty \frac{|x_i - y_i|}{2^i(1+|x_i-y_i|)}. \] In $\ensuremath{\mathbb{R}} ^n$, a $k$-dimensional simplex, or \textit{$k$-simplex}, $\sigma^k$ is the convex hull of $k+1$ affinely independent points. The \textit{standard $n$-simplex} in $\ensuremath{\mathbb{R}} ^{n+1}$, denoted $\Delta^n$, is the convex hull of the $n+1$ standard basis vectors of $\ensuremath{\mathbb{R}} ^{n+1}$. The natural extension of this definition to $\ensuremath{\mathbb{R}} ^\infty$ is to consider $\Delta^\infty$, the convex hull of the standard basis vectors $\{e_i\}$ in $\ensuremath{\mathbb{R}} ^\infty$, where $(e_i)_j= \delta_{ij}$, the Kronecker delta function. As convex combinations are finite sums, this convex hull is: \[ \ensuremath{\Delta^\infty} = \{x\in \ensuremath{\mathbb{R}} ^\infty | \sum_{i=1}^\infty x_i = 1,\ 0\leq x_i \leq 1 \mbox{, and only finitely many $x_i$ are non-zero}\}. \] Unfortunately, $\ensuremath{\Delta^\infty} $ is not closed; under the metric $\bar d$ the sequence $\{e_i\}$ converges to $\mathbf{0}$, which is not in $\ensuremath{\Delta^\infty} $. So consider instead $\ensuremath{\Delta^\infty_0} $, the closure of $\ensuremath{\Delta^\infty} $, which can be shown to be: \[ \ensuremath{\Delta^\infty_0} = \{x\in \ensuremath{\mathbb{R}} ^\infty | \sum_{i=1}^\infty x_i \leq 1 \mbox{ and } 0\leq x_i\leq 1\}. \] It is easy to see that $\ensuremath{\Delta^\infty_0} $ is convex. It is also the closure of the convex hull of the standard basis vectors $\{e_i\}$ and $\mathbf{0}$. It is also compact because it is a closed subset of $I^\infty$, which is compact by Tychonoff's Theorem. 
\footnote{If one would like to avoid the Axiom of Choice, which is equivalent to Tychonoff's Theorem, it is not difficult to show that $I^\infty$ is a closed and totally bounded subset of the complete space $\ensuremath{\mathbb{R}} ^\infty$, which implies compactness.} We call $\ensuremath{\Delta^\infty_0} $ the {\em standard infinite-dimensional simplex}. It will be important for our purposes later to consider $F^n$, the $n$-dimensional face of $\ensuremath{\Delta^\infty_0} $ given by $F^n = conv\{e_1,e_2,\dots, e_{n+1}\}$. Notice that each $F^n$ is closed and thus compact. \section{Some Preliminary Machinery} \begin{comment Before stating Sperner's Lemma, which will be indispensable in our proof, some definitions are necessary. Let $\sigma^k = conv(x_0,...,x_k)$ be a $k$-simplex in $\ensuremath{\mathbb{R}} ^n$. Further, let $A$ be the simplexes of a triangulation of $\sigma^k$ and $B$ be the set of vertices of $A$. An A-Sperner map for this triangulation of $\sigma^k$ is a map $h:B\rightarrow \{0,\dots,k\}$ such that, if \[ J \subseteq \{0,...,k\} \mbox{ and } v \in conv\{x_j | j \in J\} \mbox{ then } h(v) \in J.\] A simplex in the triangulation of $\sigma^k$ is called full if the image of its vertices under $h$ maps onto $\{0,\dots,k\}$. (Definitions from van Mill p. 103) \begin{sperner} (van Mill p.103) Let $\sigma^k$ be a $k$-simplex in $\ensuremath{\mathbb{R}} ^n$ and let $A$ be the set of simplexes in a triangulation of $\sigma^k$. If $h$ is an A-Sperner map for $\sigma^k$ then the number of full simplexes in $A$ is odd and hence non-zero. \end{sperner} \end{comment Let $\sigma^k = conv(x_0,...,x_k)$ be a $k$-simplex in $\ensuremath{\mathbb{R}} ^n$. Let $T$ be a triangulation of $\sigma^k$ and $V$ be the set of vertices of $T$ (i.e., the vertices of simplices in $T$). A \emph{Sperner labelling} of the triangulation $T$ is a labelling function $\ell:V\rightarrow \{0,\dots,k\}$ such that \[ \mbox{ if } J \subseteq \{0,\dots,k\} \mbox{ and } v \in conv\{x_j | j \in J \} \mbox{, then } h(v) \in J. \] A $k$-simplex $\tau$ of $T$ is called a {\em fully-labelled simplex} (or {\em full}) if the image of the vertices of $\tau$ under $\ell$ maps onto $\{0,\dots,k\}$. Note that $\tau$ has exactly $k+1$ vertices, so all the vertices have distinct labels. \begin{sperner} Let $\sigma^k$ be a $k$-simplex in $\ensuremath{\mathbb{R}} ^n$ with triangulation $T$ and let $\ell$ be a Sperner-labelling of $T$. Then the number of full simplices of $T$ is odd (and hence, non-zero). \end{sperner} Though we will not prove this theorem here, an exposition of such proofs can be found in \cite{Su99}. In particular, there are constructive ``path-following'' proofs that locate the full simplex by tracing a path of simplices through the triangulation. Such path-following proofs have formed the basis of algorithms for locating fixed points of functions in finite-dimensional spaces, e.g., see \cite{Todd76} for a nice survey. In Section \ref{sec:fixed-dio}, we show how to use Sperner's lemma for a fixed point theorem in the infinite-dimensional space $\ensuremath{\Delta^\infty_0} $. Another crucial theorem for our purposes states that, under appropriate hypotheses, the existence of approximate fixed points implies the existence of fixed points. On the metric space $(X,d)$, we can quantify the notion of an approximate fixed point by defining an \textit{$\epsilon$-fixed point}, which for a given function $f$ is a point $x\in X$ such that $d(x,f(x))<\epsilon$. 
Versions of the following lemma may be found in, e.g., \cite{DuGr82, Smar74}. \begin{lemma}[Epsilon Fixed Point Theorem] \label{le epsilonfixed} Suppose that $A$ is a compact subset of the metric space $(X,d)$ and that $f:A\rightarrow A$ is continuous. If $f$ has an $\epsilon$-fixed point for every $\epsilon > 0$ then $f$ has a fixed point. \end{lemma} \begin{proof} Let $\{a_n\}$ be a sequence of $1/n$-fixed points. That is, $d(a_n,f(a_n)) < 1/n$ for all $n$. Since $A$ is compact it is sequentially compact and thus $\{a_n\}$ has a convergent subsequence, which we denote $\{a'_n\}$ with $a_n'\rightarrow x \in A$. Let $\epsilon >0$. Since $a_n'\rightarrow x$ there exists $N_1$ such that $n\geq N_1$ implies that $d(a'_n,x) < \epsilon /2$. Let $N = \max ( N_1, 2/\epsilon)$. Then $n\geq N$ implies that \[ d(x, f(a'_n)) \leq d(x,a'_n) + d(a'_n,f(a'_n)) < \epsilon, \] so that $f(a'_n) \rightarrow x$. However, since $f$ is continuous, we also know that $f(a'_n)\rightarrow f(x)$. Since limits are unique, we conclude that $f(x)=x$, which completes the proof. \end{proof} Later it will be desirable to have an isometry between $\Delta^{n-1}$, the standard $(n-1)$-simplex in $\ensuremath{\mathbb{R}} ^{n}$, and $F^{n-1}$. The easiest way to do this is to consider $\ensuremath{\mathbb{R}} ^n$ as a subspace of $\ensuremath{\mathbb{R}} ^\infty$ by projection onto the first $n$ factors, and restricting the metric on $\ensuremath{\mathbb{R}} ^\infty$ to $\ensuremath{\mathbb{R}} ^{n}$. Call this metric $\bar d_{n}$ and consider $\Delta^{n-1}$ in the metric space $(\ensuremath{\mathbb{R}} ^{n}, \bar d_{n})$. It is worthwhile to ensure that $(\ensuremath{\mathbb{R}} ^{n}, \bar d_n)$ has a rich supply of continuous functions. Before proceeding, recall that all norms on $\ensuremath{\mathbb{R}} ^n$ are equivalent and thus essentially interchangeable; we now prove that $\bar d_n$ is interchangeable with norm-induced metrics on bounded sets. \begin{lemma} \label{le metricequivalence} Let $A$ be a bounded subset of the normed space $(\ensuremath{\mathbb{R}} ^{n} , \| \cdot \|_\infty)$. On $A$, the metric $\bar d_n$ is equivalent to the metric induced by the norm $\| \cdot \|_\infty$. \end{lemma} \begin{proof} Suppose that $x,y \in \ensuremath{\mathbb{R}} ^n$. We see that \[ \bar d_n(x,y) = \sum_{i=1}^n \frac{|x_i - y_i|}{2^i(1+|x_i-y_i|)} \leq n\|x-y\|_\infty. \] Now, since $A$ is bounded, there is some $M$ such $\|x-y\|_\infty \leq M$ for $x,y \in A$. Thus we see that \[\frac{\|x-y\|_\infty}{2^n(1+M)} \leq \frac{\|x-y\|_\infty}{2^n(1+\|x-y\|_\infty)} \leq \bar d_n(x,y), \] which implies that \begin{equation} \label{eq:norm-bound} \|x-y\|_\infty \leq 2^n(1+M)\bar d_n(x,y). \end{equation} Thus $\bar d_n$ is equivalent to the metric induced by the norm on $A$.\end{proof} Lemma \ref{le metricequivalence} tells us that bounded subsets of $\ensuremath{\mathbb{R}} ^n$ have the same continuous functions regardless of whether they are considered as subsets of a normed space or as subsets of $(\ensuremath{\mathbb{R}} ^n , \bar d_n)$. Importantly, notice that $\Delta^{n-1}$ is bounded. Furthermore, the isometry $f:\Delta^{n-1} \rightarrow F^{n-1}$ between $\Delta^{n-1}$ in $(\ensuremath{\mathbb{R}} ^n,\bar d_n)$ and $F^{n-1}$ in $\ensuremath{\mathbb{R}} ^\infty$ is clearly given by $f(x) = f(x_1,x_2,\dots,x_n) = (x_1,x_2,\dots,x_n,0,0,\dots)$. This is important because it implies that $F^{n-1}$ has an arbitrarily small barycentric subdivision. 
Recall that the diameter of a set $X$ is $d(X) = \sup_{x,y\in X} d(x,y)$ and if $\mathscr T$ is a family of sets, then $size(\mathscr T) = \sup_{\sigma \in \mathscr T} d(\sigma)$. Thus, given $\epsilon >0$, $F^{n-1}$ has a barycentric subdivision $\mathscr T$ with $size(\mathscr T) < \epsilon$. \begin{comment}**************** \begin{proof} As proved in (include reference number) the standard $(n-1)$-simplex in the normed space $\ensuremath{\mathbb{R}} ^n$ as an arbitrarily small barycentric subdivision, and thus Lemma 2 tells us that the standard $(n-1)$-simplex in the metric space $(\ensuremath{\mathbb{R}} ^n,\bar d_n)$ has an arbitrarily fine subdivision. It is clear that this subdivision is preserved by the isometry between $F^{n-1}$ and $\Delta^{n-1}$. \end{proof} \end{comment}************************ Now we are ready to prove a fixed point theorem for $\ensuremath{\Delta^\infty_0} $. \section{A Fixed Point Theorem for $\ensuremath{\Delta^\infty_0} $} \label{sec:fixed-dio} \begin{theorem} \label{fixed-dio} Suppose that $f:\ensuremath{\Delta^\infty_0} \rightarrow \ensuremath{\Delta^\infty_0} $ is continuous. Then $f$ has a fixed point. \end{theorem} \begin{proof} Since $\ensuremath{\Delta^\infty_0} $ is compact, by Lemma \ref{le epsilonfixed}, it is sufficient to show that $f$ has an $\epsilon$-fixed point for each $\epsilon >0$. Let $\epsilon >0$ be given. Choose $N \geq \log_2(2/\epsilon)+1$. Notice that for $x,y \in \ensuremath{\Delta^\infty_0} $, this implies that \begin{equation} \label{eq:eps-over-2-N+1on} \sum_{i=N+1}^\infty \frac{|x_i - y_i|}{2^i(1+|x_i-y_i|)} \leq \sum_{i=N+1}^\infty \frac{1}{2^i} < \frac{\epsilon}{2}. \end{equation} Since $f$ maps between countably infinite-dimensional spaces, we can write $f$ in terms of its components: $f(x) = (f_1(x),f_2(x),\dots)$. Since $f$ is continuous, $f_i$ is continuous for each $i$. Consider the function \[ g(x) =(g_1(x),g_2(x),\dots) = (f_1(x),f_2(x),\dots, f_N(x) , 1-\sum_{i=1}^N f_i(x), 0,0,0,\dots ). \] Since each $f_i$ is continuous and finite sums of continuous function are continuous, $g_i$ is continuous for each $i$. Furthermore, we see that $g:F^N \rightarrow F^N$. Consequently, $g$ is continuous. Let $\epsilon_0 = \frac{\epsilon}{8(N+1)}$ and $\epsilon_1 = \frac{\epsilon}{2^{N+5}(N+1)}$. Since $g$ is continuous on a compact set, it is uniformly continuous. Thus there exists $\delta_1 >0$ such that $\bar d(x,y) < \delta_1$ implies that $\bar d(g(x),g(y)) < \epsilon_1$. Let $\delta = \min ( \delta_1 , \epsilon_1)$. Since $F^N$ can be triangulated with an arbitrarily small triangulation, let $\mathscr T$ be a triangulation with $size(\mathscr T) < \delta$. Label the vertices of $\mathscr T$ with the map \[ \ell(x) = \mbox{argmax}_i (x_i - g_i(x)). \] Recall that the \emph{argmax} function returns the index of the largest element of the argument, and if there are multiple indices that give the maximum value, the argmax function returns the least of these indices. Observe that $\ell(x)$ produces a Sperner labeling on the vertices of $\mathscr T$. Thus by Sperner's Lemma, there exists a fully-labeled simplex in $\mathscr T$. This simplex can be found using the path-following method described in \cite{Su99}. Let $\{x^1,x^2, \dots x^{N+1}\}$ be the vertices of this simplex where the index of each vertex is its Sperner label. From this, we see that for all $j$, \[ x^i_i - g_i(x^i) \geq x^i_j - g_j(x^i). 
\] Furthermore, since for each $x$ in $F^N$, we have \[ \sum_{j=1}^{N+1} x_j = \sum_{j=1}^{N+1} g_j(x) =1, \] there is at least one $j$ such that $g_j(x) \leq x_j$. In particular, since $\ell(x^i)=i$, this implies that for each $x^i$, \[ x^i_i - g_i(x^i) = \max_j ( x^i_j - g_j(x^i)) \geq 0. \] Since $size(\mathscr T) < \delta$ we have that, for all $i$, $\bar d(x^1, x^i) < \delta$. From the bound (\ref{eq:norm-bound}) in Lemma \ref{le metricequivalence} (note in this case $M=1$ and $n=N+1$), we find that for all $i, j$, \begin{equation} \label{eq:delta-bound} |x^1_j - x^i_j| < 2^{N+2}\delta \leq 2^{N+2} \epsilon_1 \leq \epsilon_0. \end{equation} By the same logic, we have that for all $i, j$, \begin{equation} \label{eq:delta-eps-bound} |g_j(x^1) - g_j(x^i)| < 2^{N+2}\epsilon_1 \leq \epsilon_0. \end{equation} Consequently, we have that \[ x^1_j + \epsilon_0 > x^i_j \quad \mbox{ and } \quad -g_j(x^i) < \epsilon_0 - g_j(x^1) \] which, in turn, implies that \[ 2\epsilon_0 + x^1_j - g_j(x^1) > x^i_j - g_j(x^i) \] for all $i$ and $j$. In particular, this implies that the following list of inequalities hold (simply let $i=j$ and run through all $i$): \[ \begin{array}{cccc} 2\epsilon_0 + x^1_1 - g_1(x^1) & > &x^1_1 - g_1(x^1)& \geq 0 , \\ 2\epsilon_0 + x^1_2 - g_2(x^1) & > &x^2_2 - g_2(x^2)& \geq 0 , \\ \vdots & & \vdots \\ 2\epsilon_0 + x^1_{N+1} - g_{N+1}(x^1) & > & x^{N+1}_{N+1} - g_{N+1}(x^{N+1})& \geq 0 . \end{array} \] Summing down each column yields the following inequality. \[2\epsilon_0(N+1) + \sum_{i=1}^{N+1} x^1_i - \sum_{i=1}^{N+1} g_i(x^1) > \sum_{i=1}^{N+1}\left( x_i^i-g_i(x^i)\right) \geq 0. \] Now we recall that for all $i$, $x_i^i-g_i(x^i) \geq 0$ and \[ \sum_{i=1}^{N+1} x^1_i - \sum_{i=1}^{N+1} g_i(x^1)=1-1 =0. \] Consequently, \[ \begin{split} 2\epsilon_0(N+1)& = 2\epsilon_0(N+1)+ \sum_{i=1}^{N+1} x^1_i - \sum_{i=1}^{N+1} g_i(x^1) \\ & >\sum_{i=1}^{N+1}\left( x_i^i-g_i(x^i)\right)\\ &=\sum_{i=1}^{N+1}\left| x_i^i-g_i(x^i)\right|. \end{split} \] Using (\ref{eq:delta-bound}) and (\ref{eq:delta-eps-bound}) and the continuity of $g$, for all $i$, we have that: $ |x^1_i - g_i(x^1)| \leq |x^1_i - x^i_i| + |x^i_i - g_i(x^i)| + |g_i(x^i)-g_i(x^1)| < 2 \epsilon_0 + |x^i_i - g_i(x^i)| $. Hence, \[ \begin{split} \bar d(x^1 , g(x^1)) = \sum_{i=1}^{N+1} \frac{|x^1_i - g_i(x^1)|}{2^i(1+|x^1_i - g_i(x^1)|)} & \leq \sum_{i=1}^{N+1} |x^1_i - g_i(x^1)| \\ & < \sum_{i=1}^{N+1} \left(2 \epsilon_0 + |x^i_i - g_i(x^i)|\right) \\ & < 4(N+1)\epsilon_0 \\ & = \frac{\epsilon}{2}. \end{split} \] Let $y = (x^1_1,x^1_2,\dots,x^1_N,0,0,0,\dots)$. We see that \begin{equation} \label{eq:eps-over-2-1toN} \begin{split} \sum_{i=1}^{N} \frac{|y_i - f_i(y)|}{2^i(1+|y_i - f_i(y)|)} &=\sum_{i=1}^{N} \frac{|y_i - g_i(y)|}{2^i(1+|y_i - g_i(y)|)}\\ & = \sum_{i=1}^{N} \frac{|x^1_i - g_i(x^1)|}{2^i(1+|x^1_i - g_i(x^1)|)} \\ & \leq \sum_{i=1}^{N+1} \frac{|x^1_i - g_i(x^1)|}{2^i(1+|x^1_i - g_i(x^1)|)} \\ & < \frac{\epsilon}{2}. \end{split} \end{equation} From (\ref{eq:eps-over-2-N+1on}) and (\ref{eq:eps-over-2-1toN}), we have \[ \begin{split} \bar d(y,f(y)) & = \sum_{i=1}^\infty \frac{|y_i - f_i(y)|}{2^i(1+|y_i - f_i(y)|)} \\ & =\sum_{i=1}^{N} \frac{|y_i - f_i(y)|}{2^i(1+|y_i - f_i(y)|)} +\sum_{i=N+1}^\infty \frac{|y_i - f_i(y)|}{2^i(1+|y_i - f_i(y)|)} \\ & < \frac{\epsilon}{2}+\frac{\epsilon}{2}\\ &=\epsilon . 
\end{split} \] Therefore, $y$ is the desired $\epsilon$-fixed point.\end{proof} Notice that the construction of the $\epsilon/2$ fixed point in $F^N$ in the proof above is identical to the construction of an $\epsilon/2$ fixed point for an arbitrary continuous function on $\Delta^N$, because of the isometry between the two sets. This construction, in conjunction with Lemmas \ref{le epsilonfixed} and \ref{le metricequivalence}, provides a proof of the Brouwer Fixed Point Theorem on the finite-dimensional simplex, which is similar to constructions found in, e.g., \cite{Todd76}. \section{Schauder's Theorem} A well-known infinite-dimensional fixed point theorem that holds for normed spaces is Schauder's theorem \cite{DuGr82, Smar74}: \begin{sch} Suppose that $X$ is a compact convex subset of the normed space $G$. If $f:X\rightarrow X$ is continuous, then $f$ has a fixed point. \end{sch} In this section we show how our proof of Theorem \ref{fixed-dio} can be used to prove Schauder's Theorem for the case where $X$ is infinite-dimensional. (The finite-dimensional version of Schauder's Theorem reduces to the Brouwer Fixed Point Theorem.) Recall that a space $X$ has the \textit{fixed point property}. if every continuous function $f:X\rightarrow X$ has a fixed point. Note that this is a topological property, so if $X$ is homeomorphic to $Y$ then $Y$ also has the fixed point property. We will establish Schauder's theorem by noting that $\ensuremath{\Delta^\infty_0} $ is homeomorphic to any infinite-dimensional compact convex subset of a normed space. Define the vector space $H$ to be \[H = \{x \in \ensuremath{\mathbb{R}} ^\infty | \sum_{i=1}^\infty \frac{|x_i|}{2^i} < \infty \}.\] It is not difficult to see that $H$ is indeed a vector space. Furthermore, we see that $\|x\| = \sum_{i=1}^\infty \frac{|x_i|}{2^i}$ defines a norm on this space and the closure of the standard simplex in $H$ is \[ \ensuremath{\Delta^H_0} = \{x\in H | \sum_{i=1}^\infty x_i \leq 1 \mbox{ and } 0\leq x_i\leq 1\} . \] \begin{prop} $\ensuremath{\Delta^\infty_0} $ is homeomorphic to $\ensuremath{\Delta^H_0} $. \end{prop} The proof of this lemma is trivial using the homeomorphism $g:\ensuremath{\Delta^\infty_0} \rightarrow \ensuremath{\Delta^H_0} $ being $g(x)=x$. Note that $\ensuremath{\Delta^H_0} $ is an infinite-dimensional compact convex subset of a normed space $H$. Now consider the following proposition \cite{Klee55}: \begin{prop} Every infinite-dimensional compact convex subset of a normed space is homeomorphic to the Hilbert Cube. \end{prop} The significance of these propositions is that \emph{every} infinite-dimensional compact convex subset of a normed space is homeomorphic to $\ensuremath{\Delta^\infty_0} $. Thus Theorem \ref{fixed-dio} implies the infinite-dimensional case of Schauder's Theorem. \begin{comment This result relies on Keller's Theorem, which states that every infinite-dimensional compact convex subset of the Hilbert Cube is homeomorphic to the Hilbert Cube, and the following theorem: \begin{theorem} Every compact convex subset of a normed space is linearly homeomorphic to a compact convex subset of the Hilbert Cube. \end{theorem} Though \cite{Smart74} states this for Banach spaces, he proves it for normed spaces.\footnote{It is worth noting that, to achieve full generality, the proof in (reference Smart) relies on a consequence of the Hahn-Banach Theorem, and thus on the Axiom of Choice. However, in specific cases, such as inner product spaces, the Hahn-Banach Theorem can be avoided. 
Also, to fully understand the proof in \cite{Smart74} one must recall that compact subsets of normed spaces are separable and that a continuous bijective map from a compact space to a Hausdorff space is a homeomorphism.} Furthermore, notice that linear homeomorphisms preserve the dimension of a set. Thus Theorem 3 gives us that every infinite-dimensional compact convex subset of a normed space is homeomorphic to an infinite dimensional compact convex subset of the Hilbert Cube. Theorem 3, in conjunction with Keller's Theorem, which can be found in \cite{VanM89}, gives us that all infinite-dimensional compact convex subsets of normed spaces are homeomorphic to the Hilbert Cube. \end{comment \bibliographystyle{plain}
{ "timestamp": "2006-10-24T03:10:17", "yymm": "0610", "arxiv_id": "math/0610707", "language": "en", "url": "https://arxiv.org/abs/math/0610707", "abstract": "We define the infinite dimensional simplex to be the closure of the convex hull of the standard basis vectors in R^infinity, and prove that this space has the 'fixed point property': any continuous function from the space into itself has a fixed point. Our proof is constructive, in the sense that it can be used to find an approximate fixed point; the proof relies on elementary analysis and Sperner's lemma. The fixed point theorem is shown to imply Schauder's fixed point theorem on infinite-dimensional compact convex subsets of normed spaces.", "subjects": "General Topology (math.GN); Classical Analysis and ODEs (math.CA); Combinatorics (math.CO)", "title": "A fixed point theorem for the infinite-dimensional simplex", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9744347860767304, "lm_q2_score": 0.8311430436757313, "lm_q1q2_score": 0.8098946939633238 }

Further sample entries from the dataset preview, listed here by title and arXiv link (their abstract, text, and meta fields are truncated in the preview):

  • Involution pipe dreams (https://arxiv.org/abs/1911.12009)
  • A New Fractional Derivative with Classical Properties (https://arxiv.org/abs/1410.6535)
  • Computing discrete equivariant harmonic maps (https://arxiv.org/abs/1810.11932)
  • A visualisation for conveying the dynamics of iterative eigenvalue algorithms over PSD matrices (https://arxiv.org/abs/2204.00383)
  • Shortest closed curve to inspect a sphere (https://arxiv.org/abs/2010.15204)
  • Browder's Theorem through Brouwer's Fixed Point Theorem (https://arxiv.org/abs/2107.02428)
  • Density of binary disc packings: lower and upper bounds (https://arxiv.org/abs/2107.14079)

AutoMathText

AutoMathText is an extensive and carefully curated dataset of around 200 GB of mathematical text, compiled from a diverse range of sources including general websites, arXiv, and GitHub (OpenWebMath, RedPajama, Algebraic Stack). The content has been autonomously selected and labeled by the state-of-the-art open-source language model Qwen-72B: each item is assigned a score, lm_q1q2_score, in the range [0, 1] reflecting its relevance, quality, and educational value in the context of mathematical intelligence.

GitHub homepage: https://github.com/yifanzhang-pro/AutoMathText

ArXiv paper: https://arxiv.org/abs/2402.07625

Objective

The primary aim of the AutoMathText dataset is to provide a comprehensive and reliable resource for a wide array of users - from academic researchers and educators to AI practitioners and mathematics enthusiasts. This dataset is particularly geared towards:

  • Facilitating advanced research in the intersection of mathematics and artificial intelligence.
  • Serving as an educational tool for learning and teaching complex mathematical concepts.
  • Providing a foundation for developing and training AI models specialized in processing and understanding mathematical content.

Configs

configs:
  - config_name: web-0.50-to-1.00
    data_files:
      - split: train
        path:
          - data/web/0.95-1.00.jsonl
          - data/web/0.90-0.95.jsonl
          - ...
          - data/web/0.50-0.55.jsonl
    default: true
  - config_name: web-0.60-to-1.00
  - config_name: web-0.70-to-1.00
  - config_name: web-0.80-to-1.00
  - config_name: web-full
    data_files: data/web/*.jsonl
  - config_name: arxiv-0.50-to-1.00
    data_files:
      - split: train
        path:
          - data/arxiv/0.90-1.00/*.jsonl
          - ...
          - data/arxiv/0.50-0.60/*.jsonl
  - config_name: arxiv-0.60-to-1.00
  - config_name: arxiv-0.70-to-1.00
  - config_name: arxiv-0.80-to-1.00
  - config_name: arxiv-full
    data_files: data/arxiv/*/*.jsonl
  - config_name: code-0.50-to-1.00
    data_files:
      - split: train
        path:
          - data/code/*/0.95-1.00.jsonl
          - ...
          - data/code/*/0.50-0.55.jsonl
  - config_name: code-python-0.50-to-1.00
    data_files:
      - split: train
        path:
          - data/code/python/0.95-1.00.jsonl
          - ...
          - data/code/python/0.50-0.55.jsonl
  - config_name: code-python-0.60-to-1.00
  - config_name: code-python-0.70-to-1.00
  - config_name: code-python-0.80-to-1.00
  - config_name: code-jupyter-notebook-0.50-to-1.00
    data_files:
      - split: train
        path:
          - data/code/jupyter-notebook/0.95-1.00.jsonl
          - ...
          - data/code/jupyter-notebook/0.50-0.55.jsonl
  - config_name: code-jupyter-notebook-0.60-to-1.00
  - config_name: code-jupyter-notebook-0.70-to-1.00
  - config_name: code-jupyter-notebook-0.80-to-1.00
  - config_name: code-full
    data_files: data/code/*/*.jsonl
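
The config names above simply group the score buckets; if you only need a single bucket, you can also point load_dataset at its file directly. A minimal sketch, assuming the data file layout listed above:

from datasets import load_dataset

# Load only the top-scoring web bucket (path taken from the config listing above).
top_web = load_dataset("math-ai/AutoMathText",
                       data_files={"train": "data/web/0.95-1.00.jsonl"})
print(top_web["train"][0].keys())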

How to load data:

from datasets import load_dataset

ds = load_dataset("math-ai/AutoMathText", "web-0.50-to-1.00") # or any valid config_name

Features

  • Volume: Approximately 200 GB of text data (in natural language and programming languages).
  • Content: A diverse collection of mathematical texts, including but not limited to research papers, educational articles, and code documentation.
  • Labeling: Every text is scored by Qwen-72B, a sophisticated language model, ensuring a high standard of relevance and accuracy (see the filtering sketch after this list).
  • Scope: Covers a wide spectrum of mathematical topics, making it suitable for various applications in advanced research and education.
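
Because every record carries its Qwen-72B score, you can apply a stricter quality threshold yourself. A minimal sketch, assuming the score is exposed through the meta field as in the arXiv sample rows shown earlier (the exact field layout may differ for the web and code slices):

import json

from datasets import load_dataset

scored = load_dataset("math-ai/AutoMathText", "arxiv-0.80-to-1.00",
                      split="train", streaming=True)

def q1q2_score(example):
    # lm_q1q2_score sits inside the meta field in the arXiv sample rows above;
    # depending on the config, meta may arrive as a dict or a JSON-encoded string.
    meta = example["meta"]
    if isinstance(meta, str):
        meta = json.loads(meta)
    return float(meta.get("lm_q1q2_score", 0.0))

high_quality = scored.filter(lambda ex: q1q2_score(ex) >= 0.9)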

Citation

We appreciate your use of AutoMathText in your work. If you find this repository helpful, please consider citing it and starring this repo. Feel free to contact zhangyif21@tsinghua.edu.cn or open an issue if you have any questions (GitHub homepage: https://github.com/yifanzhang-pro/AutoMathText).

@article{zhang2024automathtext,
      title={AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts},
      author={Zhang, Yifan and Luo, Yifan and Yuan, Yang and Yao, Andrew Chi-Chih},
      journal={arXiv preprint arXiv:2402.07625},
      year={2024},
}