\section{Online PCA} \label{sec:inefficient}
Let $\matX=[\x_1,\dots,\x_n] \in \R^{d \times n}$ be the input ``high-dimensional'' vectors and $\ell < d$ be the target dimension. Algorithm~\ref{alg1} returns $\matY=[\y_1,\dots,\y_n] \in \R^{\ell \times n}$ that is ``close'' to $\matX$ in the PCA sense (see Theorem~\ref{thm1} below for a precise statement of our quality-of-approximation result regarding Algorithm~\ref{alg1}). Besides $\matX$ and $\ell,$ the algorithm also requires $\| \matX \|_{\mathrm{F}}^2$ in its input. We also assume that $\|\x_t\|_2^2\le\|\matX\|_{\mathrm{F}}^2/\ell$ for all $t$, i.e., that no single $\x_t$ contains ``too much'' of the energy of $\matX$. 
\begin{algorithm}
\begin{algorithmic}
\STATE {\bf Input}: $\matX=[\x_1,\dots,\x_n] \in \R^{d \times n}$ with $\|\x_t\|_2^2\le\|\matX\|_{\mathrm{F}}^2/\ell$, $\ell < d$, 
$\|\matX\|_{\mathrm{F}}^2$
\STATE $\matU = {\bf 0}_{d \times \ell}$;   
\STATE $\matC = {\bf 0}_{d \times d}$; 
\STATE $\theta = 2 \|\matX\|_{\mathrm{F}}^2 /\ell$;
\FOR {$t = 1,...,n$}
\STATE $\rb_t = \x_t- \matU\matU^\top \x_t$
\WHILE {$\|\matC + \rb_t \rb_t^\top \|_2 \geq \theta $}
	\STATE $[\u,\lambda] \gets \operatorname{TopEigenVectorAndValueOf}(\matC)$
	\STATE add $\u$ to the next zero column of $\matU$
	\STATE $\rb_t \gets \x_t - \matU\matU^\top \x_t$
	\STATE $\matC \gets \matC - \lambda \u \u^\top$ % = C(\matI_d-\matU\matU^\top)$
\ENDWHILE
\STATE $\matC \gets \matC + \rb_t \rb_t^\top $
\STATE $\y_t \gets \matU^\top \x_t$
\ENDFOR 
\STATE {\bf Return} $\matY=[\y_1,\dots,\y_n] \in \R^{\ell \times n}$
\end{algorithmic}
\caption{An online algorithm for Principal Component Analysis}\label{alg1}
\end{algorithm}

Algorithm~\ref{alg1} initializes $\matU$ and $\matC$ to $d \times \ell$ and $d \times d $ all-zeros matrices, respectively. It subsequently updates those matrices appropriately. The matrix $\matU$ is the so-called ``projection matrix'', i.e., $\y_t = \matU^\top \x_t$. The matrix $\matC$ is an auxiliary matrix to accumulate the
``residual errors''. The residual for a vector $\x_t$ is: $\rb_t = \x_t - \matU\matU^\top \x_t$.  
The algorithm starts with a rank-one update of $\matC$ as $\matC = \matC+\rb_1 \rb_1^\top$. Notice that, by the assumption on $\x_t,$ we have $\| \rb_1 \rb_1^\top\|_2 = \|\rb_1\|_2^2 \le \|\matX\|_{\mathrm{F}}^2 /\ell < \theta$, and hence for $t=1$ the algorithm will not enter the while-loop. Then, for the second input vector $\x_2,$ the algorithm proceeds by checking the spectral norm of $\matC+\rb_2 \rb_2^\top = \rb_1 \rb_1^\top + \rb_2 \rb_2^\top$. If this does not exceed the threshold  
$\theta$, the algorithm keeps $\matU$ unchanged, and it can go all the way to $t=n$ if this is the case for all $t>1$.  Notice, then, 
that $\| \matC \|_2 \le \theta$ implies that the spectral norm squared of  
$\matR = [\rb_1,\dots,\rb_n]\in \R^{d \times n}$ is bounded by $\theta$, because $\matC = \matR\matR\transp$ and $\|\matR\|_2^2 = \|\matR\matR\transp\|_2$; as we will see in the proof of Theorem~\ref{thm1}, this is a key technical component in proving that the algorithm returns $\y_t$ that are close to $\x_t$ in the PCA sense. 

If, however, for some iterate $t$ the spectral norm of $ \matC+\rb_t \rb_t^\top$ exceeds the threshold $\theta$, then the algorithm performs a ``correction'' to $\matU$ (and consequently to $\rb_t$) in order to restore this invariant. Specifically, it updates $\matU$ with the principal eigenvector of the current instance of $\matC$. At the same time it downdates $\matC$ (inside the while-loop) by removing this eigenvector. 
That way the algorithm ensures that at the end of each iterate $t,$ the following relation is true: 
$\matC = \sum_{t} \rb_t \rb_t^\top - \sum_j \lambda_j \u_j \u_j^\top$ with
$\matC (\sum_j \lambda_j \u_j \u_j^\top) = {\bf 0}_{d \times d}.$ 
Hence, when the condition in the while-loop gives $\| \matC + \rb_t \rb_t^\top\|_2 < \theta$, 
which implies that $ \| \matC \|_2 < \theta$,  
it really means (by an orthogonality argument - see Lemma~\ref{lem2}) that 
$\| \sum_{t} \rb_t \rb_t^\top \|_2 \le \theta $, which is the ultimate goal of the algorithm as we argued before. 
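To make the pseudocode concrete, the following NumPy sketch mirrors Algorithm~\ref{alg1} step by step. The function name \texttt{online\_pca} and the driver data are our own illustration, not part of the algorithm's specification.

```python
import numpy as np

def online_pca(X, ell):
    """Sketch of Algorithm 1: online PCA via residual-covariance thresholding.

    X   : (d, n) array; column x_t arrives at step t.
    ell : target dimension, ell < d.
    Assumes ||x_t||^2 <= ||X||_F^2 / ell for every t, as in the paper.
    Returns Y (ell, n), the projection matrix U (d, ell), and the final C.
    """
    d, n = X.shape
    theta = 2.0 * np.sum(X ** 2) / ell   # threshold 2 ||X||_F^2 / ell
    U = np.zeros((d, ell))               # filled column-by-column in the while-loop
    C = np.zeros((d, d))                 # accumulates residual outer products
    Y = np.zeros((ell, n))
    cols = 0                             # number of eigenvectors inserted so far
    for t in range(n):
        x = X[:, t]
        r = x - U @ (U.T @ x)            # residual of x_t w.r.t. span(U)
        # "Correction" step: while the threshold would be violated, move the
        # top eigenvector of C into U and deflate C.
        while np.linalg.norm(C + np.outer(r, r), 2) >= theta:
            vals, vecs = np.linalg.eigh(C)     # ascending eigenvalues
            lam, u = vals[-1], vecs[:, -1]     # top eigen-pair of C
            U[:, cols] = u
            cols += 1
            r = x - U @ (U.T @ x)
            C = C - lam * np.outer(u, u)       # deflate C
        C = C + np.outer(r, r)
        Y[:, t] = U.T @ x
    return Y, U, C
```

Note that checking the while-loop condition costs $O(d^3)$ per check, matching the operation count in Theorem~\ref{thm1}.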


\subsection{Main result} The following theorem is our main quality-of-approximation result regarding Algorithm~\ref{alg1}. We prove the theorem based on several other facts which we state and prove in the next section. 
\begin{theorem}\label{thm1} Let $\matX=[\x_1,\dots,\x_n] \in \R^{d \times n}$ with $\|\x_t\|_2^2\le\|\matX\|_{\mathrm{F}}^2/\ell$, $\ell < d$, and $\|\matX\|_{\mathrm{F}}^2$ be inputs to Algorithm~\ref{alg1}. For any target dimension $k < d,$ and any accuracy parameter $\varepsilon > 0,$ 
let the target dimension of the algorithm be $\ell = \ceil{ 8 k /\varepsilon^2} < d$.  Then:
\begin{enumerate}
\item The algorithm terminates and the while-loop runs at most $\ell$ times.
\item Upon termination of the algorithm: 
$$
\min_{\matPhi \in \iso_{d, \ell}} \sum_{t=1}^{n} \|\x_t - \matPhi \y_t\|_2^2 \le \OPT_k+ \varepsilon \cdot \|\matX\|_{\mathrm{F}}^2.
$$
\item The algorithm uses $O(d^3 k / \varepsilon^2)$ arithmetic operations per vector $\x_t$.
\item The algorithm uses $O(d^2)$ auxiliary space. 
\end{enumerate}
\end{theorem}
\begin{proof}
To prove the theorem we use several results proven in Section~\ref{sec:lemmas}. 

Termination is ensured in Lemma~\ref{lem3} where we argue that the while-loop in the algorithm can be executed at most $\ell$ times. 

To prove the second item in the theorem, 
let $\matR$ be the $d \times n$ matrix containing $\rb_t$ in its $t$-th column\footnote{Here, $\rb_t$ is the vector at the end of the corresponding iterate of the algorithm.}, and let $\matY$ be the $\ell \times n$ matrix containing $\y_t$ in its $t$-th column. In Lemma~\ref{lem1} we prove: 
$$
\min_{\matPhi \in \iso_{d, \ell}} \sum_{t=1}^{n} \|\x_t - \matPhi \y_t\|_2^2 \le \|\matR\|_{\mathrm{F}}^2;
$$
next, in Lemma~\ref{lemRF} we prove: 
$$ 
\|\matR\|_{\mathrm{F}}^2 \le \OPT_k + 2 \cdot \|\matX\|_{\mathrm{F}} \cdot \sqrt{k} \cdot \| \matR \|_2;
$$
and in Lemma~\ref{lem2} we prove:
$$
\|\matR\|_2^2 \leq 2 \|\matX\|_{\mathrm{F}}^2/\ell.
$$
Combining these three bounds gives,
$$
\min_{\matPhi \in \iso_{d, \ell}} \sum_{t=1}^{n} \|\x_t - \matPhi \y_t\|_2^2 \le
\OPT_k + 2 \cdot \|\matX\|_{\mathrm{F}} \cdot \sqrt{k} \cdot \sqrt{2 / \ell} \cdot \|\matX\|_{\mathrm{F}}. 
$$
In words, the reconstruction error of our algorithm depends on the spectral norm of the matrix $\matR$, which we bound in Lemma~\ref{lem2}. Using 
$\ell = \ceil{ 8 k /\varepsilon^2}$ in the latter bound gives 
$2 \cdot \sqrt{2k/\ell} \cdot \|\matX\|_{\mathrm{F}}^2 \le 2 \cdot \sqrt{2k\varepsilon^2/(8k)} \cdot \|\matX\|_{\mathrm{F}}^2 = \varepsilon \cdot \|\matX\|_{\mathrm{F}}^2,$ which wraps up the proof of the second item in the theorem. 

To calculate the number of arithmetic operations, fix some $t$. 
The algorithm requires $O(d \ell)$ operations to compute $\rb_t$. 
It requires a number of operations for the while-loop, which we will account for separately, and then $O(d^2)$ operations to update $\matC$ and another $O(\ell d) $ to compute $\y_t$. The condition of the while-loop requires $O(d^3)$ arithmetic operations. Hence, for a single vector $\x_t$, the cost, excluding the computations that take place inside the while-loop, is $O\left( d^3 + d\ell + d^2 \right)$. 
For all $t$ combined, the while loop will be executed exactly $\ell'$ times. 
Each time the cost is $O(d^3)$ arithmetic operations, for a total of $O(\ell' d^3)$.
From Lemma~\ref{lem3}: 
$ \ell' \le  \ell,$ and the result follows. 

Finally, $O(d^2)$ space is sufficient since the algorithm only requires to store $\matC$ and $\matU$.
\end{proof}

\subsection{Finding the best isometry matrix} 
We remark that our algorithm does \emph{not} compute the ``best'' isometry matrix $\matPhi,$ for which the bound in Theorem~\ref{thm1} holds. 
The algorithm indeed only returns the low-dimensional vectors $\y_t$. The ``best'' $\matPhi$ is related to the so-called Procrustes problem:
$$ \argmin_{\matPhi \in \R^{d \times \ell}, \matPhi\transp \matPhi = \matI_{\ell}} \FNormS{\matX - \matPhi \matY}.$$
Let $\matX \matY\transp = \matW \matSig \matV\transp$ be the thin SVD of 
$\matX \matY\transp$ with $\matW \in \R^{d \times \ell},$ $\matSig \in \R^{\ell \times \ell}$ and $\matV \in \R^{\ell \times \ell}$ (we write $\matW$ to avoid a clash with the projection matrix $\matU$ of the algorithm). Then, $\matPhi =  \matW \matV\transp$. Clearly, it is not possible to construct this matrix in an online fashion. However, our algorithm does find an isometry matrix (the matrix $\matU \in \R^{d \times \ell}$ upon termination of the algorithm) which satisfies the bound in Theorem~\ref{thm1}~(see the proof of Lemma~\ref{lem1} to verify this claim). 
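The closed-form Procrustes solution above is easy to verify numerically. The sketch below (NumPy; the function name and test matrices are our own illustration) computes $\matPhi$ from the thin SVD of $\matX\matY^\top$:

```python
import numpy as np

def best_isometry(X, Y):
    """Procrustes solution: argmin over isometries Phi of ||X - Phi Y||_F.

    X : (d, n), Y : (ell, n). Returns Phi of shape (d, ell) with Phi^T Phi = I.
    """
    W, _, Vt = np.linalg.svd(X @ Y.T, full_matrices=False)  # thin SVD of X Y^T
    return W @ Vt
```

Any competing isometry yields an error at least as large, which can be checked against random orthonormal matrices.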
\section{Auxiliary Lemmas}\label{sec:lemmas}

\subsection{Notation}\label{sec:notation} 
We introduce notation that, though not necessary to describe the algorithm, is useful in the proofs of various results regarding Algorithm~\ref{alg1}. 

Let $\ell'$ be the number of vectors $\u$ inserted in $\matU \in \R^{d \times \ell}$. This is exactly the number of times the while-loop is executed. We argue in Lemma~\ref{lem3} that $\ell' \le  \ell$. Hence, we initialize $\matU$ to an all-zeros matrix with $ \ell$ columns, since there will be at most $\ell$ vectors $\u$ added to it. 
After the execution of the algorithm, $\matU,$ however, might still contain some all-zero columns.

We reserve the index $t$ to refer to the iterate of the algorithm where the algorithm receives $\x_t$.
Also, let $\matU_t \in \R^{d \times \ell}$ denote the projection matrix $\matU$ used for the vector $\x_t$. 

We reserve the index $i$ to index the various instances of the matrix $\matC$ during the execution of the algorithm. Hence, $i$ ranges from $i=1$ ($\matC_1=0_{d \times d}$) to $i = z,$ where $z$ is the number of times the algorithm updates $\matC$. I.e., the matrix $\matC$ after the execution of the algorithm is $\matC_z$. Clearly, $z \ge n$, 
because for each $t$ there is at least one such update (outside the while-loop). Also, 
$z = n+ \ell' \le n + \ell,$ since the 
while-loop is executed at most $\ell$ times.  

We reserve the index $j$ to index the various vectors $\u$ added to the matrix $\matU$ during the execution of the algorithm. I.e., for $j=1,2,\dots,\ell',$ $\u_j \in \R^d$ are the vectors $\u$ computed in some iteration of the while-loop in Algorithm~\ref{alg1}, such that $\u_j$ is computed right before $\u_{j+1}$. Notice that $\u_j$ is the top eigenvector of some $\matC_i$. Also, let $\lambda_j$ be the corresponding largest eigenvalue of $\matC_i$ ($\matC_i$ is a symmetric matrix, as we argue in Eqn.~\eqref{obs0}), such that $ \lambda_j = \u_j^\top \matC_i \u_j.$

Let $\matX \in \R^{d \times n},$ $\matY \in \R^{\ell \times n},$ $\tilde{\matX} \in \R^{d \times n}$ and
$\matR \in \R^{d \times n}$ denote the matrices whose $t$-th columns are $\x_t \in \R^d$, $\y_t \in \R^{\ell}$,
$\tilde{\x}_t = \matU_t \matU_t^\top \x_t \in \R^d$, 
and
$\rb_t \in \R^d$, respectively. The vectors $\rb_t$ are taken to be those at the end of the corresponding iterate.  

By construction of the algorithm the following relation is also true
\begin{equation}\label{obs0} 
\matC_z =  \matR\matR^\top - \sum_{j=1}^{\ell'}\lambda_j \u_j \u_j^\top.
\end{equation}
Recall that $\matC_z$ here is just the matrix $\matC$ upon termination of the algorithm. 

\subsection{Auxiliary lemmas}
The following lemma argues that the vectors $\u$ inserted in $\matU$ are orthogonal to each other. 
This is important in interpreting our algorithm as a ``true'' PCA algorithm, since those vectors play the role of the left singular vectors of $\matX,$ and as such should at the very least be orthogonal to each other. 
\begin{lemma} \label{obs2}
Let $\matU_t \in \R^{d \times \ell}$ be the instance of $\matU$ in Algorithm~\ref{alg1} when calculating the corresponding vector 
$\y_t = \matU_t^\top \x_t$. 
Then, for all $t=1,\dots,n$ and for some $j \in \{0,1,\dots,\ell'\}$ (not necessarily the same $j$ for different $t$):
$$
\matU_t^\top \matU_t = 
\left( \begin{array}{cc}
\matI_{j}  &  \\
 & {\bf 0}_{(\ell - j) \times (\ell - j)}  
 \end{array}
 \right) \in \R^{\ell \times \ell}. 
$$
\end{lemma}
\begin{proof}
It suffices to show that for all $j > 1,$ $\u_j$ is perpendicular to $\u_{j-1},\dots,\u_{1}$.  
We prove the result by induction on $j$.

\paragraph{Base case}
First, we prove the base case: $\u_2^\top \u_1 =0$. Let $(\lambda_1, \u_1)$ be the top eigen-pair of some $\matC_i$ and $(\lambda_2, \u_2)$ be the top eigen-pair 
of some  $\matC_{i'}$ with $i' > i,$ and 
$$\matC_{i'}=\matC_{i} - \lambda_1 \u_1 \u_1^\top + \sum_{h=1}^{\xi_1} \rb_h \rb_h^\top,$$ 
where $\xi_1 \ge 1$ can be arbitrary.  Note also that for all $h$:  $\rb_h = (\matI_d - \u_1 \u_1^\top)\x_h$,
hence 
$$
\matC_{i'}=\matC_{i} - \lambda_1 \u_1 \u_1^\top + \sum_{h=1}^{\xi_1} (\matI_d - \u_1 \u_1^\top)\x_h \x_h^\top(\matI_d - \u_1 \u_1^\top)^\top. 
$$
Also, notice that, since $(\lambda_1, \u_1)$ is the top eigen-pair of $\matC_i$,
$$
\matC_{i} - \lambda_1 \u_1 \u_1^\top =  (\matI_d - \u_1 \u_1^\top) \matG_1 (\matI_d - \u_1 \u_1^\top)^\top,
$$
where $\matG_1 \in \R^{d \times d}$ is, in the eigenbasis of $\matC_i$, the diagonal matrix containing all the eigenvalues of 
$\matC_i$ except $\lambda_1$:
$$
\matG_1 =
\left( \begin{array}{cccc}
 \lambda_2(\matC_i) &  & &\\
   & \dots  & &\\
   &  &  \lambda_{d-1}(\matC_i) &\\
   &  & &0 \\ 
 \end{array}
 \right)
$$ 
Overall,
$$
\matC_{i'} = (\matI_d - \u_1 \u_1^\top)  \left( \matG_1 + \sum_{h=1}^{\xi_1} \x_h \x_h^\top \right) (\matI_d - \u_1 \u_1^\top)^\top. 
$$
Hence, $\u_2$ is in the span of $(\matI_d - \u_1 \u_1^\top),$ 
which shows that $\u_2^\top \u_1=0$. 

\paragraph{Induction hypothesis} Assume that, for some $\phi \ge 2,$
$(\lambda_{\phi}, \u_{\phi})$ is the top eigen-pair of some $\matC_{i''}$
with $i'' > i' > i$. Also, let
$$
\u_{\phi}\transp \u_{\phi-1} = \u_{\phi}\transp \u_{\phi-2} = \dots = \u_{\phi}\transp \u_{1}=0.
$$ 

\paragraph{Induction step} 
Let $(\lambda_{\phi+1},\u_{\phi+1})$ be the top eigen-pair of some $\matC_{i'''}$ 
with $i''' > i'' > i' > i$. The goal here is - using the induction hypothesis - to argue that 
$$
\u_{\phi+1}\transp \u_{\phi} = \u_{\phi+1}\transp \u_{\phi-1} = \dots = \u_{\phi+1}\transp \u_{1}=0.
$$ 
First of all, notice that 
$$\matC_{i'''} = \matC_{i''} -  \lambda_{\phi} \u_{\phi} \u_{\phi}^\top 
+ \sum_{h=1}^{\xi_2} \rb_h \rb_h^\top,$$ 
where $\xi_2 \ge 1$ can be arbitrary. 
Second, for all $h$:  $\rb_h = (\matI_d -\sum_{j=1}^{\phi} \u_j \u_j^\top)\x_h$,
hence 
$$
\matC_{i'''}=\matC_{i''} - \lambda_{\phi} \u_{\phi} \u_{\phi}^\top + \sum_{h=1}^{\xi_2} 
\left(
(\matI_d - \sum_{j=1}^{\phi} \u_j \u_j^\top)\x_h \x_h^\top(\matI_d -\sum_{j=1}^{\phi} \u_j \u_j^\top)^\top
\right)
$$
Third, notice that since $(\lambda_{\phi}, \u_{\phi})$ is the top eigen-pair of 
$\matC_{i''}$: 
$$
\matC_{i''} - \lambda_{\phi} \u_{\phi} \u_{\phi}^\top =  
(\matI_d -  \u_{\phi} \u_{\phi}^\top) \matG_2 
(\matI_d -  \u_{\phi} \u_{\phi}^\top)^\top,
$$
where $\matG_2 \in \R^{d \times d}$ is, in the eigenbasis of $\matC_{i''}$, the diagonal matrix containing all the 
eigenvalues of $\matC_{i''}$ 
except $\lambda_{\phi}$:
$$
\matG_2 =
\left( \begin{array}{cccc}
 \lambda_2(\matC_{i''}) &  & &\\
   & \dots  & &\\
   &  &  \lambda_{d-1}(\matC_{i''}) &\\
   &  & &0 \\ 
 \end{array}
 \right).
$$ 
Overall,
 $$
\matC_{i'''}= 
\underbrace{
(\matI_d -  \u_{\phi} \u_{\phi}^\top) \matG_2 
(\matI_d -  \u_{\phi} \u_{\phi}^\top)\transp}_{\matA_1} + 
\underbrace{\sum_{h=1}^{\xi_2} 
\left( 
(\matI_d - \sum_{j=1}^{\phi} 
\u_j \u_j^\top)\x_h \x_h^\top(\matI_d -\sum_{j=1}^{\phi} \u_j \u_j^\top)\transp
\right)}_{\matA_2}.
$$

For the matrix $\matA_1$: $span(\matA_1) 
= span( \matI_d -  \u_{\phi} \u_{\phi}^\top ),$ a subspace of dimension $d-1$; for the matrix  $\matA_2$: $span(\matA_2) 
= span( \matI_d -  \sum_{j=1}^{\phi} \u_j \u_j^\top)$ and - using the induction hypothesis, i.e., that the vectors $\u_1,\dots,\u_{\phi}$ are orthonormal - 
the dimension of that space is $d-\phi$. Also: 
$$span(\matA_2) \subseteq span(\matA_1),$$ 
hence $span(\matC_{i'''}) \subseteq span( \matI_d -  \u_{\phi} \u_{\phi}^\top ),$ which implies that
$$\u_{\phi+1} \in span( \matI_d -  \u_{\phi} \u_{\phi}^\top ).$$ 
So, $\u_{\phi+1}\transp \u_{\phi} = 0$. Orthogonality of $\u_{\phi+1}$ to $\u_{\phi-1},\dots,\u_{1}$ follows similarly, since $span(\matA_2)$ is orthogonal to all of $\u_1,\dots,\u_{\phi}$ and $span(\matA_1) \subseteq span(\matC_{i''})$, which is orthogonal to $\u_1,\dots,\u_{\phi-1}$. 
\end{proof}
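A small numerical illustration of the inductive step (dimensions and data are arbitrary, chosen only for the demonstration): after deflating the top eigen-pair and accumulating residuals projected away from $\u_1$, the new top eigenvector is orthogonal to $\u_1$.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
A = rng.standard_normal((d, d))
C = A @ A.T                                  # symmetric PSD, plays the role of C_i
vals, vecs = np.linalg.eigh(C)
lam1, u1 = vals[-1], vecs[:, -1]             # top eigen-pair (lambda_1, u_1)
# Deflate, then add rank-one terms built from residuals (I - u1 u1^T) x_h,
# mimicking the updates of C between two executions of the while-loop.
C_next = C - lam1 * np.outer(u1, u1)
P = np.eye(d) - np.outer(u1, u1)
for _ in range(5):
    r = P @ rng.standard_normal(d)
    C_next = C_next + np.outer(r, r)
u2 = np.linalg.eigh(C_next)[1][:, -1]        # top eigenvector of C_{i'}
```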

The following lemma shows that the vectors $\u$ inserted in $\matU$ are ``perpendicular'' to the subspace spanned by the columns of the matrix $\matC_z,$ i.e., the matrix $\matC$ upon termination of the algorithm. Recall that in Eqn.~\eqref{obs0} we showed:
$$
\matR\matR^\top =  \matC_z  + \sum_{j=1}^{\ell'}\lambda_j \u_j \u_j^\top;
$$
hence the lemma argues that $\matC_z$ and  
$\sum_{j=1}^{\ell'}\lambda_j \u_j \u_j^\top$ span two perpendicular subspaces which, combined, span the column space of $\matR\matR^\top$. This will be useful in bounding the spectral norm of $\matR\matR^\top$ in Lemma~\ref{lem2}. The idea is that for such a matrix $\matR\matR^\top,$ its spectral norm can be bounded by the maximum of the spectral norms of $\matC_z $ and $ \sum_{j=1}^{\ell'}\lambda_j \u_j \u_j^\top,$ both of which are bounded by design of the algorithm. 

\begin{lemma} \label{obs3} Let $\matC_z$ be the matrix $\matC$ upon termination in Algorithm~\ref{alg1}. Let $[\lambda_j, \u_j],$ for $j=1,\dots,\ell'$ be the pairs of eigenvalues/eigenvectors computed in Algorithm~\ref{alg1} with $[\lambda_j, \u_j]$ computed before $[\lambda_{j+1}, \u_{j+1}]$. Then,  
$$
\matC_z  \left(\sum_{j=1}^{\ell'} \lambda_j \u_j \u_j^\top\right) = {\bf 0}_{d \times d}.
$$
\end{lemma}
\begin{proof}
Recall the notation from Section~\ref{sec:notation}; 
the matrix $\matC$ in Algorithm~\ref{alg1} evolves as:
$
\matC_1, \matC_2,\dots,\matC_i,\matC_{i+1},\dots,\matC_z;
$ 
and the matrix $\matU$ in Algorithm~\ref{alg1} correspondingly evolves as: 
$\matU_1, \matU_2,\dots,\matU_t,\matU_{t+1},\dots,\matU_n.$
Also, the algorithm either updates $\matC$ outside the while loop or updates 
$\matC$ and $\matU$ together inside the while loop. We will prove the result by induction on 
$i,$ since this covers all the possible changes of $\matC$ and $\matU$ in the algorithm. 


The base case is trivial: $\matC_1 \matU_{1} = {\bf 0}_{d \times \ell}$.
Let $\matC_i$ and $\matU_t$ be the state of $\matC$ and $\matU$ at some instance of the algorithm with $i > 1$; by the induction hypothesis, 
$\matC_{i} \matU_t = {\bf 0}_{d \times \ell}$. 


If the algorithm did not enter the while-loop 
($\matC$ was updated outside the while-loop as $\matC_{i+1} = \matC_{i} + \rb_t \rb_t^\top$ and $\matU_{t}$ remained the same) we have: 
$$
\matC_{i+1} \matU_{t} = (\matC_{i} + \rb_t \rb_t^\top)\matU_{t} 
= (\matC_{i} + (\matI_d - \matU_{t}{\matU_{t}}^\top)\x_t \x_t^\top(\matI_d - \matU_{t}{\matU_{t}}^\top))\matU_{t} = \matC_{i} \matU_{t} = {\bf 0}_{d \times \ell},$$
where the last equality uses the induction hypothesis and Lemma~\ref{obs2} to argue that $\matU_{t}$ contains orthonormal (or zero) columns, hence $(\matI_d - \matU_{t}{\matU_{t}}^\top)\matU_{t} = {\bf 0}_{d \times \ell}$.
 
If the algorithm entered the while-loop ($\matC$ and $\matU$ were updated inside the while-loop as 
$\matC_{i+1} = \matC_{i}- \lambda \u \u^\top$ and $\matU_{t+1} \matU_{t+1}^\top = \matU_{t}{\matU_{t}}^\top + \u \u^\top$) we have:
$$\matC_{i+1} \matU_{t+1} \matU_{t+1}^\top = (\matC_{i}- \lambda \u \u^\top)(\matU_{t}{\matU_{t}}^\top + \u \u^\top) = \matC_{i} \matU_{t}{\matU_{t}}^\top + (\matC_{i} \u -\lambda \u)\u^\top - \lambda \u \u^\top \matU_{t}{\matU_{t}}^\top.
$$
Since $\u$ is an eigenvector of $\matC_{i}$ with eigenvalue $\lambda$ we have 
$\matC_{i}\u = \lambda \u$. This means that 
$\matC_{i} \u -\lambda \u = {\bf 0}_{d \times 1}$ 
and $\lambda \u \u^\top \matU_{t}{\matU_{t}}^\top = \u \u^\top \matC_{i} \matU_{t}{\matU_{t}}^\top = {\bf 0}_{d \times d}$; consequently, $\matC_{i+1} \matU_{t+1} \matU_{t+1}^\top = {\bf 0}_{d \times d},$ which implies   
$\matC_{i+1} \matU_{t+1} = {\bf 0}_{d \times \ell}$, which concludes the proof. 
\end{proof}
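The while-loop case of the induction can also be checked numerically on a constructed toy instance (not a trace of the algorithm): if $\matC\matU = {\bf 0}$ and $(\lambda, \u)$ is the top eigen-pair of $\matC$, the updated pair still satisfies the invariant.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
U = Q[:, :3]                          # current (orthonormal) columns of U
W = Q[:, 3:]                          # orthogonal complement: C lives here, so C U = 0
vals = np.array([5.0, 3.0, 2.0, 1.0, 0.5])
C = W @ np.diag(vals) @ W.T           # symmetric PSD with C @ U = 0
lam, u = vals[0], W[:, 0]             # top eigen-pair of C
C_next = C - lam * np.outer(u, u)     # downdate of C inside the while-loop
U_next = np.column_stack([U, u])      # add u as a new column of U
```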
%

The following two lemmas prove upper and lower bounds for the values $ \lambda_j $ calculated in the algorithm. The lower bounds are useful in upper bounding the number of times the algorithm enters the while-loop (see Lemma~\ref{lem3}); the upper bounds will be useful in providing an upper bound for the spectral norm of the matrix $\matR$ (see Lemma~\ref{lem2}), which is indeed crucial for providing the error accuracy guarantee of the algorithm in Theorem~\ref{thm1}. 


\begin{lemma}[Upper bound on $\lambda_j$'s]
\label{lem24}
Let $\lambda_j,$ for $j=1,\dots,\ell'$ be the eigenvalues computed in Algorithm~\ref{alg1} with $\lambda_j$ computed before $\lambda_{j+1}$. Then, 
for all $j=1,2,...,\ell'$: 
$$ \lambda_j \le   2 \|\matX\|_{\mathrm{F}}^2/ \ell.$$
\end{lemma}
\begin{proof}
Each $\lambda_j$ corresponds to the largest eigenvalue of some $\matC_i$. Hence, it suffices to argue that for all $i=1,\dots,z$:  $\lambda_1(\matC_i)=\|\matC_i\|_2 \le 2 \|\matX\|_{\mathrm{F}}^2/ \ell$. We prove the result by induction on $i$. For $i=1,$ it is $\matC_1 = {\bf 0}_{d \times d}$ and it is trivial that $\lambda_1(\matC_1) = 0 \le 2 \|\matX\|_{\mathrm{F}}^2/ \ell$. By the induction hypothesis: for some $i>1$ let $\|\matC_i\|_2 \le 2 \|\matX\|_{\mathrm{F}}^2/ \ell$. We want to show that  
$\|\matC_{i+1}\|_2 \le 2 \|\matX\|_{\mathrm{F}}^2/ \ell$. Notice that either
$\matC_{i+1} = \matC_{i} + \rb_t \rb_t^\top$ (for some iterate $t$) or 
$\matC_{i+1} = \matC_{i} - \lambda_j \u_j \u_j^\top$ (for some $j$). 
If $\matC_{i+1} = \matC_{i} + \rb_t \rb_t^\top,$ then, 
$\| \matC_{i} + \rb_t \rb_t^\top \|_2 \le 2 \|\matX\|_{\mathrm{F}}^2/ \ell$ is ensured by the while-loop condition in the algorithm, since that update happens outside the while-loop.  If $\matC_{i+1} = \matC_{i} - \lambda_j \u_j \u_j^\top$ (inside the while-loop), then,
\begin{equation}\label{eqnUP}
\TNorm{\matC_{i+1}} = 
\| \matC_{i} - \lambda_j \u_j \u_j^\top\|_2 
= 
\lambda_1(\matC_{i} - \lambda_j \u_j \u_j^\top)
= \lambda_2 (\matC_{i})
\le
\lambda_1 (\matC_{i})
=
\| \matC_{i}\|_2
\le 2 \|\matX\|_{\mathrm{F}}^2/ \ell,
\end{equation}
where $\|\matC_{i}\|_2 \le 2 \|\matX\|_{\mathrm{F}}^2/ \ell$  uses the induction hypothesis. Also, in the third equality we use the fact that $(\lambda_j, \u_j)$ is the ``top'' eigen-pair of $\matC_{i}$ hence ``removing'' it from $\matC_{i}$ turns the second largest eigenvalue of $\matC_{i}$ to be the largest one in $\matC_{i} - \lambda_j \u_j \u_j^\top$. 
\end{proof}
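The third equality in Eqn.~\eqref{eqnUP} can also be observed numerically (arbitrary PSD test matrix): removing the top eigen-pair leaves $\lambda_2(\matC_i)$ as the new largest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
C = A @ A.T                          # symmetric PSD, plays the role of C_i
vals, vecs = np.linalg.eigh(C)       # eigenvalues in ascending order
lam1, u1 = vals[-1], vecs[:, -1]     # top eigen-pair (lambda_1, u_1)
C_deflated = C - lam1 * np.outer(u1, u1)
top_after = np.linalg.eigvalsh(C_deflated)[-1]   # equals lambda_2(C)
```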
\begin{lemma}[Lower bound on $\lambda_j$'s]
\label{lem25}
Let $\lambda_j,$ for $j=1,\dots,\ell'$ be the eigenvalues computed in Algorithm~\ref{alg1} with $\lambda_j$ computed before $\lambda_{j+1}$. Assuming that for all $t$, $\|\x_t\|_2^2 \leq \|\matX\|_{\mathrm{F}}^2/\ell$, then, for all $j=1,2,...,\ell'$: 
$$ \lambda_j \ge   \|\matX\|_{\mathrm{F}}^2/\ell.$$
\end{lemma}
\begin{proof}
Each $\lambda_j$ corresponds to the largest eigenvalue of some $\matC_i$, and, by design of the algorithm, the extraction of $\lambda_j$ from $\matC_i$ happens inside the while-loop. 
Note that the condition in the while-loop is ``$\|\matC+\rb_t \rb_t^\top\|_2 \geq 2 \|\matX\|_{\mathrm{F}}^2/\ell$''. 
This implies that for the iterate $t$:
$$
\|\matC_i\|_2 \geq \|\matC_i+ \rb_t \rb_t^\top\|_2 -\|\rb_t \rb_t^\top\|_2 \geq \|\matX\|_{\mathrm{F}}^2/\ell.
$$
The first inequality follows by the triangle inequality: 
$\|\matC_i+ \rb_t \rb_t^\top\|_2 \le  \|\matC_i\|_2 +  \|\rb_t \rb_t^\top\|_2.$
The second inequality uses 
$\|\matC_i+\rb_t \rb_t^\top\|_2 \geq 2 \|\matX\|_{\mathrm{F}}^2/\ell$ and 
$\| \rb_t \rb_t^\top \|_2 \le \|\matX\|_{\mathrm{F}}^2/ \ell$.  
To verify that $\| \rb_t \rb_t^\top \|_2 \le \|\matX\|_{\mathrm{F}}^2/ \ell$, recall that $\rb_t = (\matI_d - \matU_t \matU_t^\top) \x_t$. Then, using the fact that  
$(\matI_d - \matU_t \matU_t^\top)$ is a projection matrix: 
$ \| \rb_t \rb_t^\top \|_2 =
 \|(\matI_d - \matU_t\matU_t^\top) \x_t \x_t\transp (\matI_d - \matU_t\matU_t^\top) \|_2
 =
 \|(\matI_d - \matU_t\matU_t^\top) \x_t \|_2  \cdot 
 \|\x_t\transp (\matI_d - \matU_t\matU_t^\top) \|_2  
 \le  \|\x_t  \|_2^2 
 \le \|\matX\|_{\mathrm{F}}^2/ \ell,$
where the last inequality uses the assumption in the lemma. 
\end{proof}
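The bound $\|\rb_t\rb_t^\top\|_2 = \|\rb_t\|_2^2 \le \|\x_t\|_2^2$ used above is easy to confirm numerically (random data, arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 10
x = rng.standard_normal(d)
U = np.linalg.qr(rng.standard_normal((d, 3)))[0]  # orthonormal columns, like U_t
r = x - U @ (U.T @ x)                             # residual (I - U U^T) x
spec = np.linalg.norm(np.outer(r, r), 2)          # spectral norm of r r^T
```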


Next, we show that the matrices $\matX, \tilde{\matX},$ and $\matR$ satisfy some form of the Pythagorean theorem for matrices. This result will be useful in a technical manipulation in Lemma~\ref{lemRF}, where we provide an upper bound for $\FNormS{\matR}$. 
\begin{lemma}\label{pythagoras}
Let $\matX \in \R^{d \times n},$ $\tilde{\matX} \in \R^{d \times n}$ and
$\matR \in \R^{d \times n}$ be the matrices whose $t$'th column is $\x_t \in \R^d$, 
$\tilde{\x}_t = \matU_t \matU_t^\top \x_t \in \R^d$,  
and $\rb_t \in \R^d$, respectively (the vectors $\rb_t$'s are taken as the ones in the end of the corresponding iteration in Algorithm~\ref{alg1}). Then,  
$$\FNormS{\matX} = \FNormS{\tilde{\matX}} + \FNormS{\matR}.$$
\end{lemma}
\begin{proof}
We will prove the equivalent relation that $\FNormS{\matX\transp} = \FNormS{\tilde{\matX}^\top} + \FNormS{\matR\transp}$. 
Recall the definition of the matrices $\matX, \tilde{\matX}$ and $\matR$ in Section~\ref{sec:notation}. 
From this definition, $\matX\transp =  \tilde{\matX}\transp + \matR\transp$. 
The Pythagorean theorem for matrices indicates that if 
$\trace\left( \tilde{\matX}^\top \matR \right) = \trace\left( \matR^\top \tilde{\matX}\right) = 0$, then 
$\FNormS{\matX^\top} = \FNormS{\tilde{\matX}^\top} + \FNormS{\matR^\top}$\footnote{We  prove this version of the Pythagorean theorem for matrices. Let $\matX, \matY$ be matrices with
$\trace\left( \matX \matY^\top \right) = \trace\left( \matY \matX^\top \right) = 0$;~then,
$\FNorm{\matX+\matY}^2 = \FNorm{\matX}^2+\FNorm{\matY}^2$. We prove this bound as follows:
$\FNorm{\matX+\matY}^2 = \Trace{ \left(\matX+\matY\right)\left(\matX+\matY\right)\transp } =
\Trace{ \matX\matX\transp + \matX\matY\transp + \matY\matX\transp + \matY\matY\transp } =
\Trace{ \matX\matX\transp}  + \Trace{\matX\matY\transp} + \Trace{ \matY\matX\transp} + \Trace{\matY\matY\transp } 
= \FNorm{\matX}^2+\FNorm{\matY}^2$.}. 
Hence, it suffices to show that $\trace\left( \tilde{\matX}^\top \matR \right)=0.$ The diagonal elements of the matrix $\matH : =\tilde{\matX}^\top \matR \in \R^{n \times n}$ are 
$$ \matH_{tt} = \x_t\transp \matU_t \matU_t^\top \left(\matI_d - \matU_t \matU_t^\top \right) \x_t,$$
for $t=1,\dots,n$. Using Lemma~\ref{obs2}, we conclude that each diagonal element equals zero
because (for a fixed $t$ and some $j$; recall that $j$ is the number of non-zero columns in 
$\matU_t$):
\eqan{
\matU_t \matU_t^\top \left(\matI_d - \matU_t \matU_t^\top \right) 
&=& \matU_t \matU_t^\top - \matU_t \matU_t^\top \matU_t \matU_t^\top  \\
&=& \matU_t \matU_t^\top - \matU_t 
\left( \begin{array}{cc}
\matI_{j}  &  \\
 & {\bf 0}_{(\ell - j) \times (\ell - j)}  
 \end{array}
 \right) \matU_t^\top \\
&=& \matU_t \matU_t^\top - \matU_t \matU_t^\top 
\; = \;
{\bf 0}_{d \times d},
} 
where the third equality holds because the last $\ell - j$ columns of $\matU_t$ are zero, so multiplying $\matU_t$ on the right by the block-diagonal matrix leaves $\matU_t$ unchanged. Hence the trace, which is the sum of all the diagonal elements, equals zero, and the result follows. 
\end{proof}
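The identity of Lemma~\ref{pythagoras} can be checked numerically; for simplicity, the sketch below uses a single fixed projection matrix in place of the per-iterate $\matU_t$ (the trace argument is identical column-by-column):

```python
import numpy as np

rng = np.random.default_rng(6)
d, n = 12, 40
X = rng.standard_normal((d, n))
U = np.linalg.qr(rng.standard_normal((d, 4)))[0]  # orthonormal columns
Xt = U @ (U.T @ X)                                # tilde{X}: projected columns
R = X - Xt                                        # residual columns
# trace(Xt^T R) = 0, hence ||X||_F^2 = ||Xt||_F^2 + ||R||_F^2.
```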

The next two lemmas provide upper bounds for the dimensions of the subspaces spanned by the columns of $\sum_{j=1}^{\ell'}\lambda_j \u_j \u_j^\top$ and $\matC_z,$ respectively. We will use those results in Lemma~\ref{lem2}, where one requires a description of the SVD of the sum $\sum_{j=1}^{\ell'}\lambda_j \u_j \u_j^\top+\matC_z$ to calculate a bound for $\TNormS{\matR}$. 
\begin{lemma}\label{rankB} Let $[\lambda_j, \u_j],$ for $j=1,\dots,\ell'$ be the pairs of eigenvalues/eigenvectors computed in Algorithm~\ref{alg1} with $[\lambda_j, \u_j]$ computed before $[\lambda_{j+1}, \u_{j+1}]$. Then,
$$\rank\left(\sum_{j=1}^{\ell'}\lambda_j \u_j \u_j^\top \right)=\ell'.$$
\end{lemma}
\begin{proof}
This result follows immediately from Lemma~\ref{obs2} and Lemma~\ref{lem25}. Specifically, 
from Lemma~\ref{obs2}, we have that the $\u_j$'s are orthogonal to each other and from Lemma~\ref{lem25} we have that 
$\lambda_j > 0$, hence $\rank(\sum_{j=1}^{\ell'}\lambda_j \u_j \u_j^\top) = \ell'$. 
\end{proof}

\begin{lemma}\label{rankA}
Let $\matC_z$ be the matrix $\matC$ upon termination in Algorithm~\ref{alg1}. Then,
$$\rank\left(\matC_z\right) \le d- \ell'.$$ 
\end{lemma}
\begin{proof}
We give a proof by contradiction. Assume that $\rank(\matC_z) := \rho > d - \ell'$. 

For notational convenience,  
let
$\sum_{j=1}^{\ell'}\lambda_j \u_j \u_j^\top := \matB.$
From Lemma~\ref{rankB}, we have that $\rank(\matB) = \ell'$, hence 
$\matB = \matU_{\matB} \matSig_{\matB} \matU_{\matB}^\top$ is the SVD of $\matB$ with
$\matU_{\matB} \in \R^{d \times \ell'}$ and $\matSig_{\matB} \in \R^{\ell' \times \ell'}$. 
From our assumption, we have that $\rank(\matC_z) = \rho$, hence 
$\matC_z = \matU_{\matC_z} \matSig_{\matC_z} \matU_{\matC_z}^\top$ is the SVD of $\matC_z$ with
$\matU_{\matC_z} \in \R^{d \times \rho }$ and $\matSig_{\matC_z} \in \R^{\rho \times \rho}$. Here, $\matSig_{\matC_z}$ contains strictly positive diagonals. 
From Lemma~\ref{obs3} we have that 
$ \matC_z \matB= {\bf 0}_{d \times d},$ which is equivalent to saying that 
$\matU_{\matC_z}\transp \matU_{\matB} = {\bf 0}_{\rho \times \ell'}$. Also,  
$\matU_{\matC_z}$ contains $\rho > d -\ell'$ columns.

On the one hand, the relation
$\matU_{\matC_z}\transp \matU_{\matB} = {\bf 0}_{\rho \times \ell'}$ implies that every column of $\matU_{\matC_z}$ is in $span(\matI_d - \matU_{\matB} \matU_{\matB}\transp)$, a subspace of dimension $d - \ell'$.

On the other hand, $\matU_{\matC_z}$ contains $\rho > d - \ell'$ orthonormal columns.

The two statements contradict each other, because a subspace of dimension $d - \ell'$ cannot contain more than $d - \ell'$ mutually orthonormal vectors. 
\end{proof}
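As an informal numerical sanity check of this rank argument (not part of the formal development; the matrices below are synthetic stand-ins for $\matB$ and $\matC_z$), one can verify in NumPy that a positive semidefinite matrix annihilating a rank-$\ell'$ matrix has rank at most $d-\ell'$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, ell_prime = 8, 3

# Orthonormal basis of R^d, split into range(B) and its orthogonal complement.
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
U_B, U_perp = Q[:, :ell_prime], Q[:, ell_prime:]

# B = sum of ell' rank-one terms with positive weights, so rank(B) = ell'.
B = U_B @ np.diag(rng.uniform(1.0, 2.0, ell_prime)) @ U_B.T

# Any PSD matrix C supported on the complement of range(B) satisfies C B = 0.
C = U_perp @ np.diag(rng.uniform(0.5, 1.0, d - ell_prime)) @ U_perp.T

assert np.allclose(C @ B, np.zeros((d, d)))
print(np.linalg.matrix_rank(C) <= d - ell_prime)  # True: rank(C) <= d - ell'
```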

The next lemma argues that the reconstruction error of our algorithm is bounded from above by 
$\FNormS{\matR}$. This result is based on the fact that the matrix $\matU$ contains orthonormal columns and is updated by appending columns one after the other, without ever deleting a column. The first inequality in the derivation in the proof of the lemma also indicates that the bound in Theorem~\ref{thm1} not only holds for the ``best'' isometry matrix $\matPhi$ but also for the matrix $\matU_n,$ i.e., the matrix $\matU$ upon termination of Algorithm~\ref{alg1}, which is also an isometry. 
\begin{lemma}\label{lem1}
Let $\matX \in \R^{d \times n},$ $\matY \in \R^{\ell \times n},$  and
$\matR \in \R^{d \times n},$ be the matrices whose $t$'th column is $\x_t \in \R^d, \y_t \in \R^{\ell}$,
and
$\rb_t \in \R^d$ (the vectors $\rb_t$'s are taken as the ones in the end of the corresponding iteration), respectively. Then, 
$$
\min_{ \matPhi \in \iso_{d\times{\ell}}} \|\matX- \matPhi \matY\|_{\mathrm{F}}^2 \le \|\matR\|_{\mathrm{F}}^2.
$$
\end{lemma}
\begin{proof}
We manipulate the term $\min_{ \matPhi \in \iso_{d\times{\ell}}} \|\matX- \matPhi \matY\|_{\mathrm{F}}^2$
as follows: 
$$ \min_{ \matPhi \in \iso_{d\times{\ell}}} \|\matX- \matPhi \matY\|_{\mathrm{F}}^2 \le
\|\matX- \matU_n \matY\|_{\mathrm{F}}^2 
= \sum_{t=1}^{n} \|\x_t - \matU_n\matU_t^\top \x_t\|_2^2 = \sum_{t=1}^{n} \|\x_t - \matU_t\matU_t^\top \x_t\|_2^2 = \|\matR\|_{\mathrm{F}}^2.$$
The first inequality holds because $\matU_n$ is an isometry: $\matU_n$ contains a subset of $\ell'$ orthonormal columns, according to Lemma~\ref{obs2}, while the remaining columns are zero; also, the bottom $\ell -\ell'$ rows of $\matY$ are all-zeros. 
In the second equality, $\matU_n\matU_t^\top \x_t = \matU_t\matU_t^\top \x_t$: the columns of $\matU_n$ that were added after time $t$ are zero columns in $\matU_t$, hence the corresponding entries of $\matU_t^\top \x_t$ are zero.
\end{proof}
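To illustrate the chain of equalities in the proof, the following minimal NumPy sketch of Algorithm~\ref{alg1} (our own illustrative implementation on synthetic data, with a planted strong direction so that the while-loop actually fires) checks that $\|\matX - \matU_n \matY\|_{\mathrm{F}}^2 = \|\matR\|_{\mathrm{F}}^2$:

```python
import numpy as np

def online_pca(X, ell):
    """Sketch of Algorithm 1; assumes ||x_t||_2^2 <= ||X||_F^2 / ell for all t."""
    d, n = X.shape
    U = np.zeros((d, ell))            # projection matrix, columns filled over time
    C = np.zeros((d, d))              # auxiliary residual accumulator
    theta = 2 * np.linalg.norm(X, 'fro') ** 2 / ell
    Y, R = np.zeros((ell, n)), np.zeros((d, n))
    filled = 0                        # number of columns inserted so far (ell')
    for t in range(n):
        x = X[:, t]
        r = x - U @ (U.T @ x)
        while np.linalg.norm(C + np.outer(r, r), 2) >= theta:
            lam, V = np.linalg.eigh(C)          # eigenvalues in ascending order
            u, lam_top = V[:, -1], lam[-1]      # top eigenpair of C
            U[:, filled] = u
            filled += 1
            r = x - U @ (U.T @ x)
            C = C - lam_top * np.outer(u, u)
        C = C + np.outer(r, r)
        Y[:, t] = U.T @ x
        R[:, t] = r                   # residual at the end of iteration t
    return U, Y, R

rng = np.random.default_rng(1)
d, n, ell = 20, 200, 8
w = rng.standard_normal(d); w /= np.linalg.norm(w)       # planted direction
X = 9.0 * np.outer(w, np.ones(n)) + rng.standard_normal((d, n))
assert (np.linalg.norm(X, axis=0) ** 2 <= np.linalg.norm(X, 'fro') ** 2 / ell).all()

U, Y, R = online_pca(X, ell)
lhs = np.linalg.norm(X - U @ Y, 'fro') ** 2
print(np.isclose(lhs, np.linalg.norm(R, 'fro') ** 2))    # True, as in the proof
```

The equality holds exactly here (up to floating point) because the columns of $\matU_n$ added after time $t$ hit zero entries of $\matU_t^\top \x_t$, as argued in the proof.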

Next, we provide an upper bound on the Frobenius norm squared of $\matR$ in terms of the spectral norm of $\matR$. That helps because, by design, the algorithm naturally provides a bound on $\TNorm{\matR}$~(see Lemma~\ref{lem2}). 
It is worth mentioning that the only property of the algorithm used for this lemma is the fact that the vectors $\u$ inserted into $\matU$ are orthonormal. In other words, the relation in Lemma~\ref{lemRF} holds for any online PCA algorithm that chooses orthonormal vectors $\u$. 
\begin{lemma} \label{lemRF}
Let 
$\matX, \matR \in \R^{d \times n},$ be the matrices whose $t$'th column is $\x_t$ and $\rb_t \in \R^d$ (the vectors $\rb_t$'s are taken as the ones in the end of the corresponding iteration), respectively. 
Then,
$$ 
\|\matR\|_{\mathrm{F}}^2 \le \OPT_k + 2 \cdot \|\matX\|_{\mathrm{F}} \cdot \sqrt{k} \cdot \| \matR \|_2.
$$
\end{lemma}
\begin{proof}
We manipulate the term $\|\matR\|_{\mathrm{F}}^2$ as follows: 
\begin{eqnarray}
\|\matR\|_{\mathrm{F}}^2 &=& \|\matX\|_{\mathrm{F}}^2 - \|\tilde{\matX}\|_{\mathrm{F}}^2 \label{eqn:this1}\\
&\le& \|\matX\|_{\mathrm{F}}^2 - \sum_{i=1}^{k}\|\v_i^\top \tilde{\matX}\|_2^2 \label{eqn:this2}\\
&=& \|\matX\|_{\mathrm{F}}^2 - \sum_{i=1}^{k}\|\v_i^\top \matX - \v_i^\top \matR\|_2^2 \label{eqn:this3}\\
&=& \|\matX\|_{\mathrm{F}}^2 - 
\left( \sum_{i=1}^{k}\left(\|\v_i^\top \matX\|_2^2 - 2\left \langle \v_i^\top \matX, \v_i^\top \matR \right \rangle + \|\v_i^\top \matR\|_2^2 \right) \right) \label{eqn:this4}\\
&\le& \|\matX\|_{\mathrm{F}}^2 - \sum_{i=1}^{k}\|\v_i^\top \matX\|_2^2 + 2 \cdot \sum_{i=1}^{k} \|\v_i^\top \matX\|_2\|\v_i^\top \matR\|_2 \label{eqn:this5}\\
&\le& \|\matX\|_{\mathrm{F}}^2 - \sum_{i=1}^{k}\|\v_i^\top \matX\|_2^2 + 2 \cdot \sqrt{\sum_{i=1}^{k} \|\v_i^\top \matX\|_2^2} \cdot \sqrt{\sum_{i=1}^k \|\v_i^\top \matR\|_2^2} \label{eqn:this6}\\
&\le& \OPT_k + 2 \cdot \|\matX\|_{\mathrm{F}} \cdot \sqrt{k} \cdot \| \matR \|_2. \label{eqn:this7}\end{eqnarray}
%
Eqn.~\eqref{eqn:this1} follows by Lemma~\ref{pythagoras}. In Eqn.~\eqref{eqn:this2}, for $i=1,\dots,k$, let $\v_i \in \R^d$ denote the left singular vector of $\matX$ corresponding to the $i$th largest singular value of $\matX$, and let $\matB \in \R^{d \times k}$ contain those vectors as columns. 
In this equation we used 
$\|\tilde{\matX}\|_{\mathrm{F}}^2 \ge \sum_{i=1}^{k}\|\v_i^\top \tilde{\matX}\|_2^2.$
This is true because
\footnote{
It is worth mentioning here that a similar bound can be proved for any $k'$ with 
$k' \in \{1,\dots,d\}$; however, choosing exactly $k$ allows us to obtain $\OPT_k$ in the last step of this derivation, which is useful because we eventually want to compare the reconstruction error of our algorithm to the ``best'' possible PCA reconstruction error. To define such a PCA reconstruction error one needs $\matX$ and $k$; so, though it seems reasonable to choose $k' > k$ because $\OPT_{k'} < \OPT_k$, it is not obvious that this would give an overall tighter bound, because the other additive term in Eqn.~\eqref{eqn:this6} would have been larger had we chosen $k' > k$.}:
$$ \sum_{i=1}^{k}\|\v_i^\top \tilde{\matX}\|_2^2 = \FNormS{ \matB\transp \tilde{\matX} } \le 
\TNormS{ \matB\transp } \cdot \FNormS{ \tilde{\matX} } = \FNormS{ \tilde{\matX} }.$$ 
In the first inequality we used the fact that $\FNormS{\matX \matY} \le \TNormS{\matX} \FNormS{\matY}$ for any two conformable matrices $\matX$ and $\matY$; in the last equality we used $\TNormS{ \matB\transp } = 1$.

In Eqn.~\eqref{eqn:this3} we use the relation $\matX = \tilde\matX + \matR$. 
In Eqn.~\eqref{eqn:this4} we used the fact that for any two vectors $\alpha,\beta$: 
$\| \alpha - \beta \|_2^2 = \| \alpha\|_2^2 + \| \beta \|_2^2 - 2   \left \langle \alpha,\beta \right \rangle$.
We used this property $k$ times, for all $i=1,\dots,k$, with $\alpha = \v_i^\top \matX$ and $\beta = \v_i^\top \matR.$ 
In Eqn.~\eqref{eqn:this5} we used $\left \langle \v_i^\top \matX, \v_i^\top \matR \right \rangle \le \|\v_i^\top \matX\|_2 \|\v_i^\top \matR\|_2$ and dropped the nonpositive term $-\sum_{i=1}^{k}\|\v_i^\top \matR\|_2^2$. 
In Eqn.~\eqref{eqn:this6} we used the Cauchy--Schwarz inequality: for numbers 
$\gamma_i \ge 0, \delta_i \ge 0$, $i=1,\dots,k$, it holds that $\sum_{i=1}^k \gamma_i \delta_i \le 
\sqrt{ \sum_{i=1}^k \gamma_i^2} \cdot \sqrt{ \sum_{i=1}^k \delta_i^2}$. We applied this inequality with 
$\gamma_i = \|\v_i^\top \matX\|_2$ and $\delta_i = \|\v_i^\top \matR\|_2$. In Eqn.~\eqref{eqn:this7} we used that 
$$\|\matX\|_{\mathrm{F}}^2 - \sum_{i=1}^{k}\|\v_i^\top \matX\|_2^2 
= \sum_{i=1}^{d} \sigma_i^2(\matX) - \sum_{i=1}^{k} \sigma_i^2(\matX) = 
\sum_{i=k+1}^{d} \sigma_i^2(\matX)= \OPT_k.$$ 
Also, we used that
$\sum_{i=1}^k \|\v_i^\top \matX\|_2^2 \le \sum_{i=1}^k \sigma_i^2(\matX) \le \| \matX \|_{\mathrm{F}}^2,$
and that 
$\sum_{i=1}^k \|\v_i^\top \matR\|_2^2 \le k \| \matR \|_2^2.$
The latter inequality is true because $\|\v_i^\top \matR\|_2^2 \le \| \matR\|_2^2$ for all $i=1,\dots,k$. 
\end{proof}
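The derivation above only uses that $\matX = \tilde{\matX} + \matR$ with $\FNormS{\matR} = \FNormS{\matX} - \FNormS{\tilde{\matX}}$ (Lemma~\ref{pythagoras}). As a hedged NumPy check of the final inequality, one can substitute an arbitrary rank-$m$ orthogonal projection for the algorithm's output (all parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, k, m = 30, 100, 5, 10

X = rng.standard_normal((d, n))
P = np.linalg.qr(rng.standard_normal((d, m)))[0]   # orthonormal basis, rank m
X_tilde = P @ (P.T @ X)                            # orthogonal projection of X
R = X - X_tilde                                    # Pythagorean residual

# OPT_k = sum of squared singular values of X beyond the k-th.
sigma = np.linalg.svd(X, compute_uv=False)
OPT_k = np.sum(sigma[k:] ** 2)

lhs = np.linalg.norm(R, 'fro') ** 2
rhs = OPT_k + 2 * np.linalg.norm(X, 'fro') * np.sqrt(k) * np.linalg.norm(R, 2)
print(lhs <= rhs)   # True: ||R||_F^2 <= OPT_k + 2 ||X||_F sqrt(k) ||R||_2
```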
%\begin{Cor}\label{cor4}
%In particular, to get $\ALG_{\ell} \le \OPT_k + \eps \|\matX\|_{\mathrm{F}}^2,$ set 
%$\ell = 4k/\varepsilon^2$.
%%$$ \ell = \frac{4k (\eps+ \OPT/\|\matX\|_{\mathrm{F}}^2 )}{\eps^2} $$
%\end{Cor}
Next, we provide an upper bound on the spectral norm squared of $\matR$. The previous lemma motivates the need for such a bound. 
\begin{lemma}\label{lem2}
Let 
$\matX, \matR \in \R^{d \times n},$ be the matrices whose $t$'th column is $\x_t$ and $\rb_t \in \R^d$ (the vectors $\rb_t$'s are taken as the ones in the end of the corresponding iteration), respectively. 
Then,
$$\|\matR\|_2^2 \leq 2 \|\matX\|_{\mathrm{F}}^2/\ell.$$
\end{lemma}
\begin{proof} 
For notational convenience,  
let
$\sum_{j=1}^{\ell'}\lambda_j \u_j \u_j^\top := \matB.$
From Lemma~\ref{rankB}, we have that $\rank(\matB) = \ell'$, hence 
$\matB = \matU_{\matB} \matSig_{\matB} \matU_{\matB}^\top$ is the SVD of $\matB$ with
$\matU_{\matB} \in \R^{d \times \ell'}$ and $\matSig_{\matB} \in \R^{\ell' \times \ell'}$. 
From Lemma~\ref{rankA}, we have that $\rank(\matC_z) \le d- \ell'$, hence 
$\matC_z = \matU_{\matC_z} \matSig_{\matC_z} \matU_{\matC_z}^\top$ is the SVD of $\matC_z$ with
$\matU_{\matC_z} \in \R^{d \times (d-\ell')}$ and $\matSig_{\matC_z} \in \R^{(d-\ell') \times (d-\ell')}$. Here, $\matSig_{\matC_z}$ might contain some zero diagonal entries. 
From Eqn.~\eqref{obs0} we have: 
$ \matR\matR^\top = \matC_z + \matB,$
and from Lemma~\ref{obs3} we have that 
$ \matC_z \matB= {\bf 0}_{d \times d}.$
Hence, 
since the matrix $\matV$ defined below satisfies $\matV\matV^\top = \matV^\top \matV = \matI_d$, we obtain that:  
$$ 
\matR \matR^\top  =  
\matC_z + \matB
=
\underbrace{\left( \begin{array}{cc}
\matU_{\matC_z}  &  \matU_{\matB}
\end{array} \right)}_{\matV \in \R^{d \times d}}
\underbrace{
\left( \begin{array}{cc}
\matSig_{\matC_z} &  \\
 &  \matSig_{\matB}  \end{array} \right)}_{\matD \in \R^{d \times d}}
\underbrace{
 \left( \begin{array}{c}
\matU_{\matC_z}^\top  \\  
\matU_{\matB}^\top
\end{array} \right)}_{\matV^\top}.
$$ 
Then, 
$$\|\matR\|_2^2 = \TNorm{ \matR\matR^\top } = \| \matV \matD \matV^\top \|_2 = 
\| \matD \|_2 \le \max\{ \TNorm{\matSig_{\matC_z}}, \TNorm{\matSig_{\matB}} \}
\le 2 \|\matX\|_{\mathrm{F}}^2/\ell.$$
The last inequality is true because: first, 
$\TNorm{ \matSig_{\matB}} \le \max_{j=1,\dots,\ell'} \lambda_j \le 2 \|\matX\|_{\mathrm{F}}^2/\ell$~(from Lemma~\ref{lem25}); and, second, 
$\TNorm{\matSig_{\matC_z}} = \TNorm{ \matC_z }
< \theta = 2\|\matX\|_{\mathrm{F}}^2/\ell,$
where the last inequality holds by construction of the algorithm: the final update $\matC \gets \matC + \rb_t \rb_t\transp$ is performed only after the while-loop condition fails, i.e., only once $\TNorm{\matC + \rb_t \rb_t\transp} < \theta$ (otherwise the algorithm would have entered the while-loop once more and would \emph{not} have terminated with this $\matC_z$). 
\end{proof}
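The key step above is that summing two PSD matrices with mutually orthogonal column spaces block-diagonalizes the sum, so its spectral norm is the larger of the two block norms. A small NumPy check of this fact with synthetic stand-ins for $\matSig_{\matB}$ and $\matSig_{\matC_z}$ (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(3)
d, ell_prime = 12, 4

# Mutually orthogonal orthonormal bases playing the roles of U_B and U_{C_z}.
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
U_B, U_C = Q[:, :ell_prime], Q[:, ell_prime:]

Sig_B = np.diag(rng.uniform(1.0, 3.0, ell_prime))
Sig_C = np.diag(rng.uniform(0.0, 2.0, d - ell_prime))
M = U_C @ Sig_C @ U_C.T + U_B @ Sig_B @ U_B.T      # plays the role of R R^T

lhs = np.linalg.norm(M, 2)
rhs = max(np.linalg.norm(Sig_B, 2), np.linalg.norm(Sig_C, 2))
print(np.isclose(lhs, rhs))   # True: ||M||_2 equals the larger block norm
```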

Finally, we provide an upper bound on the number of times the algorithm enters the while-loop; this immediately implies an upper bound on the number of vectors $\u$ inserted into $\matU$. The main idea here is to study the trace of the matrix 
$\sum_{j=1}^{\ell'}\lambda_j \u_j \u_j^\top$ and provide a lower bound that depends on $\ell'$ and an upper bound that depends on $\ell$. Combining the two bounds gives the desired relation between $\ell$ and $\ell'$. 
\begin{lemma}\label{lem3}
Let 
$\matX \in \R^{d \times n}$ be the matrix whose $t$'th column is $\x_t$. 
Then, assuming that for all $t$, $\|\x_t\|_2^2 \leq \|\matX\|_{\mathrm{F}}^2/\ell$, the while-loop in Algorithm~\ref{alg1} is entered a total of $\ell'$ times, where 
$$ \ell' \le  \ell \cdot \min\{1, \OPT_k / \|\matX\|_{\mathrm{F}}^2 + \sqrt{8 k / \ell}\} \le \ell.$$
\end{lemma}
\begin{proof} For notational convenience, 
let 
$
\sum_{j=1}^{\ell'}\lambda_j \u_j \u_j^\top := \matZ.
$
First, using Lemma~\ref{lem25}, we calculate a lower bound for 
$\mathrm{Trace}( \matZ)$:
\begin{equation}\label{ellboundeqn1}
\mathrm{Trace}( \matZ) =
\sum_{j=1}^{\ell'}  \lambda_j  \ge \ell'  \cdot \left( \|\matX\|_{\mathrm{F}}^2/\ell \right). 
\end{equation}

From Eqn.~\eqref{obs0}: $  \matZ =  \matR\matR^\top - \matC_z.$ 
In Lemma~\ref{lem25} we argued that for all $j,$ $\lambda_j > 0$.
Moreover, the matrix $\matC_z$ is positive semidefinite by construction: $\matC$ starts as the all-zeros matrix, each update adds the positive semidefinite term $\rb_t \rb_t^\top$, and each while-loop iteration subtracts the top eigencomponent $\lambda \u \u^\top$, which leaves the remaining (nonnegative) eigenvalues intact. Hence
$\mathrm{Trace}(\matC_z) \ge 0$ and
\begin{equation}\label{ellboundeqn2}
\mathrm{Trace}(\matZ) = \mathrm{Trace}(\matR\matR^\top) - \mathrm{Trace}(\matC_z) \le 
\mathrm{Trace}(\matR\matR^\top) = 
\|\matR\|_{\mathrm{F}}^2 \leq \|\matX\|_{\mathrm{F}}^2.
\end{equation}
The inequality $\|\matR\|_{\mathrm{F}}^2 \leq \|\matX\|_{\mathrm{F}}^2$ follows because for all $t,$ 
$$\| \rb_t \|_2^2 = \| (\matI_d - \matU_t \matU_t^\top) \x_t \|_2^2  \le  \TNormS{\matI_d - \matU_t \matU_t^\top} \cdot \| \x_t \|_2^2 \le \| \x_t \|_2^2,$$
since $\TNormS{\matI_d - \matU_t \matU_t^\top} \le 1$. 
Also, by Theorem~\ref{thm1}: $\|\matR\|_{\mathrm{F}}^2 \leq \OPT_k + \sqrt{8k/\ell}\|\matX\|_{\mathrm{F}}^2,$ hence
\begin{equation}\label{ellboundeqn3}
\mathrm{Trace}(\matZ) \leq  \|\matR\|_{\mathrm{F}}^2 \leq \OPT_k + \sqrt{8k/\ell}\|\matX\|_{\mathrm{F}}^2.
\end{equation}
Combining Eqn.~\eqref{ellboundeqn1}, Eqn.~\eqref{ellboundeqn2}, and Eqn.~\eqref{ellboundeqn3}
shows the claim. 
\end{proof}
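As an informal check of the bound $\ell' \le \ell$ (again with our own illustrative implementation on synthetic data, with a planted strong direction so the while-loop is actually exercised; not a proof), one can count the while-loop entries directly:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n, ell = 20, 300, 10

# Synthetic data: one planted strong direction plus noise.
w = rng.standard_normal(d); w /= np.linalg.norm(w)
X = 9.0 * np.outer(w, np.ones(n)) + rng.standard_normal((d, n))
assert (np.linalg.norm(X, axis=0) ** 2 <= np.linalg.norm(X, 'fro') ** 2 / ell).all()

U = np.zeros((d, ell))
C = np.zeros((d, d))
theta = 2 * np.linalg.norm(X, 'fro') ** 2 / ell
ell_prime = 0                                    # counts while-loop entries
for t in range(n):
    x = X[:, t]
    r = x - U @ (U.T @ x)
    while np.linalg.norm(C + np.outer(r, r), 2) >= theta:
        lam, V = np.linalg.eigh(C)
        U[:, ell_prime] = V[:, -1]               # insert top eigenvector of C
        ell_prime += 1
        r = x - U @ (U.T @ x)
        C = C - lam[-1] * np.outer(V[:, -1], V[:, -1])
    C = C + np.outer(r, r)

print(ell_prime <= ell)   # True: at most ell insertions, as the lemma bounds
```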





