\documentclass[11pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage[british]{babel} 
\usepackage{amsfonts}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{algorithm}
\usepackage{algorithmic}

\newcommand{\iso}{\mathcal{O}}
\newcommand{\ALG}{\operatorname{ALG}}
\newcommand{\OPT}{\operatorname{OPT}}
\newcommand{\eps}{\varepsilon}
\newcommand{\tr}{\operatorname{Tr}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\fro}{\mathrm{F}}
\renewcommand{\top}{{\scriptscriptstyle T}}
\newcommand{\ip}[1]{\left \langle #1 \right \rangle}
\newcommand{\poly}{\operatorname{poly}}


\newtheorem{theorem}{Theorem}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{observation}[theorem]{Observation}
\newtheorem{corollary}{Corollary} 

%%%%%%%%% can be removed for final version %%%%%%%%%%%
\usepackage{xcolor}
\newcommand{\dg}[1]{\noindent{\textcolor{magenta}{\{{\bf DG:} \em #1\}}}}
%\renewcommand{\dg}[1]{}
\newcommand{\zk}[1]{\noindent{\textcolor{blue}{\{{\bf ZK:} \em #1\}}}}
\newcommand{\edo}[1]{\noindent{\textcolor{purple}{\{{\bf edo:} \em #1\}}}}
\newcommand{\christos}[1]{\noindent{\textcolor{green}{\{{\bf CB:} \em #1\}}}}
%%%%%%%%% can be removed for final version %%%%%%%%%%%


\title{Online Principal Component Analysis}

\author{
  Christos Boutsidis \thanks{Yahoo Labs, New York, NY}
  \and
  Dan Garber \thanks{Technion Israel Institute of Technology and partially at Yahoo Labs}
  \and
  Zohar Karnin \thanks{Yahoo Labs, Haifa, Israel}
  \and
  Edo Liberty \thanks{Yahoo Labs, New York, NY}
}
\date{}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{document}
\begin{titlepage}

\maketitle

\begin{abstract}
We consider the \emph{online} version of the well-known Principal Component Analysis (PCA) problem. In standard PCA, the input to the problem is a set of vectors 
$X = [x_1,\ldots, x_n]$ in $\R^{d\times n}$ and a target dimension $k < d$; the output
is a set of vectors $Y = [y_1,\ldots, y_n]$ in $\R^{k \times n}$ that minimize
$ \min_{\Phi} \|X - \Phi Y\|_{\fro}^2$ where $\Phi$ is restricted to be an isometry.
The global minimum of this quantity, $\OPT_k$, is obtainable by offline PCA.

In the online setting, the vectors $x_t$ are presented to the algorithm one by one.
For every presented $x_t$ the algorithm \emph{must} output a vector $y_t$ before receiving $x_{t+1}$. The quality of the result, however, is measured in exactly the same way: $\ALG = \min_{\Phi} \|X - \Phi Y\|_{\fro}^2$. 
%This is a natural cost function for online PCA and one that is especially useful for streaming data mining and machine learning.
This paper presents the first approximation algorithms for this setting of online PCA. 
Our algorithms produce $y_t \in \R^\ell$ with $\ell = O(k\cdot \operatorname{poly}(1/\eps))$ such that $\ALG \le \OPT_k + \eps \|X\|_{\fro}^2.$
\end{abstract}


%%% SODA PURE TEXT ABSTRACT
%%  We consider the online version of the well known Principal Component 
%%Analysis (PCA) problem.  In standard PCA, the input to the problem is a 
%%set of vectors  X = [x_1, ... , x_n] in R^{d x n} and a target dimension k < d; 
%%the output is a set of vectors Y = [y_1, ... , y_n] in R^{k x n} that minimize	
%%min_{\Phi} ||X - \Phi Y||_F^2
%%where \Phi is restricted to be an isometry.  The global minimum of this 
%%quantity, OPT_k, is obtainable by offline PCA.
%%
%%  In the online setting, the vectors x_t are presented to the algorithm one 
%%by one. For every presented x_t the algorithm must output a vector y_t 
%%before receiving x_{t+1}.  The quality of the result, however, is measured 
%%in exactly the same way, ALG =  min_{\Phi} ||X - \Phi Y||_F^2.  This paper 
%%presents the first approximation algorithms for this setting of online PCA. 
%%Our algorithms produce y_t in R^\ell with \ell = O(k poly(1/\eps)) such 
%%that ALG < OPT_k + \eps ||X||_F^2.

\thispagestyle{empty}
\end{titlepage}




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


\section{Introduction}
Principal Component Analysis (PCA) is one of the most widely known and widely used procedures in scientific computing.
It is used for dimension reduction, signal denoising, regression, correlation analysis, visualization, etc.~\cite{dunteman1989principal}.
It can be described in many ways, but one is particularly appealing in the context of online algorithms.
Given $n$ high-dimensional vectors $x_1,\dots, x_n \in \mathbb{R}^d$ and a target dimension $k < d$, produce $n$ other low-dimensional vectors $y_1,\dots,y_n \in \mathbb{R}^k$ such that the reconstruction error is minimized. 
For any set of vectors $y_t$  the reconstruction error is defined as
\begin{equation}\label{pcacost}
\min_{\Phi \in \iso_{d,k}} \sum_{t=1}^{n} \|x_t - \Phi y_t\|_2^2, 
\end{equation}
where $\iso_{d,k}$ denotes the set of $d \times k$ isometries
%$\iso_{d,k} =  \{  \Phi \in \R^{d \times k} | \forall y \in \R^k \;\; \|\Phi y\|_2 = \|y\|_2\}$.  
\begin{equation}\label{eqn:iso}
\iso_{d,k} =  \{  \Phi \in \R^{d \times k} | \forall y \in \R^k \;\; \|\Phi y\|_2 = \|y\|_2\}.
\end{equation}
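As a quick sanity check, the defining property in Equation~\ref{eqn:iso} can be verified numerically. The sketch below is a minimal pure-Python illustration with an arbitrary $3 \times 2$ example matrix of our own choosing, not one taken from the paper:

```python
# Sketch: a matrix with orthonormal columns (an isometry in the sense of
# Equation (2)) preserves Euclidean norms. Toy 3x2 example for illustration.
import math

def matvec(A, y):
    return [sum(A[i][j] * y[j] for j in range(len(y))) for i in range(len(A))]

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

# Phi: 3x2 with orthonormal columns (first two axes rotated into R^3).
s = 1.0 / math.sqrt(2.0)
Phi = [[s, s], [s, -s], [0.0, 0.0]]

y = [3.0, -4.0]
assert abs(norm(matvec(Phi, y)) - norm(y)) < 1e-12   # ||Phi y|| = ||y||
```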

\noindent For any $x_1,\dots, x_n \in \mathbb{R}^d$ and $k < d$, standard offline PCA finds the optimal $y_1,\dots, y_n \in \mathbb{R}^k$ that minimize the reconstruction error
%, the PCA problem is: find $y_1,\dots,y_n \in \mathbb{R}^k$ that are the optimal solution to the problem:
\begin{equation}\label{pcaproblem}
\OPT_k = \min_{y_t \in \R^k} \left( \min_{\Phi \in \iso_{d,k}} \sum_{t=1}^{n} \|x_t - \Phi y_t\|_2^2 \right).
\end{equation}
The optimal solution is obtained by computing $W  \in \iso_{d,k}$ whose columns are the top $k$ left singular vectors of the matrix $X =[x_1,\dots,x_n]\in \R^{d \times n}$.
Setting $y_t = W^\top x_t$ and $\Phi  = W$ attains the minimum.
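As an illustration, the offline solution for $k=1$ can be sketched in a few lines of pure Python. The data and helper names below are illustrative, and power iteration stands in for a full SVD (so a dominant eigenvalue of $XX^\top$ is assumed):

```python
# Offline PCA sketch for k = 1: compute the top left singular direction w of
# X by power iteration on C = X X^T, then set y_t = w^T x_t.
import math

def outer_add(C, x):
    for i in range(len(x)):
        for j in range(len(x)):
            C[i][j] += x[i] * x[j]

def top_eigvec(C, iters=100):
    v = [1.0] * len(C)
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(len(C))) for i in range(len(C))]
        n = math.sqrt(sum(wi * wi for wi in w))
        v = [wi / n for wi in w]
    return v

xs = [[2.0, 0.0], [-1.0, 0.0], [3.0, 0.0]]   # toy data on the first axis
C = [[0.0, 0.0], [0.0, 0.0]]
for x in xs:
    outer_add(C, x)
w = top_eigvec(C)                             # approx [1, 0] up to sign
ys = [sum(w[i] * x[i] for i in range(2)) for x in xs]

# With Phi = w the reconstruction error is zero for this rank-one data.
err = sum(sum((x[i] - w[i] * y) ** 2 for i in range(2)) for x, y in zip(xs, ys))
assert err < 1e-9
```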
%
%Without loss of generality, \edo{that's not w.o.l.g. We simply need to assume this.} let
%$d <n$ and $rank(X)=d$.
%%
%The SVD of $X$ is $X=U_A S_A V_A^\top,$ where $U_A \in \R^{d \times d}$ and $V_A \in \R^{d \times n}$ contain the left and right singular vectors of $X,$ respectively, and $S_A \in \R^{d \times d}$ contains its singular values on the diagonal.
%Let $U_k \in \R^{d \times k}$ contain only the $k < d$ left singular vectors corresponding to the top $k$ singular values of $X$.
%Then, the optimal PCA vectors $y_t$ are $y_t = U_k^\top x_t$. 
%Equivalently, if $Y  = [y_1,\dots,y_n]\in \R^{k \times n}$ then $Y = U_k^\top X.$ Also, the best isometry matrix is $\Phi = U_k$. 
%We denote the minimum possible value in the PCA optimization problem as $\OPT_k$:
%\begin{equation}\label{pcabest}
%\OPT_k :=  \| X -  U_k U_k^\top X\|_{\fro}^2 =
%\sum_{i=1}^n \|x_t -  U_k U_k^\top x_t\|_2^2 = 
%\min_{y_t \in \R^k} \left(\min_{\Phi \in \iso_{d,k}} \sum_{t=1}^{n} \|x_t - \Phi y_t\|_2^2 \right). 
%\end{equation}

Computing  the optimal $y_t = W^\top x_t$ naively requires several passes over the matrix $X$.
Power iteration based methods for computing $W$ are memory and CPU efficient but require $\omega(1)$ passes over $X$.
Two passes also naively suffice: one to compute $X X^\top$, from which $W$ is derived, and one to generate the mapping $y_t = W^\top x_t$.
The bottleneck is in computing $X X^\top$ which demands $\Omega(d^2)$ auxiliary space (in memory) and $\Omega(d^2)$ operations per vector $x_t$ (assuming they are dense).
This is prohibitive even for moderate values of $d$.
A significant amount of research went into reducing the computational overhead of obtaining a good approximation for $W$ in one pass \cite{FriezeKannanVempala1998, DrineasKannan2003, DeshpandeV06, Sarlos06, RudelsonVershyninMatrixSampling2007, tygert07PNAS, Liberty13,Phillips14}.
Still, a second pass is needed to produce the reduced dimension vectors $y_t$.
%Reviewing these results is, unfortunately, beyond the scope of this paper.



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Online PCA}
In the online setting, the algorithm receives the input vectors $x_t$ one after the other and must always output $y_t$ before receiving $x_{t+1}$.
The cost of the online algorithm is measured exactly as in the offline case:
\begin{equation*}
\ALG = \min_{\Phi \in \iso_{d,\ell}} \sum_{t=1}^{n} \|x_t - \Phi y_t\|_2^2 \ .
\end{equation*}
Note that the target dimension of the algorithm, $\ell$, is potentially larger than $k$ to compensate for the handicap of operating online.

This is a natural model when a downstream online (rotation invariant) algorithm is applied to $y_t$.
Examples include online algorithms for clustering ($k$-means, $k$-median), regression, classification (SVM, logistic regression), facility location, $k$-server, etc.
By operating on the reduced-dimension vectors, these algorithms gain a computational advantage, but there is a much more important reason to apply them post PCA.

PCA denoises the data. Arguably, this is the most significant reason for PCA being such a popular and successful preprocessing stage in data mining.
Even when a significant portion of the Frobenius norm of $X$ is attributed to isotropic noise, PCA can often still recover the signal.
This is why, for example, clustering the denoised vectors $y_t$ often gives qualitatively better results than clustering $x_t$ directly.
Notice that in this setting the algorithm cannot retroactively change past decisions.
Furthermore, future decisions should try to stay consistent with past ones, even if those were misguided.

Our model departs from earlier definitions of online PCA. 
We briefly review three other definitions and point out the differences.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsubsection{Random projections}
Most similar to our work is the result of Sarl\'os \cite{Sarlos06}.
There, one generates $y_t = S^\top x_t$ where $S \in \R^{d \times \ell}$ is drawn randomly and independently of the data. 
For example, each element of $S$ can be $\pm 1$ with equal probability (Theorem 4.4 in~\cite{clarkson2009numerical}) or drawn from a normal Gaussian distribution (Theorem 10.5 in~\cite{HalkoMT11}).
Then, with constant probability for $\ell = \Theta(k/\eps)$
\begin{equation*}%\label{pcacw}
\min_{\Psi \in \R^{d \times \ell}} \sum_{t=1}^{n} \|x_t - \Psi  y_t\|_2^2 \le (1+\eps)\OPT_k.
\end{equation*}
Here, the best reconstruction matrix is $\Psi = XY^{\dagger}$ which is \emph{not} an isometry in general.\footnote{The notation $Y^{\dagger}$ stands for the Moore--Penrose inverse (pseudoinverse) of $Y$.}
We claim that this seemingly minute departure from our model is actually very significant.
Note that the matrix $S$ exhibits the Johnson--Lindenstrauss property \cite{JohnsonLindenstrauss84, GuptaDasgupta06, Achlioptas03}. 
Roughly speaking, this means the vectors $y_t$ approximately preserve the lengths, angles, and distances between all the vectors $x_t$.  
They thereby preserve the noise and the signal in $x_t$ equally well. This is not surprising given that $S$ is generated independently of the data.
Observe that to nullify the noise component $\Psi = XY^{\dagger}$ must be far from being an isometry and that $\Psi = X(S^\top X)^\dagger$ can only be computed after the entire matrix was observed.
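A minimal sketch of this random-projection baseline follows; the dimensions are illustrative, the sign matrix is one of the admissible choices mentioned above, and the $1/\sqrt{\ell}$ scaling such results typically carry is omitted for brevity:

```python
# Sketch of the random projection baseline: y_t = S^T x_t with S a d x l
# sign matrix drawn independently of the data (scaling omitted). The text's
# guarantee holds with constant probability for l = Theta(k / eps).
import random

random.seed(0)
d, l = 6, 4
S = [[random.choice((-1.0, 1.0)) for _ in range(l)] for _ in range(d)]

def project(x):
    # y = S^T x; computable online since S is fixed before the data arrives
    return [sum(S[i][j] * x[i] for i in range(d)) for j in range(l)]

x = [1.0, 0.0, 2.0, 0.0, 0.0, -1.0]
y = project(x)
assert len(y) == l
```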

For example, let $\Phi \in \iso_{d,k}$ be the optimal PCA projection for $X$. 
Consider $y_t \in \R^\ell$ whose first $k$ coordinates contain $\Phi^\top x_t$ and the rest $\ell-k$ coordinates contain an arbitrary vector $z_t \in \R^{\ell-k}$.
%$$
%Y = 
%\left( \begin{array}{c}
%\Phi^\top X \\
%\mbox{N}
% \end{array} \right). 
%$$
%In the case where $\|N\|_{\fro} \gg \|X\|_{\fro}$ the geometric arrangement of the columns of $Y$ ($y_t$) share very little with that of $x_t$.
In the case where $\|z_t\|_2 \gg \|\Phi^\top x_t\|_2$ the geometric arrangement of the $y_t$ potentially shares very little with that of the signal in $x_t$.
Yet, $\min_{\Psi \in \R^{d \times \ell}} \sum_{t=1}^{n} \|x_t - \Psi  y_t\|_2^2 = \OPT_k$ by setting $\Psi = (\Phi | 0^{d \times (\ell -k)})$.
This would have been impossible had $\Psi$ been restricted to being an isometry.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsubsection{Regret minimization} 
A regret minimization approach to online PCA was investigated in \cite{Warmuth07randomizedonline,NieKW13}. 
In their setting of online PCA, at time $t$, \emph{before} receiving the vector $x_t$, the algorithm produces a rank-$k$ projection matrix $P_t \in \R^{d \times d}$.\footnote{Here, $P_t$ is a square projection matrix, i.e., $P_t^2 = P_t$.}
The authors present two methods for computing projections $P_t$ such that the quantity $\sum_t \| x_t - P_t x_t \|_2^2$ converges to $\OPT_k$ in the usual no-regret sense.
%
Since each $P_t$ can be written as $P_t= U_t U_t^\top$ for $U_t \in \iso_{d,k}$ it would seem that setting $y_t = U_t^\top x_t$ should solve our problem. 
Alas, the decomposition $P_t= U_t U_t^\top$ (and therefore $y_t$) is underdetermined.
Even if we ignore this issue, each $y_t$ can be reconstructed by a different $U_t$.
To see why this objective is problematic for the sake of dimension reduction, consider our setting where we can observe $x_t$ before outputting $y_t$.
One can simply choose the rank $1$ projection $P_t = x_tx_t^\top / \|x_t\|_2^{2}$.
On the one hand this gives $\sum_t \| x_t - P_t x_t \|_2^2 = 0$. On the other, it clearly does not provide meaningful dimension reduction.
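The degenerate choice above can be checked in a few lines. This is an illustrative pure-Python snippet of our own, not an implementation from any of the cited works:

```python
# Why the regret objective degenerates once x_t may be observed first: the
# rank-1 projection P_t = x_t x_t^T / ||x_t||^2 reconstructs x_t exactly,
# so the cost is 0 while no meaningful dimension reduction takes place.
import math

x = [3.0, 4.0]
n2 = sum(v * v for v in x)                    # ||x||^2
P = [[x[i] * x[j] / n2 for j in range(2)] for i in range(2)]
Px = [sum(P[i][j] * x[j] for j in range(2)) for i in range(2)]
residual = math.sqrt(sum((x[i] - Px[i]) ** 2 for i in range(2)))
assert residual < 1e-12                       # zero cost for this term
```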



%%Consider ${U'_t} = U_tQ_t$  for $Q \in \iso_{k,k}$.
%%Clearly, ${U'_t} {U'_t}^\top = U_tQ_t Q_t^\top U_t^\top = U_t U_t^\top = P_t$.
%%Since $Q_t$ is arbitrary the vector $y'_t$ is also arbitrary (except for its norm).
%%Therefore, the structure of  $y_t$'s  will not necessarily reflect in any way the geometry of the original vectors. 
%
%To see this, notice that a projection $P_t$ does not uniquely identify a mapping from $\R^d$ to $\R^k$. 
%If $P_t = U_tU_t^\top$ for some matrix $U_t \in \iso_{d,k}$ then it is also true that $P_t = U_t Q_t Q_t^\top U_t^\top $ where $Q_t \in \R^{k\times k}$ is an arbitrary unitary (rotation) matrix.
%Thus, an almost completely arbitrary mapping $y_t = Q_t^\top U_t^\top  x_t$ conforms to the same projections $P_t$.
%Clearly, the $y_t$'s generate are almost meaningless.
%%
%%Even in the case of  $k=1$, where $P_t$ does in fact specify $U_t$ uniquely, this objective function is problematic.
%%For simplicity assume $x_t$ are unit length. 
%%Now, imagine the algorithm somehow manages to consistently (and exactly) guess $x_t$ ahead of time. 
%%Then, it could use the rank $1$ projection $P_t = x_t x_t^\top $.
%%This results in the unique dimension reduction $U_t =  x_t$  and $y_t \gets U_t^\top x_t$. 
%%Note that $U_t y_t = x_t$ which means the objective is exactly zero. 
%%But, in the same time, $y_t$ are all identical one-dimensional vectors; all the vectors $x_t$ are essentially mapped to the scalar $1$.
%%The result is a structureless mapping that achieves an objective cost of zero.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsubsection{Stochastic Setting}
There are three recent results \cite{ACS13, MCJ13, BDF13} that efficiently approximate the PCA objective in Equation~\ref{pcacost}. 
They assume the input vectors $x_t$ are drawn i.i.d.\ from a fixed (and unknown) distribution. 
In this setting, after observing $n_0$ columns $x_t$ one can efficiently compute $U_{n_0} \in \iso_{d,k}$ that approximately spans the top $k$ left singular vectors of $X$.
Returning $y_t = 0^{k}$ for $t < n_0$ and $y_t = U^\top _{n_0} x_t$ for $t \ge n_0$ completes the algorithm.
This algorithm is provably correct if $n_0$ is independent of $n$, which is intuitively plausible but nontrivial to show.
%
While the stochastic setting is very common in machine learning (e.g.\ the PAC model), in online systems the data distribution is \emph{expected} to change, or at least drift, over time.
In systems that deal with abuse detection or prevention, one can expect an almost adversarial input.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


\subsection{Our contributions}\label{contribs}
We design a deterministic online PCA algorithm (see Algorithm~\ref{alg1} in Section~\ref{sec:inefficient}).
Our main result~(see Theorem~\ref{thm1}) shows that, for any $X = [x_1,\dots,x_n]$ in $\R^{d \times n}$, $k < d$ and $\eps > 0$ the proposed algorithm 
produces a set of vectors $y_1,\dots,y_n$ in $\R^{\ell}$ such that 
$$
\ALG \le \OPT_k+ \eps \|X\|_{\fro}^2
$$
where $\ell = \lceil 8k/\eps^2\rceil$ and $\ALG$ is the reconstruction error as defined in Equation~\ref{pcacost} (with target dimension $\ell$).
To the best of our knowledge, this is the first online algorithm in the literature attaining theoretical guarantees for the PCA objective.
The description of the algorithm and the proof of its correctness are given in Sections~\ref{sec:inefficient} and \ref{sec:proofs}.


While Algorithm~\ref{alg1} solves the main technical and conceptual difficulty in online PCA, it has some drawbacks. 
$\mathrm{I}$) It must assume that $\max_t \| x_t \|_2^2 \le \|X\|_{\fro}^2/\ell$. This is not an unreasonable assumption, because we would expect $\| x_t \|_2^2 \approx \|X\|_{\fro}^2/n$. Nevertheless, requiring it is a limitation.
$\mathrm{II}$) The algorithm requires $\|X\|_{\fro}^2$ as input, which is unreasonable to expect in an online setting. 
$\mathrm{III}$) It spends $\Omega(d^2)$ floating point operations per input vector and requires auxiliary $\Theta(d^2)$ space in memory.

Section~\ref{sec:extensions} shows that, at the cost of slightly increasing the target dimension and the additive error, one can address all of the issues above.
$\mathrm{I}$) We deal with arbitrary input vectors by special handling of large norm input vectors. This is a simple amendment to the algorithm which only doubles the required target dimension.
$\mathrm{II}$) Algorithm~\ref{alg2} avoids requiring $\|X\|_{\fro}$ as input by estimating it on the fly. 
A ``doubling argument'' analysis shows that the target dimension grows only to $O(k \log(n)/\eps^2)$.\footnote{Here, we assume that $\|x_t\|$ are polynomial in $n$.}
Bounding the target dimension by $O(k/\eps^3)$ requires a significant conceptual change to the algorithm and should be considered one of the main contributions of this paper.
$\mathrm{III}$) Algorithm~\ref{alg2} spends only $O(d k/\eps^3)$ floating point operations per input vector and uses only $O(dk/\eps^3)$ space by utilizing a streaming matrix approximation technique \cite{Liberty13}.

While the intuitions behind these extensions are quite natural, proving their correctness is technically intricate. 
We give an overview of these modifications in Section~\ref{sec:extensions} and defer most of the proofs to the appendix.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Online PCA algorithm} \label{sec:inefficient}
Algorithm~\ref{alg1} receives as input $X=[x_1,\dots,x_n]$ one vector at a time.
It also receives $k$, $\eps$ and $\|X\|_{\fro}^2$ (see Section~\ref{sec:extensions} for an extension of this algorithm that does not require $\| X \|_{\fro}^2$ as an input). 
The parameters $k$ and $\eps$ are used only to compute a sufficient target dimension $\ell = \lceil 8k/\eps^2\rceil$ which ensures that $\ALG \le \OPT_k + \eps \|X\|_{\fro}^2$.%\footnote{As a remark, notice that $k$ and $\eps$ are only used in computing $\ell$. Therefore, the guarantee of the algorithm holds simultaneously for all $k'$ and $\eps'$ for which $k'/{\eps'}^2 \le k/\eps^2$.}
For the sake of simplifying the algorithm we assume that $\max_t\|x_t\|_2^2\le\|X\|_{\fro}^2/\ell$. 
This is a reasonable assumption because we expect $\|x_t\|_2^2$ to be roughly $\|X\|_{\fro}^2/n$ and $n \gg \ell$.
Overcoming this assumption is somewhat cumbersome and uninspiring, see Section~\ref{sec:extensions} for details on that.
\begin{algorithm}
\begin{algorithmic}
\STATE {\bf input}: $X$, $k$, $\eps$, $\|X\|_{\fro}$
\STATE $\ell = \lceil 8k/\eps^2\rceil $
\STATE $U = 0^{d \times \ell}$;   $C = 0^{d \times d}$
\FOR {$t = 1,...,n$}
%\STATE {\bf assert:} $\|x_t\|_2^2 \le \|X\|_{\fro}^2/\ell$ \dg{this is already an assumption, this assertion makes the code ugly and less clean}
\STATE $r_t = x_t- U U^\top x_t$
\WHILE {$\|C + r_t r_t^\top \|_2 \geq 2\|X\|_{\fro}^2 /\ell$}
	\STATE $[u,\lambda] \gets \operatorname{TopEigenVectorAndValueOf}(C)$
	\STATE Add $u$ to the next all-zero column of $U$
	\STATE $C \gets C -  \lambda u u^\top$ 
	\STATE $r_t \gets x_t - U U^\top x_t$
\ENDWHILE
\STATE $C \gets C + r_t r_t^\top $
%\STATE $\tilde{x_t} = UU^\top x_t$
\STATE {\bf yield: }$y_t \gets U^\top x_t$
\ENDFOR
\end{algorithmic}
\caption{An online algorithm for Principal Component Analysis}\label{alg1}
\end{algorithm}
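To make the control flow concrete, here is a minimal pure-Python sketch of Algorithm~\ref{alg1}. The helper names are ours, the dimensions are toy-sized, and power iteration stands in for an exact eigensolver, so the eigenpairs are only approximate:

```python
# Pure-Python sketch of the online PCA algorithm (illustrative only).
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def top_eig(M, iters=200):
    # Power iteration: approximate top eigenpair of a symmetric PSD matrix M.
    v = [1.0] * len(M)
    for _ in range(iters):
        w = matvec(M, v)
        n = math.sqrt(sum(wi * wi for wi in w))
        if n == 0.0:                      # M is (numerically) zero
            return v, 0.0
        v = [wi / n for wi in w]
    lam = sum(vi * wi for vi, wi in zip(v, matvec(M, v)))
    return v, lam

def residual(x, U):
    # r = x - U U^T x, using that the stored columns of U are orthonormal
    r = list(x)
    for u in U:
        c = sum(u[i] * x[i] for i in range(len(x)))
        for i in range(len(x)):
            r[i] -= c * u[i]
    return r

def online_pca(xs, k, eps, frob2):
    d, ell = len(xs[0]), math.ceil(8 * k / eps ** 2)
    U, C, ys = [], [[0.0] * d for _ in range(d)], []
    for x in xs:
        r = residual(x, U)
        while True:
            M = [[C[i][j] + r[i] * r[j] for j in range(d)] for i in range(d)]
            if top_eig(M)[1] < 2 * frob2 / ell:   # while-loop condition
                break
            u, lam = top_eig(C)           # principal eigenpair of C
            U.append(u)                   # theory guarantees <= ell additions
            for i in range(d):
                for j in range(d):
                    C[i][j] -= lam * u[i] * u[j]
            r = residual(x, U)
        for i in range(d):
            for j in range(d):
                C[i][j] += r[i] * r[j]
        y = [sum(u[i] * x[i] for i in range(d)) for u in U]
        ys.append(y + [0.0] * (ell - len(U)))     # pad to fixed dimension ell
    return ys
```

For instance, feeding four copies of the same unit vector with $k=1$ and $\eps=2$ (so $\ell=2$ and the threshold is $\|X\|_{\fro}^2$) forces exactly one correction to $U$, after which the residuals vanish.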

%Algorithm~\ref{alg1} initializes $U$ and $C$ to be all-zeros matrices. 
%It subsequently updates these matrices appropriately. 
In Algorithm~\ref{alg1} the matrix $U$ is used to map $x_t$ to $y_t$ and is referred to as the projection matrix.\footnote{In linear algebra, a projection matrix is a matrix $P$ such that $P^2 =P$. Notice that $U$ is not a projection matrix in that sense.} 
The matrix $C$ accumulates the covariance of the residual errors $r_t = x_t - U U^\top x_t$.  
The algorithm starts with a rank-one update of $C$ as $C = C+r_1 r_1^\top$. Notice that by the assumption on $x_t$, we have that $\| r_1 r_1^\top\|_2 = \|r_1\|_2^2 \le \|X\|_{\fro}^2 /\ell$, and hence for $t=1$ the algorithm does not enter the while-loop. Then, for the second input vector $x_2$, the algorithm proceeds by checking the spectral norm of $C+r_2 r_2^\top = r_1 r_1^\top + r_2 r_2^\top$. If this does not exceed the threshold  
$2\|X\|_{\fro}^2 /\ell$ the algorithm keeps $U$ unchanged. 
It can go all the way to $t=n$ if this remains the case for all $t$.  
%
Notice that, in this case, $C = \sum_t r_t r_t^\top = \sum_t x_t x_t^\top = XX^\top$ (since $U = 0$ implies $r_t = x_t$). 
The condition $\| C \|_2 \le 2\|X\|_{\fro}^2 /\ell$ translates to $\|X\|_2^2 \le 2\|X\|_{\fro}^2 /\ell$.
This means the numeric rank\footnote{The numeric rank, or the stable rank, of a matrix $X$ is equal to $\|X\|_\fro^2/\|X\|_2^2$.} of $X$ is at least $4k/\eps^2$.
That, in turn, means that $\OPT_k$ is large because there is no good rank-$k$ approximation of $X$.
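Quantitatively, if $\|X\|_2^2 \le 2\|X\|_{\fro}^2/\ell$ then every squared singular value $\sigma_i^2(X)$ is at most $2\|X\|_{\fro}^2/\ell$, and therefore
$$
\OPT_k \;=\; \|X\|_{\fro}^2 - \sum_{i=1}^{k}\sigma_i^2(X) \;\ge\; \left(1 - \frac{2k}{\ell}\right)\|X\|_{\fro}^2 \;\ge\; \left(1 - \frac{\eps^2}{4}\right)\|X\|_{\fro}^2,
$$
using $\ell \ge 8k/\eps^2$. In this case even the trivial output $y_t = 0^\ell$ already achieves $\ALG \le \|X\|_{\fro}^2 \le \OPT_k + (\eps^2/4) \|X\|_{\fro}^2$.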

If, however, for some iterate $t$ the spectral norm of $ C+r_t r_t^\top$ exceeds the threshold $2\|X\|_{\fro}^2 /\ell$, the algorithm makes a ``correction'' to $U$ (and consequently to $r_t$) in order to ensure that this is not the case. Specifically, it updates $U$ with the principal eigenvector of $C$ at that instance of the algorithm. At the same time it updates $C$ (inside the while-loop) by removing this eigenvector. 
%The algorithm maintains that $C = \sum_{t} r_t r_t^\top - \sum_j \lambda_j u_j u_j^\top$.
By controlling $\|C\|_2$ the algorithm enforces that  $\| \sum_{t} r_t r_t^\top \|_2 \le  2\|X\|_{\fro}^2 /\ell$, which turns out to be a key component of online PCA.

\begin{theorem}\label{thm1} Let $X$, $k$, $\eps$ and $\|X\|_{\fro}^2$ be inputs to Algorithm~\ref{alg1}. 
Let $\ell = \lceil 8 k /\eps^2 \rceil $ and assume that $\max_t \|x_t\|_2^2 \le \|X\|_{\fro}^2/\ell$. 
The algorithm outputs vectors $y_1,\ldots,y_n$ in $\R^{\ell}$ such that
$$
\ALG = \min_{\Phi \in \iso_{d, \ell}} \sum_{t=1}^{n} \|x_t - \Phi y_t\|_2^2 \le \OPT_k+ \eps \|X\|_{\fro}^2.
$$
%Moreover, the inner while loop is executed at most $\ell' = \ell \cdot \min\{1, OPT_k/\|X\|_{\fro}^2 + \eps\}$ times. %
%\dg{why is the bound on $\ell'$ here interesting. It has no consequence.}
\end{theorem}
%\begin{proof}
\noindent Let $Y = [y_1,\ldots,y_n]$ and let $R$ be the $d \times n$ matrix whose columns are $r_t$. 
In Lemma~\ref{lem1} we prove that 
$\min_{ \Phi \in \iso_{d,\ell}} \|X- \Phi Y\|_{\fro}^2 \le \|R\|_{\fro}^2$.
In Lemma~\ref{lemRF} we prove that $\|R\|_{\fro}^2 \le \OPT_k + \sqrt{4k}\|X\|_{\fro}  \|R\|_2$
and in Lemma~\ref{lem2} we prove that $\|R\|_2^2 \leq 2\|X\|_{\fro}^2/\ell$.
Combining these and setting $\ell = \lceil 8 k/ \eps^2 \rceil $ completes the proof outline.
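Explicitly, chaining the three lemmas and using $\|R\|_2 \le \sqrt{2/\ell}\,\|X\|_{\fro}$ gives
\begin{eqnarray*}
\ALG \;\le\; \|R\|_{\fro}^2 &\le& \OPT_k + \sqrt{4k}\,\|X\|_{\fro}\|R\|_2 \;\le\; \OPT_k + 2\sqrt{\frac{2k}{\ell}}\,\|X\|_{\fro}^2 \;\le\; \OPT_k + \eps\|X\|_{\fro}^2,
\end{eqnarray*}
where the last inequality holds because $\ell \ge 8k/\eps^2$ implies $2\sqrt{2k/\ell} \le \eps$.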


To prove the algorithm is correct, we must show that it does not add more than $\ell$ vectors to $U$. Let that number be denoted by $\ell'$.
We show that $\ell' \le \ell \|R\|_{\fro}^2 / \|X\|_{\fro}^2$ by lower bounding the different values of $\lambda$ in the algorithm (Lemma~\ref{lem25}).
Observing that $\|R\|_{\fro}^2 \le \|X\|_{\fro}^2$ proves the claim. 
In fact, by using that $\|R\|_{\fro}^2 \le \OPT_k + \eps\|X\|_{\fro}^2$ we get a tighter upper bound $\ell' \le \ell(\OPT_k/\|X\|_{\fro}^2 + \eps)$.
Thus, in the typical case of $\OPT_k < (1-\eps)\|X\|_{\fro}^2$ the algorithm effectively uses a target dimension lower than $\ell$.
%\end{proof}



%%%To calculate the number of arithmetic operations, notice that the running time is dominated by checking the condition in the while statement and computing the top singular vector of $C \in \R^{d \times d}$.
%%%Each of these operations requires $\tilde{O}(d^2 \log(d))$ operations using the power method where the $\tilde{O}$ notation omits all dependencies other than on $d$.
%%%Since these are repeated $O(n)$ times, the total running time of the algorithm is $\tilde{O}(nd^2\log(d))$. 
%%%We do not try to derive the total running time very accurately. In terms of running time, this algorithm is strictly dominated by its efficient counterpart presented in Section~\ref{sec:extensions}.




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Proofs of main lemmas}\label{sec:proofs}

Let $X \in \R^{d \times n},$ $Y \in \R^{\ell \times n},$ $\tilde{X} \in \R^{d \times n}$ and
$R \in \R^{d \times n}$ denote matrices whose $t$'th columns are $x_t \in \R^d$, $y_t = U_t^\top x_t \in \R^{\ell}$, 
$\tilde{x}_t = U_t U_t^\top x_t \in \R^d$, and $r_t = (I-U_t U_t^\top) x_t \in \R^d$ respectively. 
%
Throughout the paper, denote by $U_t$ and $C_t$ the state of $U$ and $C$ \emph{after} the $t$'th iteration of the algorithm has concluded and before the $(t+1)$'st has begun.
For convenience, unless stated otherwise, $C$ and $U$ without a subscript refer to the state of these matrices after the algorithm has terminated.
%
Let $\ell = \lceil 8k/\eps^2\rceil$ and $\ell' \le \ell$ be the number of vectors $u$ inserted into $U$ in Algorithm~\ref{alg1}. 
%This is exactly the number of times the while-loop is executed. 
%
%We reserve the index $j$ to index the vectors $u$ (and corresponding $\lambda$) computed inside the while-loop in Algorithm~\ref{alg1}.
Let $\Lambda$ be a diagonal matrix holding the values $\lambda$ on the diagonal such that $\sum_{j=1}^{\ell'} \lambda_j u_j u_j^\top = U\Lambda U^\top$.


\begin{lemma}\label{lem1}
$\ALG \le \|R\|_{\fro}^2.$
\end{lemma}
\begin{proof}
\begin{eqnarray}
\ALG &=& \min_{ \Phi \in \iso_{d,\ell}} \|X- \Phi Y\|_{\fro}^2 \le \|X- U Y\|_{\fro}^2 \\
&=& \sum_{t=1}^{n} \|x_t - U U_t^\top x_t\|_2^2 = \sum_{t=1}^{n} \|x_t - U_tU_t^\top x_t\|_2^2 = \|R\|_{\fro}^2 \ .
\end{eqnarray}
The first inequality holds with equality for any $\Phi$ that equals $U$ on its $\ell'$ non-all-zero columns, which are orthonormal unit vectors by Observation~\ref{obs2prime}.
The second equality is due to $UU_t^\top = U_tU_t^\top$ which holds because $U$ restricted to the non all-zero columns of $U_t$ is equal to $U_t$.
\end{proof}


\begin{lemma} \label{lemRF}
$ 
\|R\|_{\fro}^2 \le \OPT_k + \sqrt{4k}\|X\|_{\fro} \| R \|_2
$
\end{lemma}
\begin{proof}
Let $\tilde{X} = [\tilde{x}_1,\ldots, \tilde{x}_n]$ hold the reconstructed vectors $\tilde{x}_t  = U_tU_t^\top x_t$. 
Note that $X =  \tilde{X} + R$. From the Pythagorean theorem and the fact that $r_t$ and $\tilde{x}_t$ are orthogonal we have $\|X\|_{\fro}^2 =  \|\tilde{X}\|_{\fro}^2 + \|R\|_{\fro}^2$.
In what follows, $v_1,\ldots,v_k$ denote the top $k$ left singular vectors of $X$.
%Using this we manipulate $\|R\|_{\fro}^2$ as follows:
\begin{eqnarray}
\|R\|_{\fro}^2 &=& \|X\|_{\fro}^2 - \|\tilde{X}\|_{\fro}^2 \le \|X\|_{\fro}^2 - \sum_{i=1}^{k}\|v_i^\top \tilde{X}\|_2^2 \label{eqn:this2}\\
&=& \|X\|_{\fro}^2 - \sum_{i=1}^{k}\|v_i^\top X - v_i^\top R\|_2^2 \label{eqn:this3}\\
%&=& \|X\|_{\fro}^2 - \left( \sum_{i=1}^{k}\left(\|v_i^\top X\|^2 - 2\left \langle v_i^\top X, v_i^\top R \right \rangle + \|v_i^\top R\|_2^2 \right) \right) \label{eqn:this4}\\
&\le& \|X\|_{\fro}^2 -  \sum_{i=1}^{k}\|v_i^\top X\|_2^2 + 2\sum_{i=1}^{k}\left |  v_i^\top X R^\top v_i \right |  \label{eqn:this4}\\
&\le& \OPT_k + 2\|R\|_2 \sum_{i=1}^{k} \|v_i^\top X\|_2 \le \OPT_k + 2\|R\|_2 \cdot \sqrt{k}\|X\|_{\fro} \label{eqn:this7}
\end{eqnarray}
%
In Equation~\ref{eqn:this4} we used the fact that for any two vectors $\alpha,\beta$: $\| \alpha - \beta \|_2^2  \ge \|\alpha\|_2^2 - 2 |\alpha^\top \beta|$. 
In Equation \ref{eqn:this7} we use that $\OPT_k = \|X\|_{\fro}^2 - \sum_{i=1}^k\|v_i^{\top}X\|_2^2$ and the 
Cauchy--Schwarz inequality $\sum_{i=1}^{k} \|v_i^\top X\|_2 \le (k\sum_{i=1}^{k} \|v_i^\top X\|^2_2)^{1/2} \le \sqrt{k}\|X\|_{\fro}$.
\end{proof}

\begin{lemma}[Upper bound for $\lambda_j$]
\label{lem24}
For all $j=1,2,...,\ell'$: 
$ \lambda_j \le   2\|X\|_{\fro}^2/ \ell.$
\end{lemma}
\begin{proof}
The updates to $C$ that increase its largest eigenvalue are $C \gets C + r_tr_t^\top$. 
But, these updates occur only after the while-loop condition fails, which means that $\|C + r_tr_t^\top\|_2 < 2\|X\|_{\fro}^2/ \ell$ at the time of every such update.
Also note that the update $C \gets C - \lambda uu^\top$ only reduces the spectral norm of $C$.
\end{proof}

\begin{observation} \label{obs1} \normalfont
During the execution of the algorithm the matrix $C$ is symmetric and positive semidefinite.
Initially $C$ is the all-zeros matrix which is symmetric and positive semidefinite.
The update $C \gets C + r_tr_t^\top$ clearly preserves these properties.
The update $C \gets C - \lambda uu^\top$ also preserves these properties because $u$ is an eigenvector of $C$ at the time of the update with corresponding eigenvalue $\lambda$.
\end{observation}

\begin{observation} \label{obs2} \normalfont
After the end of each iteration, $CU = 0$. 
Note that $CUU^\top=0$ if and only if $CU = 0$. 
We prove by induction. Let $C'$ and $U'$ be the state of $C$ and $U$ at the end of the last iteration and assume that $C'U' = 0$.
If the while loop was not entered, $CU = (C' + r_t r_t^\top)U' = (C' + (I - U'{U'}^\top)x_t x_t^\top(I - U'{U'}^\top))U' = C'U' = 0$, since $(I - U'{U'}^\top)U' = 0$.
For every iteration of the while loop $CUU^\top = (C'- \lambda uu^\top)(U'{U'}^\top + uu^\top) = C'U'{U'}^\top + (C'u -\lambda u)u^\top - \lambda uu^\top U'{U'}^\top$.
Because $u$ is an eigenvector of $C'$ with eigenvalue $\lambda$ we have $C'u = \lambda u$. This means 
$(C'u -\lambda u)= 0$ and $\lambda uu^\top U'{U'}^\top = uu^\top C' U'{U'}^\top = 0$.
\end{observation}

\begin{observation} \label{obs2prime} \normalfont
The non-zero columns of $U$ are mutually orthogonal unit vectors.
First, they are eigenvectors of $C$ and thus unit norm by the standard convention.
Second, when $u$ is added to $U$ it is an eigenvector of $C$ with eigenvalue $\lambda$.
Thus, $u^\top U = \frac{1}{\lambda}u^\top CU = 0$ by Observation~\ref{obs2}.
\end{observation}

\begin{observation} \label{obs3} \normalfont
After the conclusion of iteration $t$ we have $C = \sum_{t' \le t} r_{t'}r_{t'}^\top - \sum_j \lambda_j u_j u_j^\top$ where $j$ ranges over the directions added thus far.
Specifically, when the algorithm terminates $RR^\top = C + U\Lambda U^\top$.
\end{observation}


\begin{lemma}\label{lem2}
$\|R\|_2^2 \leq 2\|X\|_{\fro}^2/\ell.$
\end{lemma}
\begin{proof}  
Let $z$ be the top eigenvector of $RR^\top$ and let $z_C$ and $z_U$ be its (orthogonal) projections onto the column spans of $C$ and $U\Lambda U^\top$.
\begin{eqnarray*}
\|R\|_2^2 &=& \|RR^\top z\|_2 = \|(C + U\Lambda U^\top)z\|_2 = \sqrt{\|Cz_C\|_2^2 + \|U\Lambda U^\top z_U\|_2^2} \\
&\le& \sqrt{\|C\|_2^2\|z_C\|_2^2 + \|U\Lambda U^\top\|_2^2 \| z_U\|_2^2} \le (2\|X\|_{\fro}^2/\ell) \sqrt{\| z_C\|_2^2 + \| z_U\|_2^2} \le 2\|X\|_{\fro}^2/\ell
\end{eqnarray*}
Here we used $RR^\top = C + U\Lambda U^\top$ (Observation~\ref{obs3}), $CU = 0$ (Observation~\ref{obs2}), $\|z_C\|_2^2 + \|z_U\|_2^2 \le \|z\|_2^2 = 1$, and  
$\|C\|_2 \le 2\|X\|_{\fro}^2/\ell$, $\|U\Lambda U^\top\|_2 \le 2\|X\|_{\fro}^2/\ell$ (Lemma~\ref{lem24}).
\end{proof}


\begin{lemma}[Lower bound for $\lambda_j$]
\label{lem25}
For all $j=1,2,...,\ell'$: 
$ \lambda_j \ge \|X\|_{\fro}^2/\ell.$
\end{lemma}
\begin{proof}
%Each $\lambda_j$ corresponds to the largest eigenvalue of some $C_i$. Hence, it suffices to argue that for all $i=1,\dots,z$: $\lambda_1(C_i)=\|C_i\|_2 \ge \|X\|_{\fro}^2/(2\ell)$.
Note that the condition in the while loop is $\|C+r_t r_t^\top\|_2 \geq 2\|X\|_{\fro}^2/\ell$. 
At the point in the algorithm where this condition holds, we have
%When this is the case (i.e., the condition holds), then for the iterate $t$ and some $i$:
%\begin{equation} \label{eqn:lowerbound}
$
\|C\|_2 \geq \|C+ r_t r_t^\top\|_2 -\|r_t r_t^\top\|_2 \geq \|X\|_{\fro}^2/\ell.
$
%\end{equation}
The first inequality follows by the triangle inequality. 
The second inequality uses $\|C+r_t r_t^\top\|_2 \geq 2\|X\|_{\fro}^2/\ell$ and $\| r_t r_t^\top \|_2 \le \|X\|_{\fro}^2/\ell$.  
To verify that $\| r_t r_t^\top \|_2 \le \|X\|_{\fro}^2/\ell$ recall that $r_t = (I - UU^\top) x_t$. 
Note that $I - UU^\top$ is a projection matrix and $\|r_t\|_2^2 \le \|x_t\|_2^2 \le \|X\|_{\fro}^2/\ell$ by our assumption on the input.
\end{proof}




\begin{lemma}\label{lem3}
The while loop in the algorithm occurs at most $\ell'$ times and 
$$ \ell' \le  \ell \cdot \min\{1, \OPT_k/\|X\|_{\fro}^2 + \eps\}.$$
\end{lemma}
\begin{proof}
We bound the trace of the matrix $U \Lambda U^\top$ from above and below.
\begin{equation}\label{ellboundeqn1}
\|R\|_{\fro}^2 = \tr(RR^\top) \ge \tr(U \Lambda U^\top) = \tr(\Lambda) =
\sum_{j=1}^{\ell'}  \lambda_j  \ge \ell'  \|X\|_{\fro}^2/\ell
\end{equation}
The first inequality is correct because $RR^\top = C+U \Lambda U^\top$ (from Observation~\ref{obs3}) coupled with the fact that $C$ is symmetric and positive semidefinite (Observation~\ref{obs1}).
The last inequality follows from Lemma~\ref{lem25}. 
Since $\|X\|_{\fro}^2 \ge \|R\|_{\fro}^2$ we get $\ell' \le \ell$.
Also, by  Theorem~\ref{thm1} we have that $\|R\|_{\fro}^2 \leq \OPT_k + \eps\|X\|_{\fro}^2$. 
Combining with Equation \ref{ellboundeqn1}, $\ell' \le \ell(\OPT_k/\|X\|_{\fro}^2 +  \eps)$.
\end{proof}
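The update rules analysed in the lemmas above can be collected into a short runnable sketch (ours, in Python; it mirrors the invariants of Observations~\ref{obs1}--\ref{obs3} rather than reproducing Algorithm~\ref{alg1} verbatim, and it assumes $\|x_t\|_2^2 \le \|X\|_{\fro}^2/\ell$ as in the analysis). The assertions check Lemmas~\ref{lem24}, \ref{lem25}, \ref{lem2} and the bound $\ell' \le \ell$ on a random input with one dominant direction:

```python
import numpy as np

def online_pca(X, ell):
    """Sketch (ours) of the core loop analysed above.

    Assumes every column satisfies ||x_t||_2^2 <= ||X||_F^2 / ell.
    Returns the directions U, the residual matrix R, and the
    deflated eigenvalues lambda_j.
    """
    d, n = X.shape
    thresh = 2.0 * np.linalg.norm(X, 'fro') ** 2 / ell   # 2||X||_F^2 / ell
    C = np.zeros((d, d))
    U = np.zeros((d, 0))
    R = np.zeros((d, 0))
    lams = []
    for t in range(n):
        x = X[:, t]
        r = x - U @ (U.T @ x)                            # residual of x_t
        while np.linalg.norm(C + np.outer(r, r), 2) >= thresh:
            w, V = np.linalg.eigh(C)
            lam, u = w[-1], V[:, -1]                     # top eigenpair of C
            C -= lam * np.outer(u, u)                    # C <- C - lam u u^T
            U = np.column_stack([U, u])                  # add direction u
            lams.append(lam)
            r = x - U @ (U.T @ x)                        # recompute residual
        C += np.outer(r, r)                              # C <- C + r r^T
        R = np.column_stack([R, r])
    return U, R, np.array(lams)

rng = np.random.default_rng(1)
d, n, ell = 20, 200, 10
X = rng.standard_normal((d, n))
X[0, :] *= 5.0                                           # one dominant direction
fro2 = np.linalg.norm(X, 'fro') ** 2
assert (X ** 2).sum(axis=0).max() <= fro2 / ell          # input assumption

U, R, lams = online_pca(X, ell)
assert len(lams) <= ell                                  # ell' <= ell
assert np.all(lams <= 2 * fro2 / ell + 1e-8)             # upper bound on lambda_j
assert np.all(lams >= fro2 / ell - 1e-8)                 # lower bound on lambda_j
assert np.linalg.norm(R, 2) ** 2 <= 2 * fro2 / ell + 1e-6  # ||R||_2^2 bound
```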

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Efficient and general online PCA algorithm}\label{sec:extensions}


In this section we explain the revisions needed to solve the issues raised in Section \ref{contribs}.
We start by removing the assumption that $\|x_t\|_2^2 \le \|X\|_{\fro}^2/\ell$.
The idea is, given a vector $x_t$ whose squared norm is larger than $\|X\|_{\fro}^2/\ell$, to compute a unit vector $u$ in the direction of its residual, add $u$ to $U$, and project $C$ onto $(I - uu^\top)$.
The analysis resulting from this change is almost identical to the one above.
Observe that no cost is incurred by these vectors, and that there can be at most $\ell$ vectors $x_t$ such that $\|x_t\|_2^2 \ge \|X\|_{\fro}^2/\ell$.
Therefore, the modified algorithm requires at most twice the target dimension.

We can also avoid requiring $\|X\|_\fro$ as input rather simply. 
The while-loop condition in Algorithm~\ref{alg1} can be revised to $\|C+r_t r_t^\top\|_2 > \|X_t\|_{\fro}^2/\ell$ where $X_t = [x_1,\dots,x_t]$ is the data matrix observed so far.  
Using the same analysis as in Section~\ref{sec:proofs} we can argue the following: during the time in which $\|X_t\|_{\fro}^2 \in \|x_1\|_2^2\cdot(2^\tau,2^{\tau+1}]$ the number of added directions cannot exceed $O(\ell)$.
Thus, assuming $\|x_t\|^2_2 \le \|x_1\|_2^2 \poly(n)$ the target dimension is bounded by $O(\ell \log(\|X\|_{\fro}^2/\|x_1\|_2^2))  = O(k \log(n)/\eps^2)$.


In Algorithm~\ref{alg2} we obtain a bound on the target dimension that does not depend on $n$.
Below we give the main theorem about Algorithm~\ref{alg2} and some proof sketches. 
Unfortunately, due to space constraints the full description and proofs are deferred to the appendix. 
\begin{theorem}\label{thm2}
Algorithm~\ref{alg2} guarantees that $\ALG \leq \OPT_k + \eps \|X\|_{\fro}^2$. 
For $\delta = \min\{ \eps, \eps^2\|X\|_{\fro}^2/\OPT_k \}$ it requires $O(dk/\delta^3)$ space and $O(n d k/\delta^3 +  \log (n) dk^3/\delta^6)$
arithmetic operations, assuming the norms $\|x_t\|_2$ are polynomial in $n$.
The target dimension of Algorithm~\ref{alg2} is at most $k/\delta^3$.
%%Let $\alpha = \frac{\OPT}{\|X\|_{\fro}^2}$ and let $\eps >0$. Algorithm~\ref{alg2}, given parameters $k,\delta$ with $\delta = \left( \sqrt{\alpha+\eps} - \sqrt{\alpha} \right)^2/42 = O(\min\{ \eps, \eps^2/\alpha\})$ guaranties that $\ALG \leq \OPT_k + \eps \|X\|_{\fro}^2$. 
%%It requires $O(dk/\delta^3)$ space and $O(n d k/\delta^3 +  \log (n) dk^3/\delta^6)$
%%arithmetic operations assuming $\|x_t\|_2$ are polynomial in $n$.
%%The target dimension of Algorithm~\ref{alg2} is at most $k/\delta^3$.
%%
%%%Algorithm~\ref{alg2} guaranties that $\ALG \leq \left( \sqrt{\OPT_k} + 6.48 \sqrt{\eps} \|X_n\|_F \right)^2$. 
%%%%$ \ALG \leq \OPT_k + 50 \eps \|X\|^2_F $. \zk{50?}
%%%It requires $O(dk/\eps^3)$ space and $O(n d k/\eps^3 +  \log (n) dk^3/\eps^6)$
%%%arithmetic operations assuming $\|x_t\|_2$ are polynomial in $n$.
%%%The target dimension of Algorithm~\ref{alg2} is at most $k/\eps^3$.
\end{theorem}
To obtain the target dimension of $k/\eps^3$ stated in Theorem~\ref{thm2} we argue as follows.
If many directions were already added to $U$, instead of adding new directions we can replace the ``least useful'' existing ones.
These replacements introduce the need for two new analyses. 
One is a modified bound on the total number of times a new direction is added, and the second is a bound on the loss of the algorithm, taking the removals of directions into account. 
We use a potential function of roughly $\|C\|_{\fro}^2$ and show that replacing an old vector with a new one always increases the potential function; 
on the other hand, the function is always upper bounded by the squared Frobenius norm of the observed data. 
This allows us to bound the number of new vectors added to $U$ during the time that $\|X_t\|_{\fro}^2 \in \|x_1\|_2^2\cdot(2^\tau,2^{\tau+1}]$ by $O(\ell)$ (for an appropriate $\ell$). 
For bounding the loss, we show that in each replacement, because we choose to discard the least useful column of $U$, we suffer an additive loss of at most $\frac{\eps}{\ell} \|X_t\|_{\fro}^2$. 
These two bounds combined prove that the additional loss suffered due to replacing directions is no more than roughly $\eps\|X\|_{\fro}^2$. 
Since we already suffer a similar additive penalty, this completes our analysis.


%\subsection{Time and Memory Efficiency}
To deal with time and memory efficiency issues, Algorithm~\ref{alg2} uses existing matrix sketching techniques; we chose the Frequent Directions sketch introduced by Liberty in \cite{Liberty13}. In Algorithm~\ref{alg1} we take $C$ to be the actual projection of $RR^\top$ onto the space orthogonal to the one spanned by $U$ (though the algorithm is not stated exactly this way, it is equivalent to what is done there). 
%Doing this generate a $d \times d$ matrix which potentially requires $d^2$ memory and updating it requires $d^2$ time. 
In Algorithm~\ref{alg2} we take $C$ to be the projection of $ZZ^\top$ onto the space orthogonal to the one spanned by $U$, where $Z$ is a sketch of $R$ for which the spectral norm $\|ZZ^\top - RR^\top\|_2$ is bounded.
The same analysis goes through with only technical modifications.
%\dg{the reader has no idea at this point what $Z$ is. Better to say that in the inefficient algorithm, instead of directly maintaining the matrix $C$ which is nothing but the projection of $RR^{\top}(I-UU^{\top})$, one can maintain the matrix $R$ and compute $C=RR^{\top}(I-UU^{\top})$. In the efficient algorithm, we use the latter update but instead of maintaining $R$ we keep only a sketch of it, denoted by $Z$, and compute each time $C=ZZ^{\top}(I-UU^{\top})$}  




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\bibliographystyle{unsrt}
\bibliography{bib.bib}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\appendix
\section{Implementation of the extensions presented in Section~\ref{sec:extensions}}\label{appendix}

In this section we modify Algorithm~\ref{alg1} in order to (1) remove the assumption of knowledge of $\|X\|_{\fro}^2$ and (2) improve the time and space complexity. These two improvements are made at the expense of sacrificing the accuracy \emph{slightly}. 
%In order to reduce the space and time complexity, the main idea is to keep a sketch of $C$ instead of $C$ itself. 
%So in Algorithm 2, the matrix $C$ is a sketch yet it plays the role its corresponding matrix plays in Algorithm 1. 
%The idea that helps us avoid assuming knowledge of $\|X\|_{\fro}^2$ is to make decisions of increasing the dimension of $U$ based upon ``current knowledge'' about the Frobenius norm of the input matrix, i.e., the Frobenius norm squared of the 
%sub-matrix of $X$ observed up to the current iteration. Without an additional trick this leads to a target dimension dependent on $\log(n)$, the number of columns. In order to remove this dependency over $n$, we work with a fixed, albeit slightly larger budget than before, and when we require to add a column to a full matrix $U$, we simply remove a column that is ``the least helpful''.
%This is the one that covers the least amount of energy in the vectors observed so far. 
To sidestep trivial technicalities, we assume some prior knowledge of the quantity $\|X\|_{\fro}^2$; 
we merely assume that we have some quantity $w_0$ such that $w_0 \leq \|X\|_{\fro}^2$ and $w_0 \gg \|x_t\|_2^2$ for all $t$. The assumption of knowing $w_0$ can be removed by working with a buffer of roughly $k/\eps^3$ columns. Since we already require this amount of space, the resulting increase in the memory complexity is asymptotically negligible.



\begin{algorithm}[h!]
\begin{algorithmic}
\STATE {\bf input:} $X$, $k$, $\eps \in (0,\frac{1}{15})$, $w_0$ (which defaults to $w_0 = 0$)
%\STATE $U = 0^{d \times k/\eps^3}$, $W = 0^{d \times k/\eps^3}$, $Z = 0^{d \times k/\eps^2}$, $w=0$
\STATE $U = 0^{d \times k/\eps^3}$, $Z = 0^{d \times k/\eps^2}$, $w=0$, $w_U = 0^{k/\eps^3}$
\FOR {$t = 1,...,n$}
\STATE $w = w + \|x_t\|_2^2$
%\STATE sketch $x_t$ into $W$
\STATE $r_t = x_t - U U^\top  x_t$
\STATE $C = (I-U U^\top)ZZ^\top(I-U U^\top)$
\WHILE {$\|C+r_t r_t^\top\|_2 \geq \max\{w_0,w\} \cdot \frac{\eps^2}{k}$}
	\STATE $u, \lambda = \operatorname{topEigenvectorAndValue}(C)$
	\STATE $(w_U)_u \gets \lambda$
%	\STATE If $U$ has a zero column, write $u$ in its place. Otherwise, write $u$ instead of the column $v$ of $U$ with the minimal quantity of $\|Wv\|^2+\|Zv\|^2$
	\STATE If $U$ has a zero column, write $u$ in its place. Otherwise, write $u$ instead of the column $v$ of $U$ with the minimal quantity of $(w_U)_v$
	\STATE $C = (I-U U^\top)ZZ^\top(I-U U^\top)$
	\STATE $r_t = x_t - U U^\top  x_t$
\ENDWHILE
\STATE Sketch $r_t$ into $Z$
\STATE For each non-zero column $u$ in $U$, $(w_U)_u \gets (w_U)_u + \ip{x_t,u}^2$
\STATE {\bf yield: }$y_t \gets U^\top x_t$
\ENDFOR
\end{algorithmic}
\caption{An efficient online PCA algorithm} 
\label{alg2}
\end{algorithm}


\subsection{Notations}
For a variable $\xi$ in the algorithm, where $\xi$ can either be a matrix $Z,X,C$ or a vector $r,x$ we denote by $\xi_t$ the value of $\xi$ at the end of iteration number $t$. $\xi_0$ will denote the value of the variable at the beginning of the algorithm. For variables that may change during the while loop we denote by $\xi_{t,z}$ the value of $\xi$ at the end of the $z$'th while loop in the $t$'th iteration. In particular, if the while loop was never entered the value of $\xi$ at the end of the iteration $t$ will be equal to $\xi_{t,0}$. For such $\xi$ that may change during the while loop notice that $\xi_t$ is its value at the end of the iteration, meaning after the last while loop has ended.

An exception to the above is for the variable $w$. Here, we denote by $w_t$ the value of $\max\{w_0,w\}$ at the end of iteration number $t$.
%\dg{so this is not really an exception - its the same convention}\zk{it is because $w$ is a variable and $w_t$ is not its value at time $t$}. 
We denote by $n$ the index of the last iteration of the algorithm. In particular $\xi_n$ denotes the value of $\xi$ upon the termination of the algorithm.


\subsection{Matrix Sketching}
We use the Frequent Directions matrix sketching algorithm presented by Liberty in \cite{Liberty13}. It sketches a matrix $R$ whose columns are observed one by one in a streaming fashion.




\begin{lemma} \label{lem:sketch prop} \normalfont
Let $R_1,\ldots,R_t,\ldots$ be a sequence of matrices with columns of dimension $d$ where $R_{t+1}$ is obtained by appending a column vector $r_{t+1}$ to $R_t$. Let $Z_1,\ldots,Z_t,\ldots$ be the corresponding sketches of $R$ obtained by adding these columns as they arrive, according to the Frequent Directions algorithm of \cite{Liberty13} with parameter $\ell$.
\begin{enumerate}
\item The amortized time required by the sketch to add a single column is $O(\ell d)$. \label{itm:amrtzd}

\item Each $Z_t$ is a matrix of dimensions $d \times O(\ell)$. \label{itm:space}

\item Let $u$ be a left singular vector of $Z_t$ with singular value $\sigma$ and assume that $r_{t+1}$ is orthogonal to $u$. Then $u$ is a singular vector of $Z_{t+1}$ with singular value $\leq \sigma$. \label{itm:r u orth Z}

\item For any vector $u$ and time $t$ it holds that $\|Z_t u\| \leq \|R_t u\|$. \label{itm:Z leq R}

\item For any $t$ there exists a positive semidefinite matrix $E_t$ such that $\|E_t\|_2 \leq \|R_t\|_F^2 / \ell$ and $Z_t Z_t^\top + E_t = R_t R_t^\top$. \label{itm: Z E plus R}
\end{enumerate}
\end{lemma}
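For illustration, here is a compact column-streaming implementation consistent with the properties above (our sketch of the standard double-buffer variant: a $d \times 2\ell$ buffer, shrinking by the $\ell$-th largest squared singular value when full; \cite{Liberty13} states the algorithm for row streams and the exact constants there may differ). The assertions check item~\ref{itm: Z E plus R} on random data:

```python
import numpy as np

class FrequentDirections:
    """Column-streaming Frequent Directions sketch (our illustrative variant).

    Maintains Z with 2*ell columns such that Z Z^T <= R R^T (as PSD matrices)
    and ||R R^T - Z Z^T||_2 <= ||R||_F^2 / ell.
    """
    def __init__(self, d, ell):
        self.ell = ell
        self.Z = np.zeros((d, 2 * ell))  # double buffer
        self.next = 0                    # index of the next free column

    def append(self, r):
        if self.next == 2 * self.ell:    # buffer full: shrink
            Uc, s, _ = np.linalg.svd(self.Z, full_matrices=False)
            s2 = np.maximum(s ** 2 - s[self.ell - 1] ** 2, 0.0)
            self.Z = Uc * np.sqrt(s2)    # columns ell-1, ..., 2*ell-1 become zero
            self.next = self.ell - 1
        self.Z[:, self.next] = r
        self.next += 1

rng = np.random.default_rng(2)
d, ell, n = 30, 5, 100
fd = FrequentDirections(d, ell)
R = rng.standard_normal((d, n))
for t in range(n):
    fd.append(R[:, t])

E = R @ R.T - fd.Z @ fd.Z.T              # the matrix E_t of the lemma
assert np.linalg.eigvalsh(E).min() >= -1e-6                    # E is PSD
assert np.linalg.norm(E, 2) <= np.linalg.norm(R, 'fro') ** 2 / ell
```

Each shrink subtracts at least $\ell$ times the shrinkage value from the sketch's squared Frobenius norm, which is what yields the $\|R\|_F^2/\ell$ spectral error bound.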



\subsection{Proof of Theorem \ref{thm2}}

%%\dg{the structure of this subsection is really hard to follow with all lemmas and proofs. Perhaps it is better to use the same TOP $\rightarrow$ DOWN structure we used in the inefficient part. This way the reader know towards which end we going here. Suggestion: start with lemmas that deal with the fact that we need to remove vectors (which is the main non totally-intuitive modification):
%%\begin{enumerate}
%%\item lemma \ref{lem1stronger}
%%\item lemma \ref{lem:ditch loss}
%%\item lemma \ref{lem:v has low weight}
%%\end{enumerate}
%%}

\begin{observation} \label{obs:U orth str}
At all times, the matrix $U^\top U$ is a diagonal matrix with either zeros or ones across the diagonal. In other words, the non-zero column vectors of $U$, at any time point are orthonormal.
\end{observation}
\begin{proof}
We prove the claim for any $U_{t,z}$ by induction on $(t,z)$. For $t=0$ the claim is trivial as $U_0=0$. For the induction step we need only consider $U_{t,z}$ for $z>0$, since $U_{t,0} = U_{t-1}$. Let $u$ be the new vector in $U_{t,z}$. Since $u$ is defined as an eigenvector of 
$$C_{t,z-1} = (I-U_{t,z-1} U_{t,z-1}^\top) Z_{t-1} Z_{t-1}^\top (I-U_{t,z-1} U_{t,z-1}^\top) \ ,$$
we have that $U_{t,z-1}^\top u=0$ and the claim immediately follows.
\end{proof}


\begin{lemma} \label{lem:us singular}
For all $t,z$, the non-zero columns of $U_{t,z}$ are left singular vectors of $Z_{t-1}$ (possibly with singular value zero). In particular, the claim holds for the final values of the matrices $U$ and $Z$.
\end{lemma}

\begin{proof}
We prove the claim by induction on $(t,z)$, with a trivial base case of $t=1$ where $Z_{t-1}=0$. Let $t>1$. For $z=0$, each non-zero column vector $u$ of $U_{t,0}$ is a non-zero column vector of $U_{t-1,z}$ for the largest valid $z$ w.r.t.\ $t-1$. By the induction hypothesis, $u$ is a singular vector of $Z_{t-2}$. According to Observation~\ref{obs:U orth str}, $r_{t-1}$, the column appended to $Z_{t-2}$, is orthogonal to $u$; hence Lemma~\ref{lem:sketch prop}, item~\ref{itm:r u orth Z}, indicates that $u$ is a singular vector of $Z_{t-1}$ as required.

Consider now $z>0$. In this case $u$ is a vector added in the while loop. Recall that $u$ is defined as an eigenvector of 
$$C_{t,z-1} = (I-U_{t,z-1} U_{t,z-1}^\top) Z_{t-1} Z_{t-1}^\top (I-U_{t,z-1} U_{t,z-1}^\top) $$ 
According to our induction hypothesis, all of the non-zero column vectors of $U_{t,z-1}$ are singular vectors of $Z_{t-1}$, hence
$$Z_{t-1} Z_{t-1}^\top = C_{t,z-1} + U_{t,z-1} U_{t,z-1}^\top Z_{t-1} Z_{t-1}^\top U_{t,z-1} U_{t,z-1}^\top  \ . $$ 
The two equalities above imply that any eigenvector of $C_{t,z-1}$ is an eigenvector of $Z_{t-1} Z_{t-1}^\top$ as well. It follows that $u$ is a singular vector of $Z_{t-1}$ as required, thus proving the claim.

%\dg{the latter part of the proof, in particular the last two equations, is really hard to follow. Sounds to me that if we already know that the eigenvectors of $U_{t,z-1}U_{t,z-1}^{\top}$ are a subset of the eigenvectors of $Z_{t-1}Z_{t-1}^{\top}$ then it is clear that we simply get $C$ by removing those eigenvectors along with corresponding eigenvalues from $Z_{t-1}Z_{t-1}^{\top}$. Thus a singular vector of $C_{t,z-1}$ is in particular a singular vector of $Z_{t-1}Z_{t-1}^{\top}$} \zk{thats really a question of taste. I prefer the current proof}
\end{proof}


\begin{lemma} \label{lem:v has low weight}
Let $v$ be a column vector of $U_{t,z}$ that is not in $U_{t,z+1}$. Let $(t_v,z_v)$ be the earliest time stamp from which $v$ was a column vector in $U$ consecutively up to time $(t,z)$. It holds that 
$$ \|Z_{t-1}v\|_2^2 + \|(X_{t-1}-X_{t_v-1}) v\|_2^2 \leq \frac{2\eps^3}{k} w_{t-1} $$
%$$ \|Z_{t-1}v\|^2 + \|X_{t-1} v\|^2 \leq \frac{3\eps^3}{k} w_{t-1} $$
\end{lemma}
\begin{proof}
We denote by $(w_U)_u$ the values of the $w_U$ vector for the different directions $u$ at time $(t,z)$.
Let $\lambda_v$ be the eigenvalue associated with $v$ at the time it was added to $U$. At that time $\|Zv\|_2^2 = \|Cv\|_2 = \lambda_v$ (recall that $v$ is chosen orthogonal to the columns of $U$, hence $(I-UU^\top)v = v$ and $\|Zv\|_2^2 = v^\top C v = \lambda_v$).
Furthermore, since $v$ remained a column of $U$ from its insertion up to time $t$, all vectors added to the sketch $Z$ during that period are orthogonal to $v$. Lemma~\ref{lem:sketch prop}, item~\ref{itm:r u orth Z}, then shows that 
$$\|Z_{t-1}v\|_2^2 \leq \lambda_v  .$$
Since $v$ was a column vector of $U$ from iteration $t_v$ up to iteration $t$, we have that 
$$ \|Z_{t-1}v\|_2^2 + \|(X_{t-1}-X_{t_v-1}) v\|_2^2 \leq \lambda_v + \sum_{\tau=t_v}^t \ip{v,x_\tau}^2 = (w_U)_v $$


It remains to bound the quantity of $(w_U)_v$. We will bound the sum $\sum_{u \in U_{t,z}} (w_U)_u$ and use the fact that $v$ is the minimizer of the corresponding expression hence $(w_U)_v$ is upper bounded by $\sum_u (w_U)_u \cdot \frac{\eps^3}{k}$. 

Let $t_u$ be the index of the iteration in which $u$ was inserted into $U$. It is easy to verify that for $\lambda_u = \|C_{t_u-1}u\|_2$ it holds that 
$$ (w_U)_u = \lambda_u + \|(X_{t-1}-X_{t_u-1}) u\|_2^2 $$
Now, since $\lambda_u = u^\top C_{t_u-1} u = \|Z_{t_u-1}u\|_2^2$ and, by Lemma~\ref{lem:sketch prop} item~\ref{itm:Z leq R}, $\|Z_{t_u-1}u\|_2 \leq \|R_{t_u-1}u\|_2$, we have that
$$ (w_U)_u \leq \|R_{t_u-1} u\|_2^2 + \|(X_{t-1}-X_{t_u-1}) u\|_2^2 $$
Finally, 
$$ \sum_u (w_U)_u \leq \sum_u \|(X_{t-1}-X_{t_u-1}) u\|_2^2 + \|R_{t_u-1}u\|_2^2 \stackrel{(i)}{\leq} $$
$$\sum_u \|X_{t-1}u\|_2^2 + \|R_{t-1}u\|_2^2 \stackrel{(ii)}{\leq} \|X_{t-1}\|_F^2 + \|R_{t-1}\|_F^2 \stackrel{(iii)}{\leq} 2\|X_{t-1}\|_F^2 = 2 w_{t-1}  $$
Inequality $(i)$ is immediate from the definitions of $X_t,R_t$ as concatenations of $t$ column vectors. Inequality $(ii)$ follows since for any set of orthonormal vectors $\{u\}$ and any matrix $A$ it holds that $\sum_u \|Au\|_2^2 \leq \|A\|_{\fro}^2$. Inequality $(iii)$ is due to the fact that each column of $R$ is obtained by projecting a column of $X$ onto some subspace.

\end{proof}



\begin{lemma} \label{lem: bound C sketch}
At all times $\|C_{t,z}\|_2 \leq w_{t-1} \cdot \frac{\eps^2}{k}$.
\end{lemma} 
\begin{proof}
We prove the claim by induction over $(t,z)$. The base case $t=0$ is trivial. For $t>0$, $z=0$: if the while loop in iteration $t$ was entered, we have that $C_{t,0}=C_{t-1,z}$ for some $z$, and since $w_{t-1} \geq w_{t-2}$ the claim holds. If the while loop of iteration $t$ was not entered, the failed loop condition asserts that $\|C_{t,z}\|_2=\|C_t\|_2 \leq w_{t-1} \cdot \frac{\eps^2}{k}$.

Consider now $t,z>0$. We have that 
$$C_{t,z-1} = (I-U_{t,z-1} U_{t,z-1}^\top) Z_{t-1} Z_{t-1}^\top (I-U_{t,z-1} U_{t,z-1}^\top) $$
$$C_{t,z} = (I-U_{t,z} U_{t,z}^\top) Z_{t-1} Z_{t-1}^\top (I-U_{t,z} U_{t,z}^\top) $$
If $U_{t,z}$ is obtained by writing $u$ instead of a zero column of $U_{t,z-1}$ then $C_{t,z}$ is a projection of $C_{t,z-1}$ and the claim holds due to the induction hypothesis. If not, $u$ is inserted instead of some vector $v$. According to Lemmas~\ref{lem:us singular} and~\ref{lem:v has low weight}, $v$ is an eigenvector of $Z_{t-1} Z_{t-1}^\top$ with eigenvalue $\lambda_v \leq w_{t-1} \cdot \frac{2\eps^3}{k} \leq w_{t-1} \cdot \frac{\eps^2}{k}$, assuming $\eps \leq 0.5$. It follows that $C_{t,z}$ is a projection of $C_{t,z-1}+\lambda_v v v^\top$. Now, since $C_{t,z-1}v=0$ (as $v$ is a column vector of $U_{t,z-1}$) we have that
$$\|C_{t,z}\|_2 \leq \|C_{t,z-1}+\lambda_v v v^\top\|_2 = \max\{ \|C_{t,z-1}\|_2, \|\lambda_v v v^\top\|_2 \}$$
According to our induction hypothesis and the bound for $\lambda_v$, the above expression is bounded by $w_{t-1} \cdot \frac{\eps^2}{k}$ as required. 
\end{proof}


\begin{lemma} \label{lem:lambda upper sketch}
Let $u$ be a vector that is not in $U_{t,z}$ but is in $U_{t,z+1}$. Let $\lambda$ be the eigenvalue associated with it w.r.t.\ $C_{t,z}$. It holds that $\lambda \leq w_{t-1} \cdot \frac{\eps^2}{k}$. Furthermore, if $u$ is a column vector in $U_{t',z'}$ for all $t' \geq t, z' \geq z$, it holds that $\|u^\top Z_n\|_2^2 \leq \lambda \leq \|X_n\|_F^2 \cdot \frac{\eps^2}{k} $.
\end{lemma}
\begin{proof}
Since $u$ is chosen as the top eigenvector of $C_{t,z}$ we have by Lemma~\ref{lem: bound C sketch} that
$$\lambda = \|C_{t,z} u\|_2 = \|C_{t,z}\|_2 \leq   w_{t-1} \cdot \frac{\eps^2}{k} $$

For the second claim in the lemma we note that since $u$ is an eigenvector of $C_{t,z}= (I-U_{t,z} U_{t,z}^\top)  Z_{t-1}Z_{t-1}^\top  (I-U_{t,z} U_{t,z}^\top)$, we have that $U_{t,z}^\top u=0$, hence
$$ \|u^\top Z_{t-1}\|_2^2 = \|u^\top (I-U_{t,z} U_{t,z}^\top) Z_{t-1}\|_2^2 = u^\top C_{t,z} u \leq \|C_{t,z} u\|_2 \leq w_{t-1} \cdot \frac{\eps^2}{k}$$
Since $u$ is assumed to be an element of $U$ throughout the running time of the algorithm, it holds that for all future vectors $r$ added to the sketch $Z$, $u$ is orthogonal to $r$. The claim now follows from Lemma~\ref{lem:sketch prop} item~\ref{itm:r u orth Z}.
\end{proof}


\begin{lemma}\label{lem2strong}
$ \|R_n\|_2^2 \leq \frac{2\eps^2}{k} \|X_n\|_{\mathrm{F}}^2.$
\end{lemma}
\begin{proof}
Let $u_1,\ldots,u_{\ell}$ and $\lambda_1,\ldots,\lambda_{\ell}$ be the columns of $U_n$ and their corresponding eigenvalues in $C$ at the time of their addition to $U$.
From Lemmas~\ref{lem:us singular} and~\ref{lem:lambda upper sketch} we have that each $u_j$ is an eigenvector of $ZZ^\top$ with eigenvalue $\lambda_j' \leq \lambda_j \leq \frac{\eps^2}{k}\|X_n\|_F^2$. It follows that
$$ \|Z_n Z_n^\top \|_2 \leq \max \left\{ \frac{\eps^2}{k}\|X_n\|_F^2, \|(I-U_n U_n^\top) Z_n Z_n^\top (I-U_n U_n^\top)\|_2 \right\} $$
$$=  \max \left\{ \frac{\eps^2}{k}\|X_n\|_F^2, \|C_n\|_2 \right\} \leq \frac{\eps^2}{k}\|X_n\|_F^2  \ .$$
The last inequality is due to Lemma~\ref{lem: bound C sketch}.

Next, by the sketching property (Lemma~\ref{lem:sketch prop} item~\ref{itm: Z E plus R}), for an appropriate positive semidefinite matrix $E$: $Z_n Z_n^\top + E = R_n R_n^\top$, with $\|E\|_2 \leq \frac{\eps^2}{k} \|R_n\|_F^2 $. As the columns of $R$ are projections of those of $X$ we have that $\|R_n\|_F^2 \leq \|X_n\|_F^2$, hence
$$ \|R_n\|_2^2 = \|R_n R_n^\top \|_2 = \|Z_n Z_n^\top + E\|_2 \leq \|Z_n Z_n^\top \|_2 + \|E\|_2 \leq \frac{2\eps^2}{k}\|X_n\|_F^2$$
\end{proof}



\begin{lemma} \label{thm1strong}
$$ 
%\|R\|_F^2 \le OPT_k + \sqrt{\frac{8 k}{\ell}}\cdot \|X\|_{\mathrm{F}}^2
%4 \OPT_k + \sqrt{ \frac{ 36 k}{\ell} } || X ||_{\mathrm{F}}^2
\|R_n\|_F^2 \le \OPT_k + 4\eps \|X_n\|_{\mathrm{F}}^2
$$
\end{lemma}
\begin{proof}
The lemma can be proven analogously to Theorem~\ref{thm1}, as the only difference is the bound on $\|R_n\|_2^2$. 
\end{proof}

\begin{lemma}\label{lem3stronger}
Assume that for all $t$, $\|x_t\|_2^2 \leq w_t \cdot \frac{\eps^2}{5k}$. Assume that $\eps \leq 0.1$. For $\tau>0$ consider the iterations of the algorithm during which $w_t \in [2^\tau, 2^{\tau+1})$. During this time, the while loop will be executed at most $5 k/\eps^2$ times. 
%\zk{we can remove the requirement of $\eps < 0.1$ by having the appropriate sketches of size $\max\{10, 1/\eps\} \cdot k/\eps^2$.}
\end{lemma}
\begin{proof}
For the proof, we define a potential function 
$$\Phi_{t,z} =  \mathrm{Trace}(R_{t-1}R_{t-1}^\top) - \mathrm{Trace}(C_{t,z}) \ .$$
We first notice that since $C$ is clearly PSD,
$$\Phi_{t,z} \leq \mathrm{Trace}(R_{t-1} R_{t-1}^\top) = \|R_{t-1}\|_F^2 \leq \|X_{t-1}\|_F^2 \leq 2^{\tau+1} \ .$$
The inequality $\|R_{t-1}\|_F^2 \leq \|X_{t-1}\|_F^2$ holds since the columns of $R$ are projections of those of $X$, and the last inequality since $\|X_{t-1}\|_F^2 \leq w_{t-1} < 2^{\tau+1}$. 
We will show that first, $\Phi$ is non-decreasing with time and furthermore, for valid $z>0$, $\Phi_{t,z} \geq \Phi_{t,z-1}+0.2\frac{\eps^2}{k} 2^{\tau+1}$. The result immediately follows.


Consider a pair $(t,z)$ followed by the pair of indices $(t+1,0)$. Here, $\Phi_{t+1,0}-\Phi_{t,z} = \|r_t\|_2^2 \geq 0$ hence for such pairs the potential is non-decreasing. Now consider some pair $(t,z)$ for $z>0$. Since $(t,z)$ is a valid pair it holds that
\begin{eqnarray}\label{eq:C t z lb}
\|C_{t,z-1}\|_2 &= &\|C_{t,z-1} + r_{t,z-1} r_{t,z-1}^\top - r_{t,z-1} r_{t,z-1}^\top\|_2 \nonumber \\
&\geq &  \|C_{t,z-1} + r_{t,z-1} r_{t,z-1}^\top \|_2 - \|r_{t,z-1} r_{t,z-1}^\top\|_2 \geq w_t \frac{\eps^2}{k} (1-0.2) \nonumber \\
&\geq & 0.4 \cdot 2^{\tau+1} \frac{\eps^2}{k}  
\end{eqnarray}
Denote by $u$ the column vector in $U_{t,z}$ that is not in $U_{t,z-1}$. Let $U'$ be the matrix obtained by appending the column $u$ to the matrix $U_{t,z-1}$. Let 
$$C' = (I-U'(U')^\top)Z_{t-1}Z_{t-1}^\top (I-U'(U')^\top) = (I-uu^\top) C_{t,z-1} (I-uu^\top)$$
Since $u$ is the top eigenvector of $C_{t,z-1}$, projecting it out removes exactly the eigenvalue $\|C_{t,z-1}\|_2$ from the trace, so by equation~\eqref{eq:C t z lb}
$$\mathrm{Trace}(C_{t,z-1}) - \mathrm{Trace}(C') = \|C_{t,z-1}\|_2 \geq 0.4 \cdot 2^{\tau+1} \frac{\eps^2}{k}$$

If $U_{t,z-1}$ had a zero column then $C_{t,z}=C'$ and we are done. If not, let $v$ be the vector that was replaced by $u$.
According to Lemma~\ref{lem:us singular}, $v$ is a singular vector of $Z_{t-1}$. According to Lemma~\ref{lem:v has low weight} and $\eps < 0.1$ we have that
$$\|Z_{t-1}v\|_2^2 \leq \frac{2\eps^3}{k}\|X_{t-1}\|_F^2 \leq \frac{\eps^2}{5k} 2^{\tau+1} \ .$$
Hence, 
$$C_{t,z} = C' + \|Z_{t-1}v\|_2^2 \cdot vv^\top \ ,$$
meaning that 
$$ \mathrm{Trace}(C_{t,z-1} - C_{t,z}) = \mathrm{Trace}(C_{t,z-1} -C') - \mathrm{Trace}(C_{t,z} -C')  \geq 0.4 \cdot 2^{\tau+1}\frac{\eps^2}{k} - \frac{\eps^2}{5k} 2^{\tau+1} = \frac{\eps^2}{5k} 2^{\tau+1} \ . $$
Since $\Phi_{t,z} - \Phi_{t,z-1} = \mathrm{Trace}(C_{t,z-1}) - \mathrm{Trace}(C_{t,z})$, this is exactly the claimed increase in the potential.

We conclude that as required $\Phi$ is non-decreasing over time and in each iteration of the while loop, increases by at least $\frac{\eps^2}{5k} 2^{\tau+1}$. Since $\Phi$ is upper bounded by $2^{\tau+1}$ during the discussed iterations, the lemma follows.
\end{proof}



\begin{lemma}\label{lem1stronger}
Let $(v_1,t_1'+1,t_1+1),\ldots,(v_j,t_j'+1,t_j+1),\ldots$ be the sequence of triplets of vectors removed from $U$, the times on which they were added to $U$ and the times on which they were removed from $U$.
%Let $(v_1,t_1+1),\ldots,(v_j,t_j+1),\ldots$ be the sequence of pairs of vectors removed from $U$, and the iteration times during which they were moved.
$$\ALG \le \left(\|R_n\|_F+ 2\sqrt{\sum_j \|(X_{t_j}-X_{t_j'}) v_j\|_2^2}  \right)^2  .$$
\end{lemma}
\begin{proof}
For any time $t$ denote by $U_t$ the matrix $U$ in the end of iteration $t$, by $U_t^{(1)}$ the outcome of zeroing-out every column of $U_t$ that is different from the corresponding column in $U_n$ and by $U_t^{(2)}$ its complement, that is the outcome of zeroing-out every column in $U_t$ that is identical to the corresponding column in $U_n$. In the same way define $U^{(2)}$ to be the outcome of zeroing-out columns in $U_n$ that are all zeros in $U_t^{(2)}$.

It holds that,
\begin{eqnarray*}
\|x_t - U_n U_t^{\top}x_t\|_2^2 &=& \|x_t - (U_t + U_n - U_t)U_t^{\top}x_t\|_2^2 \\
&\leq & \left({\|x_t - U_tU_t^{\top}x_t\|_2 + \|(U_n-U_t)U_t^{\top}x_t\|_2}\right)^2 \\
& = & \left({\|r_t\|_2 + \|(U^{(2)}-U_t^{(2)})(U_t^{(2)})^{\top}x_t\|}\right)^2 \\
& \leq & \left({\|r_t\|_2 + 2\|(U_t^{(2)})^{\top}x_t\|_2}\right)^2 
\end{eqnarray*}

Summing over all times $t$ we have,
\begin{eqnarray*}
\ALG &\leq&\sum_{t=1}^n\left({\|r_t\|_2 + 2\|(U_t^{(2)})^{\top}x_t\|_2}\right)^2 \\
&=& \sum_{t=1}^n\|r_t\|_2^2 + 4\|r_t\|_2\|(U_t^{(2)})^{\top}x_t\|_2 + 4\|(U_t^{(2)})^{\top}x_t\|_2^2 \\
& \leq & \|R_n\|_{\mathrm{F}}^2 + 4\sqrt{\sum_{t=1}^n\|r_t\|_2^2}\sqrt{\sum_{t=1}^n\|(U_t^{(2)})^{\top}x_t\|_2^2} + 4\sum_t\|(U_t^{(2)})^{\top}x_t\|_2^2
\end{eqnarray*}

The last inequality follows from applying the Cauchy-Schwarz inequality to the dot product between the vectors $(\|r_1\|_2, \|r_2\|_2,\ldots,\|r_n\|_2)$ and $(\|(U_1^{(2)})^{\top}x_1\|_2, \|(U_2^{(2)})^{\top}x_2\|_2,\ldots,\|(U_n^{(2)})^{\top}x_n\|_2)$.

Since $U_t^{(2)}$ contains only vectors that were columns of $U$ at time $t$ but were later replaced, and hence are not present in $U_n$, we have that $\|(U_t^{(2)})^{\top}x_t\|_2^2 \leq \sum_{j:t_j' < t \leq t_j}(x_t^{\top}v_j)^2$. Summing over $t$ and exchanging the order of summation gives
$$\sum_{t=1}^n\|(U_t^{(2)})^{\top}x_t\|_2^2 \leq \sum_{j}\sum_{t=t_j'+1}^{t_j}(x_t^{\top}v_j)^2 = \sum_j\|(X_{t_j}-X_{t_j'})v_j\|_2^2 .$$
Thus,

\begin{eqnarray*}
\ALG & \leq & \|R_n\|_{\mathrm{F}}^2 + 4\Vert{R_n}\Vert_{\mathrm{F}}\sqrt{\sum_j\|(X_{t_j}-X_{t_j'})v_j\|_2^2} + 4\sum_j\|(X_{t_j}-X_{t_j'})v_j\|_2^2 \\
&=& \left({\|R_n\|_{\mathrm{F}} + 2\sqrt{\sum_j\|(X_{t_j}-X_{t_j'})v_j\|_2^2}}\right)^2
\end{eqnarray*}
\end{proof} 


\begin{lemma} \label{lem:ditch loss}
Let $(v_1,t_1'+1,t_1+1),\ldots,(v_j,t_j'+1,t_j+1),\ldots$ be the sequence of triplets consisting of the vectors removed from $U$, the times at which they were added to $U$, and the times at which they were removed from $U$. Then
$$\sum_j \|(X_{t_j}-X_{t_j'}) v_j\|_2^2 \leq 20\eps \|X_n\|_F^2  .$$
\end{lemma}
\begin{proof}
For some $\tau>0$, consider the portion of the execution during which $w_t \in [2^{\tau},2^{\tau+1})$. According to Lemma~\ref{lem3stronger}, at most $\frac{5k}{\eps^2}$ vectors were removed from $U$ during that period. According to Lemma~\ref{lem:v has low weight}, for each such $v_j$ it holds that
$$ \|(X_{t_j}-X_{t_j'}) v_j\|_2^2 \leq w_{t_j} \cdot \frac{2\eps^3}{k} \leq \frac{2\eps^3}{k} 2^{\tau+1} .$$
It follows that the total contribution of the vectors removed from $U$ during this period is at most
$$ \frac{5k}{\eps^2} \cdot \frac{2\eps^3 }{k} 2^{\tau+1} = 10\eps \cdot 2^{\tau+1} .$$
The entire sum can now be bounded by a geometric series whose last term corresponds to $\tau = \log_2(\|X_n\|_F^2)$, which proves the lemma.
\end{proof}


\begin{corollary}
$$\ALG_{k,\eps} \leq \left( \sqrt{\OPT_k} + \left(\sqrt{4}+\sqrt{20}\right) \sqrt{\eps} \|X_n\|_F \right)^2 \leq \left( \sqrt{\OPT_k} + 6.48 \sqrt{\eps} \|X_n\|_F \right)^2$$
In particular, for $\eps = \delta^2 \cdot \OPT_k/\|X_n\|_F^2$,
$$ \ALG_{k,\eps} = \OPT_k (1+O(\delta)) $$
\end{corollary}
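The second claim follows by direct substitution: for $\eps = \delta^2 \cdot \OPT_k/\|X_n\|_F^2$ we have $\sqrt{\eps}\,\|X_n\|_F = \delta\sqrt{\OPT_k}$, and therefore
$$ \ALG_{k,\eps} \leq \left(\sqrt{\OPT_k} + 6.48\,\delta\sqrt{\OPT_k}\right)^2 = \OPT_k\left(1+6.48\,\delta\right)^2 = \OPT_k\left(1+O(\delta)\right) .$$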



\begin{lemma}[Time complexity]\label{lem5strong} 
Algorithm~\ref{alg2} requires $O(n \frac{d k}{\eps^3} + \log(w_n/w_0) \frac{dk^3}{\eps^6})$ arithmetic operations.
%Under the assumption that $w_0 \geq $
\end{lemma}
\begin{proof}
We begin by pointing out that the algorithm works with the $d \times d$ matrices $C$ and $ZZ^\top$. These matrices have bounded rank and, furthermore, we can always maintain a representation of them of the form $AA^\top$, where $A$ is a $d \times r$ matrix and $r$ is the rank of $C$ or $ZZ^\top$, respectively. Hence, all computations involving them require $O(dr)$ or $O(dr^2)$ arithmetic operations (corresponding to a product with a vector and to computing the eigendecomposition, respectively).
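For completeness, one standard way to obtain the $O(dr^2)$ bound for the eigendecomposition is via the thin SVD of the factor $A$: writing $A = W\Sigma V^\top$ with $W \in \R^{d\times r}$ having orthonormal columns, we get
$$ AA^\top = W\Sigma V^\top V\Sigma W^\top = W\Sigma^2 W^\top ,$$
so the eigenvectors of $AA^\top$ are the columns of $W$ and its nonzero eigenvalues are the squared singular values of $A$; the thin SVD of a $d \times r$ matrix can itself be computed in $O(dr^2)$ time.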


%\dg{the proof here is not clear. we need to better describe how we work with $C$ and $ZZ^{\top}$ without paying $d^2$ time and space}
Consider first the cost of the operations outside the while loop that do not involve the matrix $C$. By Lemma~\ref{lem:sketch prop}, item~\ref{itm:amrtzd}, the amortized time required in each iteration is $O(d\frac{k}{\eps^3})$. The two operations of computing $C$ and its top singular value may require $O(d\frac{k^2}{\eps^6})$ time.
However, these operations do not have to occur in every iteration if we use the following trick: when entering the while loop we require the norm of $C+r_t r_t^\top$ to be bounded by $w_t \frac{\eps^2}{k}$ rather than $w_t \frac{\eps^2}{2k}$. Assume now that we entered the while loop at time $t$. In this case, we do not need to check the condition of the while loop again until we reach an iteration $t'$ for which $w_{t'} \geq w_t(1+\frac{\eps^2}{2k})$. Other than checking the condition of the while loop there is no need to compute $C$; hence, the costly operations of the external loop need to be executed at most
$$O\left(\log(w_n/w_0) \cdot \frac{k}{\eps^2}\right)$$
times.
%Under the assumption that the first column is not significantly smaller in norm than the average norm of a column we get
%$$O(\log(w_n/w_0) \cdot \frac{k}{\eps^2}) = O(\log(n) \cdot \frac{k}{\eps^2})$$

We now proceed to analyze the running time of the inner while loop. Each such iteration requires $O(d\frac{k^2}{\eps^6})$ arithmetic operations. However, according to Lemma~\ref{lem3stronger}, the total number of such iterations is bounded by
$$O\left(\log(w_n/w_0) \cdot \frac{k}{\eps^2}\right) .$$
The lemma immediately follows.

\end{proof}

\begin{lemma}[Space complexity]\label{lem6strong}
 Algorithm~\ref{alg2} requires $O(dk/\eps^3)$ space.
\end{lemma}
\begin{proof}
Immediate from the description of the algorithm and the properties of the Frequent Directions sketch (Lemma~\ref{lem:sketch prop}, item~\ref{itm:space}).
\end{proof}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\end{document}





