\section{Introduction}
Principal Component Analysis (PCA) is one of the most well-known and widely used methods in scientific computing.
It is used for dimension reduction, signal denoising, regression, correlation analysis, visualization, etc.~\cite{dunteman1989principal}.
It can be described in many ways, but one that is particularly appealing in the context of \emph{online} algorithms is the following. 
Given $n$ ``high-dimensional'' vectors $\x_1,\dots, \x_n \in \mathbb{R}^d$ and a target dimension $k < d$, produce $n$ ``low-dimensional'' vectors $\y_1,\dots,\y_n \in \mathbb{R}^k$ such that the ``reconstruction error'' is minimized. 
To define the reconstruction error, let $\iso_{d,k}$ denote the set of $d \times k$ isometric embedding matrices:
%\begin{equation}\label{eqn:iso}
$
\iso_{d,k} =  \{  \matPhi \in \R^{d \times k} | \forall \y \in \R^k \;\; \|\matPhi \y\|_2 = \|\y\|_2\}.
$
%\end{equation}
Then, the PCA reconstruction error is: 
\begin{equation}\label{pcacost}
\min_{\matPhi \in \iso_{d,k}} \sum_{t=1}^{n} \|\x_t - \matPhi \y_t\|_2^2. 
\end{equation}

PCA can be cast as an optimization problem. Given $\x_1,\dots, \x_n \in \mathbb{R}^d$ and $k < d$, the PCA problem is: find $\y_1,\dots,\y_n \in \mathbb{R}^k$ that are the optimal solution to the problem:
\begin{equation}\label{pcaproblem}
\min_{\y_t \in \R^k} \left( \min_{\matPhi \in \iso_{d,k}} \sum_{t=1}^{n} \|\x_t - \matPhi \y_t\|_2^2 \right).
\end{equation}
The solution to this problem goes through computing the Singular Value Decomposition (SVD) of the matrix 
$\matX =[\x_1,\dots,\x_n]\in \R^{d \times n}.$
Throughout the paper we assume that
$d <n$ and $\rank(\matX)=d$.
%
%The SVD of $\matX$ is $\matX=\mat\matU_{\matX} \matSig_{\matX} \matV_{\matX}^\top,$ where 
%$\mat\matU_{\matX} \in \R^{d \times d}$ and $\matV_{\matX} \in \R^{d \times n}$ contain the left and right singular vectors of $\matX,$ respectively, and $\matSig_{\matX} \in \R^{d \times d}$ contain its singular values on the diagonal.
Let $\matU_k \in \R^{d \times k}$ contain only the $k < d$ left singular vectors corresponding to the top $k$ singular values of $\matX$.
Then, the optimal PCA vectors $\y_t$ are $\y_t = \matU_k^\top \x_t$. Equivalently, if 
$\matY  = [\y_1,\dots,\y_n]\in \R^{k \times n},$
then $\matY = \matU_k^\top \matX$. Also, the ``best'' isometry matrix is $\matPhi = \matU_k$. 
We denote the minimum possible value in the PCA optimization problem as $\OPT_k$:
\begin{equation}\label{pcabest}
\OPT_k :=  \| \matX -  \matU_k \matU_k^\top \matX\|_{\mathrm{F}}^2 =
%\sum_{i=1}^n \|\x_t -  \matU_k \matU_k^\top \x_t\|_2^2 = 
\min_{\y_t \in \R^k} \left(\min_{\matPhi \in \iso_{d,k}} \sum_{t=1}^{n} \|\x_t - \matPhi \y_t\|_2^2 \right). 
\end{equation}
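As a concrete numerical illustration (ours, using NumPy; the dimensions are arbitrary), the optimal solution described above follows directly from the SVD, and $\OPT_k$ equals the sum of the squared tail singular values:

```python
import numpy as np

# Illustration (ours): offline PCA via the SVD, as described above.
# The dimensions d, n, k are arbitrary; any d < n with rank(X) = d works.
rng = np.random.default_rng(0)
d, n, k = 5, 20, 2
X = rng.standard_normal((d, n))  # columns are the input vectors x_t

U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_k = U[:, :k]       # top-k left singular vectors of X
Y = U_k.T @ X        # optimal outputs y_t = U_k^T x_t, stacked as columns

# OPT_k = ||X - U_k U_k^T X||_F^2, which equals
# sigma_{k+1}^2 + ... + sigma_d^2.
opt_k = np.linalg.norm(X - U_k @ Y, "fro") ** 2
assert np.isclose(opt_k, np.sum(s[k:] ** 2))
```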

Computing  the optimal $\y_t = \matU_k^\top \x_t$ naively requires several passes over the matrix $\matX$.
Power iteration based methods for computing $\matU_k$ are memory and CPU efficient but require $\omega(1)$ passes over $\matX$.
Two passes also naively suffice: one to compute $\matX \matX^\top$, from which $\matU_k$ is computed, and one to generate the mapping $\y_t = \matU_k^\top \x_t$.
The bottleneck is in computing $\matX \matX^\top$ which demands $\Omega(d^2)$ auxiliary space (in memory) and $\Omega(d^2)$ operations per vector $\x_t$ (assuming they are dense).
This is prohibitive even for moderate values of $d$.
A significant amount of research went into reducing the computational overhead of obtaining a good approximation of $\matX \matX^\top$ in one pass \cite{FriezeKannanVempala1998, DrineasKannan2003, DeshpandeV06, Sarlos06, RudelsonVershyninMatrixSampling2007, tygert07PNAS, Liberty13,Phillips14}.
Still, a second pass is needed to produce the reduced dimension vectors $\y_t$.
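The naive two-pass scheme can be sketched as follows (our illustration, with NumPy; the dimensions are arbitrary):

```python
import numpy as np

# Sketch (ours) of the naive two-pass scheme: pass one accumulates the
# d x d matrix X X^T, costing Theta(d^2) space and Omega(d^2) work per
# dense vector; pass two outputs y_t = U_k^T x_t.
rng = np.random.default_rng(1)
d, n, k = 6, 50, 2
X = rng.standard_normal((d, n))

# Pass 1: accumulate X X^T one column at a time.
C = np.zeros((d, d))
for t in range(n):
    x = X[:, t]
    C += np.outer(x, x)               # Omega(d^2) operations per vector

# The eigenvectors of X X^T are the left singular vectors of X.
eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
U_k = eigvecs[:, ::-1][:, :k]         # top-k directions

# Pass 2: generate the mapping y_t = U_k^T x_t.
Y = np.column_stack([U_k.T @ X[:, t] for t in range(n)])
```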

\subsection{Online PCA}
In the online setting, the algorithm receives the input vectors $\x_t$ one after the other and must always output $\y_t$ before receiving $\x_{t+1}$.
The cost of the online algorithm is measured as in the offline case:
\begin{equation*}
\ALG = \min_{\matPhi \in \iso_{d,\ell}} \sum_{t=1}^{n} \|\x_t - \matPhi \y_t\|_2^2 \ .
\end{equation*}
Note that the target dimension of the algorithm, $\ell$, is potentially larger than $k$ to compensate for the handicap of operating online.
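For a fixed output sequence $\y_t$, the inner minimum over $\matPhi$ can be evaluated in hindsight: it is an orthogonal Procrustes problem solved by the SVD of $\matX \matY^\top$. A small numerical sketch (ours; the online baseline producing $\y_t$ is deliberately trivial and only illustrates the protocol):

```python
import numpy as np

# Sketch (ours): evaluating ALG for a committed output sequence y_t.
# The minimizing isometry is Phi = U V^T, where X Y^T = U S V^T
# (orthogonal Procrustes). The y_t here come from a trivial online
# baseline that keeps the first ell coordinates of each x_t.
rng = np.random.default_rng(2)
d, n, ell = 8, 30, 3
X = rng.standard_normal((d, n))
Y = X[:ell, :].copy()                  # y_t committed before seeing x_{t+1}

U, _, Vt = np.linalg.svd(X @ Y.T, full_matrices=False)
Phi = U @ Vt                           # best isometry in iso_{d, ell}
alg = np.linalg.norm(X - Phi @ Y, "fro") ** 2
```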

This is a natural model for PCA when a downstream online %(rotation invariant) 
algorithm is applied to $\y_t$. Examples include online algorithms for clustering ($k$-means, $k$-median), regression, classification (SVM, logistic regression), facility location, $k$-server, etc.
By operating on vectors of reduced dimension, these algorithms run more efficiently, but there is a much more important reason to apply them post-PCA.

%PCA denoises the data. Arguably, this is the most significant reason for PCA being such a popular and successful preprocessing stage in data mining.
%Even when a significant portion of the Frobenius norm of $\matX$ is attributed to isotropic noise, PCA can often still recover the signal.
%This is the reason that clustering, for example, the denoised vectors $\y_t$ often gives better qualitative results than clustering $\x_t$ directly.
%Notice that in this setting the algorithm cannot retroactively change past decisions.
%Furthermore, future decisions should try to stay consistent with past ones, even if those were misguided.

Our model departs from earlier definitions of online PCA. We briefly review three other definitions, point out the differences, and highlight their limitations.

\subsubsection{Random projections}
Most similar to our work is the result of Sarlos \cite{Sarlos06}, which uses the \emph{random projection method} to do online PCA. Using random projections, $\y_t = \matS^\top \x_t$ where 
$\matS \in \R^{d \times \ell}$ is generated randomly and independently from the data. 
For example, each element of $\matS$ can be $\pm 1$ with equal probability (Theorem 4.4 in~\cite{clarkson2009numerical}) or drawn from a normal Gaussian distribution (Theorem 10.5 in~\cite{HalkoMT11}).
Then, with constant probability and for $\ell = \Theta(k/\varepsilon)$
%\begin{equation*}%\label{pcacw}
$
\min_{\matPsi \in \R^{d \times \ell}} \sum_{t=1}^{n} \|\x_t - \matPsi  \y_t\|_2^2 \le (1+\varepsilon)\OPT_k.
$
%\end{equation*}
Here, the best reconstruction matrix is $\matPsi = \matX \matY^{\dagger}$, which is \emph{not} an isometry in general.\footnote{The notation $\matY^{\dagger}$ stands for the Moore--Penrose inverse, or pseudoinverse, of $\matY$.}
We claim that this seemingly minute departure from our model is actually very significant.
Note that the matrix $\matS$ exhibits the ``Johnson--Lindenstrauss'' property~\cite{JohnsonLindenstrauss84, GuptaDasgupta06, Achlioptas03}. 
Roughly speaking, this means that the vectors $\y_t$ approximately preserve the lengths, angles, and distances between all the vectors $\x_t$, thereby preserving the noise and signal in $\x_t$ equally well. This is not surprising given that $\matS$ is generated independently from the data.
Observe that to nullify the noise component $\matPsi = \matX \matY^{\dagger}$ must be far from being an isometry, and that $\matPsi = \matX(\matS^\top \matX)^\dagger$ can only be computed after the entire matrix has been observed.
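The random-projection scheme can be sketched as follows (our illustration, with NumPy; dimensions arbitrary):

```python
import numpy as np

# Sketch (ours) of the random-projection scheme: S has i.i.d. +/-1
# entries and is fixed before any data is seen, so y_t = S^T x_t can be
# output online. The best reconstruction matrix Psi = X Y^+ is only
# available in hindsight and is not an isometry in general.
rng = np.random.default_rng(3)
d, n, ell = 10, 40, 4
X = rng.standard_normal((d, n))

S = rng.choice([-1.0, 1.0], size=(d, ell))   # data-independent projection
Y = S.T @ X                                  # y_t = S^T x_t, emitted online

Psi = X @ np.linalg.pinv(Y)                  # computed after the stream ends
err = np.linalg.norm(X - Psi @ Y, "fro") ** 2
```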

For example, let $\matPhi \in \iso_{d,k}$ be the optimal PCA projection for $\matX$. 
Consider $\y_t \in \R^\ell$ whose first $k$ coordinates contain $\matPhi^\top \x_t$ and the remaining $\ell-k$ coordinates contain an arbitrary vector $\z_t \in \R^{\ell-k}$.
In the case where $\|\z_t\|_2 \gg \|\matPhi^\top \x_t\|_2$ the geometric arrangement of $\y_t$ potentially shares very little with that of the signal in $\x_t$.
Yet, 
$\min_{\matPsi \in \R^{d \times \ell}} \sum_{t=1}^{n} \|\x_t - \matPsi  \y_t\|_2^2 = \OPT_k,$
by setting $\matPsi = (\matPhi | 0^{d \times (\ell -k)})$.
This would have been impossible had $\matPsi$ been restricted to being an isometry.

\subsubsection{Regret minimization}
A regret minimization approach to online PCA was investigated in \cite{Warmuth07randomizedonline,NieKW13}. 
In their setting of online PCA, at time $t$, \emph{before} receiving the vector $\x_t$, the algorithm produces a rank-$k$ projection matrix $\matP_t \in \R^{d \times d}$.\footnote{Here, $\matP_t$ is a square projection matrix with $\matP_t^2 = \matP_t$.}
The authors present two methods for computing projections $\matP_t$ such that the quantity 
$\sum_t \| \x_t - \matP_t^\top \x_t \|_2^2$ converges to $\OPT_k$ in the usual no-regret sense.
%
Since each $\matP_t$ can be written as $\matP_t= \matU_t \matU_t^\top,$ for $\matU_t \in \iso_{d,k},$ it would seem that setting $\y_t = \matU_t^\top \x_t$ should solve our problem. 
Alas, the decomposition $\matP_t= \matU_t \matU_t^\top$ (and therefore $\y_t$) is underdetermined.
Even if we ignore this issue, each $\y_t$ can be reconstructed by a different $\matU_t$.
To see why this objective is problematic for the sake of dimension reduction, consider our setting where we can observe $\x_t$ before outputting $\y_t$.
One can simply choose the rank $1$ projection $\matP_t = \x_t\x_t^\top / \|\x_t\|_2^{2}$. 
On the one hand this gives $\sum_t \| \x_t - \matP_t \x_t \|_2^2 = 0$. On the other, it clearly does not provide meaningful dimension reduction.
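This degeneracy is easy to verify numerically (our illustration, with NumPy):

```python
import numpy as np

# Illustration (ours) of the degeneracy noted above: if x_t may be seen
# before committing, the rank-1 projection P_t = x_t x_t^T / ||x_t||^2
# drives the regret objective to zero without reducing the dimension.
rng = np.random.default_rng(4)
d, n = 7, 15
X = rng.standard_normal((d, n))

cost = 0.0
for t in range(n):
    x = X[:, t]
    P = np.outer(x, x) / (x @ x)   # projection onto span(x_t)
    cost += np.linalg.norm(x - P @ x) ** 2

assert np.isclose(cost, 0.0)       # zero cost, no dimension reduction
```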

\subsubsection{Stochastic setting}

There are three recent results \cite{ACS13, MCJ13, BDF13} that efficiently approximate the PCA objective in Equation~\eqref{pcacost}. 
They assume the input vectors $\x_t$ are drawn i.i.d.\ from a fixed (and unknown) distribution. 
In this setting, after observing $n_0$ columns $\x_t$, one can efficiently compute $\matU_{n_0} \in \iso_{d,k}$ that approximately spans the top $k$ singular vectors of $\matX$.
Returning $\y_t = 0^{k}$ for $t < n_0$ and $\y_t = \matU^\top _{n_0} \x_t$ for $t \ge n_0$ completes the algorithm.
This algorithm is provably correct when $n_0$ can be chosen independently of the stream length $n$, which is intuitively plausible but non-trivial to show.
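The scheme can be sketched as follows (our illustration, with NumPy; the value of $n_0$ and the input distribution are placeholders):

```python
import numpy as np

# Sketch (ours) of the stochastic-setting scheme: estimate the top-k
# subspace from the first n0 samples, output y_t = 0 during the warm-up,
# and y_t = U_{n0}^T x_t afterwards. n0 here is illustrative only.
rng = np.random.default_rng(5)
d, n, k, n0 = 6, 100, 2, 30
X = rng.standard_normal((d, n))    # stand-in for i.i.d. draws

U_n0, _, _ = np.linalg.svd(X[:, :n0], full_matrices=False)
U_n0 = U_n0[:, :k]                 # approximate top-k subspace

Y = np.zeros((k, n))
for t in range(n0, n):             # y_t = 0 for t < n0
    Y[:, t] = U_n0.T @ X[:, t]
```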
%
While the stochastic setting is very common in machine learning (e.g.\ the PAC model), in online systems the data distribution is \emph{expected} to change, or at least drift, over time.
In systems that deal with abuse detection or prevention, one can expect an almost adversarial input.

\subsection{Summary of contributions}\label{contribs}
Our first contribution is a deterministic online algorithm (see Algorithm~\ref{alg1} in Section~\ref{sec:inefficient}) for the standard PCA objective in Eqn~\eqref{pcaproblem}.
Our main result~(see Theorem~\ref{thm1}) shows that, for any $\matX = [\x_1,\dots,\x_n]$ in $\R^{d \times n}$, any $k < d$ and $\varepsilon > 0$, and under some assumptions discussed below, the proposed algorithm produces a set of vectors $\y_1,\dots,\y_n$ in $\R^{\ell}$ such that 
$
\ALG \le \OPT_k+ \varepsilon \|\matX\|_{\mathrm{F}}^2
$
where 
$\ell = \lceil 8k/\varepsilon^2\rceil.$
To the best of our knowledge, this is the first online algorithm in the literature attaining theoretical guarantees for the PCA objective in Eqn~\eqref{pcaproblem} (as discussed above, prior work has considered other variants of online PCA).
The description of the algorithm and the proof of its correctness are given in Section~\ref{sec:inefficient}.

While Algorithm~\ref{alg1} solves the main technical and conceptual difficulty in online PCA, it has certain drawbacks:  
\begin{enumerate}
\item It must assume that $\| \x_t \|_2^2 \le \|\matX\|_{\mathrm{F}}^2/\ell$. 
\item It requires $\|\matX\|_{\mathrm{F}}^2$ as input.
\item It spends $\Omega(d^3k/\varepsilon^2)$ floating point operations per input vector and requires auxiliary $\Theta(d^2)$ space in memory. 
\end{enumerate}

We show that, at the cost of slightly increasing the target dimension and the additive error, one can address all of the issues above. This leads to the second contribution of this paper, another deterministic online algorithm (see Algorithm~\ref{alg2} in Section~\ref{sec:efficient}) for the PCA objective in 
Eqn~\eqref{pcaproblem}. We briefly explain here how we deal with the above issues: 
\begin{enumerate}
\item We deal with arbitrary input vectors by special handling of large norm input vectors. This is a simple amendment to the algorithm which only doubles the required target dimension.
\item  Algorithm~\ref{alg2} avoids requiring $\|\matX\|_{\mathrm{F}}$ as input by estimating it on the fly. 
A ``doubling argument'' analysis shows that the target dimension grows only to $O(k \log(n)/\varepsilon^2)$.\footnote{Here, we assume that $\|\x_t\|$ are polynomial in $n$.}
Bounding the target dimension by $O(k/\varepsilon^3)$ requires a significant conceptual change to the algorithm and should be considered one of the main contributions of this paper.
\item Algorithm~\ref{alg2} spends only $O(d k/\varepsilon^3)$ floating point operations per input vector and uses only $O(dk/\varepsilon^3)$ space by utilizing a streaming matrix approximation technique \cite{Liberty13}.
\end{enumerate}
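For concreteness, the streaming matrix approximation technique of \cite{Liberty13} (Frequent Directions) can be sketched as follows; this is our minimal rendering, not the pseudocode of Algorithm~\ref{alg2}:

```python
import numpy as np

# Minimal rendering (ours) of the Frequent Directions sketch of
# Liberty (2013): an ell x d matrix B maintained in O(d * ell) space
# such that B^T B approximates X X^T, avoiding the Omega(d^2) cost.
def frequent_directions(rows, ell, d):
    B = np.zeros((ell, d))
    filled = 0
    for x in rows:
        if filled == ell:                       # sketch full: shrink it
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell // 2] ** 2
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s[:, None] * Vt                 # bottom rows become zero
            filled = ell // 2
        B[filled] = x
        filled += 1
    return B

rng = np.random.default_rng(6)
d, n, ell = 10, 200, 6
X = rng.standard_normal((d, n))
B = frequent_directions(X.T, ell, d)            # the rows fed in are x_t^T
```

The guarantee of \cite{Liberty13} bounds $\|\matX \matX^\top - \matB^\top \matB\|_2$ by $2\|\matX\|_{\mathrm{F}}^2/\ell$.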






