\label{sec:online}

The online procedure uses the voltage time series directly. We predict the voltage that each electrode exhibits at a particular time $t$, using a small history $H$ of past voltage readings (i.e., times $t-H,\ldots,t-1$). We develop two models, one trained on sessions in which a patient is asleep and the other on sessions in which a patient is awake; whichever model predicts the voltage at $t$ more accurately provides the final prediction. For each patient we randomly select $n$ time points to form a training set $\mathcal{D} = \{\x^{(1)}, \ldots, \x^{(n)}\} \subset \mathcal{R}^{E \cdot H}$ with corresponding voltage readings at $t$ for each electrode, $\mathcal{Y} = \{\y^{(1)},\ldots,\y^{(n)}\} \subset \mathcal{R}^E$.
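The construction of $\mathcal{D}$ and $\mathcal{Y}$ above can be sketched as follows. This is a minimal NumPy illustration under our own conventions: the voltage array \texttt{V}, the function name, and the electrode-major flattening of each history window are our assumptions, not specified by the text.

```python
import numpy as np

def build_dataset(V, H, n, rng=None):
    """Sample n training points from a voltage time series V of shape (T, E).

    Each input x^(i) stacks the H past readings of every electrode into a
    vector in R^{E*H} (electrode-major, matching the (e, h) indexing); the
    target y^(i) is the vector of E voltages at the sampled time t.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    T, E = V.shape
    # Sample times with a full H-step history available.
    times = rng.choice(np.arange(H, T), size=n, replace=False)
    X = np.stack([V[t - H:t].T.reshape(-1) for t in times])  # shape (n, E*H)
    Y = V[times]                                             # shape (n, E)
    return X, Y
```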

\textbf{Model.} In this case we are faced with a regression problem, predicting the voltage at time $t$ for each electrode. Thus, we make use of the squared risk. Similar to the batch classification setting, we would like to select a sparse set of electrodes for prediction. As before, once an electrode is selected, we would like to use all of its history for predicting the voltage at time $t$. Thus, we again relax the $\ell_0$ norm using the mixed-norm to arrive at the online loss function
\begin{align}
{\cal L}(\theta^k) = \sum_{i=1}^{n} \big(y_k^{(i)} - (\theta^k)^\top\mathbf{x}^{(i)} \big)^2 + \lambda \sum_{e=1}^E \sqrt{ \sum_{h=1}^H (\theta_{(e,h)}^k)^2 }. \label{eq:online-loss}
\end{align}
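The loss decomposes into a squared-error term and a mixed-norm penalty over per-electrode groups of history weights. A minimal NumPy sketch, assuming $\theta^k$ is stored as a flat vector with each electrode's $H$ history weights contiguous (the function name is ours):

```python
import numpy as np

def online_loss(theta_k, X, y_k, lam, E, H):
    """Squared error plus mixed-norm penalty for one target electrode k.

    theta_k: flat weight vector of length E*H; group e consists of the
    H history weights theta_(e,1), ..., theta_(e,H) of electrode e.
    """
    residual = y_k - X @ theta_k                      # squared-error term
    groups = theta_k.reshape(E, H)                    # one row per electrode
    penalty = np.sqrt((groups ** 2).sum(axis=1)).sum()  # sum of group norms
    return (residual ** 2).sum() + lam * penalty
```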
Unlike batch classification, we solve for a separate weight vector $\theta^k$ for each target electrode $k$ in $\mathcal{Y}$, as we suspect that each electrode captures different signal characteristics. As in the batch setting, we can make the mixed-norm differentiable using Lemma~1, resulting in two alternating minimization steps:
\begin{align}
\min_{z>0}\; & \sum_{e=1}^E \frac{1}{2} \Bigg[\frac{\sum_{h=1}^H (\theta_{(e,h)}^k)^2}{z_e^k} + z_e^k\Bigg] \nonumber \\
\min_{\theta^k}\; & \sum_{i=1}^{n} \big(y_k^{(i)} - (\theta^k)^\top\mathbf{x}^{(i)} \big)^2 + \lambda \sum_{e=1}^E \frac{1}{2} \Bigg[\frac{\sum_{h=1}^H (\theta_{(e,h)}^k)^2}{z_e^k} + z_e^k\Bigg] \nonumber
\end{align}
The first minimization has the closed-form solution given in Lemma~1, $z_e^k = \sqrt{\sum_{h=1}^H (\theta_{(e,h)}^k)^2}$. Further, it is shown in~\cite{boyd2004convex} that the right-hand side of Lemma~1 is jointly convex in $\theta$ and $z$ so long as $g(\theta)$ is convex. Thus, the second minimization can also be solved in closed form by viewing the loss as a ridge regression variant. Define the diagonal matrix $\Zb^k$ with entries $\Zb_{jj}^k = 1/z_j^k$ for $j = 1,\ldots, E \cdot H$, where $z_j^k = z_1^k$ if $1 \leq j \leq H$, $z_j^k = z_2^k$ if $H+1 \leq j \leq 2H$, and so on. Further, let $\Xb_{ij} = x_j^{(i)}$ and $\Yb_{ie} = y_e^{(i)}$. The closed-form solution for $\theta^k$ is then
\begin{align}
\theta^k = \Big(\Xb^\top \Xb + \frac{\lambda}{2} \Zb^k\Big)^{-1} \Xb^\top \Yb_k.
\end{align}
By alternately solving for $z_e^k$ and $\theta^k$ we minimize eq.~(\ref{eq:online-loss}).
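The alternating scheme above admits a compact NumPy sketch for a single target electrode $k$. The function name, the ridge initialization, and the small \texttt{eps} that keeps $1/z_e^k$ finite when a group is driven to zero are our assumptions (numerical safeguards, not part of the derivation):

```python
import numpy as np

def fit_alternating(X, y_k, E, H, lam, n_iter=50, eps=1e-8):
    """Alternate the closed-form z-step and the ridge-style theta-step.

    z-step:     z_e = sqrt(sum_h theta_(e,h)^2)        (Lemma 1)
    theta-step: theta = (X^T X + lam/2 * Z)^{-1} X^T y
    """
    d = E * H
    XtX, Xty = X.T @ X, X.T @ y_k
    # Initialize with a plain ridge solution so the first z-step is nontrivial.
    theta = np.linalg.solve(XtX + lam * np.eye(d), Xty)
    for _ in range(n_iter):
        z = np.sqrt((theta.reshape(E, H) ** 2).sum(axis=1)) + eps  # z-step
        Z = np.diag(np.repeat(1.0 / z, H))                          # diagonal Z^k
        theta = np.linalg.solve(XtX + 0.5 * lam * Z, Xty)           # theta-step
    return theta
```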