\documentclass[a4paper,10pt]{llncs}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{subfigure}
\usepackage{float}
\usepackage{url}
\usepackage{booktabs} % For `professional' tables (toprule, etc.)

%opening
\title{Motor intention recognition in EEG: in pursuit of a relevant
  feature set}%
\author{P.A. Iturralde\inst{1}, M. Patrone\inst{1}, F. Lecumberry\inst{2} and A. Fernández\inst{2}}%
\institute{Department of Physics, School of Engineering, UdelaR, Montevideo, Uruguay.\\
\and
Department of Electrical Engineering, School of Engineering, UdelaR, Montevideo, Uruguay.\\
\email{iturral,mpatrone,fefo,aliciaf@fing.edu.uy}}


\begin{document}

\maketitle


\begin{abstract}
  Brain-computer interfaces (BCIs) based on electroencephalograms
  (EEG) are a noninvasive and inexpensive alternative for establishing
  a communication channel between the brain and computers. Some of the
  main issues with EEG signals are their high dimensionality, high
  inter-user variance, and non-stationarity. In this work we present
  different approaches to deal with the high dimensionality of the
  data, finding relevant descriptors in EEG signals for motor
  intention recognition: first, a classical dimensionality reduction
  method using the Diffusion Distance; second, a technique based on
  spectral analysis of the EEG channels associated with the frontal
  and prefrontal cortex; and third, a projection over average
  signals. A performance analysis for the different feature sets is
  presented, showing that some of them are more robust to user
  variability.
\end{abstract}
%\keywords{EEG, motor intention recognition, PCA, high-dimensionality reduction, diffusion maps}

\section{Introduction}

In recent years, Brain-Computer Interfaces (BCIs) have become an
active topic of research. Such interfaces could provide an alternative
communication channel between humans and computers, replacing the
normal output channel of nerves and muscles. As their main
application, BCIs could allow paralyzed patients, or others suffering
some type of motor impairment while remaining cognitively intact, to
interact with their environment.

BCIs register brain states (relating to thoughts and intentions) as
signals that are interpreted and translated into actions. The signal
acquisition process is critical to the performance of the whole
system, and several technologies have been proposed to carry out this
task. Signal registering through electroencephalograms (EEG) is one of
the most promising approaches because of its noninvasive nature, which
allows for simpler and cheaper devices with almost no associated
risks, as opposed to invasive technologies such as
electrocorticographic (ECoG) recordings, which require medical
procedures for their implantation. However, EEG provides only diffuse
access to brain signals, since currents in the brain cortex are volume
conducted through the skull before being sensed at the scalp. This
means that more sophisticated processing and recognition systems are
needed in order to obtain information about brain states from such
signals~\cite{Wolpaw2002767,1214694}.

One of the main issues with EEG signals is their high
dimensionality. Usually, signals from over ten and up to hundreds of
channels are acquired at sampling rates of at least 100~Hz to ensure
that no aliasing occurs. This implies that even for short trials that
last about a second, the raw feature space has a dimension between
1000 and 60000, making it very difficult to work with. One of the most
common approaches~\cite{Delorme20021057,4595650} is to perform
component analysis (PCA, ICA or equivalent) in order to reduce
redundancy in both time (over-sampled signals) and space (nearby EEG
electrodes record similar signals). This type of analysis
(particularly ICA) has also been used to deduce the actual location of
brain activity from EEG signals, see~\cite{Delorme20021057}.

Another common approach~\cite{847807} after the component analysis is
to divide the remaining signals into several frequency bands along
different time windows for the duration of the trial (time-frequency
analysis~\cite{Wang20042744}), using these spectral components to
perform the classification. While the amplitude of the spectral
components is widely used, particularly in the mu and beta
bands~\cite{4595650}, the phase of the components has received special
attention because of its relation to Event-Related (De)Synchronization
(ERS/ERD)~\cite{Pfurtscheller19991842,Wang20042744}.

Dimensionality reduction is a major topic of research, and several
algorithms have been proposed to perform such a task. Data lying in
high-dimensional spaces generally present complex geometries, and
often there are not enough samples for accurate statistics. This kind
of problem requires new strategies beyond classical PCA or
Multi-Dimensional Scaling (MDS). One reason is the non-linearity of
the manifold where the original high-dimensional data points
lie. Another is the computational cost of traditional methods applied
to high-dimensional data, which grows exponentially with the
dimension. Additionally, the sparse sampling leads to poor convergence
of the algorithms, a phenomenon referred to as ``the curse of
dimensionality''. Although input data may present a high
dimensionality, it is common that the ``real'' or intrinsic
dimensionality of the source that generates the data is much lower,
due to significant correlations between many of the
coordinates. Finding meaningful structures in the data and obtaining
those ``principal coordinates'' is one of the goals of machine
learning algorithms. In recent years, kernel-based methods have
attracted attention, with good results and a solid theoretical
background; a brief list of these methods includes Locally Linear
Embedding~\cite{Roweis22122000}, Laplacian Eigenmaps~\cite{Belkin},
Hessian Eigenmaps~\cite{Donoho13052003} and Diffusion
Maps~\cite{1704834}.

In this paper we deal with a motor imagery task, consisting of several
trials in which a user decided whether or not to release a button, and
acted accordingly. The setting presents the same high-dimensionality
problem described above: each instance contains 31 EEG signals,
originally sampled at 1~kHz over a time frame of one second. To reduce
the amount of data while preserving the spectral components of
interest, the signals were downsampled to 100~Hz. The data also
presents high inter-user variance and non-stationarity.

We present different approaches to deal with the high dimensionality
of the data, finding relevant descriptors in EEG signals for motor
intention recognition: first, a classical dimensionality reduction
method using the Diffusion Distance; second, a technique based on
spectral analysis of the EEG channels associated with the frontal and
prefrontal cortex~\cite{Wang20042744}; and third, a projection over
average signals as proposed in~\cite{Delorme20021057}.



This paper is structured as follows. In the Methods section we
describe the dataset that was used and detail the three different
approaches. In the Results section we present the performance analysis
in all three cases for all the users in the dataset, as well as some
other results obtained from variants of these strategies. A discussion
of the results and future lines of work closes the paper.

\section{Methods}
\subsection{Data description}

The EEG data files of the experiment were made available by A. Delorme
at \url{http://sccn.ucsd.edu/~arno/fam2data/data/}. Fourteen subjects
of various ages and both sexes were tested in two recording phases on
different days. Each day consisted of at least 10 series with 100
images per series. The subjects were asked to identify target and
non-target images, presented with equal probability. In each instance,
the image was shown for 20~msec on a computer screen (avoiding the use
of exploratory eye movements), and the subject had to give their
response following a go/nogo paradigm. For each target they had to
lift their finger from a button as quickly as possible, and for each
distractor they had to keep pressing the button for at least 1~sec
(nogo response).

Scalp EEGs were recorded from 31 channels (placed according to the
international 10/20 system~\cite{malmivuo1995bioelectromagnetism}, see
figure~\ref{fig:capocha_completa}) with a 1~kHz sampling rate during
1~s for each instance. Every instance consists of a single-image
experiment; it starts with the image being displayed for 20~msec and
lasts for a second, which was the maximum time the users had to
respond. The database includes at least 1000 images per subject and
fourteen different subjects, which results in 14000 instances with a
feature set of 31000 dimensions. However, since EEG signals depend on
personal physiological
factors~\cite{malmivuo1995bioelectromagnetism}, there is an important
variability in the characteristics of the signals between subjects
(signal mean and energy). This is the reason to use the data as
fourteen independent sets, one for each subject.


\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{./imagenes/capocha_completa.png}
\caption{\small Channel locations used in the EEG recordings,
  according to the 10/20 system. Image adapted from
  \texttt{http://www.mariusthart.net/}}
%\caption{\small Ubicación de los canales utilizados para la adquisición del EEG, en el sistema 10-20.}
\label{fig:capocha_completa}
\end{figure}

Pre-processing of the data was performed to correct for a DC drift
between instances. Signals from every channel in each instance were
forced to start at zero by subtracting a DC level. Signals were also
downsampled to 100~Hz, since spectral analysis showed that no energy
was present over 50~Hz, making the elevated original sampling rate
unnecessary.
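The two pre-processing steps can be sketched as follows (a minimal
sketch, assuming each trial is stored as a channels~$\times$~samples
NumPy array; since no energy was present above 50~Hz, plain
subsampling by a factor of 10 introduces no aliasing):

```python
import numpy as np

def preprocess(trial, factor=10):
    """Pre-process one trial of shape (n_channels, n_samples) at 1 kHz.

    1. Correct the DC drift: force every channel to start at zero by
       subtracting its first sample.
    2. Downsample 1 kHz -> 100 Hz by keeping every 10th sample; this
       is alias-free here because no energy was present above 50 Hz.
    """
    trial = trial - trial[:, :1]   # each channel now starts at 0
    return trial[:, ::factor]

# Example: one synthetic trial, 31 channels x 1000 samples (1 s at 1 kHz)
rng = np.random.default_rng(0)
trial = rng.standard_normal((31, 1000)) + 5.0   # add a DC offset
out = preprocess(trial)
print(out.shape)   # (31, 100)
```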


\subsection{High-dimensionality analysis tools}


A first step for dimension reduction is to estimate the intrinsic
dimension of the problem.\

We found the best results (regarding consistency of the estimation)
using the Maximum Likelihood Estimator of dimensionality, which gave
values ranging from 29 to 35 for the different users. Independently of
the estimator used, a mapping tool had to be selected in order to
reduce the feature space. Locally Linear Embedding, Laplacian
Eigenmaps, Hessian Eigenmaps and Diffusion Maps (DM) were
considered. In this case the best results were obtained with Diffusion
Maps. DM is a general framework for data analysis based on a diffusion
process over an undirected weighted graph, defining a new metric on
the data called the Diffusion Distance~\cite{Nadler2006113}. This
distance is equivalent to the Euclidean distance in the space with
coordinates given by the mapping function. In order to compute the
weights of the graph we used a Gaussian kernel with a manually
selected variance, fixed for all subjects.

Once the reduced feature space was obtained, a classifier was trained
with these new features.\
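The DM construction can be illustrated as follows (a minimal sketch
with hypothetical data sizes and kernel variance, not the exact
implementation used in this work):

```python
import numpy as np

def diffusion_map(X, n_components, sigma=1.0):
    """Minimal diffusion-map embedding.

    X: (n_samples, n_features) data matrix. A Gaussian kernel with
    variance sigma**2 weighs the graph edges; row normalization turns
    the weights into a Markov matrix whose leading nontrivial
    eigenvectors, scaled by their eigenvalues, give coordinates in
    which Euclidean distance approximates the diffusion distance.
    """
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise d^2
    W = np.exp(-sq / (2.0 * sigma ** 2))                 # Gaussian kernel
    P = W / W.sum(axis=1, keepdims=True)                 # Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:n_components + 1]   # skip the trivial constant vector
    return vecs.real[:, idx] * vals.real[idx]

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 100))    # hypothetical: 40 trials
Y = diffusion_map(X, n_components=5, sigma=10.0)
print(Y.shape)   # (40, 5)
```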





\subsection{Channel and time window selection from active zones}

Plotting the EEG signals over a single trial, different zones of
activity (both in the spatial and temporal senses) become
visible. Approximately 100~msec after the image is shown, the
occipital region becomes active. Since the occipital lobe is where
visual information is processed, this seems to indicate that an
analysis of the image is being performed. After about 550~msec the
pre-frontal cortex becomes active, which is consistent with the
expected motor control actions that need to be carried out. While the
activity could be expected to be more noticeable when a motor action
is indeed executed (target class), both classes seem to present some
of this activity (see figure~\ref{fig:medias1y0_cba}, which shows
average signals for target and non-target instances). Furthermore, it
is in these channels and this time window that the greatest
differential activity is observed.

\begin{figure}[h]
\centering
\includegraphics[width=10cm]{imagenes/medias1y0_cba.pdf}
\caption{\small Average signals for target and non-target classes.}
\label{fig:medias1y0_cba}
\end{figure}

From both the functional analysis of the cerebral lobes and the
activity zones visible in the signals, it follows that a feature space
consisting only of the signals corresponding to frontal and
pre-frontal channels (F-channels, see
figure~\ref{fig:capocha_completa}) might reduce dimensionality while
still allowing discrimination between the two types of
instances. Spectral analysis was then performed over these
channels. The frequencies finally considered for the training of
classifiers were in the 0--10~Hz band, resulting in a feature vector
of dimension 275.
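The band selection can be sketched as follows (with a 1~s window at
100~Hz the FFT bin spacing is 1~Hz, so the 0--10~Hz band keeps 11 bins
per channel; the channel indices below are placeholders, not the
actual F-channel subset, and whether amplitude and phase are kept
separately determines the final feature dimension):

```python
import numpy as np

def band_features(trial, channels, f_max=10):
    """Spectral amplitudes in the 0-10 Hz band for a channel subset.

    trial: (n_channels, n_samples) array covering 1 s at 100 Hz, so
    the FFT bin spacing is 1 Hz and the band keeps bins 0..f_max.
    channels: indices of the frontal/pre-frontal (F) channels.
    """
    spectrum = np.fft.rfft(trial[channels], axis=1)
    return np.abs(spectrum[:, :f_max + 1]).ravel()

rng = np.random.default_rng(0)
trial = rng.standard_normal((31, 100))   # one downsampled trial
f_channels = [0, 1, 2, 3, 4]             # placeholder indices
feat = band_features(trial, f_channels)
print(feat.shape)   # (55,) -- 5 channels x 11 bins
```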

\subsection{Projection over average signals}

An average signal --over instances-- was extracted for each of the 31
available channels, for each class. Thus, 62 reference signals were
obtained: 31 corresponding to the per-channel averages over the target
class (T) and 31 over the non-target class (NT). Let $s^j_i$ be the
time signal of the $i$-th channel for the $j$-th instance, and let
$s^T_i$ and $s^{NT}_i$ be the per-channel averages over the target and
non-target instances of the training set, respectively (see
Equations~\ref{eq:ref0} and \ref{eq:ref1}). For each instance, a
projection over the reference signals was performed (channel by
channel, by means of a scalar product over time), resulting in a
feature vector $c=(c^T_i,c^{NT}_i)$ of 62 scalar features per instance
(see Equations~\ref{eq:coefsT} and \ref{eq:coefsNT}). The first 31
features ($c^{T}_i$) are the projections of the instance's signals
onto the averaged signals of the target class, and the last 31
($c^{NT}_i$) onto those of the non-target class. Different classifiers
were trained on this new feature space, and results were validated
over the test set.


\begin{equation}
 s^{NT}_i = \frac{1}{|NT|} \sum_{j \in NT} s^j_i, \quad i \in[1,31]
\label{eq:ref0}
\end{equation}
\begin{equation}
 s^T_i = \frac{1}{|T|} \sum_{j \in T} s^j_i, \quad i \in[1,31]
\label{eq:ref1}
\end{equation}

\begin{equation}
 c_i^{NT} = \langle s^j_i , s_i^{NT} \rangle, \quad i \in[1,31]
\label{eq:coefsNT}
\end{equation}
\begin{equation}
 c_i^{T} = \langle s^j_i , s_i^{T} \rangle, \quad i \in[1,31]
\label{eq:coefsT}
\end{equation}


Experiments were also conducted considering only F-channels, as
suggested by the analysis of the previous section. However, results in
that case were worse than when considering all channels.
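The extraction of the 62 projection features can be sketched as
follows (a minimal sketch assuming trials stacked in a NumPy array;
the helper names are ours):

```python
import numpy as np

def reference_signals(trials, labels):
    """Per-channel class averages (the reference signals s^T, s^NT).

    trials: (n_trials, n_channels, n_samples); labels: 1 for target
    (T), 0 for non-target (NT). Returns (s_NT, s_T), each of shape
    (n_channels, n_samples).
    """
    trials, labels = np.asarray(trials), np.asarray(labels)
    return (trials[labels == 0].mean(axis=0),
            trials[labels == 1].mean(axis=0))

def project(trial, s_NT, s_T):
    """62-feature vector c = (c^T_i, c^NT_i): channel-by-channel
    scalar products (over time) of the trial against each reference."""
    c_T = (trial * s_T).sum(axis=1)
    c_NT = (trial * s_NT).sum(axis=1)
    return np.concatenate([c_T, c_NT])

rng = np.random.default_rng(0)
trials = rng.standard_normal((20, 31, 100))  # hypothetical training set
labels = np.array([0, 1] * 10)               # alternating classes
s_NT, s_T = reference_signals(trials, labels)
feat = project(trials[0], s_NT, s_T)
print(feat.shape)   # (62,)
```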

\section{Results}

The results obtained with the methods described in the previous
section are shown in figure~\ref{fig:results}. In all cases, different
classifiers were trained and tested (multilayer perceptrons, C4.5
trees), but since results did not differ significantly among them,
only the best results, obtained with multilayer perceptrons, are
presented.

\begin{figure}[t]
\centering
\includegraphics[width=10cm]{imagenes/results.pdf}
\caption{\small Results obtained for the different users. Method 1:
  Diffusion Maps. Method 2: Channel and time window selection from
  active zones. Method 3: Projection over average signals. The classification was made with a two-layer feedforward perceptron, with 3 neurons in each layer, a learning rate of 0.3 and backpropagation as the training algorithm.}
\label{fig:results}
\end{figure}


\begin{table}[b]
\begin{center}
\begin{tabular}{lccc}
  \toprule
  % \hline
  % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
            & Method 1 & Method 2 & Method 3 \\ %\hline
  \midrule
  mean      & 68.1\%   & 72.9\%   &  76.3\%  \\
  variance  & 11.0\%   & 7.7\%    &  7.0\%   \\
  %\hline
  \bottomrule
\end{tabular}
\caption{Average results and variance for the tested methods over all
  users.}
\label{tabla}
\end{center}
\end{table}


There are two major observations arising from
figure~\ref{fig:results}. First, for most subjects the best results
are obtained by the projection over average signals (Method 3), even
over Method 2 (spectral analysis in frontal and pre-frontal channels),
which is the most common approach found in the literature. This
indicates that there is useful information in the shape of the signals
(in time) that could complement the frequency analysis.

Second, the inter-user variance is significantly lower for Method 3,
which indicates that this method is more consistent among the
different subjects. On the other hand, DM (Method 1) presents an
extreme inter-user variance, with accuracy going over 85\% for some
users and as low as 50\% for others.

Method 3 presents the best results regarding both classifier
performance and inter-user variance (see Table~\ref{tabla}). Although
this method uses a feature set dependent on information extracted from
the actual signals (averages over instances), the procedure can be
generalized, which allows an automatic process of feature extraction
and classification for new users.

It is worth noting that results drop markedly if instances are not
randomly mixed before the training and test sets are separated. This
seems to be due to the temporal non-stationarity of the signals from a
single subject: significant differences can be found between signals
from different experiments, and these differences seem to increase
with time.

The results are consistent with the foreseen need to include expert
knowledge in the feature extraction. The dimension of the original
feature space has proved to be too large for most automatic
dimensionality reduction algorithms.















\section{Conclusion}


We proposed and tested three different approaches to perform feature
selection/extraction in EEG signals. All the methods make a
classifier-independent selection. Performance evaluation of motor
intention recognition, using the selected features with a two-layer
perceptron, shows that the results are user dependent for all the
methods, but the projection over average signals (Method 3) shows the
least variability between users. The classifier based on DM has the
same high performance for the best users but a very low one for
others; further analysis of the parameter selection process is needed
in order to generalize this method to new users. As an advantage, DM
shows a lower dependence on the training/test data set split.

Discarding channels that are not related to the frontal cortex works
well for reducing dimensionality, and thus helps to increase
classifier performance, as shown by the results for Method
2. However, for Method 3 discarding channels decreased performance,
suggesting that there is indeed relevant information associated with
these channels. It is worth noting that in the latter case
dimensionality is no longer an issue, since the projection over
average signals yields a feature space of dimension 62, as opposed to
the thousands originally present.

In the future we will try to obtain results with embedded feature
selection using different classifiers (SVM, C4.5, etc.) and try
combining the three tested methods in order to improve the performance
for each user. Some promising early results along this line show that
an improvement of 4\% over the best single classifier is possible, at
least for some users.

%\section{Acknowledgments}
%The work of P.A. Iturralde has been supported by ANII, Uruguay.

\bibliographystyle{plain}
\bibliography{bibfile}



\end{document}

