\chapter{Data Analysis}

\section{Introduction}
\subsection{Methods for Data Analysis}
To examine attributes individually, or two at a time, two collections
of methods can be very useful: statistical summarization and
visualization. They can be used together: bar charts and histograms
can describe value distributions very nicely, with rules such as
Sturges' rule and the IQR (Freedman-Diaconis) rule helping choose the
number and size of bins. Boxplots, scatter plots, and density plots
(with jitter) are also common.

For high dimensional data, one typical strategy is to try to reduce
the number of dimensions. Techniques for dimensionality reduction
include:
\bi
\item {\em Principal Component Analysis (PCA)};
\item {\em Projection Pursuit};
\item {\em Multidimensional Scaling (MDS)};
\item {\em Correlation Analysis}: Pearson's coefficient, and
  rank-based coefficients such as Spearman's and Kendall's.
\ei

{\em Outlier detection} and {\em Missing Value detection} are two
other important techniques which will be used later in {\em data
  cleaning}. 


\subsection{Higher Dimensions: Dimensionality Reduction}
For higher dimensional data, one needs more advanced methods. In
particular, when the number of attributes in a data set is high, we
may have a problem.
If we are trying to determine some property B, and we have a data
attribute A, the question is whether A is a good indicator of B. This
can be measured by comparing the prior probability of B,
$p(B)$, with the probability of B and A together, $p(B \land A)$, or with the
probability of B given A, $p(B|A)$, and the probability of B given
{\bf not} A, $p(B| \lnot A)$ (this simplifies matters by assuming that A is
binary).  We know that if A and
B are independent, we should expect $p(A \land B) = p(A) \times p(B)$.
The problem is, most of the time we don't know what $p(B)$
is, or at least not exactly, and $p(B \land A)$ has to be estimated
from the data, which includes a sampling error. When the amount of
data is small, such estimation is not robust. When data is large, the
estimation is much better. However, if there are more attributes
involved (instead of A we have $A_1,\ldots,A_n$) we can also ask about
the discriminating power of each $A_i$, as well as of sets of $A_i$. The
problem is, even with large data sets, those with a specific subset of
$A_i$ are likely to be sparse, as there are an exponential number of
subsets, while data tends to grow linearly. As a result, we are back
in the situation where estimation from data may not be robust. 

Because of this, a
common task is to filter the set of attributes taken into
consideration. This is called {\em feature selection}. The goal is to
select an optimal subset of attributes for the task at hand. This is
often not possible; instead, we decide on a measure to evaluate
the goodness of a subset of attributes, and use some heuristic to pick
the most promising subsets among all possible ones. There are two basic
approaches:
\bi
\item select the top $k$ attributes: assume that the best subset is
  the one that has the ``individually best'' attributes. The strategy
  then is: examine each attribute in isolation, rank the attributes on
  how well they perform, pick the top $k$ ones in the ranking. This
  approach is very efficient (linear in the number of attributes), but
  it does not lead to an optimal solution in most cases, as sometimes
  attributes can work well together, complementing each other. When the
  target variable is given (prediction, classification), one can
  consider the correlation coefficient of each attribute and the
  target. Another possibility is information gain (information gain
  tends to prefer attributes with a large number of distinct values, so
  normalization may have to be used with it). 6.1, PAGE 118.

\item selecting the best subset: we rank subsets of attributes
  directly. There are two challenges here: devising a function to
  evaluate the goodness of a set of attributes (instead of an
  individual one); and deciding which subsets to test, since it is
  usually infeasible to test them all. 6.1, PAGE 120-121.
\ei



{\em Principal Component Analysis (PCA)}: consider a dataset as points in
$m$ dimensions. The idea of PCA is to project all points into a space
with fewer dimensions (a subspace) while preserving as much variance
as possible from the original data. Let $X_1,\ldots,X_n \in {\mathrm R}^m$ be
the data points. We first normalize the data
points by centering them around the origin: the mean is calculated for
each dimension and the values are substituted by their difference from
the mean. Let $\bar{X} = \frac{1}{n} \Sigma_{i=1}^n X_i$ be the vector of
means; the projection can then be represented by a matrix $M$ of $k
\times m$ dimensions ($k$ is the number of dimensions in the reduced
space, for instance, 2 for a plane). We get \[y = M \times
(X\ -\ \bar{X})\] Note that the matrix may rotate and project but not
scale, because scaling would change the variance.

To find the projection that best preserves variance, eigenvalues and
eigenvectors are used. We first find the {\em covariance
  matrix}\footnote{The covariance matrix generalizes the idea of
  variance to several dimensions. The element in position $(i,j)$
  reflects the covariance of the $i$th and $j$th elements of the vector.} of
the data, \[C = \frac{1}{n-1} \Sigma_{i=1}^n (X_i\ -\ \bar{X})
(X_i\ -\ \bar{X})^T\]
where $X_1,\ldots,X_n$ are the data points (the rows of the data) and
$C$ is an $m \times m$ matrix. Then we find the $k$ largest eigenvalues of $C$,
$\lambda_1,\ldots,\lambda_k$, and the associated $k$ eigenvectors
$v_1,\ldots,v_k$. These eigenvectors, as rows, form the projection
matrix $M = (v_1,\ldots,v_k)^T$. Each $v_i$ is called a {\em principal component}.

We can see now that without standardizing the values, the attribute
with the largest values could dominate the variance
calculations. Hence, a {\em z-score} is sometimes used instead of just
centering on the mean: value $x$ of attribute $A$ is transformed
into $\frac{x - \mu}{\sigma}$, where $\mu$ is the mean of $A$ and
$\sigma$ the standard deviation.

When using $k = 2$ or $k = 3$, PCA can be used for visualization. But
in general, PCA can be used to see how much of the variation in the
data the top $k$ components capture: they preserve
$\frac{\Sigma_{i=1}^k \lambda_i}{\Sigma_{j=1}^m \lambda_j}$ of the
variation in the whole data, where $\lambda_1 \geq \lambda_2 \geq
\ldots \geq \lambda_m$ are the eigenvalues of the covariance matrix of
all attributes in the data set, and the numerator has the top $k$
eigenvalues. Thus, we can play with different values of $k$ to find
out how many dimensions are needed to account for most of the
covariance.

Note that PCA can be misleading if too few dimensions are used. Note
also that PCA considers an attribute ``interesting'' if it contributes
to variance, which may or may not be a good criterion for the problem
at hand.
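As a concrete illustration of the procedure above, here is a minimal
PCA sketch in Python with NumPy; the data and variable names are
invented for the example:

```python
import numpy as np

# Toy data: 6 points in m = 3 dimensions, deliberately correlated.
X = np.array([[2.5, 2.4, 0.5],
              [0.5, 0.7, 1.9],
              [2.2, 2.9, 0.7],
              [1.9, 2.2, 0.9],
              [3.1, 3.0, 0.4],
              [2.3, 2.7, 0.8]])

Xc = X - X.mean(axis=0)                 # center each attribute on its mean
C = (Xc.T @ Xc) / (len(X) - 1)          # covariance matrix, 1/(n-1) normalization
evals, evecs = np.linalg.eigh(C)        # eigh: C is symmetric
order = np.argsort(evals)[::-1]         # sort eigenvalues, largest first
evals, evecs = evals[order], evecs[:, order]

k = 2
M = evecs[:, :k].T                      # projection matrix: top-k eigenvectors as rows
Y = (M @ Xc.T).T                        # the k-dimensional projection y = M (X - mean)

explained = evals[:k].sum() / evals.sum()   # fraction of variance preserved
```

The ratio `explained` is exactly the quotient of eigenvalue sums
discussed above.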

\subsection{Multidimensional Scaling}

{\em Multidimensional Scaling (MDS)} is, like PCA, a
dimension-reduction technique, but it is based on preserving the
distance between data points, not variance. For starters, a {\em
  distance matrix} $D$ is built, where $D(i,j)$ is the distance
between data points $i$ and $j$. The distance used should be
symmetric, so that $D$ is too, and should obey $D(i,i) = 0$, so that
the diagonal of $D$ is zero. 

$D$ is computed from $X$, the data, but the data should be normalized
before computing $D$, or MDS will have problems, just like PCA
(z-scores are better here than simple centering on the mean). Usually, a
number of dimensions $k$ is chosen beforehand: for visualization, $k =
2$ or $k = 3$. Each data point is reduced to $k$ dimensions (a
position in $k$ space) such that distances between points are
preserved as closely as possible. One way to ensure this is to see
what the distance in $k$ dimensional space is between points $i$ and
$j$, $d_k(i,j)$, and compare it to the original distance in $m$
dimensional space, $d_m(i,j)$. The sum of squared errors is used to
decide how well the new projection works: \[E_0 = \Sigma_{i=1}^n
\Sigma_{j=i+1}^n (d_k(i,j)\ -\ d_m(i,j))^2\]
This is usually normalized to make it independent of the number of
data points and the absolute distances used:
\[E_1 = \frac{1}{\Sigma_{i=1}^n
\Sigma_{j=i+1}^n d_m(i,j)^2} \Sigma_{i=1}^n
\Sigma_{j=i+1}^n (d_k(i,j)\ -\ d_m(i,j))^2\]
This is an absolute error; a relative error can be used instead:
\[E_2 = \Sigma_{i=1}^n
\Sigma_{j=i+1}^n \left(\frac{d_k(i,j)\ -\ d_m(i,j)}{d_m(i,j)}\right)^2\]
Sometimes a compromise between normalized and absolute error is used:
\[E_3 = \frac{1}{\Sigma_{i=1}^n
\Sigma_{j=i+1}^n d_m(i,j)} \Sigma_{i=1}^n
\Sigma_{j=i+1}^n \frac{(d_k(i,j)\ -\ d_m(i,j))^2}{d_m(i,j)}\]
In this case, MDS is called {\em Sammon mapping} and the error used
is called {\em stress}.
Because there is no known analytical method to minimize any of these
error measures, a heuristic optimization method (typically, {\em
  gradient descent})\footnote{To use this method, it is required that
  the distance in the reduced space between two arbitrary (but distinct)
  data points is not zero: $d_k(i,j) \neq 0$ for $i \neq j$. Thus,
  data points that are duplicates in this lower space should be
  eliminated.} is used.
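To make the optimization concrete, the following Python sketch
(NumPy, invented toy data) minimizes the absolute error $E_0$ over a
random 2-D embedding with a crude gradient descent, using a numerical
gradient and backtracking on the step size for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(15, 5))             # toy data: 15 points in m = 5 dimensions

def dist(P):
    # pairwise Euclidean distance matrix
    d = P[:, None, :] - P[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

Dm = dist(X)                             # original distances d_m(i, j)
iu = np.triu_indices(len(X), k=1)        # index pairs with i < j

def stress(Y):
    # absolute error E_0: sum over pairs of (d_k(i,j) - d_m(i,j))^2
    return ((dist(Y)[iu] - Dm[iu]) ** 2).sum()

def grad(Y, eps=1e-6):
    # numerical gradient of the stress, for brevity
    g = np.zeros_like(Y)
    base = stress(Y)
    for idx in np.ndindex(*Y.shape):
        Yp = Y.copy()
        Yp[idx] += eps
        g[idx] = (stress(Yp) - base) / eps
    return g

Y = rng.normal(size=(15, 2))             # random initial 2-D embedding
e_start = stress(Y)
for _ in range(50):
    g, step = grad(Y), 0.1
    while step > 1e-9 and stress(Y - step * g) >= stress(Y):
        step /= 2                        # backtracking: shrink step until it improves
    if stress(Y - step * g) < stress(Y):
        Y = Y - step * g
```

A real implementation would use the analytical gradient of the chosen
stress; the numerical one is only to keep the sketch short.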

\subsection{Correlation Analysis}
Another way to check whether variables are independent or correlated
is to use one of several correlation coefficients. Most well known
are:
\bi
\item  {\em Pearson's Coefficient}: gives the linear relation between two
  numerical attributes $X$ and $Y$ with $n$ points
  each: \[\frac{\Sigma_{i=1}^n (x_i\ -\ \bar{x})(y_i\ -\ \bar{y})}{(n -
    1) s_x s_y}\]
where $\bar{x}$ is the mean of $X$,  $\bar{y}$ is the mean of $Y$,
$s_x$ is the standard deviation of $X$, and $s_y$ is the standard
deviation of $Y$. This yields values between -1 and 1: negative
numbers signal negative correlation, positive numbers positive
correlation; the larger the absolute value, the stronger the
correlation. Independent attributes yield zero, although a value of
zero does not guarantee independence. Moreover, only linear
correlations are detected.
\item {\em Rank Coefficients}: the values themselves are ignored, and only
  their rank within the attribute is used. This captures monotonic
  correlations, whether linear or not. Also, these coefficients are
  more robust to outliers, since the values are not used. In a data set
  $x_1,\ldots,x_n$ we sort the values by some criterion, and then
  assign each $x_i$ a rank or position in the list, $r(x_i)$.
\bi
\item {\em Spearman's rho}: \[1\ -\ 6 \frac{\Sigma_{i=1}^n (r(x_i) -
  r(y_i))^2}{n (n^2 - 1)}\]
This yields a value between -1 and 1: a value of 1 means the ranks
coincide, a value of -1 that they are exactly opposite; a value of
zero that there is no correlation between the ranks. Spearman's rho
assumes that there are no ties (i.e., no repetition of values for the
sorting criterion); when this is not the case, each tied value is
usually given the mean of the ranks involved. For example, if two
values are identical and would have ranks $l$ and $l+1$, we give each
rank $l + 0.5$.
\item {\em Kendall's tau}: it is not based on ranks but on comparing
  the order of values. Two pairs of observations $(x_i,y_i)$ and
  $(x_j,y_j)$ are called {\em concordant} if whenever $x_i < x_j$ then
  $y_i < y_j$; and {\em discordant} if whenever $x_i < x_j$ then $y_i
  > y_j$. Let $C$ be the number of concordant pairs and $D$ the number
  of discordant pairs. Then \[\frac{C - D}{\frac{1}{2} n (n - 1)}\]
\ei
\ei 
 For categorical attributes, one can use the chi-square test for
 (in)dependence. We have a table $(X,Y)$ with $r$ values for $X$ and
 $c$ values for $Y$, and substitute the counts by the joint
 probability $P_{ij}$ (the probability of co-occurrence of values
 $x_i$ and $y_j$). Note that the marginal probabilities $P_i =
 \Sigma_{j=1}^c P_{ij}$ and $P_j = \Sigma_{i=1}^r P_{ij}$ are the
 individual probabilities of $x_i$ and $y_j$. Then, for $n$
 observations, the statistic is \[\Sigma_{i=1}^r \Sigma_{j=1}^c n \frac{(P_{ij} -
   P_i P_j)^2}{P_i P_j}\]
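The coefficients above can be computed directly from their
definitions. The following Python sketch (NumPy, invented data where
$y$ is a monotonic but non-linear function of $x$) shows how
Pearson's coefficient stays below 1 while the rank-based coefficients
detect the perfect monotonic relation:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = x ** 3                                # monotonic but not linear

n = len(x)
# Pearson: linear correlation, using sample standard deviations
pearson = ((x - x.mean()) * (y - y.mean())).sum() / (
    (n - 1) * x.std(ddof=1) * y.std(ddof=1))

# Spearman's rho via the no-ties formula on ranks
rx = x.argsort().argsort() + 1            # rank of each value, 1-based
ry = y.argsort().argsort() + 1
rho = 1 - 6 * ((rx - ry) ** 2).sum() / (n * (n ** 2 - 1))

# Kendall's tau: concordant minus discordant pairs
C = D = 0
for i in range(n):
    for j in range(i + 1, n):
        s = np.sign(x[j] - x[i]) * np.sign(y[j] - y[i])
        C += s > 0
        D += s < 0
tau = (C - D) / (n * (n - 1) / 2)
```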


\section{Clustering}

\section{Classification}

\section{Regression and Correlation}
Regression is the process of studying how two or more attributes are
related by fitting a function of one or more of the attributes (called
the {\em independent variables} or the {\em explanatory} or {\em
  regressor variables}) to see if they predict one of the
attributes (called the {\em dependent variable} or the {\em response
  variable}). Because it is not
expected that the process will be perfect (the function is, after all,
a model or idealization, and the data is a set of measurements which
contain some degree of imprecision), some margin of error is expected.
Usually, the goal of regression is to minimize this error.

The simplest case is when the function is linear. The approach
followed for linear functions can be extended, with some additional
complexity, to polynomial functions. Other types of functions can be
treated too by using transformations that 'translate' the problem into
a linear one. All these cases, which rely on linear algebra, can
actually be implemented in SQL. There are more complex cases which
require more advanced tools and that cannot be implemented in SQL
(CHECK). 

One of the biggest problems with regression is that, given the data,
one may have to decide at the outset which type of function to try to
fit, at the very least, whether the function should be linear or
not. Because linear regression is quite easy to do, it is tempting to
try to fit a linear model to any data set. The problem is, a line can
be made to fit quite well data sets which are not linear at all; a
famous example of this is {\em Anscombe's quartet}, a set of 4
datasets such that, when linear regression is applied to them, they
all fit equally well (according to the model), but only one of the
datasets is truly linear. Thus, some experts consider linear
regression a tool that is overused. In particular, linear
regression is not really a descriptive tool; it is mostly used to make
{\em predictions} based on data. And the quality of the predictions
depends on whether the data set really is linear in nature.

 The next problem is that, if we want to fit the data perfectly,
this can always be done by using a more complex model: with a
polynomial function of degree $n$, a set of $n+1$ data points can be
fit perfectly if there is only one independent and one dependent
variable. This is because the polynomial has $n+1$ parameters, or
degrees of freedom; intuitively, each parameter 'takes care' of one of
the data points. But if this polynomial were used to predict future or unknown
data (extrapolation), large errors may result. In a sense, by
following the data too closely we have failed to 'generalize' from it
and grasp the more general pattern that exists. This is called {\em
  overfitting}. To fight this, we admit some degree of error, and
simply try to minimize it, as opposed to completely eradicating it.

NEED TO EXPLAIN OVERFITTING AND DEGREES OF FREEDOM EARLIER -CHAPTER OR
SECTION ON MODELING?

Assume a table that shows the data on two attributes, {\tt DATA(Xattr,
  Yattr)}. We want to know if the attributes are (linearly) correlated
or are independent. We assume {\tt Xattr} is an {\em independent
  variable} (it does not depend on the other) and {\tt Yattr} is a
{\em dependent variable} (it does depend on the other). For starters,
we may graph the variables; this may help determine if there is a
dependence, and whether it is linear or non-linear. 

\subsection{Linear Regression}
For the linear case, which is the simpler one, it is typical to
measure the error by the {\em sum of squared errors (SSE)} method. To
optimize here means to use the {\em principle of least squares}: we
minimize the square of the distance from the data point to the
postulated line. The good thing about this approach is that it is
relatively simple and it can be applied not only to linear regression,
but to any polynomial function. The bad thing is that this method is
very sensitive to outliers, since all distances are squared, so any
particular value that is far from the others makes a considerable
contribution to the SSE, and hence may greatly influence the final
result (note that this can be considered a good thing in some cases,
since it means that the method works extra hard at dealing with large
deviations).  

For a line with equation $y = a + bx$, we know that the values that
minimize the square of the distance are
\[a = \frac{(\Sigma y) (\Sigma x^2) - (\Sigma x)(\Sigma xy)}{n (\Sigma
x^2) - (\Sigma x)^2}\]
and
\[b = \frac{n (\Sigma xy) - (\Sigma x) (\Sigma y)}{n (\Sigma x^2) -
  (\Sigma x)^2}\]
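Plugging the sums into these formulas is mechanical; a quick Python
check on invented, exactly linear data recovers the coefficients:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 + 3.0 * x                        # exact line y = a + b x with a = 2, b = 3

n = len(x)
sx, sy = x.sum(), y.sum()
sxx, sxy = (x * x).sum(), (x * y).sum()

a = (sy * sxx - sx * sxy) / (n * sxx - sx ** 2)   # intercept
b = (n * sxy - sx * sy) / (n * sxx - sx ** 2)     # slope
```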

\subsection{Linear Regression in n Variables}
When we have more than one independent variable, we can still use
regression; this is called {\em multivariate regression}. In the
linear case, suppose we have $n$ attributes $X_1,\ldots,X_n$ that we
use to predict $y$ as before. The table has schema {\tt
  DATA(Xattr1,Xattr2,...,Xattrn,Yattr)}. To fit an equation
\[y = f(x_1,\ldots,x_n) = a_0 + a_1x_1 + a_2x_2 + \ldots + a_nx_n\]\
we can represent the data as a matrix: the projection of {\tt DATA} on
{\tt (Xattr1,...,Xattrn)}, augmented with a leading column of ones (to
account for the constant term $a_0$), is a matrix ${\mathbf X}$ of
$n+1$ columns and $m$ rows (one for each data point), while the column
{\tt Yattr} can be seen as a vector ${\mathbf y}$ of length $m$. The
solution we are looking for is the set of coefficients
$(a_0,\ldots,a_n)$, which is an $(n+1)$-vector ${\mathbf a}$.
The solution is given by the {\em Moore-Penrose pseudo-inverse}:
\[{\mathbf a = (X^TX)^{-1}X^Ty}\]
This only works if ${\mathbf X^TX}$ is invertible. We can check this
in SQL, and compute the solution above too. CHECK. Transform this into
a system of individual equations!!
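The matrix formula can be checked in a few lines of Python (NumPy);
the data here is synthetic, generated from known coefficients so the
recovered ${\mathbf a}$ can be verified:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3                                     # three independent variables
X = rng.normal(size=(50, n))              # the DATA projection on Xattr1..Xattrn
true_a = np.array([1.0, 2.0, -1.0, 0.5])  # a_0 .. a_3
y = true_a[0] + X @ true_a[1:]            # exact linear response, no noise

Xa = np.hstack([np.ones((len(X), 1)), X]) # augment with a column of ones for a_0
a = np.linalg.inv(Xa.T @ Xa) @ Xa.T @ y   # Moore-Penrose solution (X^T X)^{-1} X^T y
```

In practice one would use `np.linalg.lstsq` rather than forming the
inverse explicitly, but the line above mirrors the formula in the text.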


\subsection{Polynomial Regression}
We can consider a linear equation as a polynomial of degree 1. By
trying a degree $m > 1$, we can fit a polynomial (of degree $m$) to
the data, with essentially the same procedure as the linear case: we
look for 
\[ y = a_0 + a_1x + \ldots +a_mx^m\]
that best fits the data point set. There are here $m+1$ parameters, each
one of the $a_i$.
Solution: take partial derivatives with respect to each $a_i$, and end up
with a system of $m+1$ equations and $m+1$ unknowns; use one of the
linear algebra methods to solve it (Gauss elimination, etc.). Does this
have a closed form that we can put in SQL? CHECK

\subsection{Exponential Regression}
When trying to fit an exponential or power function to the data, the
usual trick is to use a {\em log transformation} to make it look like a
linear function problem. That is, if we are thinking about fitting a
function
\[y = ax^b\]
to the data, we take the logarithm of both sides, to get
\[\log y = \log a + b \times \log x\]
This is a linear function with logarithmic coefficients. That is, set
$a' = \log a$ and transform all the data into logarithms: each row
$(x,y)$ of the {\tt DATA} table is substituted by $(\log x, \log
y)$. Then we can carry out the same approach that we saw in the linear
case\footnote{Technically, this approach does not minimize the SSE
  {\em for the original data}, only for the transformed one. However,
  the method works very well in practice.}.
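The transformation is easy to check numerically; in this Python
sketch the data is generated exactly from $y = a x^b$, so fitting a
line to $(\log x, \log y)$ recovers the parameters (data and names
are invented):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x ** 1.5                        # data generated from y = a x^b, a = 2, b = 1.5

# transform each row (x, y) into (log x, log y) and fit a line
lx, ly = np.log(x), np.log(y)
n = len(x)
slope = (n * (lx * ly).sum() - lx.sum() * ly.sum()) / (
    n * (lx * lx).sum() - lx.sum() ** 2)
intercept = (ly.sum() - slope * lx.sum()) / n

a = np.exp(intercept)                     # back-transform: a = e^{a'}
b = slope
```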

\section{Association Rules}

\section{Deviation Analysis}

\section{Neural Networks}

\section{SVM}

\section{Time Series Analysis}
A series variable is one that has an ordering (sometimes
implicit). The most common example is a time series, where the order is
by time. Usually there is a variable which is monotonic (constantly
increasing or decreasing) that provides the order (a time variable for
time series). This variable can be called the displacement or index
variable.

In time series, the index cycles, and patterns may repeat themselves over
a period (cyclic patterns).  We want to detect cycles but also any
trend -a noncyclic, monotonic pattern.

Classical decomposition: look at the series for trend, seasonality,
cycles and noise.  Another idea is to use autocorrelations (see DATA
ANALYSIS BOOK). Calculating (weighted, exponential) moving averages is
a first approximation. 

Peak-valley-mean smoothing: a peak is a value higher than the previous
and following. Instead of using the actual peak, we average the
previous and following values. Same for valley.

Median smoothing: use the median of all values in a window.

Resmoothing: keep on smoothing the values until there is no change.

One way to analyze time series is by
Fourier series, where we add up a set of simple periodic (wave)
functions. Each periodic function is characterized by:
\bi
\item the frequency (how many times a waveform repeats for a given
  unit of time);
\item the phase (where the peaks and troughs of the wave occur);
\item the amplitude (the distance between peak and trough).
\ei
Usually the sine and cosine functions are used. These are identical
except that the sine is 'shifted' 90 degrees from the cosine (a phase
shift).

Most simple: moving averages.
This average, like all averages, is sensitive to outliers. A more
sophisticated technique is {\em exponential smoothing}. To do this in
SQL is tricky, since we need to calculate a value and then use it in
further calculations, something that SQL is not good at! TRICK: use
the formula in OPEN SOURCE book, plus a numbering column to put past
observations in chronological order (pg. 87). This is for single
exponential. 

Single exponential smoothing does not work well with trend data, as the trend
tends to be smoothed out. For data with a trend, we can use
double exponential smoothing. Here, we track not just the average of
the data but also its rate of change.

EXAMPLE: we have sales for a store for several years. Each year has a
seasonal trend. We are trying to forecast sales. We have 5 years (12
months each) of sales for 60 data points. We choose $\alpha = 0.1$; we
calculate \[r_1 = \alpha (last-month) + (1 - \alpha)
(average-48-months)\]
\[r_2 = \alpha (real-value-49-month) + (1 - \alpha) r_1\]
\[r_3 = \alpha (real-value-50-month) + (1 - \alpha) r_2\]
Here $r_1$ is our estimate of how much we'll sell the first month of
the fifth year,
$r_2$ is the estimate of how much we'll sell the second month of the
fifth year, etc. This is compared to the real data. What is usually
done is to consider an initial segment of the data to calculate the
first value, and some data is left to use in the calculations. 
In double smoothing, we calculate not just the average but the 'rate'
of change. This can be done by using linear regression and computing
the parameter. Then we do
\[baseforecast_1 = \alpha (last-month) + (1 - \alpha)
(average-48-months + rate)\]
\[trend_1  = baseforecast_1 - (average-48-months)\]
\[weightedtrend_1 = \beta\, trend_1 + (1 -\beta)\, rate\]
\[result_1 = baseforecast_1 + weightedtrend_1\]
\[baseforecast_2 = \alpha (real-value-49-month) + (1 - \alpha) result_1\]
\[trend_2 = baseforecast_2 - baseforecast_1\]
\[weightedtrend_2 = \beta trend_2 + (1 - \beta) weightedtrend_1\]
\[result_2 = baseforecast_2 + weightedtrend_2\]
and so on. Again, this is not doable as is in SQL, can we use the same
trick??
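A plain-Python sketch of the two schemes follows; this is the
standard Holt formulation, which is close to, but not identical to,
the recipe above, and the data and initial values are invented. On a
trending series, single smoothing lags far behind while double
smoothing tracks the trend:

```python
# Single and double (Holt) exponential smoothing, a minimal sketch.
def single_smooth(series, alpha, init):
    r = init
    out = []
    for obs in series:
        r = alpha * obs + (1 - alpha) * r   # blend observation and old estimate
        out.append(r)
    return out

def double_smooth(series, alpha, beta, init_level, init_trend):
    level, trend = init_level, init_trend
    out = []
    for obs in series:
        prev = level
        level = alpha * obs + (1 - alpha) * (level + trend)  # base forecast
        trend = beta * (level - prev) + (1 - beta) * trend   # weighted trend
        out.append(level + trend)            # forecast for the next period
    return out

data = [10, 12, 13, 15, 16, 18, 19, 21]     # steadily trending series
s = single_smooth(data, alpha=0.1, init=10)
d = double_smooth(data, alpha=0.1, beta=0.2, init_level=10, init_trend=1.5)
```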

When there are strong seasonal factors, it is possible to adjust for
them too. Suppose we divide yearly sales into quarters, so we get a
sale per quarter. We calculate the average of sales per quarter, and
calculate the difference between this average and the real sales per
quarter (if we divide real sales by average, we get a number that
should be close to 1, unless there are huge quarterly swings). This
number can then be used to smooth the results of other techniques, say
linear regression applied to several years of data. 

To determine how well our forecasting is doing, we can calculate the
sum of the differences between forecasted value and real value (as
time passes). The sum of differences (CFE, cumulative
forecast error) should be near zero (as positive and negative
differences cancel each other). The sum of the absolute values of the
differences divided by the number of forecasts (MAD, mean absolute
deviation) should also be small. The tracking signal (TS) is
the ratio of CFE to MAD.

Forecast errors should be normally
distributed, so we can also check whether the errors we get indeed are.

\subsection{Improving Results}
Suppose a test has probability $p$ of being right, and $(1 - p)$ of
being wrong (error rate). You repeat the test $n$ times, all of them
successful. Then the level of confidence that the result is wrong is
given by $f = (1 - p)^n$, while the level of confidence in being right
is given by $s = 1 - f = 1 - (1 - p)^n$. By taking logs in the first
equation, we have $n = \frac{\log(f)}{\log(1 - p)}$; likewise from the
second equation we get $n = \frac{\log(1 - s)}{\log(1-p)}$. To achieve a
given $f$ or $s$, these equations give us the number of tests we need.
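For instance, in Python (the numbers are invented): a test that is
right 70\% of the time must be repeated 6 times to reach 99.9\%
confidence in the combined result:

```python
import math

p = 0.7                      # probability a single test is right
n = 5                        # number of independent repetitions
f = (1 - p) ** n             # confidence level that the result is wrong
s = 1 - f                    # confidence level that the result is right

# invert: how many repetitions for a target confidence s?
target_s = 0.999
n_needed = math.ceil(math.log(1 - target_s) / math.log(1 - p))
```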

\section{Data Analysis in SQL}
Most of the above can be done in SQL, although some of it can be a
bit complicated. But it can save us from moving the data outside the
database just to do some simple analysis.

NOTE: all the following can be done with several software packages,
some of them (like R) more efficiently; the reason it is explained
here is the same as for basic statistical analysis.
Also, all of the following assume ``tabular'' data, which relational
databases handle well.

MANIPULATE A MATRIX IN SQL: TRANSPOSE, MULTIPLY, INVERT, MULTIPLY BY A
VECTOR.

COVARIANCE MATRIX: given table $DATA(Attr_1,\ldots,Attr_n)$, we
convert it to table $CDATA(tuple,attribute,value)$, where tuple
$(a_1,\ldots,a_n)$ gets converted to tuples $(i,A_1,a_1)$, \ldots
$(i,A_n,a_n)$, with $i$ the tuple identifier. Then the covariance
matrix of the original data is obtained by joining CDATA with itself
on the tuple identifier (after augmenting it with the means, to be
more user-friendly):

\begin{verbatim}
CREATE TABLE MCDATA as
SELECT tuple,CDATA.attribute,value,mean
FROM CDATA, (SELECT attribute,avg(value) as mean
             FROM CDATA
             GROUP BY attribute) mtable
WHERE CDATA.attribute = mtable.attribute

SELECT T1.attribute, T2.attribute,
       sum((T1.value-T1.mean)*(T2.value-T2.mean))/(count(*) - 1)
FROM MCDATA as T1, MCDATA as T2
WHERE T1.tuple = T2.tuple
GROUP BY T1.attribute, T2.attribute
\end{verbatim}

The above could be made more efficient by adding the condition {\tt WHERE
  T1.attribute >= T2.attribute} if the attributes are ordered (this
corresponds to the fact that the covariance matrix is
symmetric). Similarly, one could add the condition {\tt WHERE
  T1.attribute > T2.attribute} and then union this result with a query
{\tt SELECT attribute, attribute, var\_samp(value) FROM
  MCDATA GROUP BY attribute} to get the diagonal values, since those are just the
variances of each attribute.

To get eigenvalues, eigenvectors, one still needs code. Even recursion
will not do since what changes from iteration to iteration is the
values within the matrix, not the rows in the matrix.
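As a sanity check, the covariance queries above can be exercised from
Python against SQLite (in-memory database, invented toy data; note
that the join on the tuple identifier is essential):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# CDATA in "long" form: one row per (tuple id, attribute, value)
cur.execute("CREATE TABLE CDATA (tuple INTEGER, attribute TEXT, value REAL)")
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2)]   # two attributes A, B
for i, (a, b) in enumerate(data):
    cur.execute("INSERT INTO CDATA VALUES (?, 'A', ?)", (i, a))
    cur.execute("INSERT INTO CDATA VALUES (?, 'B', ?)", (i, b))

cur.execute("""
    CREATE TABLE MCDATA AS
    SELECT tuple, CDATA.attribute AS attribute, value, mean
    FROM CDATA,
         (SELECT attribute, avg(value) AS mean
          FROM CDATA GROUP BY attribute) mtable
    WHERE CDATA.attribute = mtable.attribute""")

cov = {}
for a1, a2, c in cur.execute("""
        SELECT T1.attribute, T2.attribute,
               sum((T1.value - T1.mean) * (T2.value - T2.mean)) / (count(*) - 1)
        FROM MCDATA AS T1, MCDATA AS T2
        WHERE T1.tuple = T2.tuple
        GROUP BY T1.attribute, T2.attribute"""):
    cov[(a1, a2)] = c
```

The resulting dictionary holds the (sample) covariance matrix; it is
symmetric, and its diagonal entries are the per-attribute variances.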

\subsection{Linear and Non-Linear  Regression: Two variables}
When we have two or more variables, the first thing to do is to check
whether they are independent or correlated in some way. The simplest
form of correlation is linear. To check for linear correlation, we use
linear regression.

We first try to fit an expression to the data; the expression is a
function, assuming that the correlation of the two variables can be
expressed functionally: $y = f(x)$. Getting the $f$ that is a best fit
is called regression; determining the degree of fit is
correlation. The function can be a linear equation (denotes a line) or
non-linear (denoting some curve). 

For the linear case on two variables (the simplest one), assume we
have a table {\em DATA(x,y)}. We are looking for an equation of the
form $y = a + bx$; using the {\em principle of least squares}, we try
to find $a$ and $b$ that minimize the distance (as expressed by the
square of differences) to the data. The equations to do so are:
\[a = \frac{(\Sigma y) (\Sigma x^2) - (\Sigma x)(\Sigma xy)}{n (\Sigma
x^2) - (\Sigma x)^2}\]
\[b = \frac{n (\Sigma xy) - (\Sigma x)(\Sigma y)}{n (\Sigma
x^2) - (\Sigma x)^2}\]
This can all be calculated in SQL quite simply: a very simple way to
do it is to create a table
{\em LINEAR(x, y, x-sqr, y-sqr, xy)} and fill it up from table {\em
  DATA(x,y)} (this can be done with a subquery in the FROM clause, so
that LINEAR is a virtual table; or it can be materialized as a VIEW
or as a table, if we have other uses for it)\footnote{NEED TO TALK ABOUT
  MATERIALIZED VIEWS.}.
Then a query over LINEAR using SUM will get the desired results (note
that we sum over $x$, over $y$, over $x^2$, over $y^2$ and over
$xy$; these are the columns of LINEAR).

In SQL terms, we can calculate those values in one pass over the
table. Conceptually, we need to do several things:
\be
\item first, add to the table columns for $n$, $x^2$, $y^2$ and $xy$, and
  compute those (the value of $y^2$ is not needed here, but it will be
  used later).
\item next, obtain the sum of all columns except $n$: $x$, $y$, $x^2$,
  $y^2$, $xy$. 
\item finally, compute the values of $a$ and $b$ by implementing the
  equations above; each term is now available.
\ee

Fortunately, it's extremely simple to do all the steps at once in SQL:

\begin{verbatim}
SELECT ((sumy * sumxsqr) - (sumx * sumxy)) / ((n * sumxsqr) - (sumx * sumx)) as a,
       ((n * sumxy) - (sumx * sumy)) / ((n * sumxsqr) - (sumx * sumx)) as b
FROM (SELECT count(xAttr) as n,
             sum(xAttr) as sumx,
             sum(yAttr) as sumy,
             sum(xAttr * xAttr) as sumxsqr,
             sum(yAttr * yAttr) as sumysqr,
             sum(xAttr * yAttr) as sumxy
      FROM DATA) AS TEMP
\end{verbatim}

Note that the result can be put into a table {\tt LINEARFIT(a,b)} so
that the coefficients just calculated can then be used to compute the
expected values:

\begin{verbatim}
CREATE TABLE EXPECTED(Xattr, Yexpected)
AS (SELECT Xattr, (a + Xattr *b)
    FROM DATA, LINEARFIT)
\end{verbatim}

Now we can determine how good a fit the line is by calculating the
{\em correlation coefficient}, which is given by
\[r = \frac{(n \times (\Sigma xy)) - (\Sigma x \times \Sigma
  y)}{\sqrt{(n \times (\Sigma x^2) - (\Sigma x)^2) \times (n \times
    (\Sigma y^2) - (\Sigma y)^2)}}\]

This can be calculated similar to the previous computation:

\begin{verbatim}
SELECT ((n * sumxy) - (sumx * sumy)) / sqrt((n * sumxsqr - sumx * sumx) *
       (n * sumysqr - sumy * sumy)) as r
FROM (SELECT count(xAttr) as n,
             sum(xAttr) as sumx,
             sum(yAttr) as sumy,
             sum(xAttr * xAttr) as sumxsqr,
             sum(yAttr * yAttr) as sumysqr,
             sum(xAttr * yAttr) as sumxy
      FROM DATA) AS TEMP
\end{verbatim}
Note that if the sums are left in a temporary table of their own, we
don't need to repeat the calculations.
The correlation coefficient can give a value between -1 (perfect
negative correlation) and 1 (perfect positive correlation); 0 means no
correlation, and small absolute values mean that the line is a poor
fit.

Another way to calculate the correlation coefficient is given by
\[\mid r \mid\ = \sqrt{1 - \frac{s_{err}}{s_y}}\]
where $s_y$ is the variance of the $y$ values, and $s_{err}$ is the
variance of the {\em residual} values $y_i - \hat{y_i}$, where $y_i$
is an $y$ value and $\hat{y_i}$ is the value predicted by the linear
equation. This can also be obtained simply in SQL: we join table {\tt
  DATA} and {\tt EXPECTED} to put together real and predicted $y$
values; their difference is the residual. Assume for simplicity our
system has a function {\tt var} for the variance:

\begin{verbatim}
SELECT sqrt(1 - var(residual) / var(Yattr)) as r
FROM (SELECT DATA.Xattr, Yattr, Yexpected, Yattr - Yexpected as residual
      FROM DATA, EXPECTED
      WHERE DATA.Xattr = EXPECTED.Xattr) AS T
\end{verbatim}


When linear regression is not a good fit, we can try polynomial
regression, where the formula is of the form $y = a + bx + cx^2$. Here
we need to determine values for $a$, $b$, and $c$. The formulas are
quite complex: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXxxx. An alternative is
to calculate the three {\em normal equations}:
\[n a + (\Sigma x) b + (\Sigma x^2) c = \Sigma y\]
\[(\Sigma x) a + (\Sigma x^2) b + (\Sigma x^3) c = \Sigma xy\]
\[(\Sigma x^2) a + (\Sigma x^3) b + (\Sigma x^4) c = \Sigma x^2y\]
Note how each equation raises the powers of $x$ by one with respect to
the previous one. Again,
from table {\em DATA(x,y)} we create table {\em
  POLYNOMIAL(x, y, x-square, x-cube, x-fourth, x-times-y,
  x-square-times-y)}, which can be filled from DATA quite simply. An
SQL query over POLYNOMIAL will give the needed totals; we can then use
Gaussian elimination or another method to solve the equations.
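As an illustration, the following sketch computes the totals with SQLite and solves the three normal equations by Gauss-Jordan elimination. The data is made up to follow $y = 1 + 2x + 0.5x^2$ exactly, so the known coefficients should be recovered.

```python
import sqlite3

# Hypothetical DATA(x, y) generated from y = 1 + 2x + 0.5x^2, so the
# fitted coefficients should come back as a = 1, b = 2, c = 0.5.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE DATA (x REAL, y REAL)")
con.executemany("INSERT INTO DATA VALUES (?, ?)",
                [(x, 1 + 2 * x + 0.5 * x * x) for x in range(1, 7)])

# One scan over DATA yields every total in the three normal equations.
n, sx, sx2, sx3, sx4, sy, sxy, sx2y = con.execute("""
    SELECT count(*), sum(x), sum(x*x), sum(x*x*x), sum(x*x*x*x),
           sum(y), sum(x*y), sum(x*x*y)
    FROM DATA""").fetchone()

# Augmented matrix [A | rhs] of the normal equations, reduced by
# Gauss-Jordan elimination (no pivoting needed for this small system).
m = [[n,   sx,  sx2, sy],
     [sx,  sx2, sx3, sxy],
     [sx2, sx3, sx4, sx2y]]
for i in range(3):
    m[i] = [v / m[i][i] for v in m[i]]          # normalize the pivot row
    for j in range(3):
        if j != i:                               # clear column i elsewhere
            m[j] = [vj - m[j][i] * vi for vj, vi in zip(m[j], m[i])]
a, b, c = m[0][3], m[1][3], m[2][3]
```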

We can, as before, calculate the goodness of fit by creating a table
{\em POLYNOMIAL-FIT} and computing $\mid r \mid$.

Finally, if linear and polynomial models fail, we can try to fit an
exponential model. Here the formula is \[y = ab^x\] To make this model
tractable, the most common trick is to simply work with
logarithms. Taking logs of the above equation yields $\ln y =
\ln(ab^x) = \ln a + x(\ln b)$, which is a linear equation. That is, we
substitute $y' = \ln y$, $a' = \ln a$, $b' = \ln b$ and solve $y' = a'
+ b'x$ just like we did for the linear model. Once that is done, we
can get the original values back. Basically, we calculate
\[ a = e^{\left(\frac{(\Sigma \ln y) (\Sigma x^2) - (\Sigma x)(\Sigma
    x \ln y)}{n (\Sigma x^2) - (\Sigma x)^2}\right)}\]
(the exponent is the original formula for $a$ with all occurrences of
  $y$ substituted by $\ln y$) and
\[b = e^{\left(\frac{n (\Sigma x \ln y) - (\Sigma x)(\Sigma \ln y)}{n (\Sigma
x^2) - (\Sigma x)^2}\right)}\]
(ditto here). As the logarithm function is built into most systems
  today, we can create table {\em EXPONENTIAL(x, y, x-square, log-y,
    x-times-log-y)} from DATA, and compute the necessary totals. To
  check for goodness of fit, we can calculate $\mid r \mid$ as before.
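Here is a sketch of the whole procedure, again on made-up data generated from $y = 2 \cdot 1.5^x$ so that the parameters can be recovered; the logarithms are taken in Python rather than in SQL.

```python
import math
import sqlite3

# Hypothetical DATA(x, y) generated from y = 2 * 1.5^x; fitting the
# log-transformed model should recover a = 2 and b = 1.5.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE DATA (x REAL, y REAL)")
con.executemany("INSERT INTO DATA VALUES (?, ?)",
                [(x, 2 * 1.5 ** x) for x in range(1, 8)])

# The columns of the EXPONENTIAL table, computed on the fly; ln is
# taken in Python since engines expose it under varying names.
rows = con.execute("SELECT x, y FROM DATA").fetchall()
n = len(rows)
sx = sum(x for x, _ in rows)
sx2 = sum(x * x for x, _ in rows)
slny = sum(math.log(y) for _, y in rows)
sxlny = sum(x * math.log(y) for x, y in rows)

# The linear-regression formulas with y replaced by ln y, then
# exponentiated to undo the substitution.
denom = n * sx2 - sx ** 2
a = math.exp((slny * sx2 - sx * sxlny) / denom)
b = math.exp((n * sxlny - sx * slny) / denom)
```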

\subsection{Linear and Non-Linear  Regression: More than Two variables}
If we have $n \geq 3$ variables, we consider $n-1$ of them to be
independent variables, and one the dependent one. However, things
become more complicated here. If we have variables $x_1$, $x_2$ and
$y$ (we use this notation to make clear that $y$ is the dependent
variable), we are looking, in the linear case, at an equation of the
form $y = a + b x_1 + c x_2$, and we need to determine coefficients
$a$, $b$ and $c$. Solving for them directly can get a bit complex, so
we may want to work with the normal equations
\[n a + (\Sigma x_1) b + (\Sigma x_2) c = \Sigma y\]
\[(\Sigma x_1) a + (\Sigma x_1^2) b + (\Sigma x_1 x_2) c = \Sigma x_1
y\]
\[(\Sigma x_2) a + (\Sigma x_1 x_2) b + (\Sigma x_2^2) c = \Sigma x_2 y\]
To compute the needed values, we create table {\em
  THREE-LINEAR($x_1$, $x_2$, y, $x_1$-square, $x_2$-square,
  $x_1$-times-$x_2$, $x_1$-times-y, $x_2$-times-y)} from DATA, and run
an SQL query to add up all the columns, as usual. We plug the results
into the equations and solve. We can still calculate the correlation
coefficient $\mid r \mid$ to determine whether this is a good fit. Even
if it is, we may still want to determine which of $x_1$ or $x_2$
influences $y$ in a more direct way. We can conduct a t-test on the
coefficient $b$ (using the original $x_1$ and $y$ values), another
t-test on the coefficient $c$ (using the $x_2$ and $y$ values), and
compare the results.
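As a sketch, the following uses hypothetical data generated from $y = 1 + 2x_1 + 3x_2$: the totals come from one SQL scan, and the $3 \times 3$ system is solved by Cramer's rule (any solver would do).

```python
import sqlite3

# Hypothetical DATA(x1, x2, y) generated from y = 1 + 2*x1 + 3*x2, so
# solving the normal equations should recover a = 1, b = 2, c = 3.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE DATA (x1 REAL, x2 REAL, y REAL)")
pts = [(1, 2), (2, 1), (3, 5), (4, 3), (5, 4), (2, 6)]
con.executemany("INSERT INTO DATA VALUES (?, ?, ?)",
                [(u, v, 1 + 2 * u + 3 * v) for u, v in pts])

# One SQL scan produces every total the normal equations need.
n, s1, s2, s11, s22, s12, s1y, s2y, sy = con.execute("""
    SELECT count(*), sum(x1), sum(x2), sum(x1*x1), sum(x2*x2),
           sum(x1*x2), sum(x1*y), sum(x2*y), sum(y)
    FROM DATA""").fetchone()

# Solve the 3x3 system by Cramer's rule.
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[n, s1, s2], [s1, s11, s12], [s2, s12, s22]]
rhs = [sy, s1y, s2y]
d = det3(A)
# Replace column j of A by the right-hand side, take determinants.
a, b, c = (det3([[rhs[i] if k == j else A[i][k] for k in range(3)]
                 for i in range(3)]) / d for j in range(3))
```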

This method does not extend well to high values of $n$. 

\subsection{Time Series Analysis}
SEE CH 4 DATA ANALYSIS WITH OPEN SOURCE TOOLS BOOK.

Nowadays, moving averages can be easily calculated with the WINDOW
operator. Note that the moving average can also be calculated without
WINDOW by using self-joins. The latter is less efficient, but it
allows one to use weighted moving averages, in which more recent data
is given more weight (or other linear combinations are tried).
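For instance, a 3-point moving average over a hypothetical {\tt SERIES(t, value)} table can be written with a window frame; this sketch runs it through SQLite (window functions require SQLite 3.25 or later, standard in recent Python builds).

```python
import sqlite3

# Hypothetical SERIES(t, value) table; the 3-point moving average uses
# a window frame over the two preceding rows plus the current one.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SERIES (t INTEGER, value REAL)")
con.executemany("INSERT INTO SERIES VALUES (?, ?)",
                [(1, 10), (2, 12), (3, 11), (4, 15), (5, 14)])

rows = con.execute("""
    SELECT t, avg(value) OVER
              (ORDER BY t ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS ma
    FROM SERIES
    ORDER BY t""").fetchall()
```

At the start of the series the frame simply contains fewer rows, so the first two averages are taken over one and two values respectively.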


\subsection{Clustering}
Given a population, produce a partition of the population into
classes of `similar' objects.

\subsection{Classification}
Given a population and a collection of classes, assign a unique class
to each object in the population (hard classification), or a
probability that each object belongs to each class (soft
classification).

Classification trees?

\subsection{Association Rules}
Extract assoc rules with support, confidence. It can be done in
transaction table; group by transaction, having count(*) this is
support -and how to do confidence?
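A sketch on a hypothetical {\tt TRANSACTIONS(tid, item)} table: the support of the itemset $\{A, B\}$ counts the transactions containing both items, and the confidence of $A \Rightarrow B$ divides that count by the number of transactions containing $A$.

```python
import sqlite3

# Hypothetical TRANSACTIONS(tid, item) table in "long" form.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TRANSACTIONS (tid INTEGER, item TEXT)")
con.executemany("INSERT INTO TRANSACTIONS VALUES (?, ?)",
                [(1, 'A'), (1, 'B'), (1, 'C'),
                 (2, 'A'), (2, 'B'),
                 (3, 'A'),
                 (4, 'B')])

# Transactions containing both A and B: group by transaction, keep
# the groups in which both distinct items appear.
both = con.execute("""
    SELECT count(*) FROM
      (SELECT tid FROM TRANSACTIONS WHERE item IN ('A', 'B')
       GROUP BY tid HAVING count(DISTINCT item) = 2)""").fetchone()[0]
with_a = con.execute("""
    SELECT count(DISTINCT tid) FROM TRANSACTIONS
    WHERE item = 'A'""").fetchone()[0]
total = con.execute(
    "SELECT count(DISTINCT tid) FROM TRANSACTIONS").fetchone()[0]

support = both / total      # fraction of transactions with both A and B
confidence = both / with_a  # of those with A, the fraction also having B
```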

\section{Data Analysis with Files}
Which of the above can be done from the command line? I think very
few.

From Greg Brown's blog (gibrown.wordpress.com):
To generate a random sample from a large data file:

\begin{verbatim}
 awk 'BEGIN {srand()} {printf "%05.0f %s\n", rand()*99999, $0;
  }' data.txt | sort -n | head -100 | sed 's/^[0-9]* //'
\end{verbatim}

This just adds a random number to the beginning of each line, sorts
the list, takes the top 100 lines, and removes the random number. 
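An alternative that avoids sorting the whole file is reservoir sampling, sketched here in Python; the function name and signature are made up for illustration.

```python
import random

# Reservoir sampling: one pass over the file, keeping a uniform random
# sample of k lines without ever sorting or holding the whole file.
def sample_lines(path, k, rng=random):
    reservoir = []
    with open(path) as f:
        for i, line in enumerate(f):
            if i < k:
                reservoir.append(line)    # fill the reservoir first
            else:
                j = rng.randrange(i + 1)  # replace with probability k/(i+1)
                if j < k:
                    reservoir[j] = line
    return reservoir
```

Each line ends up in the sample with probability $k/N$, and the file is read only once, which matters when it does not fit in memory.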


\section{Optional}
Stream processing (maybe in time series?). Streaming is usually based
on windows; a subchapter on ordering and windows is needed.


