\chapter{Understanding Data}

Summary of methods from Dasu and Johnson: Summaries can be
\bi
\item parametric: data is characterized by a distribution. Parameter
  values estimated from data: 
\bi
\item measures of centrality: mean, median, mode. The mean is not robust
  (it is sensitive to bad data and outliers). Sometimes a {\em trimmed mean}
  (calculated without the largest and smallest values) is used. The median
  is robust but it can be hard to estimate (e.g.\ in more than one
  dimension).  For density functions, we order the values and then
  accumulate the sum (integral, for continuous values) of probabilities
  until we reach 0.5; at that point, the latest value is the
  median. Note that the median is not defined for non-ordinal, categorical
  attributes, as they have no order. Also, while the mean may not
  always exist (for some heavy-tailed distributions, like the Cauchy, the
  mean is undefined), the median always does. The stability of the
  median is a protection against outliers, but it also means the
  median is not sensitive to small changes in the data, so it does not
  help detect outliers or bad data.  In several dimensions, if we
  estimate a vector of medians we get essentially a trimmed mean where
  all data points but the middle one have been trimmed away. But a
  vector of medians may be misleading. Example: given points $(0,0,1)$, $(0,1,0)$
  and $(1,0,0)$, the vector of medians is $(0,0,0)$, which lies outside the plane formed by the
  data points. The mode is especially useful for categorical
  attributes. Note that the mode is the peak of the density
  function. However, the mode is not widely used for multivariate
  distributions.
\item measures of dispersion: variance, range, inter-quartile range,
  absolute deviation. Quantiles divide the values of an attribute into
  intervals that contain equal proportions of data values. They are
  used to build histograms. Variance may not always exist (the Cauchy
  distribution has infinite variance). Variances generalize to multidimensional cases:
  for $n$ attributes, an $n \times n$ dispersion matrix contains, in
  diagonal element $M_{ii}$, the variance of the $i$-th attribute, and in
  element $M_{ij}$ the covariance of the $i$-th and $j$-th attributes. MAD
  (Median Absolute Deviation) is given by
\[MAD = \mathrm{median}(\vert X\ -\ \mathrm{median}(X) \vert )\]
that is, we compare all values of attribute $X$ with its own median,
and take the absolute differences. Then we compute the median of these
differences. The range is also very sensitive to outliers, so sometimes
the inter-quartile range (IQR, the range between the third and first
quartile, reflecting the `central' half of values and discounting
extremes) is also computed. If the IQR is much smaller than the
standard deviation, it is likely that the distribution has a long tail.
\item measures of skewness and kurtosis: the 3rd and 4th moments.
\ei
\item non-parametric: compute anchor points from data to build a density
  estimate. Each anchor point divides the density into regions with
  equal mass under the curve. With the sample, we try to divide the
  data into bins that have an equal number of points. For univariate
  data, we compute quantiles. Thus, a set of $K$ anchor points with $n$
  data elements gives cut-offs $q_0,q_1,\ldots,q_K$ with
  $\int_{q_{i-1}}^{q_i} f(u) du = \alpha$, with $f$ the estimated
  density, and $\alpha = \frac{1}{K}$. Based on quantiles, we can
  build histograms. This approach is based on the idea of ordering or
  ranking the data (since the distribution function gives us the proportion of values
  that are equal to or less than a given point). In multi-dimensional
  data, we talk about the depth of data points: imagine the set of
  points as enclosed in a convex hull; points in the periphery have
  little depth, points inside the mass have more depth.
\ei
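As a quick sketch of the robustness discussion above (in Python, with made-up numbers: the data, the helper names {\tt trimmed\_mean} and {\tt mad} are illustrative, not from the source):

```python
# Sketch: comparing centrality and dispersion summaries on a small
# sample with one corrupted value, to illustrate why the median and
# MAD are called robust while the mean is not.
from statistics import mean, median

data = [10, 11, 12, 13, 14]
bad = data[:-1] + [1400]          # one corrupted entry

def trimmed_mean(xs):
    """Mean computed without the largest and smallest value."""
    xs = sorted(xs)
    return mean(xs[1:-1])

def mad(xs):
    """Median Absolute Deviation: median of |x - median(xs)|."""
    m = median(xs)
    return median(abs(x - m) for x in xs)

print(mean(data), mean(bad))       # mean jumps from 12 to 289.2
print(median(data), median(bad))   # median stays at 12
print(trimmed_mean(bad))           # trimming discards the bad value: 12
print(mad(data), mad(bad))         # MAD is 1 in both cases
```

Note that the same stability that protects the median and MAD also hides the corrupted entry, as remarked above.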
Other summaries for 2 or more variables include correlations
(covariance, ranked correlation, etc.). Multivariate histograms are
created by discretizing numeric values to show categories in rows and
columns, and counts in cells. This discretization creates a partition
of the data. Partitions work well if the elements in them are
homogeneous; otherwise, summaries for the partition do not represent
the data. For example, the average age for a group of kids may not tell us
that we have elementary school and high school students mixed
together.

For attribute relationships:
covariance and correlation coefficients are the simplest measures of
connection between two attributes. Covariance is given by

\[C(X,Y) = E((X\ -\ E(X))(Y\ -\ E(Y)))\]
where $X$, $Y$ are the attributes and $E$ the expectation. In a
sample, this resolves to:
\[\hat{C}(X,Y) = \frac{\Sigma_{i=1}^N(X_i - \bar{X})(Y_i -
  \bar{Y})}{N-1} = \frac{\Sigma_{i=1}^N X_i
  Y_i}{N-1}\ -\ \frac{(\Sigma_{i=1}^N X_i )(\Sigma_{i=1}^N
  Y_i)}{N(N-1)}\]

The correlation coefficient is simply a normalized covariance:
\[\rho = \frac{C(X,Y)}{\sigma_X \sigma_Y}\]
where $\sigma_X$ is the standard deviation of $X$, and similarly for
$\sigma_Y$. This value is always between $-1$ and $+1$, with larger
absolute values reflecting stronger correlation, and the sign
indicating positive (both $X$ and $Y$ move in the same direction) or
negative ($X$ and $Y$ move in opposite directions) correlation. If two
attributes are independent, the value should be zero. However, if this
value is zero it does not mean that the attributes are independent
(there could be a non-linear relationship).
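A minimal sketch of these formulas (the function names {\tt cov} and {\tt corr} and the sample values are illustrative assumptions):

```python
# Sketch: sample covariance and correlation from the formulas above,
# plus an example of zero correlation despite perfect (non-linear)
# dependence.
from math import sqrt

def cov(xs, ys):
    """Sample covariance with the N-1 denominator."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

def corr(xs, ys):
    """Correlation coefficient: covariance normalized by both stdevs."""
    return cov(xs, ys) / sqrt(cov(xs, xs) * cov(ys, ys))

xs = [1, 2, 3, 4, 5]
print(corr(xs, [2 * x + 1 for x in xs]))   # exactly linear: rho = 1.0

# Y = X^2 on a symmetric range: fully dependent, yet rho = 0.
xs = [-2, -1, 0, 1, 2]
print(corr(xs, [x * x for x in xs]))       # 0.0
```

The second call illustrates the caveat above: a zero coefficient does not imply independence.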

The dispersion matrix for two attributes is 
\[S = \left ( \begin{array}{cc} V_X & C(X,Y) \\ C(X,Y) & V_Y \end{array}
\right ) \]
where $V_X$ is the variance of $X$, and similarly for $V_Y$. 

For categorical attributes, one can use contingency tables. This is a
cross-tabulation of data with frequency counts instead of values. If
attribute $X$ has $n$ possible values and attribute $Y$ has $m$
possible values, let $c_{ij}$ be the count of data with value $i$ of
$X$ and value $j$ of $Y$, $c_{i.}$ the count of  data with value $i$ of
$X$, $c_{.j}$ the count of data with value $j$ of $Y$, and $c$ the total
count of values; then
\[T(X,Y) = \Sigma_i \Sigma_j \frac{(c_{ij} - \frac{c_{i.} c_{.j}}{c})^2}{\frac{c_{i.} c_{.j}}{c}}\]
is the chi-square test of independence. As a table

\begin{center}
\begin{tabular}{c|ccc|c} \hline
X/Y & $Y_1$ & $\cdots$ & $Y_m$ & Row Total \\ \hline
$X_1$ & $c_{11}$ & $\cdots$ & $c_{1m}$ & $\Sigma_j c_{1j}$ \\ \hline
$\vdots$ & $\vdots$ & $\ddots$ & $\vdots$ & $\vdots$ \\ \hline
$X_n$ & $c_{n1}$ & $\cdots$ & $c_{nm}$ & $\Sigma_j c_{nj}$ \\ \hline
Column Total & $\Sigma_i c_{i1}$ & $\cdots$ & $\Sigma_i c_{im}$ & $c$ \\ \hline
\end{tabular}
\end{center}

Where $c_{i.} = \Sigma_j c_{ij}$, $c_{.j} = \Sigma_i c_{ij}$ (this
two-way table of X-Y counts is an example of bivariate histograms).
The result of the tabulation is compared to the critical value of the
chi-square distribution with $(m-1) \times (n-1)$ degrees of freedom; if the
result is greater than the critical value, then the attributes are not
independent. If the result is smaller, then we cannot reject the
hypothesis that the attributes are independent.
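The statistic can be computed directly from a table of counts; here is a small sketch (the $2 \times 2$ counts are a hypothetical example):

```python
# Sketch: the chi-square statistic T(X,Y) above, computed from a
# contingency table of counts.
counts = [[30, 10],
          [20, 40]]

row = [sum(r) for r in counts]         # c_{i.}: row totals
col = [sum(c) for c in zip(*counts)]   # c_{.j}: column totals
total = sum(row)                       # c: grand total

T = sum((counts[i][j] - row[i] * col[j] / total) ** 2
        / (row[i] * col[j] / total)
        for i in range(len(row)) for j in range(len(col)))
print(T)   # compare against the chi-square critical value with 1 df
```

For this table $T \approx 16.67$, far above the usual critical value at 1 degree of freedom, so independence would be rejected.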

Another way to test independence or dependence is mutual information.
If $X$ and $Y$ are independent, it should happen that
\[P(X=a,Y=b) = P(X=a)P(Y=b)\]
Mutual information exploits this:
\[I(X,Y) = E_{P(X,Y)} \left (\log \frac{P(X,Y)}{P(X)P(Y)} \right )\]
which, for a sample, becomes
\[\hat{I}(X,Y) = \Sigma_i \Sigma_j \frac{n_{ij}}{n} \log \left( \frac{n_{ij}
  n}{n_{i.} n_{.j}} \right)\]
The closer mutual information is to zero, the more independent the
attributes are. Because this value is not normalized, it is hard to
interpret on its own; mutual information is often used to compare
pairs of attributes for feature selection or other tasks.
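The plug-in estimate above is a direct sum over the same kind of count table used for the chi-square test; a sketch (hypothetical counts, natural logs):

```python
# Sketch: plug-in mutual information estimate from co-occurrence
# counts n_{ij}; log base e, so the result is in nats.
from math import log

counts = [[30, 10],
          [20, 40]]
n = sum(sum(r) for r in counts)
row = [sum(r) for r in counts]         # n_{i.}
col = [sum(c) for c in zip(*counts)]   # n_{.j}

I = sum(counts[i][j] / n * log(counts[i][j] * n / (row[i] * col[j]))
        for i in range(len(row)) for j in range(len(col))
        if counts[i][j] > 0)           # skip empty cells (0 log 0 = 0)
print(I)   # a value near 0 would suggest independence
```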

Another alternative is the fractal dimension. Assume values for $X$
and $Y$ have been normalized (for instance, with z-scores) so that
they lie between 0 and 1. Geometrically, they are all contained in a
square of side 1. This can be divided into $r^2$ squares, each one
measuring $\frac{1}{r}$ on the side. Let $N(r)$ be the number of
squares that contain at least one $(X,Y)$ point (i.e.\ the non-empty
squares). As we increase $r$, we get more fine-grained
squares. The Hausdorff fractal dimension is given by
\[D_0 = \lim_{r \rightarrow \infty} \frac{\log(N(r))}{\log(r)}\]
Since for a sample of size $n$ the count $N(r)$ can never exceed $n$, in
practice this limit is always zero. What is done instead is to plot $\log(N(r))$
versus $\log(r)$ for some values of $r$. Note that if all the $(X,Y)$
values are together in a small region, then $N(r)$ remains almost
constant as $r$ grows. If the points are located along a line, $N(r)$
will grow linearly, and if they are scattered all over (i.e.\ $X$ and
$Y$ are independent) it will grow quadratically.
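The box-counting step can be sketched in a few lines; the two synthetic point sets (a diagonal line and a uniform grid) are illustrative assumptions:

```python
# Sketch: counting non-empty 1/r x 1/r cells N(r) for points on a
# line versus points spread over the unit square.
def box_count(points, r):
    """Number of cells of side 1/r containing at least one point."""
    cells = {(min(int(x * r), r - 1), min(int(y * r), r - 1))
             for x, y in points}
    return len(cells)

line = [(i / 100, i / 100) for i in range(100)]                  # diagonal
grid = [(i / 10, j / 10) for i in range(10) for j in range(10)]  # scattered

for r in (2, 4, 8):
    print(r, box_count(line, r), box_count(grid, r))
# N(r) grows ~linearly in r for the line (2, 4, 8) and
# ~quadratically for the scattered points (4, 16, 64)
```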


Non-parametric approaches include: histograms to identify frequencies
of values, association rules to identify values that co-occur
frequently, classification and neural networks to derive rules about
data structure, and Bayesian networks to identify dependencies among
sets of attributes.

Counting to produce frequency tables and histograms is the most basic
technique. For sets of attributes, counting single occurrences gives
marginal totals, and counting co-occurrences approximates joint
probabilities. For instance, for attributes $X$ and $Y$,
\[P(X=a) = \Sigma_b P(X=a,Y=b)\]
gives the marginal probability that $X$ takes value $a$. The
conditional probability can also be estimated, as well as the joint
probability, from the counts.
In a table, the marginals are the total rows and total column counts;
the conditionals are the rows and columns; and the joints are the
cells. That is, the marginals are given as above; the conditional
$P(X=a|Y=b)$ is the column for value $Y=b$ (assuming $Y$ is in the
columns), and the joint $P(X=a,Y=b)$ is the cell located at column
$Y=b$, row $X=a$. 

Note that these are related, since
\[P(X=a|Y=b) = \frac{P(X=a,Y=b)}{P(Y=b)}\]
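These estimates from counts can be sketched as follows (the count table and helper names are illustrative assumptions):

```python
# Sketch: estimating joint, marginal and conditional probabilities
# from a table of co-occurrence counts.
counts = {('a1', 'b1'): 20, ('a1', 'b2'): 10,
          ('a2', 'b1'): 30, ('a2', 'b2'): 40}
n = sum(counts.values())

def joint(a, b):
    """P(X=a, Y=b): the cell count over the total."""
    return counts[(a, b)] / n

def marginal_x(a):
    """P(X=a): the row total over the total."""
    return sum(v for (x, _), v in counts.items() if x == a) / n

def conditional(a, b):
    """P(X=a | Y=b) = P(X=a, Y=b) / P(Y=b)."""
    p_b = sum(v for (_, y), v in counts.items() if y == b) / n
    return joint(a, b) / p_b

print(marginal_x('a1'))          # 30/100 = 0.3
print(conditional('a1', 'b1'))   # 20/50  = 0.4
```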

Histograms can be univariate (over a single variable) or multivariate.
A critical parameter is the width of the bin or interval. An
equi-width histogram makes all bins the same width; an equi-depth
histogram sets the boundaries of the bins so that each has an equal
number of data points. Quantiles are an example of the latter.

Note that for each bin, only a count is kept, so other information
about the distribution (like mean) may be hard to recover. One can
keep additional information (for instance the mean of each bin).

FROM GUIDE TO INTELLIGENT DATA ANALYSIS, Berthold et
al. (\cite{GUIDE}).

\section{Introduction}
Data storage and manipulation always happen in the context of a
project that has certain goals. What happens more and more is
that the data is later reused in different projects, with different
goals. Hence, it is important to deal with the data in such a way that
it is clear what transformations and decisions about structure and
format are being taken with respect to the current goal, so they can
be re-examined when the goal changes.

Managing projects is a complex field, and not our goal. Here we will
only say a few general things to give context, and then focus on the
tasks that are directly related to data management issues.

One of the first problems, usually, is to understand exactly what the
goal is, and how it affects the data.
As one proceeds with decisions on data cleaning, data structuring, and
data analysis, understanding of the problem usually changes
(improves?). Thus, it is also important to document the decisions
taken and the reasons for them, so that alternative analyses can be
pursued if deemed necessary (which, in many cases, they will be).
For instance, if one decides to use a regression model, all attributes
should be transformed into numerical type. Such transformations must
be done carefully, and should be documented, in case other,
alternative transformations are to be tried (and, of course, one may
decide not to use regression after all).

Metadata and documentation are also crucial in helping with {\em
  repeatability}, which is becoming more and more important in
scientific experiments. 

The very first step is to determine the exact goal of the
project. This may be quite difficult, so besides specifying the goal
as narrowly and exactly as possible, one should include criteria for
deciding success (perhaps on a sliding scale), so that a good solution
can not only be built, but tested. Understanding the domain and the
data also happens at this step. These two are different, but
inter-related tasks. The data can be seen as a sample from the domain,
so it's important to understand both in order to assess whether the
data is truly representative of the domain, and what it can and cannot
tell us about the domain. This is also very helpful in establishing
the {\em data quality}. It would be good also to evaluate, at this
point, whether the project is realistic, and what risks of failure it
faces, but we don't discuss managerial issues here. 

Population is the set of objects under study. Each object is
represented by a set of observations. 

A sample is a subset of the population, extracted to be analyzed and
draw conclusions about the population. A sampling mechanism is used to
extract the subset; it is important that it's not biased so the sample
is random and representative of the population. We can sample
repeatedly from a population, obtaining a set of samples; the
distribution of a statistic computed across these samples is called
its sampling distribution.

Note that what counts as the population
depends on the context. One can consider a population to be just a
sample of some larger set that is not feasible to observe all at once.

Exploratory Data Analysis is a set of tools to get an approximate idea
of what the data is like when we have no other knowledge (like a
hypothesis to test). For
instance, using histograms and scatterplots can give us an idea of the
different values in the data. This is only descriptive, so assumptions
are minimal. However, all assumptions can and should be written down. 

Fitting a probability distribution is one basic tool. Usually, a
single-variable (univariate) distribution for a certain
attribute/feature is the simplest measure to take. But multivariate
distributions for several attributes can also be attempted. The {\em
  joint distribution} of two variables is typical. Also, the {\em
  conditional distribution} tells us if one attribute is connected in
some way to another.

Fitting a model means estimating the value of the parameters of the
model using the data. Several methods to choose optimal values for a
given data set exist, like maximum likelihood estimation.

Overfitting is the phenomenon of trying to attain values of the
parameters that explain the data very well, thereby making them
subject to how good or representative the data is. By overfitting we
run the risk that the values obtained are not good representatives of
data outside the sample. 

EDA uses plots, graphs and summary statistics. At a  minimum, mean,
median, mode, upper and lower quartiles and outliers are sought for
each attribute. Plotting data distributions is also done. Then,
looking at pairwise relationships between pairs of attributes is
attempted. If data is temporal, time series are also tried. 

The goal is to gain some understanding about the data, not necessarily
to build a model or to confirm/disconfirm any hypothesis. Also, EDA
serves as a sanity check (to make sure the data is what it is supposed to
be) and even helps with data cleaning (using information about outliers and
distributions).


\section{Data Understanding}

Data understanding should not be limited to what is required for the
current goal, since as stated our understanding of this goal (or the
goal itself) may change. 

First, we try to understand the structure, and identify
attributes. Usually, we assume we have a representation of
a set of objects, each one described by several features, or
characteristics, or attributes. This can be structured in different
ways: in a tabular way, data is structured as a collection (set, list
or table) that puts together raw values in records or tuples. When
the relations between objects are the (only) important thing, we
represent the data as a graph or network (which can be represented as
a matrix). We can have both, objects with simple properties and
relations to each other, as in a relational database. But we always
assume that there are some attributes at the bottom of it all.


Some people in the Data Quality literature talk about {\em accuracy}:
how close the value is to the real value. However, this is a complex
issue, and accuracy can be considered a derived measure based on other
factors like time and precision.


\subsection{Visualization}
Visualization can be used here, especially for a single attribute or
two attributes at a time.

For a single attribute, a {\em bar chart} or {\em histogram} can
give a quick overview of value distributions. There is no general rule
to choose the number of bins, though. {\em Sturges' rule}: the number
of bins for $n$ data points should be (assuming all bins have equal
width) $\lceil \log_2 n + 1 \rceil$, but this works best for normal
distributions and moderate sample sizes. Another approximation: choose the
width $h$ of the bins, and then use $\lceil \frac{max(X) - min(X)}{h}
\rceil$ bins, where $X$ is the data points or sample. To determine
$h$, use the sample's standard deviation, $s$, and calculate $h =
\frac{3.5s}{n^{\frac{1}{3}}}$. Instead of $3.5s$, another possible
value is $2\, IQR(X)$, where $IQR(X)$ is the inter-quartile range of the
sample (the length of the interval with the middle 50\% of the data).
Too few bins can be misleading; too many bins
result in a scattered plot with too many peaks. All these
methods are very sensitive to {\em outliers}, which force the bins to
become too wide; thus, one may want to ignore outliers in these
calculations.
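The three rules above can be sketched as follows (the helper names are illustrative; the $3.5s$ width is a form of Scott's rule, and the $2\,IQR$ variant follows Freedman-Diaconis):

```python
# Sketch: bin-count rules for a histogram of n data points.
from math import ceil, log2
from statistics import stdev, quantiles

def sturges(n):
    """Sturges' rule: ceil(log2(n) + 1) bins."""
    return ceil(log2(n) + 1)

def scott_width(xs):
    """Bin width h = 3.5 s / n^(1/3)."""
    return 3.5 * stdev(xs) / len(xs) ** (1 / 3)

def fd_width(xs):
    """Bin width h = 2 IQR / n^(1/3) (Freedman-Diaconis style)."""
    q1, _, q3 = quantiles(xs, n=4)
    return 2 * (q3 - q1) / len(xs) ** (1 / 3)

def n_bins(xs, h):
    """Number of bins: ceil((max - min) / h)."""
    return ceil((max(xs) - min(xs)) / h)

xs = list(range(100))
print(sturges(len(xs)))            # 8
print(n_bins(xs, scott_width(xs)))
print(n_bins(xs, fd_width(xs)))
```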

{\em Boxplots} can also be used for exploration. The boxplot of a
symmetric distribution looks very different from the one of an
asymmetric distribution. 

For two attributes, a {\em scatterplot} or a {\em density plot} is
useful. Also, {\em binning}, as in histograms, can be used in two
dimensions. To avoid coincidence of values (values that fall in
exactly the same place), {\em jitter} can be added (slightly moving
all values to avoid coincidence and occlusion).

\subsection{The State Space Approach}
The {\em state space} metaphor: data as vectors, i.e.\ points in an
$n$-dimensional space. Normalization of values leads to a normalized
space. Idea of distance (Euclidean, others), which can also be
normalized. Idea of neighborhood, density and sparsity. From here, knn
algorithms can be explained. Histograms can be seen as cutting a 1-D
space into segments, a 2-D space into rectangles, a 3-D space into cubes,
etc., and giving the density of each.

Note: if we calculate the density at each point, and consider it a
continuous function of the points, we can map it and get a map with
valleys (low density points), peaks (high density points), plateaus
(equally dense points), with the gradient giving the difference in
densities. 

The dimensionality of the space (number of dimensions) is very
important. When the space has many dimensions we may want to reduce
them; this helps with sparseness, for instance. When the space has too
few dimensions, we may want to enlarge: this helps distinguish objects
(points) better.


\subsection{Normalization}
Normalization of values: range (where the value is placed in the
minimum-maximum range), z-score or similar. However, the logistic
function can also be used, as it leaves some `room' at the top and the
bottom for extra large or extra small (outlier) values, and also for
out-of-range values that we want to include. Let $v$ be a value; then
the logistic function is \[\frac{1}{1 + e^{- v}}\] This gives the
typical `sigmoid' graph, softly approaching the minimum and maximum (0
and 1) without ever reaching them.

Linear scaling: identify maximum and minimum values in domain (or in
sample), then transform value $v$ to $\frac{v\ -\ min}{max\ -\ min}$. 
This will give a value between 0 and 1.
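The three normalizations can be sketched together (function names and sample values are illustrative assumptions):

```python
# Sketch: min-max scaling, z-score, and the logistic squashing
# function described above.
from math import exp
from statistics import mean, stdev

def min_max(v, lo, hi):
    """Linear scaling of v into [0, 1] given the domain min and max."""
    return (v - lo) / (hi - lo)

def z_score(v, mu, sigma):
    return (v - mu) / sigma

def logistic(v):
    """Sigmoid: approaches 0 and 1 without ever reaching them."""
    return 1 / (1 + exp(-v))

xs = [2.0, 4.0, 6.0, 8.0]
mu, sigma = mean(xs), stdev(xs)
print([min_max(v, min(xs), max(xs)) for v in xs])
print(z_score(8.0, mu, sigma))
print(logistic(0))                            # 0.5
print(logistic(30) < 1, logistic(-30) > 0)    # True True
```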

\subsection{Outlier Detection}
While highly intuitive, there is no formal definition of what it means
for a data point to be an {\em outlier}, since it depends on the
context.  Outliers may indicate a data quality problem (an error in
data acquisition/representation), or they may indicate a random
fluctuation, or they may represent a truly infrequent, exceptional or
abnormal situation. In some applications, this may be exactly what we
are looking for. In others, it may be a good idea to get rid of
outliers (e.g.\ when trying to summarize the data).

Outlier detection for single attributes depends on the type of
attribute: for nominal attributes, we can calculate probabilities for
each possible value; an outlier will be a value with much lower
probability of appearance. For numerical values, it is harder to
decide what is an outlier without assuming an underlying
distribution. In a normal distribution, 95\% of values are within 2
standard deviations from the mean, so values beyond that (or, if we
want to be more restrictive, beyond 3 standard deviations, which
cover over 99\% of all values) can be classified as outliers. If the
underlying distribution is not normal, for instance for exponential
distributions (or any distribution which is very skewed or has a long,
heavy tail), it may not be possible to tell outliers from regular
values.
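A minimal sketch of the 3-standard-deviation rule (the data is made up; note that a single extreme value also inflates the estimated standard deviation, so the rule needs a reasonable sample size to work):

```python
# Sketch: flagging numerical outliers as values more than k standard
# deviations from the mean, assuming approximate normality.
from statistics import mean, stdev

def outliers(xs, k=3):
    mu, sigma = mean(xs), stdev(xs)
    return [x for x in xs if abs(x - mu) > k * sigma]

# 20 readings near 10, plus one anomalous reading.
xs = [9.8, 10.1, 10.0, 9.9, 10.2] * 4 + [25.0]
print(outliers(xs))   # [25.0]
```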

For multidimensional data, it is usually necessary to do a full
analysis like clustering to decide if a value is an outlier. 

\subsection{Missing Values}
Handling missing values has two parts: identifying missing values, and dealing
with them. In some datasets, missing values will be explicitly
identified (with some marker), but most of the time they are not: the
value is simply not present. When markers are used, they need to be
identified. For instance, a $-1$ may be used in a numeric field that is
supposed to have only positive values. In such a case, it is necessary
to eliminate the $-1$ before doing any statistical analysis.

Berthold et al. distinguish 3 types of missing values: let $X$ be an
attribute where some values are missing, and $Y$ be all other
attributes on the same event, entity or fact as $X$. Let $X_{miss}$ be
a binary random variable that denotes whether a value is missing
($X_{miss} = 1$) or present ($X_{miss} = 0$). Then values
missing on $X$ can be
\bi
\item {\em missing completely at random (MCAR)} or {\em observed at
  random}: the values are missing independently of the underlying
  value of $X$, and of any values of $Y$, that is
\[P(X_{miss}) = P(X_{miss}|X, Y)\]
Ex from Berthold et al: assume a sensor for air temperature that runs
out of battery at some point in time. The battery running dry is not
related to the air temperature, or to any other weather variable.

In this case, the missing values can be filled in because the values
of $X$ that we have present give us a good idea of the distribution of
values in $X$, so we can infer the underlying distribution and replace
the missing values by the mean or other method.
\item {\em missing at random (MAR)}: the values are missing
  independently of the underlying value of $X$, but may depend on
  other values $Y$, that is
\[P(X_{miss}|X) = P(X_{miss}|X,Y)\]
Ex from Berthold et al: assume a person is in charge of replacing
batteries in the air sensor, but he does not do it when it rains. Then
the battery is more likely to be dead when it's raining (one of the
$Y$), although it has nothing to do with air temperature.

In this case, the missing values can be filled if we can find out what
exactly is the relationship between $Y$ and the missing values, for
instance, building a classifier (see below).
\item {\em non-ignorable missing values}: the values are missing
  independently of other values $Y$, but may depend on the underlying
  value of $X$, that is
\[P(X_{miss}|Y) = P(X_{miss}|X,Y)\]
Consider again the air temperature sensor, but now assume that it
malfunctions at temperatures under zero degrees. Then other variables
may be not related (i.e. it may rain or not rain at any temperature)
but the sensor fails in a manner related to the missing value. The
problem with these cases is that there may not be a way to replace the
value meaningfully.
\ei
How can we tell the cases apart? One way is to train a classifier on $Y$
and see if one can predict $X_{miss}$ with it (not $X$, but just
$X_{miss}$, so this is a binary classifier). If so, clearly the missing
values depend on $Y$ and we are in the second case. If not, we are
in the first or the third case. Unfortunately, it may not be possible
to distinguish the last case from the other two from the data alone.
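A crude stand-in for the classifier test is to compare the rate of missing $X$ values across values of $Y$; a sketch with hypothetical sensor records (the data and helper name are assumptions, not from the source):

```python
# Sketch: comparing the missing-value rate of X (temperature) across
# values of Y (rain). A large difference suggests the MAR case.
records = [
    # (weather, temperature or None when the reading is missing)
    ('rain', None), ('rain', None), ('rain', 12.0), ('rain', None),
    ('dry', 15.0), ('dry', 14.0), ('dry', None), ('dry', 16.0),
]

def missing_rate(records, y_value):
    subset = [temp for y, temp in records if y == y_value]
    return sum(t is None for t in subset) / len(subset)

print(missing_rate(records, 'rain'))   # 0.75
print(missing_rate(records, 'dry'))    # 0.25
# Very different rates: missingness depends on Y (here, the rain).
```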

PYLE BOOK, ch. 8

\section{Data Understanding in R}
The command {\tt summary}, with a data set as argument, will make the
system generate basic statistics about the input. It can also be
applied to the result of any analysis!

Histograms are generated by the function {\tt hist}. The partition into
bins can be specified by passing a vector of values as a second,
optional argument: a vector with values $(a_0,\ldots,a_k)$ will create
bins $(a_i, a_{i+1})$ for $i=0,\ldots,k-1$. A boxplot can be generated by
the function {\tt boxplot}, which takes one or several attributes as
parameters, or even a whole dataset (categorical attributes are turned
into numerical ones by enumerating them, which does not always make sense).

Plots are generated by the {\tt plot} command: with two arguments,
each an attribute, it will generate the scatter plot of both
attributes.  With a complex dataset as an argument, it will generate
all scatterplots, one for each pair of attributes in the dataset. If
an attribute is categorical, the function {\tt as.numeric(attr)} will
transform it into a numerical attribute for the plot. Jitter can be
added to the scatterplot by calling {\tt plot(jitter(attr))}.

PCA can be carried out by function {\tt prcomp(dataset)}. Any
categorical attribute must be excluded from the call to {\tt
  prcomp}. Optional parameters {\tt center=T, scale.=T} will carry out
a z-score standardization before applying PCA. Example:
{\tt data.pca $\leftarrow$ prcomp(data,center=T,scale.=T)}. The
functions {\tt summary} or {\tt plot} can be applied to {\tt data.pca}.
To apply the results of the PCA to a new data set {\tt x}, {\tt
  predict(data.pca,newdata=x)} will work as long as {\tt x} has the
same schema as {\tt data}.

A distance matrix for a dataset can be calculated with {\tt
  dist(dataset)}. Correlation coefficients can be calculated as
follows: assume {\tt attr1} and {\tt attr2} are two attributes. Then
{\tt cor(attr1,attr2)} computes the (Pearson) correlation coefficient,
while {\tt cor.test(attr1,attr2,method=``spearman'')} will calculate the
Spearman coefficient (``kendall'' and ``pearson'' can also be used as
the method).

Finally, there is a library {\tt outliers} that has some tests for
outliers.

\section{Data Understanding in SQL}
NOTE: all the following can be done with one-line commands in R, and
almost as shortly in Python with the right packages. The reason it is
explained here is that: a) it is a very good opportunity to check the
understanding of SQL; and b) if your data is in a database, using SQL
will allow computation of some quick results without having to dump
the data into a file, and then load it into R or Python. Finally, for
small datasets, R and Python will usually be as fast or faster than
the database because they load all data into memory and work there
directly. But for very large datasets (those that do not fit into
available memory), the database approach may be more efficient. Thus,
if one wants an approach that works regardless of size, this may be a
good work-around.

The following are typically applied to numerical variables. However,
analyzing the distribution of values with a histogram can also be done
with non-numerical variables. What cannot be done is to calculate the
mean, standard deviation, and other moments (this would assume that
there is an ordering and a ratio in the domain, see data types). But
counting value frequencies can always be done and is a good way to
start a preliminary data analysis (see Data Understanding).

\subsection{Categorical Variables}
If we cannot apply basic statistics, how do we know when a sample of
categorical variables is representative of the overall population? If
the sample is random, it still should be representative. And (assuming
randomness) the larger the sample the better. One test is to see how
often a new value appears in enlarged samples; when new values rarely
appear, this increases the confidence that the sample covers the
distribution of values quite well.

Sometimes it is necessary to transform non-numeric values into
numerical ones for later use with tools that deal only with numerical
values. There is no universal method to achieve a good transformation;
in fact, there is not even a definition of what would constitute a
`good' transformation. Such a transformation adds information (as the
new values have order and perhaps ratio) which didn't exist, and hence
in a sense makes things up. Usually what is required is that no
bias is added, so that tools that are applied to the data after the
transformation reach the same conclusions as they would when applied
to the data prior to the transformation (including ignoring
non-numerical values).

When the domain is {\em closed} and {\em static} (see types of data)
we know there are exactly $m$ values a variable for that domain can
take (ex: U.S. states, weeks in a year) or at least a bound, a maximum
number of values (ex: U.S. phone numbers). In this case, numbers 1 to
$m$ can be assigned. However, this implies an ordering that may not
exist in reality, or may distort an existing order. For instance,
numbering weeks in a year from 1 to 52 ignores the fact that after
week 52 comes week 1 again, as there is a cycle. 

\subsection{Basic Statistics}
Measures of central tendency (mean, median, mode, geometric mean and
weighted mean) and dispersion (standard deviation, quartiles).

Assume a table Data with column Value; then mean is:

\begin{verbatim}
SELECT AVG(Value) FROM Data; 
\end{verbatim}

We can also calculate a {\em weighted mean} if we have the weights
available; assume table Data has column Value and also column weight:

\begin{verbatim}
Select sum(value*weight)/sum(weight) from Data;
\end{verbatim}

The median is more complex: first we need to sort the values and rank
them, which can be done with an autoincrement field when inserting into
a temporary table:

\begin{verbatim}
Create Table Median (seq autoincrement, val type);

Insert into Median (Select Value from Data order by Value);
\end{verbatim}

Since the largest seq value equals the number of values, we can use

\begin{verbatim}
select (max(seq)+1)/2 from Median 
\end{verbatim}

to find the value for the median, for instance

\begin{verbatim}
Select val from Median where seq = (select (max(seq)+1)/2 from Median)
\end{verbatim}

for an odd number of values; for an even number we need to average the
values at positions max(seq)/2 and max(seq)/2 + 1.

Also, for odd number:

\begin{verbatim}
SELECT a.Value median
FROM Data a, Data b
GROUP BY a.Value
HAVING
    SUM(CASE WHEN b.Value <= a.Value
       THEN 1 ELSE 0 END) >=(COUNT(*))/2
    AND
    SUM(CASE WHEN b.Value >= a.Value
       THEN 1 ELSE 0 END)>=(COUNT(*)/2) 
\end{verbatim}

The mode is the most frequently occurring value, and it can be found
with grouping:

\begin{verbatim}
Select val from
  (Select Value as val, count(*) as freq
   from Data group by Value) as Freqs
where freq = (select max(freq2)
              from (select count(*) as freq2
                    from Data group by Value) as F)
\end{verbatim}

or also, using the non-standard {\tt TOP} syntax of some systems:
\begin{verbatim}
SELECT TOP 1 Value, COUNT(*) as freq
FROM Data
GROUP BY Value
ORDER BY freq DESC
\end{verbatim}

The geometric mean is the $n$-th root of the product of $n$ data
points; it is not affected by outliers as much as the regular mean.
How can this be done in SQL? One trick is to use logs: sum the logs of
all values, divide by the number of values, and take the antilog of the
result:

\begin{verbatim}
select exp(sum(log(Value)) / count(Value))
  from Data
\end{verbatim}

Since sum divided by count is how average is calculated, this can be
simplified to 

\begin{verbatim}
select exp(avg(log(Value)))
  from Data
\end{verbatim}
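The log trick is easy to verify outside the database; a minimal Python
sketch on made-up values:

```python
import math

# The geometric mean equals exp(mean(log x)): compare both forms.
values = [2.0, 8.0]

direct = math.prod(values) ** (1.0 / len(values))                    # n-th root of product
via_logs = math.exp(sum(math.log(v) for v in values) / len(values))  # antilog of mean log

print(direct, via_logs)   # both are (approximately) 4.0
```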

The average rate of change, for data that represents growth or decline
(usually over time), can be calculated by checking the ratios of
successive values. Sometimes we want the geometric mean of a rate of
change: assume table {\tt Prices(price,year)}. We calculate the
geometric mean of the rate of change of the prices by

\begin{verbatim}
select power(lastprice/firstprice, 1.0/(lastyear - firstyear))
from (select price as firstprice, year as firstyear
      from Prices where year = (select min(year) from Prices)) as first,
     (select price as lastprice, year as lastyear
      from Prices where year = (select max(year) from Prices)) as last
\end{verbatim}

The year difference tells us which root to take; the division of last
by first price tells us the total price change. If the geometric mean
is greater than 1, we have an increase; otherwise we have a decrease.
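The same computation in Python, on a made-up prices table stored as
year-to-price pairs:

```python
# Geometric mean of the yearly rate of change (hypothetical data,
# following the Prices(price, year) schema).
prices = {2010: 100.0, 2011: 110.0, 2015: 200.0}

first_year, last_year = min(prices), max(prices)
first_price, last_price = prices[first_year], prices[last_year]

# (last/first)^(1/(years elapsed)): the average yearly growth factor.
growth = (last_price / first_price) ** (1.0 / (last_year - first_year))
print(growth > 1)   # True: prices increased on average
```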

For dispersion: a histogram can be built with a grouping query; the
key is what to do when the bin size is not a value in a column. Assume
a table {\tt Heights(age,size)} where the age is an integer, and we
want to create a histogram with a bar for each age. This is trivial
with a group by. But assume now that we want our bars to represent 5
years each, starting with the earliest present age.

\begin{verbatim}
select age, size, ceil((age - minage + 1)/5.0) as bin
from Heights, (select min(age) as minage from Heights) as M
\end{verbatim}

This maps each value to a bin starting at 1: the minimal age gets
mapped to bin 1, since (age - minage + 1) is 1 for it; ages within the
next 5 years also land in bin 1, and so on, with the ceiling of the
division doing the grouping into 5-year intervals.
Now we can group by bin and add the sizes.
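The bin expression itself can be checked in isolation; a quick Python
sketch with made-up ages:

```python
import math

# ceil((age - minage + 1) / 5) maps each age to a 5-year bin numbered from 1.
ages = [12, 13, 16, 17, 21, 22]
minage = min(ages)

bins = [math.ceil((age - minage + 1) / 5) for age in ages]
print(bins)   # [1, 1, 1, 2, 2, 3]
```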

In general, if we have a column with {\tt Values} on it, and want a
histogram with width $n$,

\begin{verbatim}
Select Value/n as number, (Value/n)*n as Low, (Value/n+1)*n as High, 
       Count(Value) as Frequency
From Values
Group by number, Low, High
Order by number
\end{verbatim}

returns a table where {\tt number} indexes the bins starting at 0
(assuming integer division), {\tt Low} is 0 and {\tt High} is $n$ for the first
interval, {\tt Low} is $n$  and {\tt High} is $2n$ for the second
interval, and so on (and of course {\tt Frequency} is the number of
values in the interval). Note that each bin covers from {\tt Low} to
{\tt High-1}. To show percentages instead of raw counts, we need to
pre-compute the total number of values. To keep a {\em cumulative
  total} is more complex in SQL. It can be done with a join or,
equivalently, a subquery. The easier and more efficient way to do this
nowadays is to use window functions. Assume again a column
{\tt Value}. We add a column {\tt Order} if one does not exist.

\begin{verbatim}
Select Value, (Select Sum(Value)
               From Values V
               Where V.Order <= V2.Order) as Cumulative
From Values V2
\end{verbatim}

Equivalently:

\begin{verbatim}
Select V2.Value, Sum(V.Value) as Cumulative
From Values as V2, Values as V
Where V.Order <= V2.Order
Group by V2.Value
\end{verbatim}

With a window function (note that there is no {\tt PARTITION BY} and
no {\tt GROUP BY}: the whole table forms one running sum):

\begin{verbatim}
SELECT Value, SUM(Value) OVER(ORDER BY Order
                              ROWS BETWEEN UNBOUNDED PRECEDING
                              AND CURRENT ROW) AS Cumulative
FROM Values
\end{verbatim}
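The running total can be sanity-checked from Python with SQLite
(window functions require SQLite 3.25 or later); the table name is
changed to {\tt Vals} and {\tt Order} is quoted because both are
reserved words there:

```python
import sqlite3

# Cumulative total via a window function, on made-up data.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE Vals ("Order" INTEGER, Value REAL)')
conn.executemany("INSERT INTO Vals VALUES (?, ?)",
                 [(1, 10.0), (2, 20.0), (3, 30.0)])

rows = conn.execute('''
    SELECT Value,
           SUM(Value) OVER (ORDER BY "Order"
                            ROWS BETWEEN UNBOUNDED PRECEDING
                                     AND CURRENT ROW) AS Cumulative
    FROM Vals
''').fetchall()
print(rows)   # [(10.0, 10.0), (20.0, 30.0), (30.0, 60.0)]
```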

The range is trivial to calculate:
\begin{verbatim}
select max(size) - min(size) from Heights;
\end{verbatim}

The standard deviation is a built-in function in most databases
nowadays, but if you want to or need to calculate it, the following
formula  can be recreated in SQL\footnote{The typical formula for
  standard deviation is
\[\sqrt{\frac{\Sigma_{i=1}^n (x_i\ -\ \mu)^2}{n - 1}}\]
 with $\mu$ the mean; but the formula in the text is
  equivalent and easier to compute in one pass over the data. Note
  that the original formula can also be computed in one pass by using
  the function {\tt AVG} to yield the mean.}:

\[\sqrt{\frac{n (\Sigma_{i=1}^n x_i^2) - (\Sigma_{i=1}^n x_i)^2}{n (n - 1)}}\]

\begin{verbatim}
select sqrt((count(Value)*sum(Value*Value) - sum(Value)*sum(Value))
            / (count(Value)*(count(Value)-1.0))) as dev
From Data
\end{verbatim}

In some cases we will need the {\em variance}, which is simply the
square of the standard deviation. Variance is also available as a built-in in
most systems, but it's good to remember the formula:

\[Var(x) = \frac{\Sigma (x - \mu)^2}{n} = \frac{\Sigma x^2}{n} -
(\frac{\Sigma x}{n})^2\]

The third moment about the mean (the skew) can be calculated
similarly, as can the fourth moment about the mean (the kurtosis).
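As a sketch of the higher moments, here are the (population) third and
fourth standardized moments computed in Python on made-up data:

```python
import statistics

# Standardized skew and kurtosis: moments about the mean, scaled by the
# appropriate power of the variance (population versions).
xs = [1.0, 2.0, 2.0, 3.0, 10.0]
n = len(xs)
mu = statistics.fmean(xs)

var = sum((x - mu) ** 2 for x in xs) / n
skew = (sum((x - mu) ** 3 for x in xs) / n) / var ** 1.5
kurt = (sum((x - mu) ** 4 for x in xs) / n) / var ** 2

print(skew > 0)   # True: the long right tail gives positive skew
```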


\subsection{Hypothesis Testing}
Many times we fit a known statistical distribution to a set of data;
we must then test whether the model is a good fit.

First, a model is chosen. Second, parameters are estimated from the
data. Third, the model with the parameters is used to generate some
data. Fourth, the data generated is compared to the original, given
data. 

First: a distribution is chosen. Typical discrete distributions are the
binomial, Poisson, geometric, and hypergeometric. Typical continuous ones are the
uniform, normal, gamma, exponential, Weibull, beta, chi-square,
Student's t, and F. Each one comes with certain parameters.

Second: the mean and standard deviation are calculated from the data
(sample). Besides that, sample size, degrees of freedom, and (for
discrete) number of 'successes' is usually all that is needed.
 The
degrees of freedom are usually calculated as: number of data points
minus number of distribution parameters (2, if mean and standard
deviation are all that's used\footnote{Since the standard deviation is
  calculated using the mean, many statistics textbooks consider these
  two parameters as related and hence they only count as 1.}) minus 1
(customary to account for the fact that we are working with a sample
that only approximates the real dataset). 

Third: the distribution is used to generate data (in some cases, like
the normal, tables exist to tell us how many data items we should
expect to find at certain points of the distribution).  We need to know
the formulas for the {\em density (mass) function} for each distribution;
these usually require a mean and a standard deviation. The ones
estimated from the data are used.

Fourth: for each distribution interval (for discrete) or area (for
continuous) $i$ we compare observed ($O_i$) data to expected ($E_i$)
data with a chi-square test: 

\[\chi^2 = \Sigma_{i=1}^n \frac{(O_i\ -\ E_i)^2}{E_i}\]

The second step we've already seen how to do; the fourth one is pretty
simple once one has a table where each row is an interval and there is
an attribute for observed data and one for expected data.
The complex thing in SQL is to generate the expected data from the
distribution. A trick is to generate the range of expected values in a
table with a zeroed column (one value per row), and then run a query
on this table changing each zero to the distribution prediction by
applying the formula to the corresponding value. Another thing that we
may need to do is to standardize the values, that is, to replace each
value $x$ with $\frac{x\ -\ \mu}{\sigma}$, where $\mu$ is the mean of
the distribution and $\sigma$ the standard deviation. 
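The fourth step is a one-liner once observed and expected counts are
side by side; a Python sketch with made-up per-interval counts:

```python
# Chi-square statistic: sum of (observed - expected)^2 / expected
# over the intervals (the counts here are made up).
observed = [18, 25, 30, 27]
expected = [20.0, 25.0, 30.0, 25.0]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 6))   # 0.36: (4/20) + 0 + 0 + (4/25)
```

The resulting value is then compared against a chi-square table at the
appropriate degrees of freedom.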


Another method that can be used is to examine the data and generate a
{\em histogram}. From the histogram, we can identify generic
properties of the data distribution, like symmetry and skewness, as
well as whether the distribution is unimodal, bimodal, or
multimodal. This can help us narrow down the choice of a theoretical
distribution. Alternatively, we can compare the histogram generated
from the data with the histogram that a theoretical distribution would
generate (SQL to generate histograms was shown above).

One key question is the width of the bins for the histogram. One has
to be careful here, as bins that are too wide may hide important
characteristics of the data (they are too coarse), while bins that are
too thin may make the data look quite irregular and not fit well into any
known distribution. One rule of thumb is, if one has $n$ data points,
to pick $\sqrt{n}$ intervals for the histogram.

This method has limitations. In particular, distributions with heavy
tails are usually not well accounted for with histograms. To look at
the tail of a distribution, an approach is to use {\em
  quantile-quantile plots}. The most commonly used quantiles are the
percentiles (out of 100): the quartiles (25, 50, 75 and 100\%), the
quintiles (20, 40, 60, 80 and 100\%) and the deciles (10, 20,... and
100\%). The quantiles obtained from data are compared to the quantiles
estimated from a theoretical distribution and plotted: the closer we
come to a straight line, the better that the distribution fits the
data. 
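A small Python sketch of the idea, pairing sample quantiles with the
quantiles of a fitted normal distribution (the data is made up; the
plotting positions $(i+0.5)/n$ are one common convention):

```python
from statistics import NormalDist

# Quantile-quantile comparison: sample quantiles vs. quantiles of a
# normal distribution fitted to the same data.
data = sorted([4.8, 5.1, 5.3, 5.6, 5.9, 6.0, 6.2, 6.4, 6.7, 7.0])
n = len(data)
mu = sum(data) / n
sigma = (sum((x - mu) ** 2 for x in data) / (n - 1)) ** 0.5
fitted = NormalDist(mu, sigma)

# Pair each ordered sample value with the corresponding theoretical
# quantile; points near the line y = x suggest a good fit.
pairs = [(data[i], fitted.inv_cdf((i + 0.5) / n)) for i in range(n)]
for sample_q, theory_q in pairs:
    print(round(sample_q, 2), round(theory_q, 2))
```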

FITTING A NORMAL DISTRIBUTION: The density function for the normal
distribution is \[\frac{1}{\sigma \sqrt{2 \pi}}e^{- \frac{1}{2}
  (\frac{x - \mu}{\sigma})^2}\]
where $\mu$ is the mean, $\sigma$ the standard deviation, and $x$ is
the data. Example: a phone company records the lengths of telephone
calls. The data is converted to a table DATA with columns {\em number
  of minutes, observed number of calls with those minutes}. This can
be done for each number of minutes (1,2,3,\ldots) or as a histogram
with time intervals, after choosing a width (0 to 2 minutes, 2 to 4
minutes, etc. for a width of 2 minutes). We calculate estimates for
$\mu$ and $\sigma$ from the data, either the original one or table
DATA. We then apply the density function to the {\em number of
  minutes} column (so {\em number of minutes} is $x$) to get the
column {\em expected number of calls} (if we used a histogram, we
can calculate expected values for the upper and lower margins and take
the average). While this is a general procedure that will work with
any distribution (see the next examples), in the case of the normal
distribution another possibility is to calculate the {\em normal
  variate} for each value $x$ as $\frac{x - \mu}{\sigma}$, carry a
running total for this variate, and then
look up in a normal table the percentage of all values that we should
expect to see up to $x$ (if this normal table has been loaded into
the database as a table with schema {\em x, percentage}, this is a
simple join). This number is a percentage, but multiplied by the
total number of observations in the sample it still yields an {\em
  expected number of calls}. Now, all we have to do is to compare
the {\em observed} and {\em expected} number of calls (in general,
the observed and expected frequencies) using the chi-square rule. In
SQL:
\bi
\item create table DATA and populate it with data. If the data is in
  {\em raw} format {\em callid, number of minutes}, a simple group by
  and count query will produce the table DATA.
\item create a table NORMAL with columns {\em data,
  observed-freq, expected-freq}, and fill in the first two columns from
  DATA, and the last one with nulls.
\item calculate mean and standard deviation in DATA (see above). This
  can be left in a table of their own (or added to NORMAL as two
  additional columns)
\item populate column {\em expected-freq} by applying the formula
  above to column {\em data} and the mean and standard deviation of
 previous step. In the case of a normal distribution, this can be done
 with two methods, as explained above. 
\item Run a query that applies the chi-square test to {\em
  observed-freq} and {\em expected-freq} to get a single result.
\ei

FITTING A POISSON DISTRIBUTION: The density function for Poisson
distributions is \[\frac{e^{- \mu} \mu^{x}}{x!}\]
where as usual $\mu$ is the mean, and $x$ is the data. Example: a web
site collects the number of orders they get each day for a certain
time period ($n$ days). This is then converted to a table DATA with
schema {\em number of orders per day, observed number of days with that
  number}. From the original data or from DATA we can calculate the
same mean, which is used for $\mu$. Then the table DATA is expanded by
using the density function and computing, for each number of orders
per day, the Poisson expected value: that is, $x$ is the number of
orders per day, and we calculate the expected percentage of days that
would have that value (column {\em expected percentage}). This expected
percentage is then multiplied by the total number of days to get the
expected number of days with the value (column {\em expected number of
  days}). Note that since the Poisson distribution assigns a nonzero
probability to every nonnegative integer, while the sample is finite,
the distribution will give a small but not
zero value for values of $x$ that are not present in the data. Because of that,
and rounding, the column {\em expected number of days} may not exactly
add up to the total number of days. We can check this by adding the
{\em expected percentage} column in a cumulative or running total (see
above) and seeing how close to 1 it
gets. Finally, the chi-square value is calculated from columns {\em
observed number of days} and {\em expected number of days}.
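The whole Poisson procedure fits in a few lines of Python; the
observed counts below are made up:

```python
import math

# Poisson fit: observed number of days with x orders vs. expected
# counts from the Poisson mass function e^(-mu) mu^x / x!.
observed = {0: 10, 1: 25, 2: 30, 3: 20, 4: 15}   # orders/day -> days observed
total_days = sum(observed.values())
mu = sum(x * d for x, d in observed.items()) / total_days   # sample mean

expected = {x: total_days * math.exp(-mu) * mu ** x / math.factorial(x)
            for x in observed}

chi2 = sum((observed[x] - expected[x]) ** 2 / expected[x] for x in observed)
print(chi2 >= 0)   # True; compare chi2 against a chi-square table
```

Note that the expected counts sum to slightly less than the total,
since the distribution puts some probability on values of $x$ beyond
those observed.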

FITTING AN EXPONENTIAL DISTRIBUTION: The density function for the
exponential distribution is \[\lambda e^{- \lambda x}\]
Once again, $x$ is the data value. This
distribution describes the time between events in a Poisson process,
and uses the parameter $\lambda > 0$ for the rate of the process (the
rate at which events arrive). It is common to simplify the above
function using $\frac{1}{\mu}$ instead of $\lambda$ (with $\mu$ as the
(sample) mean), yielding \[\frac{1}{\mu} e^{- \frac{x}{\mu}}\] instead. It is also
common to use the cumulative distribution function: \[1 - e^{-
  \frac{x}{\mu}}\] 
In the latter form, as usual we can plug in value $x$ to get a
cumulative percentage, which is then multiplied by the total number of
data points to get an expected value. Note that this number is
cumulative; to get an expected value we should subtract the sum of the
previous values. 
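Using the cumulative form, expected per-bin counts are differences of
cumulative values; a Python sketch with made-up parameters:

```python
import math

# Exponential fit via the cumulative form 1 - exp(-x/mu): expected
# counts per histogram bin are differences of cumulative values.
mu = 3.0                      # sample mean of the inter-event times (made up)
total = 200                   # number of data points (made up)
edges = [0, 2, 4, 6, 8]       # histogram bin edges

def cdf(x):
    return 1 - math.exp(-x / mu)

expected = [total * (cdf(hi) - cdf(lo)) for lo, hi in zip(edges, edges[1:])]
print([round(e, 1) for e in expected])
```

As the output shows, expected counts decay from bin to bin, which is
the characteristic exponential shape.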

\subsection{Testing the Null Hypothesis}
The above compared a sample to a statistical distribution for goodness
of fit. Now suppose you are comparing the value of two parameters,
each one coming from a different set of data (say, two means). A very
common question is whether the two data sets have the same underlying
distribution, which could indicate that they are produced by
the same process (if they have different distributions, and the
process has not changed over the time the two samples were acquired,
then we assume they come from different processes). But two samples
may differ just by being samples; a better question is whether the
difference between means can be attributed to the randomness of the
sampling (it is not significant) or to two underlying processes (it is
significant). This is sometimes expressed as a test of the {\em null
  hypothesis} (the differences are not significant) versus the
{\em alternative hypothesis} (the differences are significant). Three things
may happen:
\be
\item if we accept the null hypothesis and the differences are indeed
  not significant, or if we reject the null hypothesis and the
  differences are significant, we are right.
\item if we reject the null hypothesis and the differences turn out
  not to be significant, this is a {\em type I error}.
\item if we accept the null hypothesis and the differences turn out to
  be significant, this is a {\em type II error}.
\ee
There are several procedures to minimize both types of errors. 

F-distribution, t-test.

Comparing more than 2 samples: {\em contingency test}.

Assume 4 car makers, A, B, C, and D, each testing the mileage of a
model over 3 days. They end up with the table (spreadsheet):

\begin{tabular}{c|c|c|c|c|} \\ \hline
Day & A & B  & C & D \\ \hline
day 1 & $v_{11}$ &  $v_{12}$ &  $v_{13}$  &  $v_{14}$\\ \hline
day 2 &  $v_{21}$ &  $v_{22}$ &  $v_{23}$ &  $v_{24}$ \\ \hline
day 3 &  $v_{31}$ &  $v_{32}$ &  $v_{33}$ &  $v_{34}$ \\ \hline
\end{tabular}

The question is: are there significant differences between the models?
To calculate this:
\be
\item we calculate totals across rows, across columns, and the grand
  total (all columns = all rows).

\begin{tabular}{c|c|c|c|c|c}  \\ \hline
Day & A & B  & C & D \\ \hline
day 1 & $v_{11}$ &  $v_{12}$ &  $v_{13}$ &  $v_{14}$ & $row_1$ \\ \hline
day 2 &  $v_{21}$ &  $v_{22}$ &  $v_{23}$ &  $v_{24}$ & $row_2$ \\ \hline
day 3 &  $v_{31}$ &  $v_{32}$ &  $v_{33}$ &  $v_{34}$ & $row_3$ \\ \hline
& $column_1$ & $column_2$ & $column_3$ & $column_4$ & $total$ \\ 
\end{tabular}

\item for each cell we calculate expected values as follows: $e_{ij} =
  (row_i/ total) column_j$. For instance, for
  $v_{11}$, expected value is $(row_1 / total) column_1$. 

\begin{tabular}{c|c|c|c|c|c}  \\ \hline
Day & A & B  & C & D \\ \hline
day 1 & $v_{11}$ ($e_{11}$) &  $v_{12}$ ($e_{12}$) &  $v_{13}$
($e_{13}$) &  $v_{14}$  ($e_{14}$) & $row_1$ \\ \hline 
day 2 &  $v_{21}$ ($e_{21}$) &  $v_{22}$ ($e_{22}$) &  $v_{23}$
($e_{23}$) &  $v_{24}$  ($e_{24}$) & $row_2$ \\ \hline
day 3 &  $v_{31}$ ($e_{31}$) &  $v_{32}$ ($e_{32}$) &  $v_{33}$
($e_{33}$) &  $v_{34}$  ($e_{34}$) & $row_3$ \\ \hline
& $column_1$ & $column_2$ & $column_3$ & $column_4$ & $total$ \\ 
\end{tabular}

\item once we have expected values for all cells, we compute a
  chi-square test between observed and expected values: $\Sigma_{i,j}
  \frac{(v_{ij} - e_{ij})^2}{e_{ij}}$. This is then
  compared against a chi-square table with (number of rows minus 1)
  $\times$ (number of columns minus 1) degrees of freedom.
\ee
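The three steps above can be sketched directly in Python; the 3-by-4
table of mileages below is made up:

```python
# Contingency test on a small made-up table (rows = days, columns = makers).
table = [[30.0, 28.0, 33.0, 29.0],
         [31.0, 27.0, 34.0, 30.0],
         [29.0, 26.0, 35.0, 28.0]]

# Step 1: row totals, column totals, grand total.
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

# Steps 2 and 3: expected cell values and the chi-square sum.
chi2 = 0.0
for i, row in enumerate(table):
    for j, v in enumerate(row):
        e = row_totals[i] * col_totals[j] / grand    # expected cell value
        chi2 += (v - e) ** 2 / e

df = (len(table) - 1) * (len(table[0]) - 1)          # degrees of freedom
print(df)   # 6
```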

NOTE: the table above can be represented in a relational database as
{\em row, column, value}. Then a CUBE operator can be used to
calculate all row and column totals and the grand total. The
computation of the chi-square then requires a self-join between the
table resulting from the CUBE and itself. This may not be a good idea
for large datasets, so a separate table with the totals can be set up
instead, as it will be smaller than the data table (the data table has
number of rows $\times$ number of columns tuples; the CUBE totals add only
number of rows + number of columns + 1 tuples).

\subsection{Fitting Distributions to Data}
FROM SQL BOOK

For numerical data, it's common to try to fit a distribution to it. By
using histograms, as indicated above, one can try to guess an
underlying distribution. Kernel Density Estimation (KDE) can also be
used for continuous data, in order to 'fill in' holes in the data set
and see which (smooth) distribution approximates the data
better. However, after the initial guess one should check that indeed
the chosen distribution is a good fit. This is called 'hypothesis
testing' in statistics. Usually, the chi-square test is used for
this. 

If we want to fit a normal distribution to the data, we estimate mean $\mu$
and standard deviation $\sigma$ from the data. Then we build a histogram, and
calculate the difference in frequency of values with a standard
table. However, before we do that we may need to normalize the values
to the z-score: value $v$ becomes $v_n = \frac{v - \mu}{\sigma}$. This
is called the {\em normal variate}. We can find $v_n$ in the normal
table to see which percentage of values should be there, and compare
what the table says with what the data says (the table is expressed as
a percentage, so we may need to add percentages to the raw counts of
the histogram). In a histogram with bin width $n$, the first normal
variate is $\frac{n - \mu}{\sigma}$; the second is $\frac{2n -
  \mu}{\sigma}$; and so on. In normal tables we may get the cumulative
function (how many values between the beginning of the distribution
and a given value). Thus, for $2n$ we get all values up until $2n$,
but the histogram has the values between $n$ and $2n$. We can then
subtract the values up to $n$, or we can have a cumulative total (and
a cumulative percentage) added to the histogram, to make the
comparison easier (see sql.stats.tex). An approach to make this easier
in SQL is to create a table with columns {\tt RAWSUM(average, stddev,
  total)} from the data; a table with columns {\tt TEMP(order, low, high,
  frequency, (high-average)/stddev as variate, 0 as expected)} from
RAWSUM and the data histogram (order, low, high, frequency come from
the histogram); fill in the {\tt expected} column from a standard
table, and compare {\tt frequency} (as a cumulated percentage) to {\tt
  expected}. The chi-square is computed as 
\[\chi^2 = \Sigma \frac{(Observed\ -\ Expected)^2}{Expected}\]
This is a trivial query in SQL. The result can then be compared to
$\chi^2$ tables.

To fit a Poisson distribution to data, we start with the histogram as
usual. The histogram of a Poisson rises very rapidly, and once the
peak is reached it drops very gradually. This is typical of discrete
events, where the non-occurrence makes no sense. This is also typical
of arrival and departure events, and it's why this distribution is so
frequently used in queue theory. The density function for Poisson is
\[\frac{e^{- \mu} \mu^x}{x!}\]
where $\mu$ as usual is the mean of the distribution, and $x$ is the
expected number of times the event can happen in a given time period
(so the above can be calculated for $x = 0, 1, 2,\ldots$). For instance, if
$x$ is the number of orders per day in an e-commerce site, this is the
percentage of days on which we can expect $x$ orders. Thus, we can build a
table {\tt Poisson(Xval, Expected)}, where for $Xval = 0, 1, 2, \ldots$
we calculate the expected value with the formula above, multiplying
the result (which is a percentage) by the total number of observations. For
instance, for 100 days, we multiply the percentage by 100, which will
tell us how many days, out of 100, we can expect to see $x$ orders. Again,
this is an expected value. We can compare this with the observed value
from the data, using $\chi^2$ again. Note that the Poisson values can
also be accumulated, to determine the days where orders will be $x$ or
less. 

To fit an exponential distribution, we again start with a histogram
and calculate the mean from the data. The density for an exponential
distribution is given by
\[\frac{1}{\mu} e^{- \frac{x}{\mu}}\]
where $\mu$ is again the mean, and $x$ is the random variable. In
practice, the above formula is integrated (from 0 to $x$) to give the
cumulative expected frequency, so $(1 - e^{- \frac{x}{\mu}})$ is used
instead. Thus, again we use the histogram and use the formula to
calculate the expected value for {\tt High} on each bin ($x = n, 2n,
\ldots$ for a bin of width $n$). This gives us the expected value,
cumulative. Again, we can add a cumulative value to the histogram or
we can subtract from each computation of $x$ the previous value. In
the end, we can apply $\chi^2$ to the expected and observed values. 

Number of degrees of freedom = number of bins in histogram - number of
parameters (1 for mean, 2 for mean and standard deviation) - 1.

Common distributions to try: for discrete, binomial, Poisson,
geometric, negative binomial, hypergeometric. For continuous, uniform,
normal, gamma, exponential, Weibull, beta, chi-square, F, student's t.




\section{Comparing two Data Sets}
Sometimes we want to compare two data sets. We may have some
information about one of them (say, mean and standard deviation), and
we want to know if the second data set 'fits' with the first one -in a
sense, if both data sets could have been generated by the same
underlying random process. Usually the first data set is large,
representing a population; and the second data set is a small sample,
and we want to determine that the sample came from the population. Let
$\mu$ and $\sigma$ be the mean and 
standard deviation of the first data set; let $\bar{x}$ be the mean of
the second data set, and $n$ its size. We then compute $\frac{\bar{x}
  - \mu}{\sigma / \sqrt{n}}$ and compare this value to a normal
distribution table at the required significance level. If the standard
deviation of the first data set is not available, the standard
deviation of the second set can be used; this is what the t-test uses.
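The statistic is a one-liner; a Python sketch with made-up population
parameters and sample:

```python
import math

# z statistic: sample mean vs. known population mean and stddev.
mu, sigma = 50.0, 10.0                      # population parameters (made up)
sample = [52.0, 55.0, 49.0, 58.0, 51.0]     # small sample (made up)

xbar = sum(sample) / len(sample)
z = (xbar - mu) / (sigma / math.sqrt(len(sample)))
print(round(z, 3))   # 0.671; compare against a normal table
```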

Suppose now that we have two samples, two data sets of about the same
size. Let $\mu_1$ and $s_1$ be the mean and variance (square of
standard deviation) for the first data set; and $\mu_2$ and $s_2$ be
the mean and variance for the second data set. First, we check how
different the variances are, using $F = \frac{s_1}{s_2}$ if $s_1 >
s_2$ and a table for the F distribution (and a given significance
level). If the variances are similar, we compute
\[\frac{\mid \mu_1 - \mu_2 \mid}{\sqrt{\frac{s_1(n_1 - 1) + s_2 (n_2 -
      1)}{n_1 + n_2 - 2} \times
    \frac{n_1 + n_2}{n_1 \times n_2} }}\]
where $n_1$ is the size of the first data set and $n_2$
is the size of the second data set, and check the
result against a t-table. If the variances are dissimilar, we compute
\[\frac{\mid \mu_1 - \mu_2 \mid}{\sqrt{\frac{s_1}{n_1} + \frac{s_2}{n_2}}}\]
and again check the result against a t-table.
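The equal-variance case can be sketched in Python; the two samples
below are made up, and $s_1$, $s_2$ denote variances as in the text:

```python
import math

# Pooled two-sample t statistic (equal-variance case) from the formula above.
a = [5.1, 5.5, 5.3, 5.8, 5.2]
b = [5.9, 6.1, 5.7, 6.3, 6.0]

def mean_var(xs):
    """Sample mean and sample variance (divide by n - 1)."""
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

m1, s1 = mean_var(a)
m2, s2 = mean_var(b)
n1, n2 = len(a), len(b)

pooled = (s1 * (n1 - 1) + s2 * (n2 - 1)) / (n1 + n2 - 2)
t = abs(m1 - m2) / math.sqrt(pooled * (n1 + n2) / (n1 * n2))
print(t > 0)   # compare t against a t-table with n1 + n2 - 2 degrees of freedom
```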


When comparing more than two samples, we can use a {\em contingency
  test}. Ex: assume 4 vehicles (A, B, C, D) are tested for gas
mileage, 3 days each, getting

\begin{tabular}{c|c|c|c|} \hline
Vehicle & Day 1 & Day 2 & Day 3 \\ \hline
A & $a_1$ & $a_2$ & $a_3$ \\ \hline
B & $b_1$ & $b_2$ &  $b_3$ \\ \hline
C & $c_1$ & $c_2$ &  $c_3$  \\ \hline
D & $d_1$ & $d_2$ &  $d_3$ \\ \hline
\end{tabular}

First we calculate column totals (per day) and row totals (per car):

\begin{tabular}{c|c|c|c|c|} \hline
Vehicle & Day 1 & Day 2 & Day 3 & Total \\ \hline
A & $a_1$ & $a_2$ &  $a_3$ & $A_t$ \\ \hline
B & $b_1$ & $b_2$ &  $b_3$ & $B_t$ \\ \hline
C & $c_1$ & $c_2$ &  $c_3$ & $C_t$ \\ \hline
D & $d_1$ & $d_2$ &  $d_3$ & $D_t$ \\ \hline
Total & $D_1$ & $D_2$ & $D_3$ & $T$ \\ \hline
\end{tabular}

We then calculate the expected value for $A$ on Day 1 as
$a_{1e} = \frac{A_t}{T} \times D_1$. We can do the same for each
vehicle and day (each data point). Once we have that, we can compare
the expected and observed values on each data point (for $A$ and Day
1, $\frac{(a_1 - a_{1e})^2}{a_{1e}}$), and add them all up. The final sum is
compared to what the $\chi^2$ table says.

The problem with this approach is that this looks like a spreadsheet,
not a database table. However, we can represent this and calculate the
results above using the new CUBE and ROLLUP operators (REF to Gray
original paper). In particular, the table above can be represented by
{\tt DATA(VehicleType, Day, miles)} and then a CUBE will calculate all
totals. Once that is done, computing the expected values and the
chi-square test is not that difficult.
