\chapter{Types of Data}
\label{types.data}

There are several ways to classify data.
Data can be classified according to size or complexity, or according
to structure. A useful idea is to think of data as samples from
certain domains, which are organized in certain ways. The
classification then depends on the domain and the structure.

\section{Types of Data Domain}
This section follows the presentation in {\em Guide to Intelligent
Data Analysis}, Berthold et al.\ (\cite{GUIDE}).


What kinds of attributes are there? What are their domains?
What type of domain is it: nominal (categorical), ordinal, or numeric? If
numeric, is it continuous or discrete? Is it an interval, ratio, or absolute scale?

As far as complexity goes, we have two types of domains: scalar and
vector. Scalar domains are
indicated by a simple value, while vector domains have components. For
scalar domains, we have two categories: nominal (categorical) vs.\ numerical
(some people call them qualitative and quantitative
data\footnote{There is no total agreement on the details of this
  division. For some people, qualitative data includes both nominal
  and ordinal data, while interval and ratio are the only truly
  numerical values.}). The first
kind is data that is observed and categorized, while the second is
the result of a measurement, so some particular numerical domain
is used to express the result. In more detail:
\bi
\item Nominal/categorical: the domain is a finite set, with no order among
  its elements. There may be a hierarchical organization, and then we need
  to decide on a {\em granularity level}: the level of the hierarchy at which
  we see the data. Some domains are fixed (months of the year) and others
  are dynamic, changing over time (products in a supermarket
  chain). Note that the values may be encoded as numbers, but that does not
  make them numerical attributes (a value is different from its
  representation). Some people distinguish between nominal and
  categorical values, using the latter to denote groups of things, or
  categories. For instance, ZIP codes are a label for a number of
  places, and so are breeds of dogs. The only meaningful operations on
  this type of domain are equality/inequality (i.e., asking whether a
  value is or is not present) and putting values into
  categories. However, we can associate {\em frequencies} (numbers of
  occurrences) with the categories, and then ask more questions of the
  frequencies, since they form a numerical attribute.
\item Ordinal attributes: there is a linear ordering on the domain. This order
  is irreflexive, asymmetric, and transitive. It allows us to compare
  values with respect to the order. Examples would be classifying
  symptoms of a sickness as very mild, mild, medium, severe, or very
  severe; or opinions on a subject (strongly agree, agree, neutral,
  disagree, strongly disagree). Note that, since values can be ranked,
  we can compute a {\em median} (and, on the frequencies, a mode), but
  we cannot do further arithmetic.
\item Numerical: the result of a measurement. Representations of
  continuous values have finite precision, since they must be
  represented finitely.
\bi
\item interval scale: the zero is arbitrary. Examples: dates, temperature. Here the
  distance between one value and the next is (approximately) the same,
  so we can compare values and add/subtract them, but taking ratios
  makes no sense. We can take the mean, median, and mode, as well as the
  range and the standard deviation/variance.
\item ratio scale: there is an absolute zero, so ratios make sense. Examples: time
  intervals, distance. Temperature measured in Kelvin (which has an absolute
  zero) belongs here, while temperature in Celsius or Fahrenheit does not. We can compare
  values, add/subtract, multiply/divide, find the mode, median, and
  mean (including the arithmetic and geometric mean), and carry out pretty
  much any mathematical manipulation.
\item absolute scale: both the zero and the unit of measurement are
  absolute. Example: any counter.
\ei
\ei
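To make the permissible operations concrete, here is a small Python
sketch (all sample values are made up) of what each scale type
supports:

```python
from statistics import mean, median, mode

# Hypothetical sample data, one list per scale type.
colors = ["red", "blue", "red", "green"]   # nominal: only counting makes sense
severity = [1, 2, 2, 3, 5]                 # ordinal codes: ranking makes sense
temps_c = [20.0, 22.5, 25.0]               # interval: differences, not ratios
distances = [1.0, 2.0, 4.0]                # ratio: all arithmetic makes sense

print(mode(colors))                  # nominal: the most frequent category
print(median(severity))              # ordinal: the middle value in the ranking
print(temps_c[1] - temps_c[0])       # interval: differences are meaningful
print(distances[2] / distances[0])   # ratio: quotients are meaningful
print(mean(distances))               # ratio: the mean is also meaningful
```

Computing, say, the mean of the color codes would run without error in
most systems, but the result would be meaningless: the scale type, not
the programming language, determines which operations make sense.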

Discrete vs.\ continuous domains: a discrete domain has a finite or
countable number of values (e.g., the integers or rationals within a
range), while a continuous domain has an uncountable number of values
(any real number within a range). Further,
the true value of a continuous variable may not be finitely
specifiable, so what we have is almost always an approximation (and a
discrete one!).
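The approximation point can be seen directly in any language with
floating-point numbers; a quick Python illustration:

```python
import math

# Continuous values are stored as finite (discrete!) approximations.
x = 0.1 + 0.2
print(x)          # 0.30000000000000004, not exactly 0.3
print(x == 0.3)   # False: the binary representations are approximate

# Comparisons of "continuous" data should therefore use a tolerance.
print(math.isclose(x, 0.3))  # True
```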


For numerical attributes, what is the distribution of values?

What is known about the domains, e.g., do they change over time?
What is known about the data, e.g., its precision and timeliness?
Are the attributes correlated, or independent?

\section{Types of Data Structure}

Data comes as {\em structured}, {\em semistructured} and {\em
  unstructured}. To explain what this means, we introduce the concept
of {\em schema}.

\subsection{Structured Data}
Structured data most commonly comes in what is called the {\em
  tabular} or {\em columnar} format: it is organized into rows and
columns. Intuitively, each row describes an entity, or an event, or a
fact that we wish to capture. Sometimes the row is called a {\em
  record}: each component of the record (each column) is called an
{\em attribute} or a {\em feature}. 

Each row can be identified with a {\em vector} of values.

Difference with spreadsheets: a spreadsheet is a two-dimensional grid
of cells, not a collection of records or vectors.

Each component or attribute is unique, in that it would not make sense
to have two copies of it. This is different from having several
values! Variations of this approach: 
\bi
\item each component is simple vs. some components are complex.
\item components can be repeated (several values).
\item some components can be optional vs. all components must be
  present. 
\ei

The first variant is the simplest. Combining the other variants leads to XML
and formats like JSON, and to semi-structured data. In this section we
stick to the first one.

In files, tabular data comes almost always in the same format: each
row is separated by a newline character (so that each row takes one
line), each column is separated by a character that acts as 
a delimiter: usually a comma or a tab. When each row is separated by
a newline and each column by a comma, we talk about a CSV (Comma
Separated Values) file.
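A minimal sketch of reading such a file with Python's standard {\tt
csv} module (the file content here is made up):

```python
import csv
import io

# A small in-memory CSV "file": a header row, then one record per line.
data = io.StringIO("name,age,city\nAlice,30,Boston\nBob,25,Denver\n")

reader = csv.DictReader(data)  # uses the first row as attribute names
rows = list(reader)
print(rows[0]["name"], rows[0]["city"])  # each row is one record
```

Note that every value comes back as a string, including {\tt age}: the
file format carries no type information, a point we return to in the
section on data representation.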

Flat files (tabular data with no complex components) can be dealt with
in Linux and in relational databases; this is where they
shine. Repeated components can be modeled. Optional components can be
dealt with too, but at the cost of added complication. In relational
databases, repeated components are handled with {\em
  normalization}. Optional components can sometimes be modeled with
extra tables, but this may lead to a proliferation of tables when
the attributes are very variable. In modern relational systems, there
is the possibility of creating types and subtypes: in this case, all
the commonalities go into a type. Example: all real estate properties are
divided into commercial and residential; residential is further
subdivided into houses and apartments. Inheritance allows us to reuse
the common attributes.
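The type/subtype idea can be sketched with class inheritance; here is
a hypothetical Python version of the real estate example (all names
and attributes are made up):

```python
from dataclasses import dataclass

# Common attributes live in the supertype; each subtype adds its own.
@dataclass
class Property:            # all real estate properties
    address: str
    price: float

@dataclass
class Residential(Property):
    bedrooms: int

@dataclass
class House(Residential):  # inherits address, price, bedrooms
    lot_size: float

h = House(address="1 Main St", price=300000.0, bedrooms=3, lot_size=0.25)
print(h.address, h.bedrooms)  # inherited and own attributes together
```

A relational system with subtypes works analogously: a query against
the supertype sees only the common attributes, while each subtype
table adds its specific ones.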

\subsection{Semistructured Data}
XML, Json. Formally, this can be seen as trees. Trees and graphs are
also two important formats for data. There is some structure there,
but also loose as the number of children (neighbor) for a node can
vary.

\subsubsection{Tree and Graph data}
Storing trees in relational: note similarity between one-to-many
relationships, hierarchies, and trees. Limits of SQL. Encoding of
trees and use of WITH RECURSIVE. 
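As a sketch of the encoding, here is a small Python/SQLite example
(table and node names are made up) that stores a tree as a
(node, parent) table and computes all ancestors of a node with {\tt
WITH RECURSIVE}, a query that SQL without recursion cannot express:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tree (node TEXT, parent TEXT)")
# A tiny tree: a is the root, b and c are its children, d is a child of b.
conn.executemany("INSERT INTO tree VALUES (?, ?)",
                 [("a", None), ("b", "a"), ("c", "a"), ("d", "b")])

# Start from the parent of 'd' and repeatedly follow parent links upward.
ancestors = conn.execute("""
    WITH RECURSIVE anc(node) AS (
        SELECT parent FROM tree WHERE node = 'd'
        UNION
        SELECT tree.parent FROM tree JOIN anc ON tree.node = anc.node
    )
    SELECT node FROM anc WHERE node IS NOT NULL
""").fetchall()
print(ancestors)  # the ancestors of 'd' are 'b' and 'a'
```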

XML as important example of trees: calculate ancestors, descendants,
siblings. 

Storing graphs in relational databases: the limits of SQL again. This
time, there is no perfect encoding. Most analyses are iterative. One
idea is to store graphs as matrices and use matrix algorithms. Some
definitions and examples from social networks.
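A minimal illustration of the matrix idea, in pure Python: squaring a
graph's adjacency matrix counts the paths of length 2 between each
pair of nodes (the graph below is made up).

```python
# A small directed graph on 3 nodes as an adjacency matrix:
# A[i][j] == 1 iff there is an edge from node i to node j.
A = [[0, 1, 1],
     [0, 0, 1],
     [1, 0, 0]]
n = len(A)

# Matrix multiplication: (A^2)[i][j] counts the length-2 paths i -> j.
A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
print(A2[0][2])  # length-2 paths from node 0 to node 2 (via node 1)
```

Higher powers of the matrix count longer paths, which is why many
iterative network analyses are phrased in terms of matrix operations.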

\subsection{Unstructured Data}
Unless data has been generated randomly, it always has some
structure. The term {\em unstructured} really refers to data where the
structure is not {\em explicit}, as it is in structured and
semi-structured data. Most of the time, this refers to {\em text},
that is, to a group of sentences written in some natural
language. Clearly, such data has structure (the grammar of the
language), but it is not shown with the text. Thus, in order to
process such data, extra effort is needed.

Depending on the level of processing required, we distinguish three
levels of text analysis:
\bi
\item {\em Information Retrieval (IR)}: finding the documents in a
  collection that are relevant to a query.
\item {\em Information Extraction (IE)}: extracting structured facts
  (entities, relationships) from text.
\item {\em Natural Language Processing (NLP)}: full linguistic
  analysis of the text (parsing, semantic interpretation).
\ei

IR is supported in most databases nowadays, thanks to {\em inverted
  indices}. The idea of {\em keyword search} and ranking; many databases
support it. In files, a lot can be done with {\tt grep} and its
variants, but approximate string matching is usually not supported.
Does R have keyword search support?
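A toy version of an inverted index can be built in a few lines of
Python (the documents here are made up):

```python
from collections import defaultdict

# A tiny document collection: id -> text.
docs = {1: "data comes in many types",
        2: "text data has hidden structure",
        3: "structured data fits in tables"}

# Inverted index: map each word to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

# Keyword search = intersecting the posting sets of the query words.
hits = index["data"] & index["structure"]
print(hits)  # documents containing both words
```

Real systems add stemming, stop-word removal, and ranking on top of
this basic structure, but the core lookup is the same set
intersection.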

IE and NLP are vast, specialized fields that rely on machine learning;
toolkits like Mallet and systems like GATE are available.

\section{Data Representation}
Data in the computer is stored in files. The data is encoded,
i.e., values are in ASCII or in Unicode.

\subsection{Data Types}
There is a distinction between data and its encoding.

Data as strings: the number 123 is not the same thing as the string
``123''.

Non-Unicode data needs to be converted.

Most systems support numbers and strings. Numbers are usually of
different types: integers, floats. Some systems offer special types
for date, time, and a few more (Boolean, etc.). 

Social security numbers are not numbers (cannot be added,
subtracted). Identifiers are not numbers (same). Zip codes are not
numbers: leading zeros may be important. 

In some systems, different types of numbers are distinct, i.e., 5 is
not the same as 5.0. This is important for comparisons and merging.

Categoricals and ordinals are not strings: string operations make no
sense on them.
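Several of these pitfalls can be demonstrated directly in Python:

```python
# The value and its representation are different things.
print("123" == 123)    # False: a string is not a number
print("123" + "1")     # "1231": string concatenation, not addition
print(123 + 1)         # 124

# ZIP codes are not numbers: converting loses the leading zero.
print(int("02138"))    # 2138 -- the code is damaged

# 5 and 5.0 compare equal in Python, but not in every system.
print(5 == 5.0)        # True here; other systems may disagree
```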


\subsection{Data Structure}
The file extension suggests the
type of data a file contains, although sometimes the extension reflects
the application that created it, and there isn't a one-to-one correspondence
(e.g., a text editor can be used to create text files but also CSV files,
etc.). Typical extensions: .txt, .doc, .docx, .xml, .html, .xls, .xlsx,
.csv, .json, .dat. There are also other, proprietary formats.

How file types map to data type:
\bi
\item tabular: with/without headers, fixed width, or delimited (CSV);
\item semistructured: XML, HTML, JSON;
\item unstructured: text.
\ei
Programming languages define their own file types, although most of
the time they are simply text: .c, .cpp, .py.

Data from databases can be {\em dumped} into some tabular type;
likewise, most database systems can {\em load} data from a tabular
type. 

The command {\tt file} helps determine the type of a file. However,
this command only checks whether a file is text (the file
contains only printing characters and a few common control characters,
and is probably safe to read on an ASCII terminal), executable (the
file contains the result of compiling a program in a form
understandable to some UNIX kernel or another), or data, meaning
anything else (data is usually ``binary'' or non-printable).
Exceptions are well-known file formats (core files, tar archives) that
are known to contain binary data.

To transform between file types: {\tt antiword} takes a Word document
and produces a plain text version; {\tt py\_xls2csv} is a Python
utility that extracts CSV from an Excel spreadsheet.

ASCII vs.\ binary: numbers take much less space in binary than in
ASCII. For example, 128 takes 3 bytes in ASCII ('1', '2', '8'), but
since $128 = 2^7$, it fits in 1 byte in binary. This matters
especially for large files.
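A quick Python check of the size difference, using the standard {\tt
struct} module to pack the number as a one-byte unsigned integer:

```python
import struct

# The number 128 as text vs. as a binary (unsigned one-byte) integer.
as_text = "128".encode("ascii")    # three bytes: '1', '2', '8'
as_binary = struct.pack("B", 128)  # one byte, since 128 < 2**8

print(len(as_text), len(as_binary))  # 3 1
```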

HDF5 files?
