\chapter{Data Management with Databases}

\section{Introduction}
Advantages of databases: persistent storage and access control;
declarative integrity constraints; techniques for dealing with large
datasets. Disadvantages: the schema must be declared in advance and is
difficult to change afterwards; functionality is limited.

\section{Database Design}
Simple, intuitive approach. Use E-R concepts (1:1, 1:M, M:N
relationships), idea of normalization (eliminate 1:N and M:N
dependencies from tables). Description of 1NF; example: employee phone
numbers. Decomposition, and the issue of object decomposition.
Irregularity: objects with no value for an attribute vs. several
values (an employee with no phone vs. several phones).

\section{Basic SQL}
\bi
\item
SPJGH block. 
\item
Semi and outerjoins (problem with employee with no phones).
\item
Order, limit, rank, windows.
\item
Materialized views, subqueries in FROM: how to break down queries into
subqueries. 
\item
Dumping, loading database.
\item
How to represent matrices and vectors; add, subtract, multiply,
transpose, invert.
\ei
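The SPJGH block can be illustrated with a minimal sketch using
Python's built-in sqlite3 module; all table and column names below
(employee, dept, salary) are made up for illustration.

```python
# A minimal SPJGH (Select-Project-Join-Group-Having) block in sqlite3.
# The schema and data are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employee(eid INTEGER PRIMARY KEY, dept TEXT, salary REAL);
CREATE TABLE dept(dname TEXT PRIMARY KEY, city TEXT);
INSERT INTO employee VALUES (1,'sales',100),(2,'sales',200),(3,'hr',150);
INSERT INTO dept VALUES ('sales','NYC'),('hr','LA');
""")
# Selection (WHERE), projection (SELECT list), join (ON), grouping
# (GROUP BY) and group filtering (HAVING) in a single block.
rows = con.execute("""
    SELECT d.city, AVG(e.salary) AS avg_sal
      FROM employee AS e JOIN dept AS d ON e.dept = d.dname
     WHERE e.salary > 50
     GROUP BY d.city
    HAVING COUNT(*) > 1
""").fetchall()
print(rows)  # -> [('NYC', 150.0)]
```

The HAVING clause filters whole groups after aggregation, which is why
the department with a single employee drops out.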

\section{Advanced SQL}
\subsection{Other joins}
Semijoins and outerjoins to recreate objects; connect with design
issues.

\subsection{Vectors and Matrices}
Simulating vectors and matrices in SQL, following Joe Celko. A 3D
matrix (Celko uses ``matrix'' for structures of any dimension) can be
declared as

\begin{verbatim}
CREATE TABLE ThreeD
(element_value REAL DEFAULT 0.00 NOT NULL,
 i INTEGER NOT NULL
   CONSTRAINT valid_i
   CHECK(i BETWEEN 1 AND 3),
 j INTEGER NOT NULL
   CONSTRAINT valid_j
   CHECK(j BETWEEN 1 AND 4),
 k INTEGER NOT NULL
   CONSTRAINT valid_k
   CHECK(k BETWEEN 1 AND 5),
 PRIMARY KEY (i, j, k));
\end{verbatim}

Matrix equality test (the two matrices are equal iff their symmetric
difference is empty):

\begin{verbatim}
NOT EXISTS
((SELECT * FROM MatrixA
  UNION
  SELECT * FROM MatrixB)
 EXCEPT
 (SELECT * FROM MatrixA
  INTERSECT
  SELECT * FROM MatrixB))
\end{verbatim}
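Celko's test checks that the symmetric difference of the two tables is
empty. A sketch with Python's built-in sqlite3 (sample data made up);
since SQLite cannot parenthesize compound SELECTs directly, the UNION
and INTERSECT are wrapped as FROM-subqueries.

```python
# Matrix equality as an empty symmetric difference, in sqlite3.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MatrixA(i INT, j INT, element_value REAL);
CREATE TABLE MatrixB(i INT, j INT, element_value REAL);
INSERT INTO MatrixA VALUES (1,1,1.0),(1,2,2.0);
INSERT INTO MatrixB VALUES (1,1,1.0),(1,2,2.0);
""")
EQUAL = """
SELECT NOT EXISTS
  (SELECT * FROM (SELECT * FROM MatrixA UNION SELECT * FROM MatrixB)
   EXCEPT
   SELECT * FROM (SELECT * FROM MatrixA INTERSECT SELECT * FROM MatrixB))
"""
same = con.execute(EQUAL).fetchone()[0]
print(same)  # -> 1 (equal)
# Change one entry of B: the symmetric difference is now non-empty.
con.execute("UPDATE MatrixB SET element_value = 9 WHERE i = 1 AND j = 2")
print(con.execute(EQUAL).fetchone()[0])  # -> 0 (no longer equal)
```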

Matrix addition and subtraction are possible only between matrices of
the same dimensions. The obvious way to do the addition is simply: 

\begin{verbatim}
SELECT A.i, A.j,
       (A.element_value + B.element_value) AS element_tot
  FROM MatrixA AS A, MatrixB AS B
 WHERE A.i = B.i
   AND A.j = B.j;
\end{verbatim}

Multiplication by a scalar constant is direct and easy (here {\tt
@in\_multiplier} is a procedure variable):

\begin{verbatim}
UPDATE MyMatrix
   SET element_value = element_value * @in_multiplier;
\end{verbatim}

Matrix multiplication (the column index of A must match the row index
of B):

\begin{verbatim}
SELECT A.i, B.j, SUM(A.element_value * B.element_value)
  FROM MatrixA AS A, MatrixB AS B
 WHERE A.j = B.i
 GROUP BY A.i, B.j;
\end{verbatim}
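The join-and-aggregate product can be checked with Python's built-in
sqlite3: both matrices are stored as (i, j, element\_value) rows and
A's column index is joined to B's row index. The sample data is made
up.

```python
# 2x2 matrix product via join + SUM + GROUP BY, in sqlite3.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MatrixA(i INT, j INT, element_value REAL);
CREATE TABLE MatrixB(i INT, j INT, element_value REAL);
-- A = [[1,2],[3,4]],  B = [[5,6],[7,8]]
INSERT INTO MatrixA VALUES (1,1,1),(1,2,2),(2,1,3),(2,2,4);
INSERT INTO MatrixB VALUES (1,1,5),(1,2,6),(2,1,7),(2,2,8);
""")
rows = con.execute("""
    SELECT A.i, B.j, SUM(A.element_value * B.element_value)
      FROM MatrixA AS A, MatrixB AS B
     WHERE A.j = B.i          -- column index of A = row index of B
     GROUP BY A.i, B.j
     ORDER BY A.i, B.j
""").fetchall()
print(rows)  # -> [(1, 1, 19.0), (1, 2, 22.0), (2, 1, 43.0), (2, 2, 50.0)]
```

The result matches the hand computation A.B = [[19,22],[43,50]].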

Matrix by vector multiplication (the vector is stored as VectorV(i,
element\_value), and its index must match the column index of A):

\begin{verbatim}
SELECT A.i, SUM(A.element_value * V.element_value)
  FROM MatrixA AS A, VectorV AS V
 WHERE V.i = A.j
 GROUP BY A.i;
\end{verbatim}

Transpose:

\begin{verbatim}
CREATE VIEW TransA (i, j, element_value)
AS SELECT j, i, element_value FROM MatrixA;
\end{verbatim}

It is also possible to store matrices using PostgreSQL arrays.

\begin{verbatim}
CREATE TABLE matrices(
    matrixid    integer,
    matrix      numeric[][]
);

INSERT INTO matrices VALUES (1, '{{1,2,3},{4,5,6},{7,8,9}}');
\end{verbatim}
But now array subscripts are implicit, as in programming languages,
and cannot be manipulated except via procedures (see below). However,
generate\_subscripts is a convenience function that generates the set
of valid subscripts for the specified dimension of the given
array. Zero rows are returned for arrays that do not have the
requested dimension, or for NULL arrays (but valid subscripts are
returned for NULL array elements).  

\begin{verbatim}
SELECT * FROM arrays;
         a          
--------------------
 {-1,-2}
 {100,200,300}
(2 rows)

SELECT a AS array, s AS subscript, a[s] AS value
FROM (SELECT generate_subscripts(a, 1) AS s, a FROM arrays) foo;
     array     | subscript | value
---------------+-----------+-------
 {-1,-2}       |         1 |    -1
 {-1,-2}       |         2 |    -2
 {100,200,300} |         1 |   100
 {100,200,300} |         2 |   200
 {100,200,300} |         3 |   300
(5 rows)
\end{verbatim}
NOTE: to visit every cell we need to loop over the dimensions:
array\_ndims returns the number of dimensions of the array, so for
each dimension we can generate the series of its valid subscripts with
generate\_subscripts, and use the resulting indices to access the
array.

Another option: use unnest(array), which converts the array to a set
of rows in row-major order, and recover the indices with integer
division and modulus over the original array dimensions.
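The unnest-plus-modulus idea can be sketched in plain Python: flatten
a matrix in row-major order (as unnest() does) and recover 1-based (i,
j) indices from each flat position using the original number of
columns.

```python
# Recovering (i, j) subscripts from a row-major flattening.
flat = [1, 2, 3, 4, 5, 6]   # unnest of {{1,2,3},{4,5,6}}
ncols = 3
# Integer division gives the row, modulus gives the column (1-based).
cells = [(pos // ncols + 1, pos % ncols + 1, v)
         for pos, v in enumerate(flat)]
print(cells)
# -> [(1, 1, 1), (1, 2, 2), (1, 3, 3), (2, 1, 4), (2, 2, 5), (2, 3, 6)]
```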

The current dimensions of any array value can be retrieved with the
array\_dims function; array\_dims produces a text result, which is
convenient for people to read but perhaps inconvenient for
programs. Dimensions can also be retrieved with array\_upper and
array\_lower, which return the upper and lower bound of a specified
array dimension, respectively.

From the web: PostgreSQL arrays follow the basic definition of a
mathematical matrix. The array must be rectangular: if one element
contains an array, all other elements must contain an array of the
same dimensions. All members of an array must be of the same data
type.
Like tuples, arrays are ordered. However:
\bi
\item Unlike tuples, every array element must have the same data type.
  (Since text can encode any value, arrays of text can represent
  tuples of any type.)
\item Unlike tuples, arrays do not have a fixed number of elements:
  elements can be added without disturbing the basic type.
\ei
Like arrays, relations are basically rectangular and open ended (items
can be added or removed from the end). However:
\bi
\item Arrays are ordered, relations are not. This means that an array
  value is a domain, while a relation value is a set or bag of domains
  (depending on constraints).
\item All data types in an array must be identical. Relations do not
  have this restriction.
\ei

We could store matrices for matrix arithmetic as numeric[] arrays.
Example: solving simultaneous linear equations in PL/pgSQL.

\begin{verbatim}
create or replace function solve_linear(numeric[]) returns numeric[]
language plpgsql as
$$
declare
  retval numeric[];
  c_outer int;   -- counters
  c_inner int;
  c_working int;
  upper1 int;
  upper2 int;
  ratio numeric; -- caches a repeated calculation
begin
  IF array_upper($1, 1) <> array_upper($1, 2) - 1 THEN
    RAISE EXCEPTION 'bad input, must be n x n+1 matrix';
  END IF;
  upper1 := array_upper($1, 1);
  upper2 := array_upper($1, 2);
  FOR c_outer IN 1 .. upper1 LOOP
    FOR c_inner IN 1 .. upper1 LOOP
      IF c_inner = c_outer THEN CONTINUE;
      END IF;
      ratio := $1[c_inner][c_outer] / $1[c_outer][c_outer];
      FOR c_working IN 1 .. upper2 LOOP
        $1[c_inner][c_working] := $1[c_inner][c_working]
                                  - ($1[c_outer][c_working] * ratio);
      END LOOP;
    END LOOP;
  END LOOP;
  retval := '{}';
  FOR c_outer IN 1 .. upper1 LOOP
    retval := retval || $1[c_outer][upper1 + 1] / $1[c_outer][c_outer];
  END LOOP;
  RETURN retval;
end;
$$;

select * from solve_linear('{{1,2,3},{2,3,4}}'::numeric[]);
\end{verbatim}
One minor issue with the above code is that it throws a division by
zero when a pivot is zero, i.e. when the system is singular (for a 2x2
system, when the two lines are parallel); there is no row pivoting.
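The routine is plain Gauss-Jordan elimination; a Python transcription
makes it easy to check the algorithm outside the database (same loop
structure, 0-based indices).

```python
# Gauss-Jordan elimination on an n x (n+1) augmented matrix, mirroring
# the PL/pgSQL solve_linear function above.
def solve_linear(m):
    m = [row[:] for row in m]          # work on a copy
    n = len(m)
    for outer in range(n):             # eliminate column `outer`...
        for inner in range(n):         # ...from every other row
            if inner == outer:
                continue
            ratio = m[inner][outer] / m[outer][outer]
            for k in range(n + 1):
                m[inner][k] -= m[outer][k] * ratio
    # Back out the solution from the now-diagonal system.
    return [m[i][n] / m[i][i] for i in range(n)]

# x + 2y = 3, 2x + 3y = 4  =>  x = -1, y = 2
print(solve_linear([[1, 2, 3], [2, 3, 4]]))  # -> [-1.0, 2.0]
```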

\section{SQL in Linux}
The following assumes tabular text files: one record per line, with
delimited fields.

To get the basic SQL functionality,
\bi
\item Project: the command {\tt cut} extracts columns from a file,
  printing selected parts of each input line to standard
  output. Arguments include: -b (select only these bytes), -c (select
  only these characters), -d (use DELIM instead of TAB as the field
  delimiter), and -f (select only these fields; also print any line
  that contains no delimiter character, unless the -s option is
  specified). Only one of -b, -c or -f should be used; -d only makes
  sense together with -f.
 To remove duplicates, combine {\tt sort} and {\tt uniq}:
\begin{verbatim}
sort myfile.txt | uniq
\end{verbatim}
List only the lines that appear exactly once: {\tt sort myfile.txt | uniq -u}

List only the duplicate lines: {\tt sort myfile.txt | uniq -d}

One can even skip fields when doing the comparison:
{\tt uniq -f 3 logfile.txt} skips the first 3 fields.
NOTE: without prior sorting, {\tt uniq} only removes duplicate lines
that are adjacent.

\item Select: it depends on the condition. To select y lines starting
  at line x (i.e. lines x through x+y-1),
\begin{verbatim}
tail -n +x file | head -n y
\end{verbatim}
will do it. For a condition like ``age > 30'', we need to know which
column 'age' is in.
\item Join: the command {\tt join} takes two files and merges them
  into one by matching lines. {\tt join file1 file2} produces a single
  file; it assumes that each line in each input file consists of
  fields separated by whitespace. Join matches lines using one field
  from each file (checked for equality); by default, the first field
  is used, but this can be changed with the -1 and -2 options. Each
  takes a positive integer as argument: {\tt -1 n} uses the nth field
  of each line of the first file, and similarly {\tt -2 n} for the
  second.

When comparing fields, if they are both strings, difference in case
can be ignored with the -i option.

Join assumes that both input files are sorted on the join field;
otherwise it does as much work as possible and reports an error when a
field is found out of order (to check that both files are sorted
before doing any work, use the --check-order option). For this reason,
join is typically preceded by the {\tt sort} command.

Typical use: to join pw.tab (on its 4th field) with grp.tab (on its
3rd field):

\begin{verbatim}
$ sort -t $'\t' -k 4,4 /tmp/pw.tab > /tmp/pw.sort.tab

$ sort -t $'\t' -k 3,3 /tmp/grp.tab > /tmp/grp.sort.tab

$ join -t $'\t' -1 4 -2 3 /tmp/pw.sort.tab /tmp/grp.sort.tab
\end{verbatim}

This is tedious because (1) each file must be sorted by the join
column, (2) the field delimiter must be specified for each invocation
of sort and join, and (3) the join column index must be determined and
specified.

Using R to perform a join:

\begin{verbatim}
$ /usr/bin/r

> pw = read.delim('/tmp/pw.tsv', quote='')

> grp = read.delim('/tmp/grp.tsv', quote='')

> j = merge(pw, grp, by.x='gid', by.y='gid')

> write.table(j, '/tmp/pw_grp.tsv', row.names=F, sep='\t', quote=F)
\end{verbatim}

Using the Python library pandas to perform a join:

\begin{verbatim}
$ python

>>> import pandas as pd

>>> pw = pd.read_table('/tmp/pw.tsv')

>>> grp = pd.read_table('/tmp/grp.tsv')

>>> j = pd.merge(pw, grp, left_on='gid', right_on='gid')

>>> j.to_csv('/tmp/pw_grp.tsv', sep='\t', index=False)
\end{verbatim}


If the input files cannot be matched one to one, the -a FILENUM option
also prints those lines from file FILENUM (1 or 2) that could not be
paired (note the similarity to an outer join). It is also possible to
print only the unpaired lines with the -v option (note the similarity
to an antijoin).
\item Union: this can be done simply with {\tt cat file1 file2
  >result}. Note that there may be duplicates after {\tt cat}, so we
  can pipe the result through {\tt sort} and {\tt uniq}: {\tt cat
    file1 file2 |sort | uniq >result}.

\item Intersection: if neither input file contains duplicate lines,
  {\tt sort file1 file2 | uniq -d} keeps exactly the lines that appear
  in both files.
\item Difference: {\tt comm}, which compares two sorted files line by
  line, can be used to implement difference with
\begin{verbatim}
comm -23 <(sort file1) <(sort file2)
\end{verbatim}
Without options, this produces a 3-column output: the first column
holds the lines unique to file1, the second the lines unique to file2,
and the third the lines common to both files. The {\tt -23} tells it
to suppress the lines that appear only in the second file (2) and the
lines that appear in both (3). Clearly, this command can also be used
to implement intersection with {\tt -12}.

\begin{verbatim}
sort file1 file2 | uniq           Union of unsorted files
sort file1 file2 | uniq -d        Intersection of unsorted files
sort file1 file1 file2 | uniq -u  Difference of unsorted files
sort file1 file2 | uniq -u        Symmetric Difference of unsorted files
join -t'\0' -a1 -a2 file1 file2   Union of sorted files
join -t'\0' file1 file2	          Intersection of sorted files
join -t'\0' -v2 file1 file2       Difference of sorted files
join -t'\0' -v1 -v2 file1 file2   Symmetric Difference of sorted files
\end{verbatim}
\ei
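The sort/uniq recipes above implement set algebra over lines; the same
operations can be written with Python sets (assuming each input file's
lines are already unique), which makes the correspondence easy to
check with made-up data.

```python
# Set algebra mirroring the sort/uniq table, on hypothetical file lines.
f1 = {"a", "b", "c"}   # lines of file1
f2 = {"b", "c", "d"}   # lines of file2

print(sorted(f1 | f2))  # union:                ['a', 'b', 'c', 'd']
print(sorted(f1 & f2))  # intersection:         ['b', 'c']
print(sorted(f1 - f2))  # difference f1 - f2:   ['a']
print(sorted(f1 ^ f2))  # symmetric difference: ['a', 'd']
```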
Ironically, there is no Cartesian product-equivalent operation.

To make complex commands, use piping with '|'.

Besides the basic SQL operations, there is added functionality that
can be very helpful.

{\tt split filename} splits a file into several files. By default,
each output file contains 1000 lines, and the name of each file is
generated automatically as {\tt x**}, where each '*' is a letter. This
can be changed as follows:
\bi
\item The name for each file can be modified: the suffix length (by
  default 2) can be changed using -a option: {\tt split -a5 filename}
  will create names with 5 characters after the 'x'. To create split
  files with a numeric suffix instead,  use the -d option: {\tt split
    -d filename} will use as filenames {\tt x00, x01,...}.
\item The output of the command (what goes in each file) can be
  controlled as follows:
\bi
\item the size of each output file can be controlled with the -b
  option, which takes a number of bytes as argument; the number of
  lines per output file can be set with the -l option.
\item To split into a given number of chunks, use the -n option; -C
  instead limits the bytes per output file without breaking lines.
  Note that with -n, if there isn't enough input, zero-size files may
  be created; to avoid zero-sized files, use the -e option.
\ei
\ei
To put back together the result of a split, one can use the {\tt cat}
command. This command takes several file names as arguments and
produces a single file by concatenating the contents of all files, in
the order given.

The command {\tt paste} writes lines consisting of the sequentially
corresponding lines of each FILE, separated by TABs, to standard
output. With no FILE, or when FILE is -, it reads standard input. A
different field delimiter can be specified with -d. With -s, the
command pastes one file at a time instead of in parallel.

The {\tt sort} command can be used to sort any file; it is necessary
for {\tt join} to work, but it can also be used as an auxiliary step
before others.
