\chapter{Data in the Small: File Management}

\section{Introduction}
When working with small data sets, one may do without a database; most
SQL can be simulated in Unix with command line actions. One big caveat
is data management: data in files should be in {\em tabular} or
table-like format. For each data set, one needs metadata: at the very
least, schema information. This can be managed in one of two ways: as
the very first row in each file, or in a separate file. Having a
separate file allows each data file to be completely uniform, and it
gives the opportunity to record links among files. However, it also
means that metadata management needs a separate program, and
consistency has to be maintained. For now, we talk exclusively about
data analysis, and leave data management (and metadata) for later.

\be
\item Unix structure, command processing, the file system.
\item Processes; input and output redirection; pipes.
\item Users and permissions.
\item Basic commands: ls, cat, head, tail.
\ee
\section{Unix Basics}
Interaction with the shell; command line interface.

\subsection{Processes}
A program is a set of instructions that tells the computer how to
do something. A {\em process} is a program that is being executed; we
say that the process is running.

Commands are programs, some built into the shell, that allow you to do
basic manipulations. Some common commands for dealing with files are
listed in Table 1.

\begin{center}
Table 1: Common file commands\\[2pt]
\begin{tabular}{ll}
{\tt ls} & list directory contents \\
{\tt cd} & change directory \\
{\tt pwd} & print the working directory \\
{\tt mkdir} & create a directory \\
{\tt less} & page through a file \\
{\tt head} & show the first lines of a file \\
{\tt tail} & show the last lines of a file \\
{\tt mv} & move or rename a file \\
{\tt rm} & remove a file \\
{\tt rmdir} & remove an (empty) directory \\
{\tt file} & report a file's type \\
\end{tabular}
\end{center}

A command is given as {\tt command-name options arguments}, where the
options modify the command's behavior and the arguments name the files
or data the command operates on.

Commands can run in the background; several commands can be chained
with '\&\&', which can be read as 'and': the second command runs only
if the first one succeeds.
The {\tt tee} command takes the output of a process and splits it in
two: it shows it on the screen {\em and} saves it to a file too.

Put a running program in the background with Control-Z (suspend)
followed by {\tt bg}; stop it with Control-C; kill it for good with
Control-\textbackslash.

The {\tt ps} command lists information about the programs that are
running on a computer. For instance, {\tt ps aux | grep username} will
show all programs run by {\tt username}. For each program, {\tt ps}
lists a process id or PID (the second column); this can then be used
to stop the program with the command {\tt kill pid}; to truly
terminate it, use {\tt kill -9 pid}.
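A minimal sketch of this workflow, using {\tt sleep} as a stand-in for any long-running program:

```shell
# Start a long-running program in the background; $! holds its PID.
sleep 100 &
pid=$!
# Confirm it shows up in the process list.
ps -p "$pid"
# Terminate it for good; -9 (SIGKILL) cannot be caught or ignored.
kill -9 "$pid"
```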

\subsection{Files and Streams}


A {\em stream} is a sequence of elements. It has a beginning but not
necessarily an end; it may be created over time. Usually a stream is
treated by dealing with each element in turn.

A {\em file} is two concepts in one: a {\em storage unit} and a
special unit of data, namely a finite stream. As a storage unit, it is
a part of a disk where data is stored; it has a name so we can refer
to it.

Files are organized together in a {\em file system}: a hierarchy of
directories, with each file identified by its {\em path}.

Programs deal with streams and files. There are three streams (called
the {\em standard streams}) that are always present: {\em standard
  input}, {\em standard output}, and {\em standard error}. Standard
input provides data to the program; it is by default connected to the
keyboard. Standard output is where the program delivers the data it
produces; it is by default connected to the display. Standard error is
where the program writes any error messages; it is also connected by
default to the display. These default connections can be changed, in a
process known as {\em redirection}: {\tt >} redirects standard output
to a file, {\tt <} redirects standard input from a file, and {\tt >>}
appends output to a file instead of overwriting it.
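For example (a sketch; {\tt out.txt} is an arbitrary file name):

```shell
# '>' creates (or overwrites) the file; '>>' appends to it.
echo "first"  > out.txt
echo "second" >> out.txt
# '<' connects the file to a program's standard input.
wc -l < out.txt
```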

Streams can be connected together with a {\em pipe}. This way, the
output of a program can be the input for another one. This process can
be repeated, thus having several programs operate on a single stream
or file. When used together, the pipes form a {\em pipeline}.
In Unix, the '|' (vertical bar) is used to denote a pipe. To examine
file contents, {\tt head} and {\tt tail} are very useful.

Example: {\tt ls -lt |head -5} gives the five most recently modified
files. 

Files may contain data in binary or ASCII format. In binary format,
data is stored as raw bytes whose meaning is known only to the
programs that read and write them; in ASCII (text) format, data is
stored as human-readable characters.

Data files tend to indicate the type of data in their name, which
usually carries an {\em extension}, as in {\tt name.type}. The
extension is a 3 or 4 letter abbreviation: .csv, .xml, .txt, .json,
.jpeg, .py, .cpp, .tex, .ppt (.pptx), .doc (.docx), and so on.

{\tt df} displays filesystem information; {\tt du} displays disk usage.

To search for files, {\tt find} is a useful command.

\subsection{Users}
Unix is a multi-user system: more than one user can use the system at
the same time. Each user has a {\em username} and a {\em password},
and is assigned a {\em home directory}, where all the user's files are
placed (typically {\tt /home/username}). A user is granted certain
rights in the system, determining what the user can and cannot do.

Users are collected together in {\em groups}. Sometimes one can
associate rights with groups (i.e. with all the users in a group).

Files have associated with them a set of {\em permissions}. The
permissions are of three types: {\em read (r)}, {\em write (w)} and {\em
  execute (x)}. Permission to read means permission to look into the
file; permission to write means permission to make changes to the
contents of the file. Permission to execute is used for files that
contain code for a program: it means permission to actually run the
program. 

The permissions work on three levels: with the owner
of the file, the group of the owner of the file, and everyone, for a
total of 9 permissions. For instance, a file with permissions {\tt
  rwxrw-r--} gives the user (first 3 letters) permission to read,
write and execute the file; the group (middle set of 3 letters)
permission to read and write (but not execute); and everyone else
(last set of three letters) permission to read (but not to write or
execute).

To change permissions on a file: {\tt chmod}: the command structure is
one or more of {\tt u} (user/owner), {\tt g} (group), {\tt o}
(others), or {\tt a} (all); = to set permissions or +/- to add/remove to
existing permissions, 
and one or more of {\tt r} (read), {\tt w} (write) and {\tt x}
(execute). For instance, {\tt chmod a+w filename} gives everyone the
permission to write into the file. This command can be used to
restrict permissions, and hence to protect files from undesired
access. For instance, {\tt chmod u=rwx filename} gives the user
permission to read, write and execute, but no permissions to anyone
else. Note that directories also have permissions; stopping users from
reading a directory means they cannot see which files are inside.
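A short illustration (a sketch; {\tt notes.txt} is an arbitrary file name):

```shell
touch notes.txt
# '=' sets permissions exactly: owner may read and write,
# group and others may only read.
chmod u=rw,go=r notes.txt
# '+' adds to existing permissions: execute for the owner only.
chmod u+x notes.txt
ls -l notes.txt    # the permissions column now shows -rwxr--r--
```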

\subsection{Help}
To get help on Unix, you can type {\tt man command}, where {\tt man}
is short for 'manual'; this will show the official documentation for
that command.

Unix has 'tab completion': if you type the first few letters of a
command and hit the Tab key, the shell will complete the name for you,
as long as there is no ambiguity as to which command you meant.

Unix also provides {\em command history}: each command you type in the
command line is remembered. This is useful for two things: when you
want to remember what exactly you typed (which command, and with which
options and arguments); and to issue the same command again without
retyping it. You can navigate through the history: to move back to a
previous command, Control-p will do  (Control-n moves forward). Once
you locate a command, you can edit it before submitting it: Control-a
moves the cursor to the beginning of the line, Control-e to the end;
Control-f moves one character forward and Control-b one character
backward; Control-d deletes the character where the cursor is (and
typing adds new ones, of course). 

Editors are programs that allow you to create a file and insert
content into it; {\tt vi} is simple, powerful, and present in all
flavors of Unix, albeit a bit cryptic. {\tt Emacs} is even more
powerful, and customizable, but not that simple.

\section{Unix Tools}
We can have tabular files or text files.

We can have structured or unstructured  files. In structured files,
data has a regular structure, a pattern, that repeats. In unstructured
files there is no such thing. The most common
structured files are flat files and CSV files, but there are other,
more complex formats like JSON and XML. The most common unstructured
files are text files. Note that even when we have structured data we
may not have information about the structure (what is usually called
the schema), and it may have to be inferred from the data itself. 

Most commands are designed to work on either structured or
unstructured files. Some commands will work on both (search and
replace, for instance), but even then there are some differences (in
tabular files, the search is usually constrained to some given
column(s)).

\section{Basics of Unix commands}

Basic commands: {\tt split}, {\tt join}, {\tt sort}, {\tt cat}.

\subsection{Getting Data}
wget, curl.

GNU Wget is a free utility for non-interactive download of files from
the Web. It supports the HTTP, HTTPS, and FTP protocols, as well as
retrieval through HTTP proxies. Wget is non-interactive, meaning that
it can work in the background while the user is not logged on. Wget
can follow links in HTML, XHTML, and CSS pages to create local
versions of remote web sites, fully recreating the directory structure
of the original site.
Basic options:
\bi
\item {\tt -b}, {\tt --background}: go to background immediately
  after startup. If no output file is specified via the {\tt -o}
  option, output is redirected to {\tt wget-log}.
\item {\tt -o logfile}, {\tt --output-file=logfile}: log all messages
  to {\tt logfile}. The messages are normally reported to standard
  error.
\item {\tt -a logfile}, {\tt --append-output=logfile}: the same as
  {\tt -o}, only it appends to {\tt logfile} instead of overwriting
  the old log file. If {\tt logfile} does not exist, a new file is
  created.
\item {\tt -i file}, {\tt --input-file=file}: read URLs from a local
  or external file. If {\tt -} is specified as file, URLs are read
  from the standard input. (Use {\tt ./-} to read from a file
  literally named {\tt -}.) If this option is used, no URLs need be
  present on the command line. If there are URLs both on the command
  line and in an input file, those on the command line will be the
  first ones to be retrieved.
\item {\tt -B URL}, {\tt --base=URL}: resolve relative links using
  {\tt URL} as the point of reference, when reading links from an
  HTML file specified via the {\tt -i}/{\tt --input-file} option.
\item {\tt -t number}, {\tt --tries=number}: set the number of tries
  to {\tt number}. Specify 0 or {\tt inf} for infinite retrying. The
  default is to retry 20 times, with the exception of fatal errors
  like ``connection refused'' or ``not found'' (404), which are not
  retried.
\item {\tt -O file}, {\tt --output-document=file}: the documents will
  not be written to the appropriate files, but all will be
  concatenated together and written to {\tt file}. If {\tt -} is used
  as file, documents will be printed to standard output, disabling
  link conversion.
\item {\tt -c}, {\tt --continue}: continue getting a
  partially-downloaded file. This is useful when you want to finish a
  download started by a previous instance of Wget, or by another
  program. For instance, after {\tt wget -c
    ftp://sunsite.doc.ic.ac.uk/ls-lR.Z}, if there is a file named
  {\tt ls-lR.Z} in the current directory, Wget will assume that it is
  the first portion of the remote file, and will ask the server to
  continue the retrieval from an offset equal to the length of the
  local file.
\item {\tt --user=user}, {\tt --password=password}: specify the
  username and password for both FTP and HTTP file retrieval. These
  can be overridden using the {\tt --ftp-user} and
  {\tt --ftp-password} options for FTP connections and the
  {\tt --http-user} and {\tt --http-password} options for HTTP
  connections.
\ei

{\tt curl} is a tool to transfer data from or to a server, using one
of the supported protocols (DICT, FILE, FTP, FTPS, GOPHER, HTTP,
HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP,
SMTP, SMTPS, TELNET and TFTP). The command is designed to work without
user interaction. curl offers a busload of useful tricks like proxy
support, user authentication, FTP upload, HTTP post, SSL connections,
cookies, file transfer resume, Metalink, and more.

You can specify multiple URLs or parts of URLs by writing part sets
within braces, as in {\tt http://site.\{one,two,three\}.com}, or you
can get sequences of alphanumeric series by using {\tt []}, as in
{\tt ftp://ftp.numericals.com/file[1-100].txt} or
{\tt ftp://ftp.letters.com/file[a-z].txt}. Nested sequences are not
supported, but you can use several ones next to each other:

\begin{verbatim}
http://any.org/archive[1996-1999]/vol[1-4]/part{a,b,c}.html
\end{verbatim}

You can specify any number of URLs on the command line; they will be
fetched sequentially, in the specified order. You can also specify a
step counter for the ranges, to get every Nth number or letter:

\begin{verbatim}
http://www.numericals.com/file[1-100:10].txt
http://www.letters.com/file[a-z:2].txt
\end{verbatim}

If you specify a URL without the {\tt protocol://} prefix, curl will
attempt to guess what protocol you might want. It will default to
HTTP, but try other protocols based on often-used host name prefixes;
for example, for host names starting with ``ftp.'', curl will assume
you want to speak FTP.
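Since curl supports the FILE protocol, its basic use can be demonstrated without a network connection (a sketch; the file names are arbitrary):

```shell
printf 'a,b\n1,2\n' > /tmp/sample.csv
# -s silences the progress meter; -o names the local output file.
curl -s -o /tmp/copy.csv "file:///tmp/sample.csv"
cmp /tmp/sample.csv /tmp/copy.csv    # no output: the copy is identical
```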


\subsection{Outputting Data}
{\tt head} and {\tt tail}. {\tt head} prints the first $n$ lines of a
file (10 by default, but a different number can be given with the
{\tt -n} option). Likewise, {\tt tail} prints the last $n$ lines of a
file (again, 10 by default, changeable with {\tt -n}). These can also
be used to inspect what's in a large data file.

Note that {\tt head} and {\tt tail} can be applied to several files by
supplying a pattern instead of a file name as argument, or by
supplying a list of filenames; in that case, they apply to each file
in turn. For instance,

{\tt head file1 file2}

will produce the first 10 lines of {\tt file1} followed by the first
10 lines of {\tt file2} (the output usually includes each file name).
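For instance, with two small files (arbitrary names):

```shell
printf 'a\nb\nc\n' > file1
printf 'x\ny\nz\n' > file2
# Each file's output is preceded by a '==> name <==' header.
head -n 2 file1 file2
```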
({\tt sort} and {\tt uniq} are covered below, under Basic Analysis.)


\subsection{Manipulating Tabular Data}

{\tt split filename} splits a file into several files. By default,
each output file contains 1000 lines, and the names of the output
files are generated automatically as {\tt x**}, where each '*' is a
letter. This can be changed as follows:
\bi
\item The name of each file can be modified: the suffix length (by
  default 2) can be changed using the {\tt -a} option: {\tt split -a 5
    filename} will create names with 5 characters after the 'x'. To
  create split files with a numeric suffix instead, use the {\tt -d}
  option: {\tt split -d filename} will use the filenames {\tt x00,
    x01, \ldots}.
\item The contents of each output file can be controlled as follows:
\bi
\item The size of each output file can be controlled using the
  {\tt -b} option, which takes a number of bytes as argument; the
  number of lines per output file can be set with the {\tt -l}
  option.
\item To split the input into a given number of chunks, use the
  {\tt -n} option. Note that if there isn't enough input, zero-size
  files may be created; to avoid zero-sized files, use the {\tt -e}
  option.
\ei
\ei
To put back together the result of a split, one can use the {\tt cat}
command. This command takes several file names as arguments and
produces a single file by concatenating the contents of all files, in
the order given.
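A quick round trip (a sketch; {\tt data.txt} is an arbitrary file):

```shell
seq 1 10 > data.txt
# Split into files of at most 4 lines: xaa (4), xab (4), xac (2).
split -l 4 data.txt
# Concatenating the pieces in order recovers the original.
cat xaa xab xac > rebuilt.txt
cmp data.txt rebuilt.txt    # no output: the files are identical
```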

The {\tt paste} command writes lines consisting of the sequentially
corresponding lines from each input file, separated by TABs, to
standard output (with no file, or when the file is {\tt -}, it reads
standard input). If a separator other than TAB is desired, {\tt -d}
can be used to specify the delimiter. With {\tt -s}, the command
pastes one file at a time instead of in parallel. The final result is
a tabular data file, with each input file corresponding to a column.
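For example, pasting two single-column files (made-up names and values) yields a two-column table:

```shell
printf 'alice\nbob\n' > names.txt
printf '10\n20\n'     > scores.txt
paste names.txt scores.txt      # columns separated by a TAB
# With -d, another delimiter can be chosen:
paste -d, names.txt scores.txt
```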

The {\tt sort} command can be used to sort any file; it is necessary
for {\tt join} to work, but it can also be used as an auxiliary step
before others.
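A small example of sorting as a preliminary step for {\tt join} (file names and contents are made up):

```shell
printf '2 beta\n1 alpha\n' > left.txt
printf '1 x\n2 y\n'        > right.txt
# join requires both inputs to be sorted on the join field.
sort left.txt > left.sorted
join left.sorted right.txt
```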

To change formats:
\bi
\item \verb|tr ',' '\n'| substitutes commas by newlines, so a row of
  comma-separated values becomes a single column of values.
\ei
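For instance:

```shell
# A row of comma-separated values becomes a single column.
printf 'a,b,c' | tr ',' '\n'
```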

\subsection{Basic Analysis}
To build a histogram of a field in a file, we can combine {\tt sort}
and {\tt uniq -c} to get frequencies: assume {\tt example.txt} has a
single value per line; then

{\tt sort example.txt | uniq -c}

will produce an output with two columns; the first is the frequency,
the second the value. Of course, if the file has several columns, we
can extract one in particular with {\tt cut}. Note that this is
similar to the GROUP BY in SQL. Note also that we can sort the
resulting histogram by the first column (to get the frequencies in
ascending or descending order) or by the second (if there is an
ordering in the values, to get a typical distribution-like result).
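A tiny worked example (the file name and values are made up):

```shell
printf 'cat\ndog\ncat\ncat\ndog\nbird\n' > example.txt
# Frequency first, value second; a final sort -rn ranks by frequency.
sort example.txt | uniq -c | sort -rn
```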

\subsection{Modifying Files}
We can also implement the equivalent of INSERT, DELETE and
UPDATE. This would make a good exercise/project!

{\tt tr} is a (very) simplified variant of {\tt sed}: it will replace
one character with another or remove some characters altogether. You
can also use it to remove repeated characters. And that's about all
{\tt tr} can do. {\tt tr abc xyz} replaces all occurrences of the
letter `a' with the letter `x', all letters `b' with `y', and all
letters `c' with `z'. The number of characters listed in the two
groups does not have to be equal.
You can also specify ranges of characters; for example, {\tt tr a-z A-Z}
will replace all lower case letters with their upper case equivalents.
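For example:

```shell
# Upper-case an entire line.
echo "Hello, World" | tr a-z A-Z    # prints HELLO, WORLD
```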

Based on this, we can even do some {\em data cleaning}, which SQL
is ill-prepared to handle. Another neat project!!

Eliminate blank lines with sed:

\begin{verbatim}
sed '/^\s*$/d' datafile
\end{verbatim}

Eliminating duplicate lines can be done with {\tt uniq} (after
sorting, since {\tt uniq} only removes {\em adjacent} duplicates).


\section{An Example}
Relational DBs vs UNIX 

http://socialmediacollective.org/2011/10/06/using-off-the-shelf-software-for-basic-twitter-analysis/

Paper on how to use MySQL to analyze Twitter data.

\subsection{Getting the data}
Interestingly, they do not show how to get the tweets to begin
with. My previous post discusses this, but it might be useful to show
a simple Ruby program that collects Tweet data, especially since the
method has changed slightly since my post. The biggest hurdle is
setting up authentication to access Twitter’s data—discussed in full,
here, but the crucial thing is that you have to register as a Twitter
developer, register a Twitter application, and get special tokens. You
create an application at the Twitter apps page; from that same
location you generate the special tokens. 

The example here tracks mentions of football, baseball, soccer, and
cricket, but obviously, these could be other keywords. 

Counting the number of football, baseball, etc. mentions is easy: 

\begin{verbatim}
$ grep -i football nsports.tsv | wc -l
$ grep -i baseball nsports.tsv | wc -l
$ grep -i soccer nsports.tsv | wc -l
$ grep -i cricket nsports.tsv | wc -l
\end{verbatim}

As well as getting the number of lines in the file:
\begin{verbatim}
$ cat nsports.tsv | wc -l
\end{verbatim}
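As a side note, the counting can be done in one step: {\tt grep -c} reports the number of matching lines, and {\tt wc -l} can read the file directly, without the {\tt cat} (a sketch, assuming the same {\tt nsports.tsv}):

```shell
grep -ic football nsports.tsv   # lines mentioning football, any case
wc -l < nsports.tsv             # total number of lines
```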

The second analysis was to count who is retweeted the most, done by
counting the username after the standard Twitter ``RT'' (e.g.\ ``rt
@willf good stuff!''). The following pipeline of commands accomplishes
this simply enough:

\begin{verbatim}
egrep -io "rt +@\w+" nsports.tsv | perl -pe "s/ +/ /g" | cut -f2 -d' ' \
    | sort | uniq -c | sort -rn | head
\end{verbatim}

Each of these is a separate command, and the pipe symbol (|) indicates
that the output from one command goes on to the next. Here's what
these commands do:

\be
\item 
\begin{verbatim}
egrep -io "rt +@\w+" nsports.tsv
\end{verbatim}
searches through the tweets for the pattern RT space @ name, where
there is one or more spaces and one or more `word' characters. It
only prints the matching parts ({\tt -o}), and ignores differences in
case ({\tt -i}).
\item \verb|perl -pe "s/ +/ /g"|: from time to time there is more than
  one space after the `RT', so this substitutes one or more spaces
  with exactly one space.
\item {\tt cut -f2 -d' '}: each line now looks like ``RT @name'', and
  this command cuts the second field out of each line, with a space as
  delimiter. This results in each line looking like `@name'.
\item {\tt sort | uniq -c | sort -rn}: this is three commands, but
  they are typed together so frequently that they feel like one. It
  sorts the text so duplicates can be counted with {\tt uniq -c},
  which produces two columns, the count and the name; we then reverse
  sort ({\tt -r}) numerically ({\tt -n}).
\item {\tt head}: this shows the top ten lines.
\ee

This command pipeline should have no problem handling 475k lines.

The third analysis was to put the data in a format that can be used by
Excel to create a graph, with counts by day. The date and time were
printed in separate columns, with the date in column 3, so we can
simply do the cut, sort, uniq series:
\begin{verbatim}
cat nsports.tsv | cut -f3 | sort | uniq -c > for_excel.tsv
\end{verbatim}
This will put the data into a format that Excel can read.

\subsection{Basic Counting}
Let's assume our data, which we'll call data.csv, is pipe-delimited
(|), and we want to sum the fourth column of the file.

\begin{verbatim}
cat data.csv | awk -F "|" '{ sum += $4 } END { printf "%.2f\n", sum }'
\end{verbatim}

The above line says:
\be
\item
    Use the cat command to stream (print) the contents of the file to stdout.
\item    Pipe the streaming contents from our cat command to the next
  one, awk.
\item
    With awk:
\be
\item
        Set the field separator to the pipe character (-F "|"). Note
        that this has nothing to do with our pipeline in point 2.
\item
        Increment the variable sum with the value in the fourth column
        (\$4). Since we used a pipeline in point 2, the contents of
        each line are being streamed to this statement.
\item
        Once the stream is done, print out the value of sum, using printf to format the value with two decimal places.
\ee
\ee
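With a small made-up {\tt data.csv}, the pipeline behaves as follows (note that awk can also read the file directly, without the {\tt cat}):

```shell
printf 'a|b|c|1.5\nd|e|f|2.25\n' > data.csv
# Sum column 4 (1.5 + 2.25) and print it with two decimal places.
awk -F "|" '{ sum += $4 } END { printf "%.2f\n", sum }' data.csv   # prints 3.75
```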

Given a file with data separated by some comments, like:
\begin{verbatim}
# group1
1.123
2.123
1.239
# group2
1.2e-10
2.4e-08
# group3
3.8
4.2
\end{verbatim}
The following creates a separate file, called group1 (group2, group3)
with the data under that comment (the {\tt next} keeps the comment
line itself out of the output file):

\begin{verbatim}
awk '/#/{x="group"++i;next}{print > x;}' datafile
\end{verbatim}
which can then be combined with paste or join.

The cat command reads a file to standard output, and the grep command
uses this output of cat as standard input to search whether the city
`Munich' is in a city file. The example dataset is available on GitHub.

\begin{verbatim}	
bz@cs ~/data $ cat city | grep Munich
    3070,Munich [München],DEU,Bavaria,1194560
\end{verbatim}

In the example above you can see the structure of the sample data set:
a comma-separated list. The first number is the id of an entry,
followed by the name of the city, the country code, the district, and
finally the population of the city.

Now, let's answer an analytical question: what is the city with the
biggest population in the data set? The second and the fifth columns
can be selected with awk (or with {\tt cut}), producing a list with
the population in the first position and the city name in the second.
The {\tt sort} command can then be used for sorting, making it
possible to find out which city in the dataset has the biggest
population.
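A sketch of that pipeline, assuming the comma-separated layout described above (population in column 5, city name in column 2):

```shell
# Print population and name, sort numerically in reverse, keep the top line.
awk -F, '{print $5, $2}' city | sort -rn | head -n 1
```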

Let's take a deeper look at the city data set. The question for this
example is: what is the distribution of cities per country in the data
set? A combination of the {\tt sort} and {\tt uniq} commands allows us
to create data for a density plot (strictly, {\tt uniq -c} counts {\em
  adjacent} duplicates, so in general a {\tt sort} should precede it;
this file happens to be grouped by country already). The data can then
be redirected ({\tt >}) to a file.

\begin{verbatim}
bz@cs ~/data/ $ cat city | cut -d , -f 3 | uniq -c | sort -r | head -n 4
    363 CHN
    341 IND
    274 USA
    250 BRA
bz@cs ~/data/ $ cat city | cut -d , -f 3 | uniq -c | sort -r > count_vs_country
\end{verbatim}

\section{Appendix I: Alias}
An alias is a shortcut for a command. To list all existing aliases in
the system, type {\tt alias}. Aliases can be defined on the command
line with the {\tt alias} command, as in
\verb+alias lsd='ls -la | grep "^d"'+,
or they can be put in your environment file.

To create a shell script:

\be
\item Take a command or pipeline;
\item add an interpreter directive (for Unix systems): ``\#!/bin/sh''
  or other shell;
\item parameterize it using \$1, \$2, and so on;
\item make it executable with {\tt chmod +x}.
\ee
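Putting the steps together, a minimal script (the name {\tt first.sh} is made up):

```shell
#!/bin/sh
# first.sh: print the first $2 lines of file $1.
head -n "$2" "$1"
```

After {\tt chmod +x first.sh}, run it as {\tt ./first.sh data.txt 5}.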

\begin{verbatim}
alias name=value
alias name='command'
alias name='command arg1 arg2'
alias name='/path/to/script'
alias name='/path/to/script.pl arg1'
\end{verbatim}

To remove one, type {\tt unalias aliasname}.

An alias lasts only for a login session; for a permanent alias, add
the alias definition to your shell startup file ({\tt .bashrc} or
whatever you use). System-wide aliases (i.e.\ aliases for all users)
can be put in the {\tt /etc/bashrc} file.

