\chapter{Metadata}

\section{The Need for Metadata}
Metadata management is essential for data sharing: without it, data
cannot be correctly interpreted outside the system that produced
it. It is also closely related to information integration, entity
resolution, and conceptual modeling.

\bi
\item Metadata is {\em data about data}.
\item It describes what the data is, and its characteristics (as
  opposed to particular values).
\item In relational databases, metadata indicates the schema: table
  names, attribute names, data types.
\item However, this is not nearly enough to describe the data.
\ei
Why Metadata?
\bi
\item Traditionally, metadata has been ignored, with very negative
  consequences.
\item To operate meaningfully on data, we must only carry out
operations that make sense for that data: it makes no sense to add a
temperature and a height, even though both are numbers.
\item If we need to integrate two databases, we need to merge
  attributes that denote the same real-world entity, but without
  metadata we will not know which entities the data denote.
\item If we need to share/export our data, we need to explain what the
  data are (what they mean or refer to) for others to be able to use
  them. 
\ei

\section{Types of Metadata}
\bi
\item Many classifications of metadata in the literature.
\item No universal agreement on what constitutes good metadata.
\item {\em Structural metadata} (data model: relational, XML, etc.),
  {\em Administrative metadata} (rights/licenses, version number)
  and {\em Descriptive metadata} (content: what data is about).
\item We discuss here:
\bi
\item {\em Provenance}: where data comes from. Extended to include how
  data is manipulated within the database (cleaning, etc.). This is
  important for scientific databases. 
\item {\em Quality}: how good data is; includes recency (timeliness),
  accuracy (for measurements), trustworthiness, etc.
\item {\em Representation}: how data is represented, what kind of
  values it can take. For numeric values, appropriate range (min,
  max); for strings, valid patterns. 
\item {\em Domain}: what the data is supposed to represent.
\ei
\ei

\subsection{Provenance}

\bi
\item Describes origin and evolution of data (also called {\em
  lineage}).
\item In theoretical approaches, three types: metadata that explains
\bi
\item {\em where} data comes from;
\item {\em why} a particular record is part of some output;
\item {\em how} an output record is produced.
\ei 
\item Let $q$ be an SQL (RA) query over database $D$, and $q(D)$ the {\em
  answer} to $q$ on $D$.  Then the lineage of $t \in q(D)$ is the (set
  of) tuples $T \subseteq D$ that ``contribute to'' or ``witness'' $t$ being in
  $q(D)$; that is, $t \in q(T)$.
\item However, a particular tuple may have more than one witness
  set (for instance, tuples obtained by projection).
\item In why-provenance, a {\em minimal witness basis} is calculated for
  each $t \in q(D)$: a minimal set of witness sets such that every
  witness set of $t$ contains some member of the basis.
\item {\em How} provenance adds to the set information on how the
  tuple was derived.
\item {\em Where} provenance adds the origin of each cell in each
  tuple $t \in q(D)$. 
\item Where provenance includes attributes from tuples in the why
  provenance, but some tuples in the why provenance may not be in the
  where provenance! (Example: joins.)
\item None of these is invariant under equivalent queries: given $q$
  and $q'$ equivalent, $t \in q(D)$ iff $t \in q'(D)$, but the (how,
  where, why) provenance of $t$ in $q$ may be different from $t$'s
  provenance in $q'$. 

\item Given a query $q$ in SQL (RA), we can compute $q(D)$ and
  annotate each $t \in q(D)$ with its provenance. If $q(D)$ is used to
  answer another query, the process (called {\em annotation
    propagation}) can be repeated.
\item There are two main approaches to computing provenance: eager and
  lazy.
\item Eager ({\em bookkeeping, annotation}) approach: query is
  rewritten, annotations produced with answer.
\item Lazy ({\em non-annotation}): provenance is computed when needed
  by examining query, source data, and answer.
\item This has advantages and drawbacks similar to those of
  materialized vs.\ virtual views.
\ei

Provenance: Examples

\begin{tabular}{|c|c|c|c|} \hline
 & \multicolumn{3}{|c|}{{\bf Agencies}} \\ \hline
{\bf tupleid} & {\bf name} & {\bf based-in} & {\bf phone} \\ \hline
$t_1$ & BayTours & SF & 415-1200 \\ \hline
$t_2$ & HarborCruz & SC & 831-3000 \\ \hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|} \hline
 & \multicolumn{4}{|c|}{{\bf Tours}} \\ \hline
{\bf tupleid} & {\bf name} & {\bf destination} & {\bf type} & {\bf price} \\ \hline
$t_3$ & BayTours & SF & cable-car & 50 \\ \hline
$t_4$ & BayTours & SC & bus & 100 \\ \hline
$t_5$ & BayTours & SC & boat & 250 \\ \hline
$t_6$ & BayTours & Monterey & boat & 400 \\ \hline
$t_7$ & HarborCruz & Monterey & boat & 200 \\ \hline
$t_8$ & HarborCruz & Carmel & train & 90 \\ \hline
\end{tabular}

\begin{verbatim}
SELECT a.name, a.phone
FROM Agencies a, Tours t
WHERE a.name = t.name AND t.type = 'boat'
\end{verbatim}

\begin{tabular}{|p{.7in}|p{.5in}|p{.75in}|p{.9in}|p{.85in}|} \hline
\multicolumn{2}{|c|}{{\bf Answer}} & \multicolumn{3}{|c|}{{\bf
    Provenance}} \\ \hline 
{\bf name} & {\bf phone} & {\bf Lineage} & {\bf Why-Provenance} &
{\bf How-Provenance} \\ \hline
BayTours & 415-1200 & Agencies($t_1$), Tours($t_5,t_6$) &
\{\{Agencies($t_1$), Tours($t_5$)\}, \{Agencies($t_1$),
Tours($t_6$)\}\} & $t_1 \cdot t_5\ +\ t_1 \cdot t_6$
 \\ \hline
HarborCruz & 831-3000 & Agencies($t_2$), Tours($t_7$) &
\{Agencies($t_2$), Tours($t_7$)\} & $t_2 \cdot t_7$
 \\ \hline
\end{tabular}


Where-provenance traces individual cells: in the answer, BayTours and
415-1200 come from the {\bf name} and {\bf phone} cells of $t_1$ in
Agencies; HarborCruz and 831-3000 come from the corresponding cells of
$t_2$.

All these provenances are sensitive to how the query is written: for
two equivalent queries $q$, $q'$, their (why, how, where)-provenance
may differ.
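The eager (annotation) approach on the running example can be sketched
in a few lines. The following Python fragment is a toy illustration
only: the table encoding and the function name are my own, not the API
of any real provenance system. It evaluates the boat-tour join above
and annotates each answer tuple with its lineage, why-provenance, and
how-provenance (products for joint use of tuples, sums for alternative
derivations).

```python
# Toy illustration of eager provenance annotation for the example
# join query; table encoding and names are illustrative only.

agencies = {
    "t1": {"name": "BayTours", "based_in": "SF", "phone": "415-1200"},
    "t2": {"name": "HarborCruz", "based_in": "SC", "phone": "831-3000"},
}
tours = {
    "t3": {"name": "BayTours", "destination": "SF", "type": "cable-car", "price": 50},
    "t4": {"name": "BayTours", "destination": "SC", "type": "bus", "price": 100},
    "t5": {"name": "BayTours", "destination": "SC", "type": "boat", "price": 250},
    "t6": {"name": "BayTours", "destination": "Monterey", "type": "boat", "price": 400},
    "t7": {"name": "HarborCruz", "destination": "Monterey", "type": "boat", "price": 200},
    "t8": {"name": "HarborCruz", "destination": "Carmel", "type": "train", "price": 90},
}

def boat_query():
    """SELECT a.name, a.phone FROM Agencies a, Tours t
       WHERE a.name = t.name AND t.type = 'boat',
       with each answer tuple annotated with its provenance."""
    answers = {}
    for aid, a in agencies.items():
        for tid, t in tours.items():
            if a["name"] == t["name"] and t["type"] == "boat":
                key = (a["name"], a["phone"])
                ans = answers.setdefault(key, {"lineage": set(), "why": [], "how": []})
                ans["lineage"] |= {aid, tid}       # all tuples that contribute
                ans["why"].append({aid, tid})      # one witness set per derivation
                ans["how"].append(f"{aid}*{tid}")  # joint use of tuples = product
    for ans in answers.values():
        ans["how"] = " + ".join(ans["how"])        # alternative derivations = sum
    return answers

result = boat_query()
print(result[("BayTours", "415-1200")]["how"])     # t1*t5 + t1*t6
```

Note how the how-provenance polynomial records more than the why-provenance
(which derivations exist), which in turn records more than the lineage
(which tuples were involved at all).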

\subsection{Metadata for Quality}

\bi
\item Quality is usually considered a many-dimensional property.
\item Usually included: accuracy, completeness, consistency, currency,
  precision, certainty.
\item Accuracy: how close the represented value is to the actual
  value.
\item Completeness: do all records have values? If not, what is
  missing? Why?
\item Consistency: are any (functional) dependencies respected? Are
  there contradictions across records?
\item Currency: this includes a temporal dimension (sometimes it is
  called {\em timeliness}): how often do values change? When were
  values acquired? How long are they valid?
\item Precision (for measurements): how precise are the values? How
  was the measurement taken?
\item Certainty: how reliable is the data (how much do we trust the
  source)? Note that this is related to provenance metadata.
\ei

Source: Carlo Batini and Monica Scannapieco, {\em Data Quality:
Concepts, Methodologies and Techniques}, Springer, 2006.



{\bf Other Data Quality Dimensions}
\bi
\item In specific domains, additional dimensions can be defined. Ex: in the
  {\em archival domain} (collections of documents), the {\em
    condition} of a document is important; in the {\em geospatial domain},
  the accuracy of positional information is very important.
\item In distributed systems, it is important to {\em characterize}
  sources of data as well as possible. This includes accessibility
  information (when and how the source is accessible) as well as
  content (which data and metadata is available). It's also good to
  know how reliable and trustable data sources are. 
\ei
NOTE: all of the above refers to data and assumes a fixed schema. But
the schema can have its own (quality) metadata: for instance, a
completeness of 70\% for an attribute in a relation means that about
70\% of its values are actually present (30\% are nulls or bad
data). Clearly this applies to the attribute as a whole. A precision
value attached to an attribute, however, must be an average or similar
aggregate: precision applies to particular values, and is therefore
assigned to tuples, not to schemas.


Data quality problems (missing values, wrong values, outliers) should
be detected.
\bi
\item Accuracy:
\bi
\item We can distinguish between {\em syntactic} and {\em semantic}
  accuracy.
\item Syntactic accuracy refers to whether the represented value
  belongs to the right domain.
\item This measure depends on how tightly the domain has been
  described: simply as a data type ('string') or as a more precise
  pattern ('ddd-dddd').
\item It also depends on the type of domain: for an enumeration
  domain, we can have a list of all possible values.
\item Semantic accuracy (aka {\em correctness}) asks whether the value
  represented is the correct value.
\item This requires a procedure to establish the correct value,
  or to decide whether the given value is correct.
\item Sometimes the approach is to check the value in different data
  sources, and see if they coincide (example: age of an
  actor). However, this requires solving the {\em record linking}
  problem (are these two references to the same actor?).
\item Syntactic accuracy: value in the right domain. A '0' for sex,
  which can be M or F, is syntactically inaccurate. Weight should be a
  positive number; a negative one is also syntactically
  inaccurate. Syntactic accuracy can be checked easily.
\item Semantic accuracy: the right value. An F for sex on the record of Jim
  Jones is semantically inaccurate. It can be impossible to check;
  business rules (domain knowledge) help.
\ei
\item Completeness:
\bi
\item This refers to ``the extent to which data are of sufficient
  breadth, depth and scope for the task at hand''. 
\item In a relational database, we can talk of {\em schema
  completeness} (do we have all entities and attributes needed?), {\em
  column completeness} (how many values are missing or null?), and
  {\em population completeness} (do we have all values from the
  domain?).
\item W.r.t. nulls, it is important to know why a null is in the
  database: the value does not apply, the value applies but is not
  known, or it is not known whether the value exists.
\item Databases usually operate under the {\em closed world assumption
  (CWA)}, stating that values not present in the database do not
  exist. 
\item Note that this is all static: if data changes over time, then
  completeness also does. Ex: courses offered by a given department
  change from semester to semester.
\item For attributes: missing values. Sometimes these are explicitly
  marked, sometimes they are not.
\item For records: are there enough entities from the population to
  carry out the analysis?
\ei
\item Timeliness: how well data reflects the current situation. This
  in turn depends on when the data was created, when it was collected,
  and how long the data lasts (stability).
\item Currency:
\bi
\item These and similar dimensions are sometimes called time metadata
  or temporal metadata.
\item They define whether data changes over time, and if it changes,
  how fast (how long it is valid).
\item Volatility refers to the frequency with which data changes (rate
  of change). Data can be {\em stable} (volatility: 0), {\em
    long-term} (volatility: low), {\em changing} (volatility:
  high). Note that this also indicates how long data remains valid!
\item Currency refers to how quickly data is updated, and how this
  correlates to its volatility.
\item Timeliness refers to whether data is available when needed. Note
  that data may be current and not timely (ex: a schedule of classes
  is posted after the semester starts); timeliness depends on currency
  {\em and} volatility.
\item Usually, data that changes should be timestamped with {\em last
  update} metadata, so we can deduce timeliness; and {\em age}
  metadata indicating how old data is when entered into the system.
\ei
\item Consistency:
\bi
\item This refers to whether data violates any constraints that all
  data should obey.
\item In a relational database, this includes integrity and key
  constraints.
\item Sometimes constraints can be expressed as domain constraints
  ('age should be between 18 and 65') or as {\em business rules}.
\item Consistency can refer to agreement of values in different fields
  of a record, or across records: Ex: a record should not have '5' for
  {\tt age} and 'married' for {\tt marital-status}.
\item In numerical domains, we can also establish statistical rules,
  which reflect the underlying distribution of data.
\item Usual data quality problems: noise and outliers, missing values,
  duplicate data. There are well known statistical procedures for these.
\ei
\ei
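The accuracy, completeness, and consistency dimensions above translate
directly into executable checks. The following Python sketch is purely
illustrative: the records, field names, and business rules are
hypothetical, chosen to match the examples in the text (sex must be M
or F, a 5-year-old cannot be married).

```python
# Illustrative quality checks for three dimensions on toy records;
# field names and rules are hypothetical examples from the text.

records = [
    {"name": "Jim Jones", "sex": "M", "age": 34, "marital_status": "married"},
    {"name": "Ann Smith", "sex": "0", "age": None, "marital_status": "single"},
    {"name": "Bo Lee",    "sex": "F", "age": 5,  "marital_status": "married"},
]

def syntactic_accuracy(rec):
    """Is each value in the right domain? A cheap, purely local test."""
    errors = []
    if rec["sex"] not in {"M", "F"}:
        errors.append("sex outside enumeration {M, F}")
    if rec["age"] is not None and not (0 <= rec["age"] <= 130):
        errors.append("age outside plausible range")
    return errors

def completeness(recs, field):
    """Fraction of non-null values in one column (column completeness)."""
    present = sum(1 for r in recs if r[field] is not None)
    return present / len(recs)

def consistency(rec):
    """Cross-field business rule: a 5-year-old cannot be married."""
    errors = []
    if rec["age"] is not None and rec["age"] < 16 \
            and rec["marital_status"] == "married":
        errors.append("age/marital_status contradiction")
    return errors
```

Note that only syntactic accuracy is checkable locally; semantic
accuracy (is "M" really Jim Jones's sex?) would require an external
source of truth, which is where record linking comes in.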

Each attribute has an associated {\em domain}, the set of all possible
values. Information about this domain is vital metadata, so a primary
task is to determine the domain of each attribute and its
characteristics, including:
\bi
\item type: nominal/categorical, ordinal, numeric (and, if numeric,
  interval, ratio or absolute). Numeric domains can also be discrete
  or continuous. Note that continuous values are represented
  discretely, so information about precision is important for them.
\item dynamics: static vs dynamic domain. For instance, month of the
  year is a static domain; person name is dynamic. Dynamic domains can
  be problematic when we study data over time, as new values come up
  and old values disappear.
\item quality: depending on how data values were extracted from the
  domain, the data obtained may be of varying quality. This has many
  dimensions:
\bi
\item timeliness: when the value was obtained, how long it is supposed to be
  valid. 
\item precision: for numeric (especially continuous) values.
\item reliability: how trustable the values are. When obtained through
  a sensor, for instance, this depends on how reliable the sensor
  is. Also, when the data is entered manually, problems are likely to
  show up.
\item completeness: how many values are missing. This is difficult to
  determine because missing values may not always be easy to identify;
  some are marked explicitly, some by an absence of value, some are
  mistakes. An aspect of this is also whether the data available is
  representative of the underlying domain (at least for the purposes
  of the analysis), or it is biased or unbalanced.
\ei
\ei
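Determining an attribute's domain and its characteristics is usually
done by profiling the values actually present. The rough profiler
below is a minimal sketch under my own conventions (real profilers
infer far more, and the markers treated as missing are assumptions):

```python
# Rough attribute profiler: guesses domain characteristics from the
# values present. Illustrative only; the missing-value markers
# (None, "", "N/A") are an assumption.

def profile(values):
    missing_markers = (None, "", "N/A")
    present = [v for v in values if v not in missing_markers]
    numeric = all(isinstance(v, (int, float)) for v in present)
    prof = {
        "completeness": len(present) / len(values),   # column completeness
        "type": "numeric" if numeric else "categorical",
        "distinct": len(set(present)),                # domain size seen so far
    }
    if numeric:
        # candidate range metadata (min/max) for representation checks
        prof["min"], prof["max"] = min(present), max(present)
    return prof

heights = [1.62, 1.75, None, 1.80, 1.75]
print(profile(heights))
```

A profile like this is only a lower bound on the real domain: for a
dynamic domain, new values may appear later that the profile has never
seen.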


Some people in the Data Quality literature talk about {\em accuracy}:
how close a value is to the real value. However, this is a complex
issue, and it can be considered a derived measure based on other
factors above, like time and precision.

Representation Metadata (From Tan, Steinbach, Kumar)

\bi
\item Data for an attribute comes from an underlying domain (set of
  all possible values).
\item Values in a domain can be represented in different ways (height
  can be given in feet or meters).
\item The same representation can be used for different domains (age
  and ID can be both represented by integers).
\item Thus, the type of domain and the representation chosen should
  both be in the metadata.
\item Different types of domains:
\bi
\item Nominal: no (meaningful) order.  Operations: $=$, $\neq$. Mode,
  entropy, $\chi^2$ test.
\item Ordinal: order, but no distance. Operations: $=$, $\neq$,
  $<$. Median, percentile, rank correlation.
\item Interval: distances are meaningful, but there is no true zero
  point. Operations: $=$, $\neq$, $<$, $+$, $-$. Mean, standard
  deviation, t-test, Pearson's correlation.
\item Ratio: everything.  Operations: $=$, $\neq$,
  $<$, $+$, $-$, $\times$, $/$. Geometric and harmonic means, percent
  variation.
\ei
\item Domains can also be {\em discrete} or {\em continuous}.
\item Each type of domain constrains what can meaningfully be done
  with the values.
\item Also, each representation (datatype) constrains what can be
  done with it.
\item Further constraints by datatype: strings (min and max length,
  format, patterns), dates (valid dates), enumerated fields (list of
  values). 
\ei
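The scale hierarchy above can itself be recorded as metadata and used
to reject meaningless operations. This sketch is a toy under my own
naming (the attribute names and operation labels are illustrative):

```python
# Sketch: record each attribute's measurement scale as metadata and
# refuse statistics the scale does not support. Names are illustrative.

SCALE_OPS = {
    "nominal":  {"eq", "mode"},
    "ordinal":  {"eq", "mode", "lt", "median"},
    "interval": {"eq", "mode", "lt", "median", "add", "mean"},
    "ratio":    {"eq", "mode", "lt", "median", "add", "mean", "div", "geomean"},
}

# hypothetical schema-level metadata: attribute -> scale
schema = {"zipcode": "nominal", "satisfaction": "ordinal",
          "temperature_C": "interval", "height_m": "ratio"}

def allowed(attribute, op):
    """Does the attribute's measurement scale support this operation?"""
    return op in SCALE_OPS[schema[attribute]]

# Taking a mean of zip codes is meaningless; of heights it is fine.
assert not allowed("zipcode", "mean")
assert allowed("height_m", "mean")
```

Note that each scale's operation set contains the previous one: the
hierarchy is cumulative, which is why ratio attributes support
"everything".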

Domain Metadata
\bi
\item This is the most open-ended of all: difficult to come up with a
  list of attributes that will describe any domain.
\item For specific domains, attempts at creating core agreement:
  Dublin Core for documents, microformats, geo-spatial standards,
  $\ldots$  
\item Also related to the {\em linked data} concept.
\ei

Metadata Levels
\bi
\item Usually, tabular data is interpreted as a collection of objects
  (rows, tuples) for which we have attribute (field/column)
  information. Each attribute describes a property or characteristic
  of the object.
\item In tabular data, metadata is usually about fields/attributes.
\item But metadata could also be used to register intra-record
  dependencies, as well as inter-record dependencies.
\item In turn, metadata itself can be constrained so as to make sure
  it is reliable, up-to-date and consistent. 
\item This leads to a meta-metadata framework...
\item ... but in practice, having metadata at all is hard enough!
\ei

\section{Capturing and representing Metadata}

\subsection{Standards}
SQL for Data Transformation: moving from data to metadata and back is
possible, with some limitations. For instance, the values of {\tt
CompanyName} can be turned into column names by pivoting:
\begin{verbatim}
SELECT *
FROM (SELECT tickerid AS IBMticker, date
      FROM TickerData
      WHERE CompanyName = 'IBM') AS IBMdata,
     (SELECT tickerid AS MSticker, date
      FROM TickerData
      WHERE CompanyName = 'Microsoft') AS MSdata,
     ...                          -- one subquery per company
WHERE IBMdata.date = MSdata.date AND ...
\end{verbatim}


\section{Domain Semantics}

\section{Managing Metadata in Files}
Tabular files: with and without header. Semistructured: XML is
self-describing (but pays a price in size). Other file types need to
keep their metadata separately. In general, it is up to the user to
generate and maintain metadata; this is a big weakness of working with
files.

The {\tt find} command searches for files or directories using all or
part of the name, or by other search criteria, such as size, type,
file owner, creation date, or last access date.
The {\tt -exec} option can be used for as many purposes as your imagination can dream
up. For example:\\

{\tt find . -empty -exec rm '\{\}' $\backslash$;}

removes all the empty files in a directory tree, while\\

{\tt find . -name "*.htm" -exec mv '\{\}' '\{\}l' $\backslash$;}

renames all .htm files to .html files.

File names often have a suffix such as {\tt gif}, {\tt jpeg}, or {\tt
  html} that gives a hint of what the file might contain. Linux does
not require such suffixes and generally does not use them to identify
a file type. Knowing what type of file you are dealing with helps you
know what program to use to display or manipulate it. The {\tt file}
command tells you something about the type of data in one or more
files.

The {\tt file} command attempts to classify each file using three types of
test. Filesystem tests use the results of the {\tt stat} command to
determine whether a file is empty or a directory, for
example. So-called magic tests check a file for specific contents that
identify it. These signatures are also known as magic
numbers. Finally, language tests look at the content of text files to
attempt to determine if a file is an XML file, C or C++ language
source, a troff file, or some other file that is considered source for
some kind of language processor. The first type that is found is
reported, unless the {\tt -k} or {\tt --keep-going} option is specified.

It is possible to use the {\tt -i} (or {\tt --mime}) option to display the file
type as a MIME string instead of the normal human-readable output.

In its simplest form, the {\tt dd} command copies an input file to an output
file. You have already seen the {\tt cp} command, so you may wonder why have
another command to copy files. The {\tt dd} command can do a couple of
things that regular {\tt cp} cannot. In particular, it can perform
conversions on the file, such as converting lowercase to uppercase or
ASCII to EBCDIC. It can also reblock a file, which may be desirable
when transferring it to tape. It can skip or include only selected
blocks of a file.

\subsection{Monitoring}
To {\em monitor} files and automate tasks, one can use a {\tt cron}
job: a shell script is written in {\tt file.sh} and  then it is called
and run by {\tt cron} every so often.
About {\tt cron}: cron jobs are used to schedule commands to be
executed periodically. You can set up commands or scripts which will
run repeatedly at a set time. The cron service (daemon) runs in the
background and constantly checks the {\tt /etc/crontab} file and the
{\tt /etc/cron.*/} directories. It also checks the {\tt
  /var/spool/cron/} directory. {\tt crontab} is the command used to
install, remove or list the tables (cron configuration files) used to
drive the cron daemon. To edit your crontab file, type the following
command at the UNIX/Linux shell prompt:
{\tt  crontab -e}

Your cron job looks as follows for user jobs:
 
{\tt 1 2 3 4 5 /path/to/command arg1 arg2}

where
\bi
\item    1: Minute (0-59)
\item    2: Hours (0-23)
\item    3: Day of the month (1-31)
\item    4: Month (1-12 [12 == December])
\item    5: Day of the week (0-7 [7 or 0 == Sunday])
\item    /path/to/command - Script or command name to schedule
\ei
You can use '*' in fields 1-5 as a wildcard. Several values can be
separated by commas, or ranges of values given with a dash '-'. There
are also shortcuts that can be used instead of the 5 numbers: {\tt
  @reboot}, {\tt @yearly}, {\tt @monthly}, {\tt @weekly}, {\tt
  @daily}, {\tt @hourly}.

To list all crontab jobs, use {\tt crontab -l}. To remove all crontab
jobs, use {\tt crontab -r}.

Another way is to use the {\tt inotify} facility of Linux inside the
shell script. The inotify API provides a mechanism for monitoring file
system events. Inotify can be used to monitor individual files, or to
monitor directories. When a directory is monitored, inotify will
return events for the directory itself, and for files inside the
directory.
The tool {\tt inotifywait} can be used in a script as follows: call it
with the event of interest, followed by what you need done. {\tt
  inotifywait} monitors for the event and, when the event happens,
terminates, so execution of the shell script moves to the next line,
where you have the code for whatever is needed.

\begin{verbatim}
#!/bin/bash
while true; do
 
inotifywait -e create /home/user/pdfs  && \
./watermark_all.sh
 
done
\end{verbatim}
This script watches directory {\tt /home/user/pdfs} for a
file-creation event. When one happens, the script {\tt
  watermark\_all.sh} is executed.

\section{Managing Metadata in Databases}
Relational DBMSs maintain database schema information in the database
itself, in relational form; it can be queried with SQL (examples from
Postgres). But only the data type is kept; any further metadata must
be enforced with CHECKs, ASSERTIONs and TRIGGERs, and even then it is
up to the user to keep provenance and capture more semantics. There is
also a risk of confusing the data type of the representation with the
type of the underlying data. In general, databases are better than
files, but still not ideal.
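The two halves of this point, that the catalog is queryable but stores
only names and declared types, and that richer metadata needs a CHECK,
can be shown in a few lines. The text's examples use Postgres; to stay
self-contained, the sketch below uses SQLite from the Python standard
library instead, with a hypothetical table:

```python
# Schema metadata queried from the database itself. This sketch uses
# SQLite (Python stdlib) rather than Postgres's catalogs to stay
# self-contained; the table is hypothetical.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE patients (
                 id INTEGER PRIMARY KEY,
                 height_m REAL CHECK (height_m BETWEEN 0.5 AND 2.5))""")

# The catalog stores only column names and declared types...
cols = con.execute("PRAGMA table_info(patients)").fetchall()
print([(c[1], c[2]) for c in cols])   # [('id', 'INTEGER'), ('height_m', 'REAL')]

# ...richer metadata (the valid range) had to go into the CHECK:
try:
    con.execute("INSERT INTO patients VALUES (1, 9.9)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Even here, nothing in the catalog records units, provenance, or
meaning; the CHECK encodes the range but not why it holds, which is
the gap the chapter is about.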

