\chapter{Data Cleaning}
Data cleaning tasks:
\bi
\item at individual (domain) level: 
\bi
\item do not change schema: transformations. Parsing dates, addresses,
  names. Classifying numbers, standardizing, rounding. Identifying
  missing values, bad   values/outliers, and filling in. Correcting
  character  encodings.
\item change schema: split, merge attributes. Add new (derived)
  attributes. 
\ei
\item at structural level:
\bi
\item Several data sets merged into one (denormalization).
\item data set with variables in schema (see Tidy data
  paper). Reshaping needed (pivoting). Note difference
  table/spreadsheet. 
\ei
\ei
Filtering based on criteria is part of analysis. Filtering to get rid
of missing values, etc. is here.


\section{Data Quality}
Hard problem because we are comparing data with some ideal that is
very domain-specific and context-dependent. This makes it hard to
develop general solutions.

Data is of good quality when it is correct and suitable for the analysis to
be carried out. Correct means that it faithfully represents values
from the domain; this is  impossible to check without knowing
the domain and its characteristics. This is where metadata about the
domain can help. Suitable means that it has the features that are
assumed in the methods to be used. Clearly this is dependent on the
analysis to be done.

The ultimate goal of data quality is to make the data usable and
reliable. Data quality can only be achieved by a continuous process of
monitoring data through its lifecycle, from acquisition through
archival. Analysis of data (especially exploratory) is needed to check
the data; hence, data cleaning is based on EDA.

{\em Data profiling} is the idea of learning the typical characteristics
of data attributes/objects/events and using this knowledge as a basis to
determine whether data is correct or not. It usually involves using
EDA over time and recording the results, to determine static and
dynamic characteristics.

OVERLAP WITH METADATA.
In general, we can analyze the data along several dimensions for
quality:
\bi
\item accuracy
\item completeness
\item timeliness
\item consistency
\item uniqueness (lack of duplicates)
\ei
When coming from several sources, we may have duplicates (overlaps),
as well as differences in format or convention that prevent the data
from having common keys that can be used to join/combine the data. 

There are many {\em sources} of issues: manual (unreliable) data
entry,  changes in layout (for record), measurement, scale, or format
(for value); changes in default, missing values or outdated values
(called 'gaps' in time series). Many of these can only be addressed by
changing the data gathering or acquisition phase -this includes data
delivery (see data lifecycle). 

An interesting aspect of dealing with missing values: if they all have
something in common, they can point to the reason for the missing
data (and perhaps to the missing values themselves). Ex: in a dataset of
phone records, all records from a certain area code are missing address
information. This points to a failure at collection in that area, and
narrows down the addresses. Missing data can be detected by scanning
the data for gaps, comparing data with metadata, or comparing data
with past (historical) data from the same source or from the same
domain. 
The process of guessing values for missing data is called {\em
  imputing}. Imputation does not make the resulting values correct,
but it may result in correct aggregates. The simplest technique is to
consider an attribute in isolation, and substitute missing values by an
established default or by the mean of the present values. Another
technique is to simulate a distribution with the present values and
draw from it for each missing value. These all assume that the missing
values follow the same distribution as the present ones. 
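These simple per-attribute techniques can be sketched in Python (function names are illustrative):

```python
import random
import statistics

def impute_mean(values):
    """Replace None entries with the mean of the present values."""
    present = [v for v in values if v is not None]
    mean = statistics.mean(present)
    return [mean if v is None else v for v in values]

def impute_sampled(values, seed=0):
    """Replace None entries by drawing uniformly from the present
    values (a crude simulation of their distribution)."""
    rng = random.Random(seed)
    present = [v for v in values if v is not None]
    return [rng.choice(present) if v is None else v for v in values]
```

Both functions assume the missing values follow the same distribution as the present ones, as noted above.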

A more complex method is to impute a set of possible values, instead
of a single one, and create several data sets, which can then be
compared to return the most suitable one, or a combination of them. In
the {\em regression method}, a regression is fitted for every
attribute using the other attributes present in the record (or all
attributes previous in order of appearance). The latter assumes that
attributes are missing monotonically, that is, if an attribute A is
missing values, all other attributes present before A are not. This
method can be applied to several attributes.

Another advanced method is MCMC (Markov Chain Monte Carlo). It is
assumed that the data are generated by a multivariate normal
distribution. The missing values are generated based on a conditional
distribution that is learned from the data present. Once this is done,
the parameters of the multivariate distribution are learned. Then the
process is repeated (missing values are generated again, using the
conditional distribution; parameters for the distribution are learned)
until the process becomes stable: newly generated values are the same
(or very close) to the already generated ones; new parameters are the
same (or very close) to the already generated parameters. Note that
both things will happen together. Because the newly generated values
depend only on the parameters estimated in the previous step, this
forms a Markov Chain. 

\subsection{Issues}
For tabular data, we have field (column) validation, record
validation, and data set validation.

It is necessary to have at least an idea of what the field or column
semantics is, so we can test the values. If one does not know what a
field means, any analysis is tentative: even if one has a string in a
column otherwise made of numbers, the value may be significant.

Issues:
\bi
\item incorrect values.
\item missing values.
\item outliers.
\ei

Missing values can be explicitly stated (an empty field between two
commas in a CSV file); those are at least easy to detect. Sometimes,
the missing value is not indicated (a row in CSV file with one less
value than every other row: which is the one missing?).
Checks for bad values depend on the type: for (finite) enumerated types, a
simple membership test suffices. Ex: months of the year as integers 1 \ldots
12 or strings ``January'' to ``December''. 

For strings, there may be patterns (ex: telephone numbers, IP addresses).

For numbers, usually statistical information or domain information is
used. For the statistical approach, if we assume a normal distribution,
any value which is more than 3 standard deviations from the mean is rejected. For
semantic information, we usually can put {\em bounds}: if the field is
supposed to be the price of a product, usually in the tens of dollars,
then any number beyond 100 is suspicious (as well as any negative
numbers). Also, maximum and minimum values should make
sense. Likewise, ratios should not exceed one.
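A minimal sketch of these checks: a membership test for an enumerated type, plus the 3-deviation rule and domain bounds (the 0--100 price bound is the hypothetical example above):

```python
import statistics

MONTHS = set(range(1, 13))  # membership test for an enumerated type

def suspicious(v, values, low=0, high=100):
    """Flag a numeric value using the 3-deviation rule plus domain
    bounds (here the hypothetical price field, tens of dollars)."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    problems = []
    if sd > 0 and abs(v - mean) > 3 * sd:
        problems.append("outlier")
    if not (low <= v <= high):
        problems.append("out of bounds")
    return problems
```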

One thing to remember about data cleaning is that we can take it only
so far -and that may be good enough. If data, for instance, seems
quite dirty, and some of it seems especially problematic, it may be
worthwhile to estimate how much of the data is problematic. For
instance, if file example.txt has messed-up records each time a double
quote precedes a comma, {\tt grep -c '",' example.txt} will count the
number of lines in this situation, while {\tt wc -l example.txt} will
count the total number of lines in the file. If less than 0.1\% of the
lines are problematic, maybe we should simply delete them. The cutoff,
of course, depends on the application: sometimes it may be 0\%
(everything must be saved and used), but sometimes it may be 1\%
(saving us a lot of effort for little reward). 
Finally, if the data is very large and so is the resulting histogram,
we can repeat the analysis by building a histogram of the histogram,
which will tell us how many items were repeated how many times:

{\tt sort example.txt | uniq -c | awk '\{print \$1\}' | sort -n | uniq -c}

will produce output with two-field lines. Ex: for a file with keywords,
{\tt sort example.txt | uniq -c} leads to output with (number, keyword)
lines telling us how many times each keyword appears; keeping only the
count field (the {\tt awk} step) and repeating the process leads to
output with (number1, number2) lines, telling us that number1 keywords
appeared number2 times each.

{\bf Outliers} are values that are very different from the rest of the
data. Dealing with outliers is hard because they may be genuine data
reflecting some underlying (and interesting) phenomenon, or they could
be a data glitch, hence just a mistake. It may be hard to tell one
from the other without domain expertise. In general, legitimate
changes tend to have some structure (regular re-appearance), while
outliers and data problems tend to be random. There are exceptions to
this; for instance, a malfunctioning sensor will spoil all measurements
from the point of malfunction onward.

There are several approaches for dealing with outliers. All are based
on having a notion of 'expected' or 'normal' values and a notion of
'difference' from the expected/normal value that is considered
acceptable. These can be established by domain expertise, by examining
historical data, by fitting a distribution to the data and assuming
the data is generated according to the distribution, by examining data
distribution and looking for high density/low density regions, or (for
time series) by examining the data for temporal patterns (bursts,
regularity). Also, if attributes are inter-related, using other
attributes can be useful to determine if a given attribute is
correct. 

\subsection{Data Quality in a Relational Database}
A modern RDBMS can enforce quite a bit of semantics:
\bi
\item in attributes, CHECKs or DOMAIN or TYPE declarations can be used
  to make sure individual values are as they are supposed to be;
\item all keys in a table can be declared with PRIMARY KEY or UNIQUE;
\item all foreign keys should be declared with FOREIGN KEY and a NOT
  NULL should be added; also, ON UPDATE/INSERT/DELETE can be used to
  specify what to do with changes;
\item stored procedures or triggers can be used to enforce that all
  changes in the database are meaningful;
\item metadata can be stored in additional tables, beyond what the
  database already stores.
\ei
However, these tools do not cover all data quality problems. Also,
many of them are not used because of cost considerations. Many times
the checking and cleaning of data is done during the ETL process.

To profile the database, there are several operations that are simple:
to find the number of nulls, an SQL query with the IS NULL predicate
will do; a query with COUNT(*) and GROUP BY will show the frequencies
of values -focusing on the least frequent ones may reveal bad data
entries. To find if a field is unique, we can make sure that it
contains as many distinct values as there are rows (and that there are
no nulls in it). To find if a field is (potentially) a foreign key, we
compare it to the candidate primary key with a NOT IN. Unfortunately, it
can be expensive to try to find unique/key fields, since they can be a
combination of fields. As a rule of thumb, fields with nulls or repeated
values, or with data types other than integer or string, can usually be
ruled out.
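These profiling queries can be tried out against an in-memory SQLite database (the table and its values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, dept TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(1, "sales"), (2, "sales"), (3, None), (4, "hr")])

# number of nulls in a column
nulls = conn.execute(
    "SELECT COUNT(*) FROM emp WHERE dept IS NULL").fetchone()[0]

# frequencies of values, most frequent first
freqs = conn.execute(
    "SELECT dept, COUNT(*) AS c FROM emp WHERE dept IS NOT NULL "
    "GROUP BY dept ORDER BY c DESC").fetchall()

# uniqueness check: as many distinct values as there are rows
distinct, rows = conn.execute(
    "SELECT COUNT(DISTINCT id), COUNT(*) FROM emp").fetchone()
is_key = distinct == rows
```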

{\bf Basic Definitions}
\bi
\item Clean data is data that is consistent and error-free.
\item Data cleaning is the process of checking a dataset to determine
  whether it is clean (and if not, clean it).
\item In many cases, several datasets from different sources must be
  integrated, adding to the challenge. 
\item Usually, it is assumed that data is structured into a set of
  {\em records}, which are more or less homogeneous -but sometimes
  records have to actually be parsed out of the input stream.
\item Intuitively, data on those records are about some {\em entities}
  and their {\em relationships}.
\ei



{\bf Sources of Problems}
\bi
\item Records do not have {\em universal keys (identifiers)}.
\item There may be syntactic (data representation, formatting)
  differences among records.
\item Data can be inconsistent (within a source or across).
\item Records may hold different information (not exactly the same fields).
\ei



{\bf Example}
\bi
\item {\tt @INPROCEEDINGS{Quass96makingviews, \\
    author = {Dallan Quass and Ashish Gupta and Inderpal Singh and
      Mumick Jennifer Widom}, \\
    title = {Making Views Self-Maintainable for Data Warehousing},\\
    booktitle = {}, \\
    year = {1996}, \\
    pages = {158--169}}
}
\item {\tt  [12] D. Quass, A. Gupta, I. Mumick, J. Widom: Making Views
  Self-maintable for Data, PDIS'95.}
\ei



{\bf Challenges to Automation}
\bi
\item Very difficult to establish when errors occur and when data is
  correct. 
\item Many different types of errors: difficult to determine beforehand.
\item In principle, comparing each record against every other record
  is required -but this won't scale.
\item the more 'dirty' the data is, the more difficult it is to automate the
  cleaning. 
\ei




{\bf Data Cleaning Hierarchy}
\bi
\item Schema-Level vs. Instance-Level: data source integration
  vs. data cleaning.
\item Instance-Level: Single Source: Record Level: misspelling,
  out-of-range, illegal values, missing values, violated constraints.
\item Instance-Level: Single Source: Table Level: inconsistent values,
  duplicate values, name/value conflicts.
\item Instance-Level: Multiple Source: inconsistent values,
  overlapping values, contradictory values.
\ei

{\bf Relation with MetaData}
\bi
\item Metadata is supposed to describe data, giving features and
  properties of it.
\item As such, it describes the data as it {\em should} be. Example:
  age should be between 18 and 65.
\item This provides a target for data cleaning and also may provide
  guidance as to how to correct problems.
\item Unfortunately, many times metadata is not present. Circular
  problem: inferring metadata from data cannot be done until we know
  the data is clean.
\ei



{\bf Relation with Logic}
\bi
\item We may need to do some {\em reasoning} to discover inconsistent or
  contradictory data. 
\item There are logics that deal with such contradictory states.
\item Also, when inconsistent/contradiction found, we need to {\em
  repair} the database.
\item There is a logical theory of {\em (minimal) database repair}.
\ei



{\bf Data Profiling}
\bi
\item Analysis of data sets to detect potential problems and learn
  overall characteristics of data.
\item Typical example: numerical attributes are analyzed to determine
  mean, deviation. This allows the detection of {\em outliers} as
  values more than $n$ standard deviations from the mean.
\item Can also check for values of the wrong type (i.e. numbers where
  strings are expected).
\item Some common types of data (addresses, phone numbers) can be
  analyzed more in depth for missing/suspicious values.
\ei



{\bf DataFlow}
\bi
\item In most systems (prototypes and commercial tools), a {\em data
  flow} is built by connecting nodes in a data flow graph.
\item Edges represent input-output relations.
\item Nodes represent basic cleaning operations.
\item The whole graph represents a workflow that is applied to a dataset.
\ei

{\bf Cleaning Operations}
\bi
\item Most systems rely on a pre-defined set of cleaning operations.
\item Basic operations are to {\em extract} fields from records, to
  compare them to values/patterns, to create new fields/records.
\item It is common to have specific functions for data types
  (e.g. approximate string matching) and semantic types (e.g. address
  cleaning operations).
\item More general types have been proposed in the literature. They
  are classified by number and type of inputs and outputs, general
  transformation they implement.
\item Common operations: parsing/splitting, format, mapping, matching,
  clustering, merging. 
\item Note: SQL queries can be seen as operators too, but they are
  restricted -for instance, they define 1-to-1 or M-to-1 mappings only.
\ei

Most basic data cleaning: detecting missing and empty values, and
outliers.

Note that replacing a missing value with some value may introduce a
distortion in the model, if the fact that values were missing is in
itself significant. If a sensor malfunctioned under certain
conditions, for instance, the fact that a value is missing is
important, and replacing this value, especially if not done in a
consistent manner, may lose this information. So if there is a pattern
to the missing values, this should be noted and investigated.

If we replace missing values, what do we replace them with? One idea
is to try to not introduce bias, i.e. not to change the
characteristics of the overall population. But this may not always be
possible: in the set 1, 2, 3, x, 5, if we want to preserve the mean, x
should be 2.75; if we want to preserve the standard deviation, x should
be 4.66. Also, if there are correlations between the attribute with
missing values and other attributes, then we should 'respect' any such
correlations. So a method to replace values is to estimate those from
other variables; but this can be quite costly depending on which other
variables we choose and which kind of correlation we use (for instance
using multiple linear regression from a set of other attributes).

{\bf Parse/Split}
\bi
\item Many times records must be broken down into fields/attributes.
\item The fields themselves may be defined by the user or be specific to
  the application.
\item Example: ``Taylor, Jane, JFK to ORD on April 23, 2000 Coach''
  can be split into {\em name, airport, airport, date, class}.
\item In Potter's Wheel, user gives examples on data, and system
  infers general transformations.
\ei

{\bf Format}
\bi
\item Takes as input one relation, produces as output one relation. It
  maps one record to one record.
\item changes in representation applied at the individual field
  (column) level are common.
\item example: make strings all lowercase, trim, fit a regular
  expression.
\item Merging several fields into one (first name and last name), as
well as splitting a field into several (area code and phone number),
is common. 
\item Generating new derived fields (e.g. derive age from date of
  birth) is also a format operation.
\ei
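A sketch of these format operations: trim and lowercase at the field level, merge two fields into one, and split a field into two (the phone pattern is a made-up US-style example):

```python
def merge_name(first, last):
    """Trim and lowercase two fields, then merge them into one."""
    return f"{last.strip().lower()}, {first.strip().lower()}"

def split_phone(phone):
    """Split one field into two: '(312) 555-0100' -> ('312', '555-0100')."""
    area = phone[phone.index("(") + 1 : phone.index(")")]
    number = phone[phone.index(")") + 1 :].strip()
    return area, number
```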


{\bf Map}
\bi
\item Takes as input one relation, produces as output one or more
  relations.
\item Each tuple in input can generate zero or several tuples into the
  output relations (it's 1-to-1 or 1-to-many).
\item Example: generate a distinct key for each record.
\item Example: partition a {\tt paper} record by extracting fields
  {\tt author, title, EventName, year} and putting them in tables {\tt
    AUTHOR(name, title)}, {\tt PUB(title, event)} and {\tt EVENT(name,
    year)}. Note: pks and fks may need to be generated; also,
  multivalued {\tt author} may have to be dealt with.
\ei

{\bf Match}
\bi
\item Takes as input two relations, produces one relation as
  output.
\item Implements an {\em approximate join} of the input.
\item Technically, for each pair of tuples $(t_1, t_2)$ in the
  Cartesian Product of the input, it computes a distance $d(t_1,
  t_2)$. 
\item The distance expresses how 'similar' the two records are.
\item It could be expressed in SQL with a call to an external
  function, but usually it is implemented separately for efficiency
  reasons (see below).
\item Example: compare two author lists by using approximate string
  matching in attribute {\tt LastName}.
\ei

{\bf Implementation of Matching}
\bi
\item In principle, every tuple should be checked against every tuple,
  but this is too costly.
\item {\em blocking criteria} are usually specified to restrict the
  pairs of tuples checked.
\item Example: compare two last names iff they start with the same
  letter.
\item Usually implemented by partitioning inputs and matching only
  within similar partitions (called {\em Neighborhood Join}).
\item Note: (defective) blocking criteria can affect (decrease)
  overall quality of process!! Partitions should avoid {\em false
    dismissals}.
\item Example: the Damerau-Levenshtein edit distance is bounded below
  by the difference in lengths of the strings compared. If we compare
  string lengths first and eliminate pairs whose difference exceeds the
  threshold, we avoid the (costly) edit distance computation. 
\ei
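The length-based filter can be sketched as follows (plain Levenshtein distance here rather than Damerau-Levenshtein, for brevity; the bound applies to both):

```python
def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def match(a, b, threshold):
    """Blocking filter: the edit distance is at least the difference
    in lengths, so skip the costly computation when that lower bound
    already exceeds the threshold."""
    if abs(len(a) - len(b)) > threshold:
        return False
    return levenshtein(a, b) <= threshold
```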

{\bf Implementation of Matching (Cont.)}
 Another example: {\em multi-pass neighborhood (MPN)}
\be
\item perform outer-union of both relations;
\item choose a key (one or more attributes) and sort records;
\item compare records within a fixed-size (sliding) window;
\item repeat the sorting and matching several times;
\item apply transitive closure to all pairs of records generated.
\ee

{\bf Clustering}
\bi
\item Takes as input one relation and returns as output a relation of
  relations (or nested relation).
\item SQL's group-by is an example of this.
\item But more general conditions can be specified (usually a
  distance, instead of pure equality).
\item Example: group a set of authors by same last name; apply
  transitive closure.
\ei

{\bf Merging}
\bi
\item Takes as input one relation and returns as output one relation.
\item One or more input tuples are {\em collapsed} into a single
  output tuple.
\item Need to specify: how to determine which tuples to collapse, what
  to include in the output tuple, how to decide values in case of
  conflict.
\item Example: merge a set of authors. Unlike before, we need to
  decide not just the criteria to put tuples together, but also how to
  create the output (what if different first names? We could use
  largest string).
\item Similar to SQL's group by, but would require arbitrary aggregate
  functions. 
\ei

{\bf Using Constraints}
\bi
\item To determine that data is correct, we need information about it,
  in the form of {\em constraints} and {\em metadata}.
\item Example of constraint: every person can have only one age.
\item Example of metadata: all birth dates must have year after 1900.
\item In general, not all information can be expressed (e.g., that
  city = ``Germany'' is an error). 
\item Many times, external sources (thesauri, etc.) are used
  to check, clean, transform data (e.g. list of US states and
  two-letter abbreviations).
\item Using functional dependencies is very important to check data
  consistency (e.g. {\tt deptno} in {\tt Employee} refers to a real
  department in {\tt Department}).
\ei

{\bf Conditional Functional Dependencies}
\bi
\item But FDs are not sufficient; {\em Conditional FDs (CFDs)} were
  introduced recently in the literature.
\item A CFD is only valid for a subset of data: in table {\tt
  CUSTOMER(name, country, city, zip, street, country-code, area-code)}
 the FD $\fd{country,zip}{city}$ holds, but also
 $\cfd{country='UK'}{zip}{street}$. 
\item The first part is a pattern with data values; it specifies for
  which subset of tuples the dependency holds.
\item Also, $\fd{country-code}{country}$, and
  $\fd{country-code=44}{country='UK'}$. 
\item Note that a traditional FD can be considered as a CFD without
  conditions! 
\item CFDs are manually supplied by user and/or automatically
  discovered from data.
\item It is possible to generate SQL queries for CFDs that are then
  run to return all tuples that violate the CFD.
\item {\em Repair} techniques consist of modifying the involved
  attribute(s) value(s). {\em Candidate repairs} (cleansed data that
  obeys the CFDs) are generated; among them, it may be possible
  to identify those that are {\em minimal repairs} (differing from the
  original values as little as possible, given a cost function).
\ei
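Violation checking for a CFD can be sketched as follows: among the rows satisfying the pattern (e.g., country = 'UK'), equal values on the left-hand side must imply equal values on the right-hand side. Rows are dicts here; a real system would generate SQL queries instead, as noted above.

```python
def cfd_violations(rows, condition, lhs, rhs):
    """Return rows violating a conditional FD: among rows satisfying
    the pattern `condition`, equal values on `lhs` must imply equal
    values on `rhs` (later conflicting rows are the ones reported)."""
    seen = {}
    bad = []
    for r in rows:
        if not condition(r):
            continue
        key = tuple(r[a] for a in lhs)
        val = tuple(r[a] for a in rhs)
        if key in seen:
            if seen[key] != val:
                bad.append(r)
        else:
            seen[key] = val
    return bad
```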


{\bf Data Enrichment}
\bi
\item Sometimes, there is not enough information {\em inside} the
  database to determine whether data is clean -and, if dirty, how to
  clean it.
\item Data enrichment is the process of linking data inside the
  database with data in other data sources to determine what the data
  inside the database is about (and whether it's clean, etc.).
\item Can be used to discover constraints on the schema, erroneous
  data values.
\item Example: zip code field does not correspond to state field in
  database (found from list of zip codes and states from outside
  source). 
\item Challenge: Make sure link (between data inside and outside
  database) is correct; as hard as duplicate detection.
\item Very common for text: named entities, disambiguation.
\ei

BATINI ET AL.\ TEXTBOOK:

{\bf Activities in Data Cleaning}
\bi
\item New data acquisition
\item Standardization
\item Object Identification (AKA record linkage, record matching,
  entity resolution)
\item Data Integration
\item Determining Source Trustworthiness
\item Quality Composition.
\item Error Detection
\item Error Correction
\ei

{\bf Composition}
\bi
\item given quality metadata of some data source, determine quality
  metadata of extracted/transformed data.
\item it comes down to giving an algebra for (quality) metadata.
\item Given data source $X$ in model $M$, we can estimate the quality
  of $Q(X)$. A function $F$ is applied to $X$ to get the
  extracted/transformed data $F(X)$. We can estimate $Q(F(X))$ from
  $F(X)$ but also from $Q(X)$; the latter is the preferred approach. 
\item Most literature takes $F$ to be a relational algebra query and
  $M$ the relational model. Some assumptions may be made on the
  sources  (whether disjoint or overlapped, mainly). 
\item Naumann investigates how quality evolves as data from different
  sources is integrated. He proposes the {\em full outer join merge}
  operator: two tuples $t_1$ and $t_2$ which have been identified as
  referring to the same real world entity are merged: if they have the
  same values, then there is no conflict; if one has a value and the
  other one has a null, the value is taken; if both have values and
  these are different, a {\em resolution function} is assumed to
  exist. A closed world assumption (CWA) is made, as all tuples must
  come from the source tables. 
\ei

{\bf Error Correction}
\bi
\item This involves dealing with inconsistent data, with incomplete
  data, and with outliers. 
\item To deal with inconsistency, one must have knowledge of the
  domain, to check data values against such knowledge. Ex: each
  employee has a salary; age is between 18 and 65.
\item Once inconsistencies are found, how do you repair them? Fellegi
  and Holt propose an approach based on two heuristics:
\bi
\item make the minimum number of changes to the data in your fix;
\item maintain the marginal and joint frequency distributions of values in
  your fix.
\ei
\item Given a set of rules, they must be consistent (without
  contradictions), and their consequences must be derivable. As a
  result, this becomes a {\em reasoning} problem. Ex: assume records
  {\tt (Age, MaritalStatus, Relationship-to-Head-Household)} and
  rules: NOT(Age $<$ 18 and MaritalStatus= married);
  NOT(MaritalStatus=not married and
  Relationship-to-Head-Household=spouse). Then the record (10, not
  married, spouse) can be fixed in several ways. Which one is the
  correct one? The rules imply that NOT(Age $<$ 18 and 
  Relationship-to-Head-Household=spouse), so two fixes are possible.
\item To deal with incomplete data, one can think of it as dealing
  with inconsistent data w.r.t. rules that state ``attribute NOT
  NULL''. 
\item Outlier detection is usually considered only in the context of
  numerical values, assuming a certain (e.g., normal) data distribution.
\item Outliers are then defined as values that deviate from
  expectation (too large or too small = several deviations from mean).
Another method is to consider {\em density} (how close a value is to
others), but this will not work if several outliers are clustered.
\item in time series data, values tend to be highly correlated due to
  cyclic patterns. Outliers can be defined there w.r.t. expectation.
\ei
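The ``minimum number of changes'' heuristic can be sketched as a brute-force search over repairs; the domains and rules below are tiny, made-up versions of the marital-status example above:

```python
from itertools import combinations, product

def minimal_repairs(record, domains, rules):
    """Find repairs that change as few fields as possible so that
    every rule (a predicate over the record) holds."""
    fields = list(record)
    for k in range(len(fields) + 1):      # try 0 changes, then 1, ...
        found = []
        for subset in combinations(fields, k):
            choices = [domains[f] if f in subset else [record[f]]
                       for f in fields]
            for combo in product(*choices):
                candidate = dict(zip(fields, combo))
                if all(rule(candidate) for rule in rules):
                    found.append(candidate)
        if found:
            return found                  # k is the minimal change count
    return []
```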

{\bf Object Identification (Record Linkage)}
\bi
\item This assumes values in a database are grouped in a tuple or
  record that denotes an entity in the real world. Sometimes an
  identifier is available. The goal is to identify whether 2 or more
  records represent the same real-world entity.
\item Problem arises very commonly when integrating several data
  sources, as attributes (including identifiers) may be represented
  differently. 
\item Variations: two records represent different aspects of the same
  entity (little or no overlap); two records represent similar
  information about the same entity (considerable overlap). 
\item Ex: within
  a single table, two records may refer to the same entity: total overlap
  is expected (same schema). This is called {\em record deduplication}
  or {\em duplicate identification}. In a document, several references
  can be made to the same person, each one with different information
  (called {\em entity resolution}). 
\item Usual process consists of: data preprocessing; choice of
  comparison function; verification/checking step. 
\item Three major categories of techniques: probabilistic,
  knowledge-based, empirical.
\item Preprocessing is used to satisfy some requirements of the
  comparison function (ex: string comparison assumes all characters in
  lowercase); to simplify and clean data (ex: eliminate nulls and
  outliers, standardize address information). 
\item In a set with $n$ records, in principle each one should be
  compared to all the others, yielding $n(n-1)/2$ comparisons (for a
  symmetric comparison function). As each comparison can be costly,
  this is considered too much.
\item Usual optimization techniques: {\em blocking} (partition set
  into mutually exclusive blocks; only records likely to match are
  together in the same block); {\em (sorted) neighborhood} (records
  are sorted to put them close to those more likely to match;
  comparisons are done with a moving window); {\em pruning
    (filtering)} (a simplified, more efficient comparison function is
  used to get rid of records unlikely to match).
\item Comparison functions usually are distance functions that apply
  to each field/attribute and then add up all the distances in a
  linear (weighted) fashion. For string attributes, string distances
  are typical (edit distance, Jaro, Hamming, Smith-Waterman). For
  set-valued attributes, the Jaccard distance or its variants are the
  most popular. For numbers, some (normalized) numerical distance is
  usual. 
\item In probabilistic models, comparison functions return a
  probability of match. It is necessary to set thresholds, but these
  can be learned from labeled data (supervised learning) or via
  expectation maximization (the EM algorithm). Fellegi-Sunter is the
  best-known approach.
\item Empirical techniques are based on detecting exact and approximate
  matches.
\item Knowledge-based techniques require that rules be provided. The
  rules are usually called {\em merge/purge}, in that they determine
  when two records should or should not be considered a match.
\item Note that rules could be probabilistic (offer a certainty
  factor). Note also that sometimes rules could be learned.
\item Techniques for matching are evaluated on errors: false positives (FPs)
  and false negatives (FNs). True positives (TPs) and true negatives
  (TNs) are what we want. The typical evaluations are {\em recall} $R
  = \frac{TP}{TP+FN}$, {\em precision} $P = \frac{TP}{TP+FP}$ and the
  F-score $F = \frac{2PR}{P+R}$.  Efficiency is evaluated in the
  number of comparisons actually carried out.
\ei
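The standard evaluation measures in the last item, as a quick sketch:

```python
def match_quality(tp, fp, fn):
    """Precision, recall, and F-score for a matching run, from the
    counts of true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score
```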



