
  
\section{Introduction}
%\begin{itemize}
%\item We pose the problem first
%\item We explain that it is very relevant for record matching, mention that the work of \cite{DBLP:journals/pvldb/ArasuCK09} is not good because it learn to transform North to N but not South to S. It mostly limited to a set of transformations and is not generalizable.
%\item Explain the strength of our algorithm.
%\end{itemize}
 
The task of transforming a string from a source format into a target format is encountered in many  information-processing tasks. Consider the task of  transforming a list of names in the form \textit{``firstname lastname"} (e.g.\  ``Michael Jackson") into the target form \textit{``lastname first letter of the firstname"} (e.g.\ ``Jackson M"); or the task of transforming a list of dates in the form ``31/12/2009" into the form ``12-31-2009". String transformations of this kind are performed by hundreds of millions of end-users of spreadsheets and data cleaning tools every day. Microsoft has recently implemented a string transformation functionality \cite{DBLP:conf/popl/Gulwani11} in Microsoft Excel 2013 \cite{DBLP:journals/cacm/GulwaniHS12}, and Google has launched Google Refine\footnote{http://code.google.com/p/google-refine/}, both aiming to support end-users (especially non-programmers) in data transformation tasks (e.g.\ data cleaning). 

Recently, Gulwani et al.\ \cite{DBLP:journals/cacm/GulwaniHS12} introduced an effective way to perform string transformations. Their method allows end-users to provide an example of the transformation that they want to obtain, and then generalizes the transformation to the rest of the users' data. This \textit{Programming by Example} (PBE) strategy \cite{DBLP:journals/aim/Lau09} allows non-programmers (e.g.\ data analysts, business analysts) to perform transformations that previously only expert programmers could perform, using shell scripts combined with Awk\footnote{https://en.wikipedia.org/wiki/AWK}, regular expressions or other advanced programming techniques. PBE avoids repetitive manual labour and allows non-professional programmers to perform complex string transformation tasks. 

\begin{table*}[t]

%\tiny
\small
\centering
\caption{Examples of String Transformations} 

\begin{tabular}{ | c | c | c | } 
\hline
i & $u_i \to v_i$ & $s_i \to t_i$ \\
\hline 
1& 25/12/2003  $\to$ 2003-12-25 & 12/11/2032  $\to$ 2032-11-12 \\
2&Aug 06, 2013  $\to$ 06/08/13 & Aug 20, 2017  $\to$ 20/08/17 \\
3 & Desoxyn (Tablet)   $\to$ DESOXYN & Cataflan (For Injection)  $\to$ CATAFLAN \\
4&Glycine  $\to$ $<$tag$>$GLYCINE$<$/tag$>$ & Taurines  $\to$ $<$tag$>$TAURINES$<$/tag$>$ \\
5&Oakland (Calif)  $\to$ Oakland, California & Dionisio (Calif)  $\to$ Dionisio, California \\
6&regulation  $\to$ regulate & legislation  $\to$ legislate \\
7&regulate  $\to$ regulation & legislate  $\to$ legislation \\
8&studies  $\to$ study & dummies  $\to$ dummy \\
9&Durbin, Richard J  $\to$ Durbin & Amaren, Maro F  $\to$ Amaren \\
10&Deutch, John M  $\to$ John\_M.\_Deutch & Kalikow, Peter S  $\to$ Peter\_S.\_Kalikow \\
11&maria joana  $\to$ maria\_joana & brunela averedo  $\to$ brunela\_averedo \\
12&michael j. jackson  $\to$ Jackson, Michael J. & jerry l. lewis  $\to$ Lewis, Jerry L. \\
13&Perle, La  $\to$ La Perle & Circulo Magico, El  $\to$ El Circulo Magico \\
14&microsoft corp.  $\to$ Microsoft Corporation & oracle corp.  $\to$ Oracle Corporation \\
15&microsoft  $\to$ MICROSOFT & oracle  $\to$ ORACLE \\ 
 
\hline 
\end{tabular}  
\label{table:ruleexamples}
\end{table*}  

Although PBE tools are undoubtedly useful, designing them is a challenging task. In particular, users want to provide as few example transformations as possible, and producing rules from this limited set of examples that generalize to the rest of the data is not trivial.  

Existing methods, discussed in Sec.\ \ref{sec:relatedwork}, either focus on a small set of string transformations or require a large number of examples to guarantee accurate transformations. For instance, in the well-known area of record matching, some authors \cite{DBLP:journals/pvldb/ArasuCK09} have proposed to use string transformations in the process of matching records. Essentially, these methods address the problem of learning one-to-one mapping rules from pairs of matching examples, i.e., they learn tokens that map to each other, and then use these mappings to improve the matching of records. Because they do not learn the character replacements that transform one string into another, the learned transformation rules can only transform what was observed in the set of examples, i.e., the learned rules generalize poorly. For instance, they can learn  $North \to N$, but they cannot transform $``South"$ into $``S"$. 

Other authors \cite{DBLP:conf/icai/MichelsonK09} looked into the problem of finding mappings between synonyms, abbreviations and acronyms, which are types of transformations that usually cannot be generalized to other strings. For example, a rule that transforms $New\ York \to Big\ Apple$ is specific to the string $``New\ York"$ and cannot be applied to any other string. This problem (which can also be described as learning a dictionary) is orthogonal to the problem of learning string transformation rules of interest here. 

Closely related to our problem is the work of Okazaki et al.\ \cite{DBLP:conf/emnlp/OkazakiTAT08}. They tackle the problem of word lemmatization (e.g.\ $studying \to study$) and spelling correction (e.g.\ $vapour \to vapor$) by learning transformations to be applied to the strings. Their work limits the transformations to a single substring replacement in the source string. Although quite effective for the transformation task it was designed for, their method requires a large number of examples to learn the transformation rules. Apart from the works mentioned so far, only a few works tackle the same problem. We discuss them in more detail in Section \ref{sec:relatedwork}.





This paper presents a general string transformation algorithm that learns transformation rules from a few given examples.
% It is standalone approach that can be integrated in  data processing tools to support the large numbers of non-programmers doing data processing, currently. 
Given a pair of strings $(u, v)$, this work tackles the problem of learning a \textit{transformation rule} (for short, \textit{a rule}), which transforms $u$ into $v$, so that this learned rule can be used to transform an unseen string $s$ into $t$, where $s$ is in the same form as $u$. For example, a rule learned from the pair of strings (``31/12/2009",  ``12-31-2009") should also transform ``01/04/2012" into ``04-01-2012". In particular, a transformation rule is a set of character permutations, insertions, deletions and updates that takes place in $u$ to transform it into $v$. We will use the notation $u \to v$ to denote a transformation of a string $u$ into $v$ (e.g.\ $Michael\ Jackson \to Michael\ J.$). 

Table \ref{table:ruleexamples} shows a few relevant examples of transformations addressed in this paper. They are real-world use cases drawn from the data cleaning and spreadsheet processing literature and from user questions in data cleaning forums and discussion lists. 






The method proposed in this paper, called \textit{STransformer}, differs from the state-of-the-art string transformation methods in two ways. First, it learns an edit-distance based transformation rule (i.e.\ a set of character permutations, insertions, deletions and updates) from a single example $(u, v)$, which is the most general set of basic operations that can transform $u \to v$. The learned rule can transform not only $u \to v$ but also an unseen string $s$ similar to $u$ into its desired target form $t$. Second, it focuses on string transformations that change the formatting of one string into another. For example, in Table \ref{table:ruleexamples}, a rule that transforms $u_i \to v_i$ must also transform $s_i \to t_i$.

The proposed method requires only a few positive examples, i.e., user-provided examples of valid transformations $u \to v$, to learn transformation rules. The basic idea is to learn a rule for each example transformation provided by the user. Next, the learned rules are used to transform the remaining strings that the user has at hand; for each string to be transformed, STransformer selects the rule that is most likely to transform it correctly. We assume that the user can provide a compilation of examples of the desired transformations, because usually the user has a specific need over a specific dataset, and examples are not available beforehand and cannot be generated or selected by automatic means. The user can potentially participate in the process by providing more examples when she observes incorrect transformations being produced. In this work, we assume that the examples are given, and we focus on learning the transformation rules.


\subsection {Overview and Contributions} 

Basically, a string can be transformed into another by eliminating the differences between them, i.e., the parts of one string that are not part of the other and vice-versa. To do so, the basic edit-distance operations of character permutation, insertion, deletion and update can be applied. Finding a set of operations (a transformation rule) that generates a transformation $u \to v$ can be trivial (e.g.\ delete all characters of $u$ and insert all characters of $v$); finding a set of operations that transforms a large number of strings correctly is, however, a challenge. The problem involves not just selecting the best set of operations to compose a rule but also expressing these operations as generally as possible. To this end, we propose a set of primitive string operations that can precisely describe a transformation $u \to v$ but can also be used to correctly transform a large number of unseen strings similar to $u$. To make the problem linear w.r.t.\ the size of the learning sample, each rule is learned independently from each pair of example strings $(u, v)$. As this learning approach leads to many learned rules, a method is proposed to automatically select the rule most likely to transform an unseen string $s$ correctly.

%Contrarily,  the joint learning of a rule leads to an exponential problem of finding the best set of operations that transforms correctly all input examples (assuming that such a rule exists).

The problem of learning  a general transformation rule is formalized in   Section \ref{sec:problem}. 
Any transformation can be expressed with four basic string operations. We propose a generalization of these four basic string operations so that a transformation rule learned from a single example can transform other strings with similar features. As these operations include character permutations as well, in Section \ref{sec:algorithm} we provide a linear-time algorithm to find the best permutation of the characters of $u$ for permutation-based transformations (e.g.\ $01/04/2012 \to 04/01/2012$). In Section 4, we describe an algorithm to select a learned rule to transform a given unseen string $s$. 
%We demonstrate empirically that in practice, using a few examples,  the algorithm produces transformation  rules that generalize for a large number of strings.

%Contributions of this paper: 
%\begin{itemize}
%\item A generalization of basic string operations that can express a transformation in a higher level of abstraction.
%\item An edit-distance based transformation rule learning algorithm, which can account for any string transformation. 
%\item An efficient algorithm to learn a transformation rule.
%\item An algorithm to compute the most discriminative and general relative position of an absolute position $i$ of a string. 
%\end{itemize}

We conduct a detailed empirical investigation of our algorithm (Section 5) using real-world string transformation scenarios. For example, in one of these scenarios the user faces the task of putting book titles into a standard format, given a dataset of available titles described in a non-standard format, i.e., transformations like $Art\ of\ Science,\ The \to The\ Art\ of\ Science$. We study various aspects of the algorithm, including the generality of the learned rules and the linearity w.r.t.\ the input size. In particular, we investigate the accuracy of the algorithm in the real-world setting where a very limited set of examples is provided; in real-world transformation tasks, the user wants to obtain the correct string transformations while providing as few examples as possible. We also compare the proposed algorithm, \textit{STransformer}, to the state-of-the-art string transformation algorithm implemented in Microsoft Excel 2013, called \textit{FlashFill}. The results show that STransformer is 30\% more accurate than FlashFill, on average. Additionally, STransformer's accuracy depends less on the choice of a particular example, which is an important property, given that the variety of available examples can be large. Finally, we discuss related work in Section 6 and conclude the paper in Section 7.

\section{Learning Transformations}
\label{sec:problem}

This section formulates our problem. The input to the transformation-learning problem is a set of $N$ positive string transformation examples $\Lambda^+=\{(u_i,v_i) : i \in [1,N]\}$, where $(u_i, v_i)$ are pairs of strings from an arbitrary domain, for example, organization names or log file entries. Our high-level goal is to learn transformations from these examples that can be applied to new instances of our problem, where the desired output is not yet available. First we introduce basic definitions, and then we formulate the learning problem.

\subsection {Preliminary Definitions}

\begin{definition}[Characters]
A character $a$ belongs to an alphabet $\Sigma$. A distance between two characters $a$ and $b$ is denoted as $\Delta(a,b)=n$, where $n \in \mathbb{Z}$. Also, a character $a$ belongs to a class of characters denoted as $C(a)=\textbf{e}$, where \textbf{e} $\in E^\Sigma$, the set of all regular expressions for the alphabet $\Sigma$.
\end{definition}

\begin{definition}[Strings]
A string $u \in \Sigma^*$. The set of all substrings of $u$ is denoted as $u^*$. The length of $u$ is denoted as $|u|=n$. An empty string is denoted as $\lambda$, where $|\lambda|=0$. A character at position $i$ in a string $u$ is denoted as $u[i]$. A substring of $u$ is denoted as $u[i..j]$, where $0 \leq i \leq j \leq |u|$. We denote by $u^c$ the string where $\forall i,\ u^c[i]=C(u[i])$.
\end{definition}

Without loss of generality, unless noted otherwise, we assume the ISO-8859-1\footnote{http://en.wikipedia.org/wiki/ISO\_8859-1} character set as the alphabet $\Sigma$, and the distance $\Delta(a,b)$ as the difference between the decimal representations (code points) of $a$ and $b$ in the ISO-8859-1 character set table. For example, the distance $\Delta(``a", ``A")=97-65=32$, while $\Delta(``A", ``a")=65-97=-32$. We define the following specific classes of characters: lowercase letter (\textbf{l}), uppercase letter (\textbf{u}), digit (\textbf{d}), space (\textbf{s}), punctuation (\textbf{p}), lowercase accented letter (\textbf{a}), uppercase accented letter (\textbf{b}), and any other character (\textbf{f}). We denote as \textbf{z} the class of delimiter characters, which, unless noted otherwise, we consider to exist at the beginning and end of every string. 
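To make the character distance and classes concrete, the following minimal Python sketch mirrors the definitions above; the function names (\texttt{delta}, \texttt{char\_class}, \texttt{classes}) are ours, and the class mapping is a simplified ASCII-only approximation (accented-letter classes are omitted):

```python
import string

def delta(a, b):
    """Distance between two characters: difference of their code points."""
    return ord(a) - ord(b)

def char_class(a):
    """Map a character to its class letter (simplified ASCII-only sketch)."""
    if a in string.ascii_lowercase:
        return 'l'   # lowercase letter
    if a in string.ascii_uppercase:
        return 'u'   # uppercase letter
    if a.isdigit():
        return 'd'   # digit
    if a == ' ':
        return 's'   # space
    if a in string.punctuation:
        return 'p'   # punctuation
    return 'f'       # any other character

def classes(u):
    """The class string u^c of u."""
    return ''.join(char_class(a) for a in u)
```

For instance, \texttt{classes("12/01/09")} yields the class string \texttt{"ddpddpdd"}.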

%; consequently, $u[0]$ and $u[|u|]$ represent a delimiter.

We define four basic string operations, permutation, insertion, deletion and update, as follows:

\begin{definition}[Permutations] The function permutation $P(u, i, j): \Sigma^* \times \mathbb{N} \times  \mathbb{N} \to \Sigma^*$;  permutes a character $u[i]$ with $u[j]$ in $u$.
\end{definition}
\begin{definition}[Insertions] The function insertion $I(v,u,i): \Sigma^* \times \Sigma^*  \times  \mathbb{N} \to \Sigma^*$;   inserts a string $v$ in the position $i$ of $u$.  
\end{definition}

\begin{definition}[Deletions] The function deletion $D(u,i): \Sigma^* \times  \mathbb{N} \to \Sigma^*$;   deletes the character in the position $i$ of $u$.  
\end{definition}

\begin{definition}[Updates] The function update $U(a, u, i): \Sigma  \times \Sigma^*  \times  \mathbb{N} \to \Sigma^*$;   replaces the character in the position $i$ of $u$ with $a$. 
\end{definition}

The last three operations correspond to the operations used in the Levenshtein edit distance \cite{levelshtein-66-binary}, a string metric for measuring the difference between two strings. However, in this paper, these operations will be used to determine a string transformation instead of measuring the difference between two strings. Moreover, an additional permutation operation is considered, to describe a transformation represented by the permutation of characters in a string (e.g.\ $Michael\ Jackson \to Jackson\ Michael$). 

Notice that string permutations and updates can be represented by insertions and deletions, but these four operations better represent a transformation $u \to v$ that consists of eliminating the differences between $u$ and $v$: first, if $|u|=|v|$ and the strings have the same characters but $u\neq v$, then characters in $u$ have to be permuted to transform it into $v$; second, if $|u| > |v|$, characters of $u$ have to be deleted; third, if $|u| < |v|$, characters from $v$ have to be inserted into $u$; and finally, if $|u|=|v|$ and the set of characters of $u$ differs from the set of characters of $v$, characters in $u$ have to be replaced by characters from $v$. For example, a transformation rule to produce the transformation $michael\ jackson \to J.\ Michael$ would require all four types of string operations: permute $michael$ with $jackson$ ($jackson\ michael$), delete ``ackson" ($j\ michael$), insert ``." after ``j" ($j.\ michael$), and update ``j" with ``J" and ``m" with ``M" ($J.\ Michael$). 
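The four basic operations can be sketched as plain string functions; this is a simplified illustration in which positions are 1-indexed, matching the paper's examples, and the function names are ours:

```python
# Positions are 1-indexed, matching the paper's examples.
def permute(u, i, j):
    """Swap characters u[i] and u[j]."""
    cs = list(u)
    cs[i-1], cs[j-1] = cs[j-1], cs[i-1]
    return ''.join(cs)

def insert(v, u, i):
    """Insert string v at position i of u."""
    return u[:i-1] + v + u[i-1:]

def delete(u, i):
    """Delete the character at position i of u."""
    return u[:i-1] + u[i:]

def update(a, u, i):
    """Replace the character at position i of u with a."""
    return u[:i-1] + a + u[i:]
```

For instance, \texttt{delete(delete("12/01/2009", 7), 7)} returns \texttt{"12/01/09"}, and \texttt{update("J", "j. michael", 1)} returns \texttt{"J. michael"}.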

A string operation $t$ (permutation, insertion, deletion or update) over a string $u$ that results in a string $u_1$ is denoted as $u \mapsto^t u_1$.   

\subsection {Transformation Rules} 


The set of transformations that we consider in this work is based on these four string operations. Basically, a transformation $u \to v$, is seen as a set of permutations, insertions, deletions and updates that have to be performed in $u$ to transform it into $v$. A \textit{transformation rule} is a chain of string operations. A \textit{transformation model} is a set of transformation rules learned from a set of examples $\Lambda^+$, denoted by $\Omega (\Lambda^+)$.

\begin{definition}[Transformation Rule]
Given a pair of strings $(u,v)$, a transformation rule is a chain of string operations $T = \{t_1, \dots, t_n\}$ that transforms $u$ into $v$. It is denoted as $u \to^T v$, where $\to^T$ can be expanded to $u \mapsto^{t_1}  u_1 \mapsto^{t_2} \dots \mapsto^{t_n} v$.
\end{definition}

For example, the transformation $12/01/2009 \to 12/01/09$ can be achieved using the specific transformation rule $12/01/2009$ $\to^D$ $12/01/09$, where $D=\{D(u, 7), D(u_1, 7)\}$ is a chain of deletion operations. This transformation can also be expressed in terms of its operations, i.e., $12/01/2009 \mapsto^{D(12/01/2009,7)} 12/01/009 \mapsto^{D(12/01/009,7)} 12/01/09$. 


Different transformations require  different types of transformation rules. For example,  the four transformations $michael\ jackson \to jackson\ michael$, $michael\ jackson \to michael\ (jackson)$, $michael\ jackson \to michael\ j$, $michael\ jackson \to Michael\ Jackson$, require  permutation, insertion, deletion and update transformation rules, respectively. More complex transformations require a concatenation of transformation rules. 

In this work, we are particularly interested in complex transformation rules of the form $u \to^P u_1 \to^I u_2 \to^D u_3 \to^U v$, where $P$, $I$, $D$ and $U$ are sequences of permutation, insertion, deletion and update operations, respectively. We denote such a transformation rule as $\Psi_{u,v}$. Notice that this chain of operations can represent any string transformation. 


\textbf{Universe of permutations:} The purpose of using permutations in our rules is to capture the permutation of tokens in a string (e.g.\ $michael\ jackson$ vs.\ $jackson\ michael$, or $12/01/09$ vs.\ $09/12/01$). As these permutations usually involve a separator character (e.g.\ space, slash), we consider only permutations of substrings (tokens) of $u$ that are separated by a character $a$ (the separator). Such a subset of permutations captures the permutations occurring in the majority of human-readable strings and drastically reduces the problem's search space.
%The algorithm is theoretically not limited to this subset, but in practice it dramatically reduces the search space.

\textbf{Universe of insertions:} When transforming a string $u$ to $v$, it only makes sense to consider insertions that insert a substring $x \in v^*$ into $u$, because the insertion of any substring $y \notin v^*$ would subsequently require an update or deletion in $u$ to obtain $v$. For that reason, we restrict our insertions $I$ to those that insert some $x \in v^*$ into $u$. Notice that insertions are only performed if $|u| < |v|$.

\textbf{Universe of deletions:} Deletions always occur after insertions in a transformation rule. Their purpose is to reduce the length of $u$ to $|v|$, if $|u|>|v|$. 

\textbf{Universe of  updates:}  In the chain of transformations that we propose, updates only occur in the last step. At that point,  $|u| = |v|$. We only consider updates of a character $u[i]$ with its corresponding character $v[i]$. This set of updates always transforms $u$ into $v$.
 
%Now, we state our first learning problem:

%\begin{definition}[Learning Transformation Rule]
%Given a pair of string $(u,v) \in \Lambda^+$, learn a set of permutations $P$, insertions $I$, deletions $I$ and updates $U$ so that, $u \to^P %u_1 \to^I u_2 \to^D u_3 \to^U v$. We denote such a transformation rule as $\Psi(u,v)$.
%\end{definition} 

\subsection {Generalization of Transformation Rules}
Our ultimate and most important goal is to transform an unseen string $w$ using the transformation rules learned from $\Lambda^+$. However, a transformation rule learned from a single example $(u, v) \in \Lambda^+$ is very specific and would, if at all, only generalize to a limited set of strings highly similar to $u$. For example, a transformation rule $12.00 \to^\Psi 12,00$ can be created based on replacing the ``." by a ``,": a rule $\Psi$ composed of $P=I=D=\emptyset$ and $U=\{U(``,", 12.00, 3)\}$. Such a rule generates a correct transformation for $w=``32.03"$, i.e., $32.03 \to^\Psi 32,03$, but when applied to $w=``145.00"$, the transformation $145.00 \to^\Psi 14,.00$ results in an incorrect string. Any human would properly understand from the example that the intended transformation is $145.00 \to^\Psi 145,00$.
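The point of this example can be checked mechanically. The sketch below, with a 1-indexed \texttt{update} helper of our own mirroring the update operation defined earlier, shows the position-3 rule succeeding on ``32.03" and failing on ``145.00":

```python
def update(a, u, i):
    """Replace the character at 1-indexed position i of u with a."""
    return u[:i-1] + a + u[i:]

# Over-specific rule learned from the single example "12.00" -> "12,00":
# replace the character at absolute position 3 with ",".
rule = lambda w: update(',', w, 3)

rule('32.03')   # correct: "." happens to sit at position 3
rule('145.00')  # incorrect: position 3 now holds the digit "5"
```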

To achieve this goal, we propose a generalization of the basic string operations, so that a larger range of strings can be transformed correctly by the same transformation rule. Consequently, fewer examples will be needed to cover a larger number of strings that the user may want to transform. We denote by $\Psi^G_{u,v}$ a generalization of a transformation rule $\Psi_{u,v}$. Mainly, they differ in the level of abstraction at which their operations are defined. Intuitively, to make a rule general, covering the highest possible number of strings, it makes sense to consider information about all strings to be transformed. However, to make this problem feasible, only information in the pair $(u_i, v_i)$ is used to produce a rule $\Psi^G_{u_i,v_i}$. Although it looks like a limitation, this strategy is very effective (as we will show empirically) and avoids the exponential problem of combining information from many strings in this process of generalization.  

Two elements of a rule can be generalized: the position in the string where an operation takes place, and the character(s) that are part of the operation. We propose to infer the position as a function of a character in the string, and to abstract a character to its class (e.g.\ 7 can be seen as a digit). For example, the transformation $Michael \to M$ could be expressed as: remove all characters after the first uppercase letter. This rule is clearly more general than a rule that specifies a chain of deletion operations transforming ``Michael" into ``M"; consequently, it can also be used to transform $Elvis \to E$. Therefore, we define the new notion of \textit{relative position} in a string, which allows us to express more general transformations.

\begin{definition} [Relative position]

A relative position function $R_u(k, e, j): \mathbb{N}  \times E^\Sigma  \times \mathbb{Z} \to \mathbb{N} $ retrieves a position $i$ in a string $u^c$ of $u$ that is $j$ positions distant from   the k-th occurrence of a substring $e$ in $u^c$.

We define $e$ as an $n$-gram (a string of size $n$) in $u^c$. The set of all relative positions in $u$ is denoted as $R^u$.
%A relative position is denoted as $R(u, k, c, j)$.
\end{definition}

Using relative positions, the ``/" between day and month in the string $u=``12/01/09"$ can be represented as $R_u(1, \textbf{p}, 0)=3$, i.e., the position of the first punctuation character (\textbf{p}), as well as by $R_u(3, \textbf{d}, -1)=3$, i.e., the position before the third digit (\textbf{d}).
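A minimal sketch of the relative-position lookup, restricted to $1$-grams and omitting the \textbf{z} delimiters for brevity; the \texttt{char\_class} mapping below is a simplified stand-in for the classes defined earlier:

```python
def char_class(a):
    """Simplified class mapping: l, u, d, s, and p for everything else."""
    if a.islower(): return 'l'
    if a.isupper(): return 'u'
    if a.isdigit(): return 'd'
    if a == ' ':    return 's'
    return 'p'

def relative_position(u, k, e, j):
    """Position that is j away from the k-th occurrence of the class
    n-gram e in u^c (1-indexed). Returns None if e occurs < k times."""
    uc = ''.join(char_class(a) for a in u)
    count, start = 0, 0
    while True:
        pos = uc.find(e, start)
        if pos == -1:
            return None
        count += 1
        if count == k:
            return pos + 1 + j   # convert to 1-indexed, apply offset j
        start = pos + 1
```

For \texttt{u = "12/01/09"}, both \texttt{relative\_position(u, 1, 'p', 0)} and \texttt{relative\_position(u, 3, 'd', -1)} resolve to position 3, as in the text.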

%Considering this generalization of a string position, we generalize the string operations as follows:
Using relative position, we introduce generalizations of the four basic operations introduced above: generalized permutation, insertion, deletion and update.

\textbf{Generalized Permutation.}  A permutation rule $P=\{p_1, \dots, p_n\}$ is generalized into a permutation rule $P^G_a=\{s^G_1,\dots, s^G_m\}$, where $m\leq n$, $a$ is a character (separator) and $\to^{P^G_a}$ is a chain of concatenations of the type $s^G_1 + a + s^G_2 + a + \dots + a+s^G_m$, where $s^G_i$ is a selection operation defined below:

\begin{definition}[Selection] The function selection $S(u, r_1, r_2): \Sigma^* \times R^u  \times R^u    \to \Sigma^*$; selects the substring $u[r_1..r_2]$.
\end{definition}

For example, the transformation  $30/05/09 \to 09/30/05$ could be done with the generalized permutation rule $P^G_{/}=\{S(u, R_u(2, \textbf{p}, 1 ),  R_u(2, \textbf{z}, -1 ))$, $S(u, R_u(1, \textbf{d}, 0 ),  R_u(2, \textbf{d}, 0 ))$, $S(u, R_u(3, \textbf{d}, 0 ),  R_u(4, \textbf{d}, 0 )\}$, where $u=``30/05/09"$. Notice that \textbf{z} denotes an unprintable delimiter character in the beginning and end of all strings. The three selection operations in $P^G_{/}$ select the substrings ``09", ``30" and ``05"; respectively. The chain of concatenations ``09" + ``/" + ``30" +``/" + ``05" results in the string ``09/30/05". Notice that the previous rule can be used to transform any date in the source format into the target format, including  a date such as ``12/20/2009", where the year is represented with four digits.  
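The date example can be replayed with plain slicing. In this sketch the relative positions from the text are resolved by hand to absolute positions, so only the selection-and-concatenation step is shown:

```python
def select(u, r1, r2):
    """Select the substring u[r1..r2] (1-indexed, inclusive)."""
    return u[r1-1:r2]

u = "30/05/09"
# Relative positions from the text, resolved by hand:
#   R_u(2, p, 1) = 7 and R_u(2, z, -1) = 8  ->  "09" (the year)
#   R_u(1, d, 0) = 1 and R_u(2, d, 0) = 2   ->  "30" (the day)
#   R_u(3, d, 0) = 4 and R_u(4, d, 0) = 5   ->  "05" (the month)
tokens = [select(u, 7, 8), select(u, 1, 2), select(u, 4, 5)]
result = "/".join(tokens)   # chain of concatenations with separator "/"
```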

\textbf{Generalized Insertion.}  An insertion rule $I=\{I_1, \dots, I_n\}$ is generalized into $I^G=\{I^G_1, \dots, I^G_n\}$, by making the positions of the insertions $I_i$ relative. 
 
For example, to transform $Microsoft\ Corp\ \to Microsoft\ Corporation$, the operation $I( ``oration" , u, 14)$ is required, where $u=``Microsoft\ Corp"$. The position 14 can also be represented as $R_u(2, \textbf{z}, -1)$, where \textbf{z} is the set of delimiter characters. By making the position relative, i.e. $I( ``oration" , u, R_u(2, \textbf{z}, -1))$, this rule would also transform $Apple\ Corp \to Apple\ Corporation$.

%, where the position 14 does not exist for $u=Apple\ Corp$. Regarding an insertion operation, only its position can be generalized.
 
\textbf{Generalized Deletion.}  A deletion rule $D=\{D_1, \dots, D_n\}$ is generalized into a deletion rule $D^G=\{D^G_1,\dots, D^G_m\}$, where $m\leq n$, $\to^{D^G}$ is a chain of operations $D^G_i$ defined below:

\begin{definition}[Deletion] The function deletion $D^G(u, r_1, r_2): \Sigma^* \times R^u  \times  R^u \to \Sigma^*$;  deletes from $u$ the  substring $u[r_1..r_2]$.
\end{definition}

For example, $James\ Brown \to J\ Brown$ can be transformed by deleting the characters between the first uppercase character and the subsequent space, i.e., by the generalized deletion rule  $D^G(u, R_u(1, \textbf{u}, 1), R_u(1, \textbf{s}, -1))$, where $u=``James\ Brown"$. The relative positions $r_1 = R_u(1, \textbf{u}, 1)=2$ and $r_2 = R_u(1, \textbf{s}, -1)=5$ define the substring $u[2..5]=``ames"$, which is the substring to be deleted from $u$.
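The same example as code, with the relative positions again resolved by hand; this is a sketch of the deletion step only, not of the paper's learning algorithm:

```python
def delete_range(u, r1, r2):
    """Delete the substring u[r1..r2] (1-indexed, inclusive) from u."""
    return u[:r1-1] + u[r2:]

u = "James Brown"
# r1 = R_u(1, u, 1)  = 2  (position after the first uppercase letter)
# r2 = R_u(1, s, -1) = 5  (position before the first space)
result = delete_range(u, 2, 5)   # deletes the substring "ames"
```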
 

\textbf{Generalized Update.}  An update rule $U=\{U_1, \dots, U_n\}$ is generalized into an update rule $U^G=\{U^G_1,\dots, U^G_m\}$, where $m\leq n$, $\to^{U^G}$ is a chain of operations $U^G_i$ defined below:

\begin{definition}[Update] The function update $U^G(u, r, e, d): \Sigma^* \times R^u  \times  E^\Sigma  \times \mathbb{Z}\to \Sigma^*$; applies a distance $d$ to each character of the substring resultant from the first match of the regular expression $e$ in the substring $u[r..|u|]$.
\end{definition}

For example, the transformation $VICTOR\ HUGO \to Victor\ Hugo$ can be achieved with the update rule $U^G=\{U^G_1(u, R_u(1, \textbf{u}, 1), \textbf{u}^+, -32), U^G_2(u, R_u(1, \textbf{s}, 2), \textbf{u}^+, -32)\}$, i.e.,  $VICTOR\ HUGO \to^{U^G_1} Victor\ HUGO  \to^{U^G_2} Victor\ Hugo$, where $u=``VICTOR\ HUGO"$\footnote{The distance between each character of ``ICTOR" and its lowercase counterpart in ``ictor" is $-32$, e.g.\ $\Delta(``I", ``i")=73-105=-32$.}.
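A sketch of the generalized update on this example. Here \texttt{[A-Z]+} stands in for the class expression $\textbf{u}^+$, and, following the definition of $\Delta$, applying distance $d$ maps a character $a$ to the character with code point $ord(a)-d$ (so that $\Delta(old, new)=d$):

```python
import re

def update_run(u, r, pattern, d):
    """Rewrite the first match of `pattern` in the suffix u[r..]
    (r is 1-indexed) so that Delta(old, new) = d for each character."""
    m = re.search(pattern, u[r-1:])
    shifted = ''.join(chr(ord(a) - d) for a in m.group(0))
    start = r - 1 + m.start()
    return u[:start] + shifted + u[start + len(m.group(0)):]

u = "VICTOR HUGO"
# U1: r = R_u(1, u, 1) = 2, lowercase the run "ICTOR"
step1 = update_run(u, 2, "[A-Z]+", -32)
# U2: r = R_u(1, s, 2) = 9, lowercase the run "UGO"
step2 = update_run(step1, 9, "[A-Z]+", -32)
```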

Concluding, we are particularly interested in learning the transformation rules $\Psi^G_{u,v}$ of the form $u \to^{P^G} u_1 \to^{I^G} u_2 \to^{D^G} u_3 \to^{U^G} v$. Section\ \ref{sec:algorithm} introduces our algorithms to learn the components of $\Psi^G_{u,v}$: $P^G, I^G, D^G$ and $U^G$.

\subsection{Learning Problem}
 
 This section  defines the \textit{validity} and \textit{coverage} of a transformation rule $\Psi^G_{u,v}$ for a transformation $x \to y$; and, finally, formalizes the string transformation-learning problem.
\begin{definition}
Given a pair of strings $(x,y)$, the validity of a transformation rule $\Psi^G_{u,v}$ is defined   as:
\begin{equation}
  Validity(\Psi^G_{u,v}, x,y)= \left\{ 
  \begin{array}{ll}
    1 &  \text{if } x \to^{\Psi^G_{u,v}} y  \\
     0 &   \text{otherwise }   
  \end{array} \right. 
\end{equation} 
\end{definition} 

The coverage of $\Psi^G_{u,v}$, given a set of examples $\Lambda^+$ is defined as:
\begin{equation}
Cov(\Psi^G_{u,v}, \Lambda^+) = \sum_{(x,y) \in \Lambda^+}   Validity(\Psi^G_{u,v}, x,y)
\end{equation}
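Validity and coverage are straightforward to compute once a rule is represented as a callable. The rule below is a toy stand-in (uppercasing the whole string, cf.\ row 15 of Table \ref{table:ruleexamples}), not a learned $\Psi^G_{u,v}$:

```python
def validity(rule, x, y):
    """1 if applying the rule to x yields y, 0 otherwise."""
    try:
        return 1 if rule(x) == y else 0
    except Exception:
        return 0

def coverage(rule, examples):
    """Number of example pairs (x, y) that the rule transforms correctly."""
    return sum(validity(rule, x, y) for x, y in examples)

rule = str.upper   # toy rule standing in for a learned transformation
examples = [("microsoft", "MICROSOFT"), ("oracle", "ORACLE"), ("Aug", "aug")]
```

On these examples the toy rule has coverage 2: it transforms the first two pairs correctly and fails on the third.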

The string transformation-learning problem is divided into two subproblems. The first concerns learning a transformation rule from a pair of strings $(u, v)$. The second concerns selecting a rule from a set of learned rules to transform an unseen string $w$. We now formally state our first learning problem.

\begin{definition}
Given a pair of strings $(u, v)$ and a set of positive examples $\Lambda^+$, find the transformation rule $\Psi^G_{u,v}$ that maximizes its coverage over $\Lambda^+$.
\end{definition}


We now state our second and last learning problem.

\begin{definition}
Given a transformation model $\Omega(\Lambda^+)$ and a pair of strings $(x,y)$, find a transformation rule $\Psi^G_{u,v} \in \Omega(\Lambda^+)$, such that $Validity(\Psi^G_{u,v}, x,y)=1$.
\end{definition}

Assuming that we have learned a transformation model $\Omega(\Lambda^+)$, this second learning problem is the problem of selecting from $\Omega(\Lambda^+)$ a rule that can possibly transform an unseen string $x$ correctly. The algorithm that solves the first learning problem is called the \textit{rule learner}, while the one that solves the second is called the \textit{rule selector}. They are described in Sections\ \ref{sec:algorithm} and \ref{sec:rulelearner}, respectively.


\section{Rule Learner Algorithm}
\label{sec:algorithm}
This section describes our algorithm to produce a transformation rule $\Psi^G_{u,v}$ from $(u,v)$.

\subsection{Rule Learning}
The rule learner algorithm is composed of four parts, corresponding to the four basic string operations: permutation, insertion, deletion and update. Intuitively, the algorithm first learns possible permutations in $u$; then, on the permuted string $u_p$, it identifies the insertions and deletions that transform it into $u_{i\_d}$, such that $|u_{i\_d}|=|v|$; finally, on the resulting string $u_{i\_d}$, it learns the character replacements necessary to transform $u_{i\_d}$ into $v$.


Firstly, we describe the algorithm that finds a relative position in $u$, which is used by the string operations.
\subsection{Relative Position Algorithm}

Alg.\ \ref{alg:relativeposition} obtains a relative position $R_u(k, e, j)$ of $u[i]$ in the string $u$. The purpose of the algorithm is to select the most general relative position possible, i.e., the one that is likely to represent the same absolute position over a set of similar strings. In practice, the most general relative position is also the most discriminative one, with the smallest offset $j$. In other words, we identify the $e$ that is least frequent in $u^c$ while closest to the absolute position $i$.

In order to compute the relative position $R_u(k, e, j)$ of a position $i$ of $u$, all substrings $e$ in $u^c$ have to be considered. Let $E_n$ be the bag of all n-grams of $u^c$, i.e., $E_n=\{e_p \mid \forall p,\ 0 \leq p \leq |u^c|-n : e_p=u^c[p..p+n]\}$, where $1 \leq n \leq |u^c|$, and let $f(x)$ be a function that gives the frequency of a substring $x$ in $u^c$; then, for a specific $e \in E_n$, $k=f(e)$.

For example, consider the string $u=``Noia, La"$.  Assume $E_2$, the bag of bigrams, of $u^c=\textbf{zulllpsulz}$, i.e., $E_2=\{\textbf{zu, ul, ll, ll, lp, ps, su, ul, lz}\}$. The frequency of each distinct character class $a \in u^c$ is $f(\textbf{u})=2$, $f(\textbf{l})=4$, $f(\textbf{p})=1$ and $f(\textbf{s})=1$. Particularly, $f(\textbf{z})=0$, by definition. While the frequency of each bigram in $E_2$ is: $f(\textbf{zu})=1$, $f(\textbf{ul})=2$, $f(\textbf{ll})=2$, $f(\textbf{lp})=1$, $f(\textbf{ps})=1$, $f(\textbf{su})=1$ and $f(\textbf{lz})=1$. 

The most discriminative $e \in E_n$ is the one with the smallest frequency $k$ and the smallest absolute distance $j$ to $i$. This discriminative score can be computed by calculating the average frequency of the characters in a substring $w_e \in u^{c^*}$ that starts at $e$ and ends at $i$ (or vice-versa, if $i < p$).

Take for example the position $i=7$. In the previous example, $u^c[7]=\textbf{u}$. For the first bigram $e_1=\textbf{zu}$, the substring $w_{e_1}$ would be $w_{zu}=\textbf{zulllpsu}$, and for the  bigram $e_9=\textbf{lz}$, $w_{lz}=\textbf{ulz}$. The example is illustrated in Fig.\ \ref{fig:diagram}.


\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{diagram.pdf}
\caption{String $u=``Noia, La"$, $u^c=\textbf{zulllpsulz}$, $E_2(u^c)=\{\textbf{zu, ul, ll, ll, lp, ps, su, ul, lz}\}$, $w_{e_1}=\textbf{zulllpsu}$ and $w_{e_9}=\textbf{ulz}$.} 
\label{fig:diagram}
\end{figure}  

The best $e$ will be the one with the smallest average frequency in $w_e$. It is obtained by solving the minimization described in Eq. \ref{eq:relative}:

\begin{equation}
\argmin_{e \in E_n} F(e)
\label{eq:relative}
\end{equation}
 
 \begin{equation}
F(e)= \sqrt[{f(e)}]{Mean(e)} 
\label{eq:fremean}
 \end{equation}
 \begin{equation}
Mean(e) = \sqrt[m]{\frac{{\sum_{a\in w_e} g(a)^m}}{m}}
 \end{equation}
 

where $m=|w_e|$, $a$ is a character in $w_e$ and $g(a)=\frac{f(a)}{|u^c|}$. Notice that $Mean(e)$ is the \textit{power mean} over the  frequency $f(a)$ of character $a$ in $u^c$. Intuitively, the power mean gives a higher score to longer strings $w_e$, i.e., $e$ that are far from $u^c[i]$. Consequently, $e$ that are closer to $i$ will have a smaller score. In Eq.\ \ref{eq:fremean}, the root $f(e)$  is used to  increase the score of $e$ that are more frequent. Therefore, $e$ that are less frequent in $u^c$ and closer to $i$ will be selected.

In the current example, as $|u^c|=10$, we obtain the following scores $F(e)$ for each $e \in E_2$: $F(\textbf{zu})=0.353$, $F(\textbf{ul})=0.354$, $F(\textbf{ll})=0.356$, $F(\textbf{ll})=0.578$, $F(\textbf{lp})=0.287$, $F(\textbf{ps})=0.149$, $F(\textbf{su})=0.158$, $F(\textbf{ul})=0.562$ and $F(\textbf{lz})=0.288$. $F(\textbf{ps})=0.149$ is the smallest score; therefore, $e=\textbf{ps}$ is selected. Then, the relative position for $i=7$ is defined as $R_u(1, \textbf{ps}, 1)$.  
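The scoring above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the paper's implementation: the boundary character \textbf{z} is hard-coded with frequency $0$ as in the worked example, the slice conventions for $w_e$ are inferred from that example, and all function names are ours.

```python
def relative_position_score(uc, i, n=2):
    """Score every n-gram e of uc against position i and return the best.

    Implements the scores of Eqs. (2)-(4): Mean(e) is the power mean of the
    normalized character frequencies over w_e, and F(e) takes the f(e)-th
    root to increase the score of frequent n-grams.
    """
    L = len(uc)
    freq = {a: uc.count(a) for a in set(uc)}
    freq['z'] = 0  # boundary class has frequency 0 by definition (paper's example)

    best = None
    for p in range(L - n + 1):
        e = uc[p:p + n]
        # substring w_e between the n-gram e and position i
        if p <= i - n:
            w = uc[p:i + 1]
        elif p <= i:
            w = uc[p:p + n]
        else:
            w = uc[i:p + n]
        m = len(w)
        mean = (sum((freq[a] / L) ** m for a in w) / m) ** (1.0 / m)
        f_e = sum(1 for q in range(L - n + 1) if uc[q:q + n] == e)
        score = mean ** (1.0 / f_e)  # f(e)-th root boosts frequent e
        if best is None or score < best[0]:
            best = (score, e, p)
    return best
```

On the running example $u^c=\textbf{zulllpsulz}$ with $i=7$, this sketch selects $e=\textbf{ps}$ with $F(\textbf{ps})\approx 0.149$, matching the text.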


\begin{algorithm}[]
\caption{RelativePosition($u$, $i$, $n$).}
\begin{algorithmic}[1]
\scriptsize\tt 
\FORALL{$p$ in $0..|u^c|-n$}
\STATE $e \leftarrow u^c[p..p+n]$ 
\STATE $E_n \leftarrow  E_n  \cup e$ 
\IF{$p\leq i-n$}
\STATE $w_e\leftarrow u^c[p..i]$
\ELSIF{ $p > i-n$ and $p \leq i$}
\STATE $w_e \leftarrow u^c[p..p+n]$
\ELSE
\STATE $w_e \leftarrow u^c[i..p+n]$
\ENDIF
\STATE $m \leftarrow |w_e|$
\STATE $sum[w_e] \leftarrow 0$
\FORALL{$a$ in $w_e$}
\STATE $sum[w_e] \leftarrow sum[w_e] + (f(a)/|u^c|)^m$
\ENDFOR 
\STATE $Mean[e] \leftarrow \sqrt[m]{(sum[w_e] / m)}$
\STATE $F[e] \leftarrow \sqrt[f(e)]{Mean[e]}$
\ENDFOR 
\STATE $e_p \leftarrow  \argmin_{e_p \in E_n} F[e_p]$ 
\RETURN  $[f(e_p), e_p, i - p]$
\end{algorithmic}
\label{alg:relativeposition}
\end{algorithm}

This algorithm is bound by $O(|u|^2)$ for $n = 1$, and $O(1)$ for $n = |u|$. We observed empirically that this algorithm produces the most discriminative relative position with high likelihood when $n=2$. Notice that for $n=|u|$, $j=i$ in $R_u(k, e, j)$, i.e.\ $j$ is the value of the absolute position $i$.

\subsection{Permutation Rule Learner}
To learn permutations in $u$, we have to select, among all permutations of the tokens in $u$ (separated by a character $c$), the permutation whose concatenated tokens form the string closest to the target string $v$. For example, for the transformation $John\ M.\ Lewis\to John,\ Lewis\ M.$,  considering that the tokens are separated by a space ($c$ is a space character), there are six possible permutations of the tokens $[``John"$, $``M.", ``Lewis"]$: $\{[``John", ``M.", ``Lewis"]$,
$[``John", $ $``Lewis", ``M."]$,
$[``M.", ``John",$ $ ``Lewis"]$,
$[``M.", ``Lewis", $ $ ``John"]$,
$[``Lewis", ``John",$ $ ``M."]$,
$[``Lewis", ``M.", $ $``John"]\}$. The permutation $[``John", $ $``Lewis", ``M."]$ is the most similar to the string $v=John,\ Lewis\  M.$.
 
As we do not know the separator $c$ in advance, all characters in $u$ should be considered as possible separators. So, to learn permutations in $u$ (delimiters are ignored), for each distinct character $c$ in $u$, we build a set of tokens $t_c=\{u_1, \dots, u_n\}$, where $u=u_{t_{c}}=u_1+ c + \dots + c + u_n$. Let $Q^{t_c}$  be the set of all $n!$ permutations of tokens in $t_c$, where $n =|t_c| $.  From the union  $Q = \bigcup_{c \in u} Q^{t_c}$, we select the permutation $q_i \in Q$ that generates a string $u_{q_i}$ with the highest similarity to $v$, i.e.\ we solve the problem:

\begin{equation}
 \argmax_{q_i \in Q} sim(u_{q_i}, v)
 \label{eq:permutation}
\end{equation}
 


The well-known \textit{Levenshtein edit-distance}\footnote{Notice that set-based similarity functions (e.g., Jaccard) cannot be used as $sim(\cdot, \cdot)$ because they ignore the tokens order in $u_{q_i}$ and $v$.} is used as $sim(\cdot, \cdot)$.  

For example, for the transformation $John\ M.\ Lewis\to John,\ Lewis\ M.$, considering $c=$``  $"$ (space), the token set is $t_c=\{``John"$, $``M.", ``Lewis"\}$ and $Q^{t_c}=\{[``John", "M.", ``Lewis"]$,
$[``John", $ $``Lewis", ``M."]$,
$[``M.", ``John",$ $ ``Lewis"]$,
$[``M.", ``Lewis", $ $ ``John"]$,
$[``Lewis", ``John",$ $ ``M."]$,
$[``Lewis", ``M.", $ $``John"]\}$. Among all possible $Q^{t_c}$, the permutation $q_i=[``John", ``Lewis", ``M."]$, which forms the string ``John Lewis  M.", has the highest similarity to the target string. Notice that the complete set $Q$  would be the union of sets $Q^{t_c}$, where $c$ would be any of the characters $\{``J", ``o", ``h", ``n", ``$  $", ``M", ``.", ``L", ``e", ``w", ``i", ``s"\}$.

The generation of all permutations of $t_c$ becomes prohibitive when $t_c$ has too many tokens. As we do not need to list all permutations but only to find the best one, the best permutation of $t_c$ can be found efficiently by sorting the tokens in $t_c$, using $sim$ as the ordering criterion. Alg.\ \ref{alg:ss} specifies this sorting process, explained below. 

Starting from a set of tokens $t_c=\{u_1, \dots, u_n\}$, the algorithm (Alg.\ \ref{alg:ss}) uses Selection Sort\footnote{Any sorting algorithm can be used to sort the tokens. We used Selection Sort because in this problem the number of tokens is relatively small and it performs satisfactorily.} to sort the tokens, with $sim$ as the ordering criterion. Basically, in this algorithm the tokens at positions $i$ and $j$ in $t_c$ are permuted if $sim(w_{t_{c}}, v) > sim(u_{t_{c}}, v)$, where $w_{t_{c}}$ is the concatenation with positions $i$ and $j$ swapped and $u_{t_{c}}$ is the concatenation without the swap. The final sorted $t_c$ is included in $Q$. Notice that $Q$ will have a maximum of $m$ elements, where $m$ is the number of distinct characters in the string $u$.


\begin{algorithm}[]
\caption{Sorting($t_c$, $v$).}
\begin{algorithmic}[1]
\scriptsize\tt 
\IF{$|t_c| = 1$}
\RETURN $t_c$
\ENDIF
\STATE $u_{t_c} \leftarrow concatenate(t_c)$
\STATE $uscore\leftarrow sim(u_{t_c}, v)$ 
\FORALL{$i$ in $0..|t_c|-1$}
\STATE $best \leftarrow i$ 
\FORALL{$j$ in $i+1..|t_c|-1$}
\STATE $tmp \leftarrow t_c$ 
\STATE $tmp[i], tmp[j] \leftarrow tmp[j], tmp[i]$ 
\STATE $w_{t_c} \leftarrow concatenate(tmp)$
\STATE $wscore \leftarrow sim(w_{t_c}, v)$
\IF{$wscore > uscore$}
\STATE $uscore \leftarrow wscore$
\STATE $best \leftarrow j$
\ENDIF
\ENDFOR
\STATE $t_c[i], t_c[best] \leftarrow t_c[best], t_c[i]$
\ENDFOR
 
 \RETURN  $t_c$
\end{algorithmic}
\label{alg:ss}
\end{algorithm} 

  
The Sorting algorithm takes $O(n^2)$ comparisons to sort the $n=|t_c|$ tokens in $t_c$. In the worst case, when all characters in $u$ are distinct (i.e., $m=|u|$), the algorithm that solves Eq.\ \ref{eq:permutation} is bound by $O(m \cdot n^2)$, because it runs the sorting algorithm $m$ times, once for each possible separator in $u$. As $Q$ contains the best permutation for each separator, the overall best permutation in $Q$ can be trivially obtained in $O(m)$. Notice that excluding letters as separators decreases $m$ substantially.  
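The sorting-based search can be sketched in Python as follows. This is a minimal reconstruction of Alg.\ \ref{alg:ss}: $sim$ is implemented as the negative Levenshtein distance, and the function names are ours.

```python
def levenshtein(s, t):
    # standard dynamic-programming edit distance
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

def sort_tokens(tokens, v, sep=" "):
    """Selection-sort the tokens so their concatenation is closest to v."""
    t = list(tokens)
    uscore = -levenshtein(sep.join(t), v)  # sim = negative edit distance
    for i in range(len(t)):
        best = i
        for j in range(i + 1, len(t)):
            tmp = list(t)
            tmp[i], tmp[j] = tmp[j], tmp[i]          # trial swap of i and j
            wscore = -levenshtein(sep.join(tmp), v)
            if wscore > uscore:                      # swap improves sim
                uscore, best = wscore, j
        t[i], t[best] = t[best], t[i]                # commit the best swap
    return t
```

On the running example, `sort_tokens(["John", "M.", "Lewis"], "John, Lewis M.")` returns `["John", "Lewis", "M."]` with $O(n^2)$ distance evaluations instead of $n!$.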
 

\begin{lemma} The Selection Sort is correct\footnote{The proof of Lemma \ref{lemma:ss} is available in \cite{2009design}.}. 
\label{lemma:ss}
\end{lemma}


\begin{theorem} The procedure $Sorting (t_c, v)$ using Levenshtein edit distance as $sim$ finds the permutation of $t_c$ that maximizes $sim(u_{t_c}, v)$, i.e., it solves the problem:
\begin{equation}
\argmax_{q \in Q^{t_c}} sim(u_{q}, v)
\end{equation}
\end{theorem}

\begin{proof} 

 

The Levenshtein edit-distance measures the number of operations needed to transform one string into another. Following Lemma \ref{lemma:ss}, at any iteration, $t_c[i]$ is only permuted with $t_c[j]$ if the permutation reduces the number of operations to transform $u_{t_c} \to v$. Consequently, the final sorted $t_c$ is the permutation that removes the largest number of edit-distance operations in $u_{t_c} \to v$. In other words, the sorted $t_c$ maximizes $sim(u_{t_c}, v)$.
\end{proof}


Finally, the permutation $q_i \in Q$ with the highest similarity to $v$ is expressed as a transformation rule as follows. For each token $u_k \in q_i$, a selection operation $S(u, r_1, r_2)$ is created, where $r_1$ is the relative position of the first character of $u_k$ in $u$, and $r_2$ is the relative position of the last character of $u_k$ in $u$. Then, the set of all selection operations forms a permutation rule $P^G_c$. 

For instance, in the current example, considering  $c=$`` $"$, then, $P^G_c$  would contain three selection operations, one for each token in $t_c= [``John", $ $``Lewis", ``M."]$, i.e., $P^G_c=\{S(u, R_u(1, \textbf{zu}, 1) ,  R_u(1, \textbf{ls}, 0)),$ $S(u, R_u(2, \textbf{su}, 1),  R_u(1, \textbf{lz}, 0)),$ $S(u, R_u(1, \textbf{su}, 1) ,  R_u(1, \textbf{ps}, 0))\}$. As $u=``John\ M.\ Lewis"$, the first selection would select ``John", the second would select ``Lewis" and the third ``M.". By definition, the permutation operation concatenates all selected strings by the character $c$, which in this case would produce $u_p=``John\ Lewis\ M."$. In this example, an additional insertion of a comma after ``John" would be required to fully achieve the transformation  $John\ M.\ Lewis\to John,\ Lewis\ M.$. Notice that the same permutation rule could be used to transform $Michael\ B.\ White \to Michael,\ White\ B.$.


\subsection{Insertions and Deletions Rule Learner}

To learn the insertions and deletions in $u_p$, we propose an algorithm based on the well-known \textit{longest common substring algorithm (LCS)}. The purpose of our algorithm is to find all possible alignments of characters between $u_p$ and $v$. As the characters in $u$ are already in the best permutation at this stage, if $u_p[i]=v[j]$ and $u_p[h]=v[k]$, then $i > h$ and $j > k$, or $i < h$ and $j < k$. If $u_p$ and $v$ are represented in a matrix, as shown in Fig.\ \ref{fig:permutation}, then characters in $u_p$ that do not align to $v$ (zero columns) have to be deleted from $u_p$ to transform it into $v$, and characters in $v$ that do not align to $u_p$ (zero rows) have to be inserted into $u_p$. 

\begin{figure}[h]
\centering
\includegraphics[width=0.2\textwidth]{permutation.pdf}
\caption{All   common substrings between  $u_p  =  ``Aug\ 06,\ 2013"$  and  $v  = ``06/08/13"$.} 
\label{fig:permutation}
\end{figure}  

For example, Fig.\ \ref{fig:permutation} shows a matrix representing all character alignments between $u_p  =  ``Aug\ 06,\ 2013"$  and  $v  = ``06/08/13"$. The characters that align are represented as 1 and the characters that do not align are represented as zero. In this example, the substrings $\{``Aug ", ``, 2"\}$ (zero columns) have to be deleted from $u_p$; and $\{``/ ", ``8/"\}$ (zero rows) have to be inserted in $u_p$. The insertion and deletion rule can be represented as: $D^G=\{D^G_1(u_p, R_{u_p}(1, \textbf{zu}, 1), R_{u_p} (1, \textbf{zu}, 4))$, $D^G_2 (u_p, R_{u_p}(1, \textbf{ps}, 0), R_{u_p}(1, \textbf{ps}, 2)) \}$ and $I^G=\{I^G_1(``/", u_p, R_{u_p} (1, \textbf{ps},0))$, $I^G_2 (``8/", u_p, R_{u_p}(1, \textbf{ps},4)) \}$, respectively.
  

Given a matrix $K$ of $m\times n$ where $m = |u_p|$ and $n=|v|$, the LCS finds the longest diagonal in the matrix where $u_p[i]=v[j]$, $1 \leq i \leq m$ and $1 \leq j \leq n$. We consider $u_p[i]$ equal to $v[j]$, if $u_p[i] = v[j]$ or $lowercase(u_p[i]) = lowercase(v[j])$.

Our algorithm starts by looking for the longest common substring (\textit{lcs}) between $u_p$ and $v$, using LCS. Once the \textit{lcs} $u_p[r..s]=v[x..y]$ is determined, LCS is applied recursively over the upper and lower sub-matrices $K_U=[1, 1, r, x]$ and $K_L=[s, y, m, n]$, respectively. This step distinguishes our algorithm from the plain LCS because it guarantees, after all characters are matched, that if $u_p[i]=v[j]$ and $u_p[h]=v[k]$, then $i > h$ and $j > k$, or $i < h$ and $j < k$.
%However, if $K_U$ (or $K_L$) is a quadratic matrix, then its diagonal is considered as the \textit{lcs} in $K_U$ (or $K_L$). In this case, $C(u_p[i]) = C(v[i])$ is added as an identity in $F$. This process continue recursively until all \textit{lcs} are identified in the matrix $K$.\
After all \textit{lcs} are determined, the rows that are not part of any \textit{lcs} represent the characters in $v$ that have to be inserted into $u_p$, and the columns those that have to be deleted from $u_p$. This algorithm is bound by the upper bound of the original LCS, which is $O(n \cdot m)$.
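The recursive alignment can be sketched compactly in Python. This is an illustrative reconstruction under the matching rule stated above (characters match case-insensitively); function names are ours.

```python
def lcs_substring(s, t):
    """Longest common substring, case-insensitive; returns (i, j, length)."""
    best = (0, 0, 0)
    dp = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            if s[i - 1].lower() == t[j - 1].lower():
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > best[2]:
                    best = (i - dp[i][j], j - dp[i][j], dp[i][j])
    return best

def align(s, t):
    """Recursively match lcs blocks; return (deletions from s, insertions from t)."""
    i, j, k = lcs_substring(s, t)
    if k == 0:
        # nothing matches: all of s is deleted, all of t is inserted
        return ([s] if s else []), ([t] if t else [])
    d1, i1 = align(s[:i], t[:j])          # recurse on the upper sub-matrix
    d2, i2 = align(s[i + k:], t[j + k:])  # recurse on the lower sub-matrix
    return d1 + d2, i1 + i2
```

On the example of Fig.\ \ref{fig:permutation}, `align("Aug 06, 2013", "06/08/13")` yields the deletions `["Aug ", ", 2"]` and the insertions `["/", "8/"]`, as in the text.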

Alg.\ \ref{alg:relativeposition} is used to find the relative position for the insertions and deletions. The set of insertions $I^G$ forms a transformation rule $u_p \to^{I^G} u_i$, and the set of deletions $D^G$ forms a transformation rule $u_i \to^{D^G} u_{i\_d}$. 
 
 \subsection{Update  Rule Learner }
 To learn the update rules, for each $u_{i\_d}[i]$ in $u_{i\_d}$, a rule $U^G_i(u, r_i, e_i, d_i)$ is created, where the relative position $r_i$ of $i$ is found using Alg.\ \ref{alg:relativeposition}, the distance is $d_i=\Delta(u_{i\_d}[i], v[i])$, and the RE is $e_i=u^c[i]$. Any consecutive set of rules $U^G_i, \dots, U^G_j$ that have the same distance and the same $e_i$ is merged into one $U^G_i$, where $e_i$ is redefined as $e_i^+$, the \textit{Kleene Plus} of $e_i$. This produces a set of update rules $U^G$, such that $u_{i\_d} \to^{U^G} v$. This algorithm is bound by $O(|v|)$, because it walks once through the string $u_{i\_d}$ to compute the distance $\Delta$ to $v$. 

For example, for $jack\_w. \to Jack\ W.$, we have $u^c=\textbf{llllplp}$ and  $v^c=\textbf{ulllsup}$. The algorithm produces 7 rules (ignoring the delimiters, $|u|$=7), i.e., $U^G_1(u, R(1, \textbf{l}, 0), \textbf{l}, -32)$,  $U^G_2(u, R(1, \textbf{p}, -3), \textbf{l}, 0)$, $U^G_3(u, R(1, \textbf{p}, -2), \textbf{l}, 0)$, $U^G_4(u, R(1, \textbf{p}, -1), \textbf{l}, 0)$, $U^G_5(u, R(1, \textbf{p}, 0), \textbf{p}, 63)$, $U^G_6(u, R(1, \textbf{p}, 1), \textbf{l}, -32)$, $U^G_7(u, R(2, \textbf{p}, 0), \textbf{p}, 0)$. The consecutive rules $U^G_2$, $U^G_3$ and $U^G_4$ have the same distance and RE, so they are replaced by the rule $U^G_{2}(u, R(1, \textbf{p}, -3), \textbf{l}^+, 0)$. Such a set of rules can transform any string of the form $\textbf{ll}^+\textbf{plp}$ into $\textbf{ul}^+\textbf{sup}$. 
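The per-position deltas and the Kleene-plus merging can be sketched as follows. This is an illustrative reconstruction: the character classes follow the paper's examples (\textbf{u}pper, \textbf{l}ower, \textbf{d}igit, \textbf{s}pace, \textbf{p}unctuation/other), we assume the sign convention $\Delta = code(v[i]) - code(u[i])$ (which matches the $-32$ for $j \to J$ above, though signs may differ elsewhere in the text), and the function names are ours.

```python
def char_class(c):
    # character classes as used in the paper's examples
    if c.isupper(): return 'u'
    if c.islower(): return 'l'
    if c.isdigit(): return 'd'
    if c.isspace(): return 's'
    return 'p'

def update_runs(u, v):
    """Merge consecutive positions with equal (class, delta) into one run."""
    assert len(u) == len(v)
    runs = []
    for a, b in zip(u, v):
        key = (char_class(a), ord(b) - ord(a))  # assumed sign convention
        if runs and runs[-1][0] == key:
            runs[-1][1] += 1                    # extend the run (Kleene plus)
        else:
            runs.append([key, 1])
    return [(cls, delta, n) for (cls, delta), n in runs]
```

For `update_runs("jack_w.", "Jack W.")` the sketch merges the three zero-delta lowercase positions into one run, yielding 5 merged rules from the 7 per-position ones, as in the example.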
   
\subsection{Discussion}
Table \ref{table:ruleexamples} shows a large variety of examples of string transformations: $u_i \to v_i$ and $s_i \to t_i$. The algorithm just described can learn a transformation rule $\Psi^G_{u_i,v_i}$ from one example and produce a correct transformation for any string $s_i$ in this table (i.e.,\ $s_i \to^{\Psi^G_{u_i,v_i}} t_i$). Although some of these transformations could be easily expressed by programmers, non-programmers do not have the skills to express them. The proposed method allows them to perform a large range of non-trivial transformations by simply providing examples.
 
As a design decision, the rules are learned and applied independently of each other. This avoids the exponential problem of searching for the combination of operations that maximizes the coverage over all examples. Obviously, it is impossible to learn a rule with maximal $Cov(\Psi^G_{u,v}, \Lambda^+)$ for an arbitrary string pair $(u, v)$ and $\Lambda^+$, considering only the information in $(u, v)$. However, in Section \ref{sec:evaluations}, we will show that the rule learner algorithm produces high-coverage rules over real-world transformation tasks, which can be properly selected to transform an unseen string $s$ correctly. 

\section{Rule Selector Method}
\label{sec:rulelearner}
In this section, we describe our method to tackle the second learning problem. 

Given a transformation model $\Omega(\Lambda^+)$ and a pair of strings $(x,y)$, we model the problem of finding a transformation rule $\Psi^G_{u,v} \in \Omega(\Lambda^+)$, such that $Validity(\Psi^G_{u,v}, x,y)=1$, as a classification problem. Our training data are pairs of strings $(u_i, v_i) \in \Lambda^+$ (observations) and rules $\Psi^G_{u_i, v_i} \in \Omega(\Lambda^+)$ (categories). Then, given a new string $x$, the task is to assign a specific rule (category) to it, based on the features extracted from $x$. We use a \textit{Naive Bayes} classifier. 

During the training phase, where $u$ and $v$ are available, the set of trigrams (3-grams) of the strings $u$, $u^c$, $v$ and $v^c$, and the frequency of the trigrams of $u^c$ were used as features. Precisely, the set of features can be represented as: $(Trigram(u)-Trigram(v))\cup (Trigram(u^c)-Trigram(v^c))\cup freq(Trigram(u^c))$.

A frequency was represented by concatenating the trigram with its   frequency value, e.g., for a trigram \textbf{ull} with $f(\textbf{ull})=2$, the feature was represented as \textbf{ull2}.

During the classification phase, where   only the string $x$ to be transformed is available, the set of trigrams of the string $x$ and $x^c$, and the frequency of the trigrams of $x^c$ were used as features.  Precisely, this set of features can be represented as: $ Trigram(x)\cup Trigram(x^c) \cup freq(Trigram(x^c))$.
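For concreteness, the two feature extractors can be sketched in Python. This is an illustrative reconstruction: the character-class mapping follows the paper's examples, and all function names are ours.

```python
from collections import Counter

def char_class(c):
    # u = upper, l = lower, d = digit, s = space, p = other (paper's classes)
    if c.isupper(): return 'u'
    if c.islower(): return 'l'
    if c.isdigit(): return 'd'
    if c.isspace(): return 's'
    return 'p'

def classes(s):
    return ''.join(char_class(c) for c in s)

def trigrams(s):
    return {s[i:i + 3] for i in range(len(s) - 2)}

def freq_features(sc):
    # trigram concatenated with its frequency, e.g. 'ull' seen twice -> 'ull2'
    cnt = Counter(sc[i:i + 3] for i in range(len(sc) - 2))
    return {t + str(n) for t, n in cnt.items()}

def train_features(u, v):
    # (Trigram(u) - Trigram(v)) U (Trigram(u^c) - Trigram(v^c)) U freq(Trigram(u^c))
    uc, vc = classes(u), classes(v)
    return ((trigrams(u) - trigrams(v))
            | (trigrams(uc) - trigrams(vc))
            | freq_features(uc))

def classify_features(x):
    # Trigram(x) U Trigram(x^c) U freq(Trigram(x^c))
    xc = classes(x)
    return trigrams(x) | trigrams(xc) | freq_features(xc)
```

These sets can be fed directly to any off-the-shelf Naive Bayes implementation over binary features.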

Notice that the difference of the set of trigrams used above gives exactly the trigrams that change during a transformation $u \to v$.  During the classification phase, if a string $x$ shares these features (trigrams) that represent a rule $m$, it is likely that $m$ is the best candidate rule to  transform $x$ correctly.   
%We observed that this set of features produces   satisfactory results; further extension of this method could engineer better features even further, e.g., using the techniques of \cite{1183917}.

As in any other classification task, we assume the user can collect a representative and discriminative training sample to obtain satisfactory performance. We acknowledge that for specific transformation tasks other machine learning approaches may perform better. However, in Section\ \ref{sec:evaluations}, we will show that the effectiveness of the Naive Bayes classifier is, on average, sufficient for our purpose.
 
\section{Evaluation}
In this section, we investigate empirically three aspects of our method: the coverage of the rules produced by the rule learner, the accuracy of the rule selector, and the learning time of the rule learner. At the end of this section, we compare our algorithm, named \textit{STransformer}, to a state-of-the-art string transformation method. 
\label{sec:evaluations}
% \begin{itemize}
%\item To study how the number of input examples impacts the quality of the transformation rules output.
%\item To undestand the efficiency pf our algorithm as a function of the number of input examples.
% \end{itemize} 

\subsection {Data}
In this investigation, four datasets were constructed based on real-world string transformation use cases drawn from the data cleaning and spreadsheet processing literature. As we will discuss, these four scenarios show the power of STransformer, which can solve very different transformation tasks while requiring a very limited set of examples. 

\textbf{Abbreviations Use Case.} Ana is a secretary at an institute and has a catalog with a few thousand organization names that need to be converted into their abbreviations. The list contains universities, society groups, among other entities. To accomplish her task, she copies the names of the organizations into her text editor and builds the abbreviations manually; e.g.,\ $Youth\ Hostels\ Association \to YHA$, $University\ of\ New\ Hampshire \to UNH$. After transforming a few abbreviations, she realizes that this manual process is not productive and she would like to automate the task. To do so, she decides to use STransformer (a feature available in her text editor). She inputs the transformations she has done so far as examples, and STransformer transforms the rest of the data.   

To simulate this task, we used an open online catalog\footnote{http://www.betweenthelakes.com/pdfs/organizations.pdf} with 2034 organization names and their abbreviations. The overall task was to learn, from a few examples, transformation rules that could generate the abbreviations from the full organization names. The learned rules were then used to transform all organization names into their abbreviations. Table \ref{table:abbreviationsexamples} shows the examples selected for this task.


\textbf{Book Titles Use Case.} John is a librarian in charge of publishing a catalog of books acquired by the National Library. He obtained, from the IT department, a list of book titles. He observes that the titles have the article shifted to the end of the sentence, e.g.\  ``Cloud, The" instead of ``The Cloud". Consequently, he needs to fix them, i.e., he needs to apply a transformation of the form $Cloud,\ The \to The\ Cloud$. He needs the task done as soon as possible, and he cannot count on the IT department. Using STransformer, he can prepare his own data without any programming skills, by providing a few manual examples of the desired titles.  

To simulate this task, we used book titles from the \textit{Book Crossing dataset} \cite{Ziegler:2005:IRL:1060745.1060754}, which contains 51690 titles from book records where the title starts with an article (e.g.\ ``The", ``La", ``El", ``An"). We shifted the article to the end of the sentence, after inserting a comma and a space. Consequently, the task was to restore the titles to their original form, i.e., to shift the article back to the beginning of the sentence and to remove the additional comma and space. As in the previous scenario, rules were learned from a limited set of examples (shown in Table \ref{table:bookexamples}), and then used to transform all titles into the correct conventional form, i.e.\ $Cloud,\ The \to The\ Cloud$. 


\textbf{Songs Use Case.} Mary is a fan of the band R.E.M. She is building a blog and would like to list all R.E.M.\ songs on her blog's home page. She finds 184 R.E.M.\ songs on Wikipedia. She copies the songs into her editor, but she realizes that each entry also contains the song duration, which she would like to remove, e.g., \textit{``Shiny Happy People" - 3:44}. To obtain only the song titles, she decides to remove the durations manually. She spends 3 seconds per song, taking in total about 9 minutes to convert all 184 songs. Later, she learns about STransformer and realizes that the task could have been done much faster. 

We simulated this use case by collecting the 184 songs from Wikipedia; the task was to learn rules that could extract the song titles from the copied text, i.e.\ $``Shiny\ Happy\ People"\ \text{-}\ 3$:$44\ \to Shiny\ Happy\ People$. The task can be achieved with the two examples shown in Table \ref{table:songexamples}.


%\textbf{Use Cases Dataset.} The fourth dataset represents  a set of string transformations  collected from the literature. These example transformations are listed in Table \ref{table:ruleexamples}.  The task was to learn transformation rules  from the examples $u_i \to v_i$ in Table \ref{table:ruleexamples} and use the learned rules to correctly transform the string $s_i$ in the second column of Table \ref{table:ruleexamples}. Consequently, the transformation listed in the first column were used as training data and the transformation listed in the second column as ground truth. 


\textbf{Dates Transformation Use Case.} Bob is a weather researcher studying temperatures in the surroundings of an industrial zone. In his measurements, he uses sensors from two manufacturers that output data in different formats. One kind of sensor outputs the dates of the measurements in the format \textit{``month\ day,\ year"}, where \textit{month} is represented by its abbreviated name (e.g.\ $Jan\ 02,\ 2013$), while the other kind outputs the format \textit{``day/month/year"}, where the \textit{month} is represented by its decimal form (e.g.\ $02/01/13$). Bob would like to transform the output of the first sensor into the format of the second, i.e.,\ $Jan\ 02,\ 2013 \to 02/01/13$, so he can build a homogeneous report. To do so, he uses STransformer, providing examples of the dates that he needs to transform.

 To simulate this task, we used an artificial dataset containing 366 dates in the format \textit{``month\ day,\ year"}. This dataset is quite homogeneous, requiring exactly 12 rules to transform all strings, corresponding to the twelve month names and their equivalent decimal representations. In contrast, all the other datasets evaluated are heterogeneous, i.e., there is no logical pattern or obvious regularity that can explain their data beforehand. We selected this Dates dataset specifically to show that when there is regularity in the data, the algorithm can learn it with 100\% accuracy. The 12 dates used as examples are shown in Table \ref{table:datesexamples}. 



We manually constructed the ground truth for all strings in all datasets. To ensure reproducibility of our results, both the datasets and the implementation of the proposed algorithm are available for download\footnote{https://github.com/samuraraujo/StringTransformation}. 

\subsection{Evaluation Metric}
To assess the quality of the rule learner algorithm (i.e.\ the rule coverage), we used the notion of \textit{maximal coverage}, which is the minimal number of example transformations that have to be learned to transform all strings correctly. We evaluated the rule learner with three different configurations of n-grams ($E_n$) in the relative position algorithm (Alg.\ \ref{alg:relativeposition}): $E_1$ (1-grams), $E_2$ (2-grams) and $E_3$ (3-grams).

To assess the quality of the rule selector algorithm, the \textit{accuracy measure} was used. It is defined below:

\begin{equation}
accuracy=\frac{\#correct\ transformations }{\#string\ pairs\ in\ the\ ground\ truth}
\end{equation}

where \textit{\#correct transformations} stands for the number of transformations that the rule selector produces correctly, and \textit{\#string\ pairs\ in\ the\ ground\ truth} stands for the total number of strings that have to be transformed.
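For clarity, the metric can be computed as follows (a minimal sketch, assuming transformations are represented as source-to-target string mappings):

```python
def accuracy(predicted, ground_truth):
    """Fraction of strings the selected rules transform correctly.

    Both arguments map each source string u to a target string v;
    ground_truth holds the manually constructed correct pairs.
    """
    correct = sum(1 for u, v in ground_truth.items() if predicted.get(u) == v)
    return correct / len(ground_truth)

gt = {"Jan 27, 2013": "27/01/13", "Feb 27, 2013": "27/02/13"}
pred = {"Jan 27, 2013": "27/01/13", "Feb 27, 2013": "27/2/13"}
print(accuracy(pred, gt))  # → 0.5
```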


\subsection {Rule Coverage}
Table \ref{table:rulecoverage} shows the maximal coverage per task. It indicates that, on average, the rule learner algorithm needs a relatively small number of examples to transform all strings in the ground truth correctly. For the Books, Songs and Dates datasets, $E_2$ requires 157, 2 and 12 examples, respectively. This equates to 0.29\%, 1\% and 3\% of the data, respectively.  


\begin{table}[h]
%\tiny
%\scriptsize
\small
\centering
\caption{ Maximal Coverage Per Task } 

\begin{tabular}{ | c | c | c | c | c | } 
\hline
 Setup & Abbreviations & Books & Songs & Dates \\
\hline
$E_1$   & 439 & 132  & 3  & 12 \\
$E_2$   & 305 & 157  & 2  & 12 \\
$E_3$   & 326 & 249 & 6 &  12  \\

\hline 
\end{tabular}  
\label{table:rulecoverage}
\end{table}  

In total, 305 examples (for $E_2$) are necessary to obtain maximal coverage in the Abbreviations dataset, which equates to 15\% of the data. Although this is a relatively large number of examples in our context, a small set of rules (precisely, 7 rules) covers 78\% of the strings (i.e., 1577 out of 2034 strings), as can be observed in Table \ref{table:coverageabbreviations}. This indicates that, given the right seven examples, STransformer can learn seven rules that correctly transform 78\% of this dataset. This coverage is quite satisfactory considering that there are precisely 244 cases (12\% of the data) that can only be transformed by completely distinct rules, i.e., no general rule could transform them. Examples of these cases are: $Zeta\ Psi \to ZPsi$ and $Congregation\ of\ the\ Holy\ Ghost \to CSSp$. Consequently, achieving 100\% coverage with seven or fewer rules is simply not possible for this dataset, due to its lack of regularity. In practice, no method can learn rules from the other available examples that correctly transform these 244 cases. Although this is a very heterogeneous dataset, with distinct forms of abbreviating organization names, for the cases where there is regularity the algorithm captures it with acceptable coverage. Notice that if we exclude these 244 outlier cases, the coverage rises to 88\%.
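The per-rule and cumulative percentages in Table \ref{table:coverageabbreviations} follow directly from the rule coverage counts; a minimal sketch of the computation, using the counts from the table and the 2034-string total:

```python
def cumulative_coverage(rule_counts, total):
    """Per-rule and cumulative coverage percentages for a list of rules,
    each given by the number of strings it transforms correctly."""
    covered, rows = 0, []
    for count in rule_counts:
        covered += count
        rows.append((count, 100 * count / total, 100 * covered / total))
    return rows

# Coverage counts of the 7 highest-coverage Abbreviations rules (E2).
rows = cumulative_coverage([752, 430, 233, 85, 37, 22, 18], total=2034)
print(round(rows[-1][2], 1))  # cumulative coverage of all 7 rules
```

The final cumulative value is 1577/2034, i.e.\ roughly 78\% of the dataset.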


\begin{table}[h]
%\tiny
%\scriptsize
\small
\centering
\caption{ The first  7   rules with the highest coverage for the Abbreviations dataset using $E_2$. } 

\begin{tabular}{ | c | c | c | c|  } 
\hline
 Rule &  Covered Examples &  Percentage of Data & Cumulative \% \\
\hline
1 & 752 & 	37.0\%	& 37.0\% \\
2 & 430 & 	21.2\%	 & 58.2\% \\
3 & 233 & 	11.5\%	 & 69.6\% \\
4 & 85 & 	4.2\%	 & 73.8\% \\
5 & 37	 & 1.8\%	 & 75.6\% \\
6 & 22	 & 1.1\%	 & 76.7\% \\
7 & 18	 & 0.8\%	 & 77.6\% \\

\hline 
Total & 	  1577	 & 77.6\% 	&  77.6\% \\

\hline 
\end{tabular}  
\label{table:coverageabbreviations}
\end{table}  

As with the Abbreviations dataset, many examples, in total 157 (for $E_2$), are necessary to obtain maximal coverage in the Books dataset. However, as observed in Table \ref{table:coveragebooks}, a very small set of rules (precisely, 11 rules) covers a large number of strings (51282 strings, or 99\% of the data). This confirms that the algorithm is effective at learning transformation rules from a few examples, provided that some regularity is present in the data. 
%As indicated in this case, with 11 examples, it covers 99\% of the data, which is a quite high coverage. 

%The results of Table \ref{table:coverageabbreviations} and \ref{table:coveragebooks}  were calculated by shuffling the examples, then building a new rule only when the already learned ones could not transform the current example correctly. The number of examples that a rule transformed correctly was counted as its coverage.  We ran this process five times and reported the average coverage in these tables.


\begin{table}[h]
%\tiny
\small
\centering
\caption{ The first  11   rules with the highest coverage  for the Books dataset using $E_2$. } 

\begin{tabular}{ | c | c | c | c|  } 
\hline
 Rule &  Covered Examples &  Percentage of Data & Cumulative \% \\
\hline
1& 37854& 73.2\% & 73.2\% \\
2& 4026& 7.8\% & 81.0\% \\
3& 2113& 4.1\% & 85.1\% \\
4& 1952& 3.8\% & 88.9\% \\
5& 1766& 3.4\% & 92.3\% \\
6& 1632& 3.2\% & 95.5\% \\
7& 570& 1.1\% & 96.6\% \\
8& 368& 0.7\% & 97.3\% \\
9& 346& 0.7\% & 98.0\% \\
10& 331& 0.6\% & 98.6\% \\
11& 324& 0.6\% & 99.2\% \\

\hline 
Total & 	51282 &	99.2\% & 	99.2\% \\

\hline 
\end{tabular}  
\label{table:coveragebooks}
\end{table}  


In the case of the Songs dataset, the algorithm (with $E_2$) needs only 2 examples, showing that it can capture the regularity in the data quite precisely. 

In all configurations ($E_1$, $E_2$ and $E_3$), STransformer performs optimally for the Dates dataset, requiring exactly 12 examples to capture the 12 distinct date patterns in the ground truth, i.e., the twelve month names and their equivalent decimal representations. 

%The measurement of the coverage is unnecessary for the  Use Cases dataset  as exactly 15 rules are required to transform the 15 distinct examples shown in Table  \ref{table:ruleexamples}. 

In conclusion, the results show that the rule learner is able to generate rules with acceptable coverage in a variety of datasets. To obtain the best performance, we recommend using $E_2$ instead of $E_1$, because $E_2$ is more discriminative, even though slightly more examples were necessary in the Books case.

 

%\begin{figure}[h]
%\centering
%\includegraphics[width=0.35\textwidth]{rulecoverage.pdf}
%\caption{Rule coverage for the Book dataset using N2.} 
%\label{fig:rulecoverage}
%\end{figure}  

\subsection {Rule Selector Accuracy} 
To verify the rule selector accuracy, examples with maximal coverage were selected for each task, except for the Abbreviations and Books tasks, where we selected only 7 examples with 78\% cumulative coverage and 11 examples with 99\% cumulative coverage, respectively. We generated rules from the selected examples, and then verified the accuracy of the rule selector in choosing a rule that correctly transforms a new string $s$. Tables \ref{table:abbreviationsexamples}, \ref{table:bookexamples}, \ref{table:songexamples} and \ref{table:datesexamples} show the examples used to build the rules for the Abbreviations, Books, Songs, and Dates transformation tasks, respectively. 

%The examples used  in the Use cases dataset are listed in the first column of Table \ref{table:ruleexamples}.

These examples were manually selected by drawing from the set of examples a few exemplars with evident differences in their features (precisely, strings with different $u^c$ trigrams). Given that the majority of the example strings are quite similar, randomly selecting examples does not make sense in this setting, because we would likely select examples that produce the same rule, resulting in low coverage and accuracy. For such an approach to be effective, we would have to consider a large number of examples, which goes against our goal here.
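This selection heuristic, preferring examples whose strings share few trigrams with those already chosen, can be approximated greedily. A sketch follows (the function names are ours, and this is an approximation of the manual procedure, not part of STransformer itself):

```python
def trigrams(s):
    """Set of character trigrams of a string."""
    return {s[i:i + 3] for i in range(len(s) - 2)}

def pick_diverse(candidates, k):
    """Greedily pick k strings whose trigram sets overlap least
    with the trigrams of the already selected strings."""
    selected, seen = [], set()
    for _ in range(k):
        best = min(candidates, key=lambda s: len(trigrams(s) & seen))
        selected.append(best)
        seen |= trigrams(best)
        candidates = [c for c in candidates if c != best]
    return selected

# Source strings taken from Table 4 (Abbreviations examples).
examples = ["Army Air Corps", "Army Against War and Fascism",
            "Alcoholics Anonymous"]
print(pick_diverse(examples, 2))
# → ['Army Air Corps', 'Alcoholics Anonymous']
```

The greedy choice skips the second "Army ..." string because it shares several trigrams with the first one already selected.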

\begin{table}[h]
%\tiny
\scriptsize
%\small
\centering
\caption{Abbreviations Examples} 

\begin{tabular}{ | c | c | } 
\hline
 & u $\to$ v \\
\hline
1 & American Baptist Foreign Missionary Society$\to$ABFMS \\
2 & American Kennel Club Canine Health Foundation$\to$AKCCHF \\
3 & Congressional Medal of Honor Society$\to$CMHS \\
4 & American Society of Agricultural and Biological Engineers$\to$ASABE \\
5 & Alcoholics Anonymous$\to$AA \\
6 & Army Against War and Fascism$\to$AAWF \\
7 & Army Air Corps$\to$AAC \\
 
\hline 
\end{tabular}  
\label{table:abbreviationsexamples}
\end{table}  

\begin{table}[h]
%\tiny
%\scriptsize
\small
\centering
\caption{Book  Titles Examples} 

\begin{tabular}{ | c | c | } 
\hline
 & u $\to$ v \\
\hline
1 & Perle, La $\to$ La Perle \\
2&     Kill, A $\to$ A Kill \\
3&    Angel, The $\to$ The Angel \\
 4 &   Solstice,The $\to$ The Solstice \\
  5&   Solstice ,The $\to$ The Solstice \\
 6&    LONG SECRET, THE $\to$ THE LONG SECRET \\
  7&  CHOCOLATE TOUCH,THE $\to$ THE CHOCOLATE TOUCH \\
 8&   Hunny, Funny, Sunny Day, A $\to$ A Hunny, Funny, Sunny Day \\
9 &  street bible, the $\to$  the street bible \\
10 & mummies of Urumchi, The $\to$  The mummies of Urumchi \\
11 &  Mummies of Urumchi, The $\to$ The Mummies of Urumchi \\
\hline 
\end{tabular}  
\label{table:bookexamples}
\end{table}  
 

\begin{table}[h]
%\tiny
%\scriptsize
\small
\centering
\caption{Song Examples} 

\begin{tabular}{ | c | c   | } 
\hline
 & u $\to$ v \\
\hline
 1 & ``New Test Leper" - 5:26  $\to$ Sitting \\   
 2&  ``Radio Song" (feat. KRS-One) - 4:12 $\to$ Radio Song \\
 
\hline 
\end{tabular}  
\label{table:songexamples}
\end{table}  


\begin{table}[h]
%\tiny
%\scriptsize
\small
\centering
\caption{Dates Examples} 

\begin{tabular}{ | c | c   | } 
\hline
& u $\to$ v \\
\hline
1 &  Jan 27, 2013 $\to$ 27/01/13 \\
2&   Feb 27, 2013 $\to$ 27/02/13 \\
 3&   Mar 27, 2013 $\to$ 27/03/13 \\
 4&  Apr 27, 2013 $\to$ 27/04/13 \\
 5&  May 27, 2013 $\to$ 27/05/13 \\
  6&  Jun 27, 2013 $\to$ 27/06/13 \\
  7& Jul 27, 2013 $\to$ 27/07/13 \\
  8& Aug 27, 2013 $\to$ 27/08/13 \\
   9& Sep 27, 2013 $\to$ 27/09/13 \\
  10& Oct 27, 2013 $\to$ 27/10/13 \\
  11& Nov 27, 2013 $\to$ 27/11/13 \\
  12 & Dec 27, 2013 $\to$ 27/12/13 \\
\hline 
\end{tabular}  
\label{table:datesexamples}
\end{table}  


Table \ref{table:results} shows the accuracy of the rule selector for each transformation task using the examples described previously. For the Abbreviations, Books, Songs and Dates tasks, the accuracy was 74\%, 83\%, 96\% and 100\%, respectively. Although the naive classifier is very sensitive to the quantity and quality of the examples, particularly when few examples are provided, it performed satisfactorily, with high accuracy on average. This is due to the quality of the examples provided, which were manually selected in this investigation. In the Books task, the accuracy (83\%) was a bit lower than expected; given that the selected Books examples have high coverage, insufficiently discriminative features can explain this accuracy. The Abbreviations task had the lowest accuracy (74\%); however, the coverage of the examples for this task was only 78\%, which means that its relative accuracy, w.r.t.\ its coverage, was 95\%. 

Overall, we observed that the rule learner indeed learned rules that were quite general; most of the incorrect transformations observed were due to the rule selector choosing incorrect rules. In conclusion, the results demonstrate the feasibility of STransformer for general string transformation tasks. Users without programming knowledge can produce useful string transformations by simply supplying a set of examples. Future extensions could engineer better features to improve the effectiveness of the rule selector even further, e.g., using the techniques of \cite{1183917}.
 

\begin{table}[h]
%\tiny
%\scriptsize
\small
\centering
\caption{Accuracy of the Rule Selector With $E_2$} 

\begin{tabular}{ | c| c | c | c | c  | } 
\hline
 Setup  & Abbreviations & Books &Songs & Dates \\
\hline 
$E_2$   & \textbf{74\%} & \textbf{83\%}  & \textbf{96\%}  & \textbf{100\%}  \\
 \hline 
\end{tabular}  
\label{table:results}
\end{table}  
 
 
\subsection {Runtime Cost}

We now examine the runtime of the rule learner algorithm. For this evaluation, we used an Intel Core 2 Duo, 2.4 GHz, with 4 GB RAM and a FUJITSU MHZ2250BH FFS G1 248 GB hard disk. As noted in Section 3, the algorithm is linear, which allows it to scale well with the number of examples and makes it suitable for transformation tasks where a large number of examples is available and necessary. We empirically studied the performance of the learner algorithm with an increasing number of input examples, using subsets of increasing cardinality drawn from the Books dataset. Fig.\ \ref{fig:time} shows the running time of four runs for various sample sizes. We observe a linear increase in running time as the number of input examples grows, as expected from our analysis in Section 3.
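The measurement procedure can be sketched as follows (`learn` is a placeholder for the rule learner, and the random pairs merely stand in for real example subsets):

```python
import random
import string
import time

def rand_pair(n=20):
    # Random source string and an upper-cased target, standing in
    # for a real example pair drawn from the dataset.
    s = "".join(random.choice(string.ascii_lowercase) for _ in range(n))
    return s, s.upper()

def time_learner(learn, sizes):
    """Wall-clock learning time for samples of increasing cardinality."""
    results = []
    for n in sizes:
        sample = [rand_pair() for _ in range(n)]
        start = time.perf_counter()
        for u, v in sample:
            learn(u, v)  # learn one rule per example pair
        results.append((n, time.perf_counter() - start))
    return results

# With a linear-time learner, elapsed time grows linearly in sample size.
print(time_learner(lambda u, v: None, [1000, 2000, 4000]))
```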

\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{time.pdf}
\caption{Learning time varying the sample size for the Books dataset. We considered 4 runs for each sample size.} 
\label{fig:time}
\end{figure}  
 
\subsection {Performance Comparison}

Finally, we compare STransformer to the state-of-the-art string transformation method proposed in \cite{DBLP:conf/popl/Gulwani11, DBLP:journals/cacm/GulwaniHS12} (namely, \textit{FlashFill}). Their algorithm is implemented in Microsoft Excel 2013, available as a command in Excel's toolbar. We used Excel 2013 version 15.0.4505.1001 in this evaluation. 

We measured the accuracy of both systems in solving 4 tasks (Abbreviations, Books, Songs and Dates), using the same datasets described in the previous evaluations. For each task, we randomly selected a single example from a list of examples that have regular features (e.g., common trigrams) in the data, so that the example transformation had a clear pattern that could be learned. In both systems, a rule was learned from the selected example and then used to transform the rest of the data.

For STransformer, a single rule was learned from this single example using the rule learner and then used to transform the rest of the data. For FlashFill, this single example was input into Excel's interface as the example of the desired transformation; then, we applied the FlashFill Excel command to the rest of the data.
Finally, we measured the accuracy of each system for each task. We repeated this process five times with different examples and computed the average accuracy. Table \ref{table:comparisonflashfill} shows the results. 
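The comparison protocol can be summarized in a short sketch (`learn_rule` and `apply_rule` are hypothetical placeholders for each system's learning and application steps):

```python
import random

def single_example_accuracy(dataset, learn_rule, apply_rule, runs=5, seed=0):
    """Average accuracy over `runs` single-example trials.

    dataset: list of (source, target) pairs; learn_rule and apply_rule
    are placeholders for the system under evaluation.
    """
    rng = random.Random(seed)
    accuracies = []
    for _ in range(runs):
        u, v = rng.choice(dataset)          # pick one example
        rule = learn_rule(u, v)             # learn a single rule from it
        rest = [(a, b) for a, b in dataset if (a, b) != (u, v)]
        correct = sum(apply_rule(rule, a) == b for a, b in rest)
        accuracies.append(correct / len(rest))
    return sum(accuracies) / runs

# Toy demonstration: the "rule" is trivially upper-casing.
dataset = [("ab", "AB"), ("cd", "CD"), ("ef", "EF")]
print(single_example_accuracy(dataset, lambda u, v: None,
                              lambda rule, a: a.upper()))  # → 1.0
```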
\begin{table}[h]
%\tiny
%\scriptsize
\small
\centering
\caption{Average accuracy per system. The best result per run and task is shown in bold.} 

\begin{tabular}{ | c |c | c | c | c   | } 
\hline
 System  & Abbreviations & Books &Songs & Dates\\
\hline 
 

STransformer - 1   & \textbf{39\%} & \textbf{72\%}  & \textbf{92\%}  & \textbf{8\%}\\
STransformer - 2   & 39\% & \textbf{72\%}  & \textbf{92\%}  & \textbf{8\%}\\
STransformer - 3   & 39\% & 72\%  & \textbf{92\%}  & \textbf{8\%}\\
STransformer - 4   & 39\% & 72\%  & \textbf{92\%}  & \textbf{8\%}\\
STransformer - 5   & 39\% & \textbf{72\%}  & \textbf{92\%}  & \textbf{8\%}\\
\hline 
STransformer (Average)   & 39\% & \textbf{72\%}  & \textbf{92\%}  & \textbf{8\%}\\
\hline 
FlashFill - 1   & 2\% & 9\%  & 83\%  & 1\%\\
FlashFill - 2    & \textbf{74\%} & 9\%  & 23\%  & 1\%\\
FlashFill - 3    & \textbf{74\%} & \textbf{98\%}  & 3\%  & 1\%\\
FlashFill - 4    & \textbf{74\%} & \textbf{98\%}  & 83\%  & 1\%\\
FlashFill - 5   & \textbf{74\%} & 18\%  & 83\%  & 0.8\%\\
\hline 
FlashFill (Average)    &\textbf{60\%} & 46\%  & 55\%  & 1\%\\
 \hline 
\end{tabular}  
\label{table:comparisonflashfill}
\end{table}  
 
 
In this evaluation, some tasks reported low accuracy (e.g.\ Dates) because a single rule could not cover 100\% of the universe of transformations; however, the results clearly reflect the ability of each system to produce general rules from a single example. In practice, from a single example, STransformer can correctly transform a larger portion of the data than FlashFill. 

Considering the average accuracy across all tasks, STransformer's accuracy was superior to FlashFill's. In particular, STransformer's average accuracy was superior in the Books (72\%), Songs (92\%) and Dates (8\%) tasks, compared to FlashFill's average accuracy in the Books (46\%), Songs (55\%) and Dates (1\%) tasks. FlashFill's average accuracy in the Abbreviations task was 60\%, while STransformer's was 39\%. The results indicate that in some individual runs FlashFill performed better than STransformer (e.g.\ FlashFill - 3 in the Abbreviations and Books tasks); however, STransformer's accuracy was stable, i.e., it did not vary for different examples (different runs), unlike FlashFill's accuracy, which varied considerably in all tasks for different examples. This indicates that FlashFill depends on very specific examples to produce very good accuracy, whereas STransformer produces equally good accuracy for arbitrary examples. 
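The per-task averages in Table \ref{table:comparisonflashfill} are plain means over the five runs; for instance, for FlashFill (run values taken from the table):

```python
# FlashFill accuracy per run, per task, as reported in the table.
flashfill = {
    "Abbreviations": [2, 74, 74, 74, 74],
    "Books": [9, 9, 98, 98, 18],
    "Songs": [83, 23, 3, 83, 83],
    "Dates": [1, 1, 1, 1, 0.8],
}
averages = {task: sum(runs) / len(runs) for task, runs in flashfill.items()}
print({t: round(a) for t, a in averages.items()})
# → {'Abbreviations': 60, 'Books': 46, 'Songs': 55, 'Dates': 1}
```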

These results indicate that STransformer produces more general rules than FlashFill, on average. They also expose a strong characteristic of STransformer: it is robust, i.e., its accuracy is less affected by the choice of examples than FlashFill's. This is a desirable property, given that the universe of available examples is diverse in these types of tasks. 


In conclusion, the edit-distance based transformation rules generalize much better on real data than the grammar-based string transformation approach of FlashFill. STransformer has fewer than 2000 lines of code\footnote{https://github.com/samuraraujo/StringTransformation} and can easily be integrated into PBE interfaces, such as Microsoft Excel 2013, or into data cleaning tools.
 
\section{Related Work}
\label{sec:relatedwork}
In this section, we discuss related work and other methods addressing string transformations.

\textbf{Learning Association Rules}. Arasu et al.\ \cite{DBLP:journals/pvldb/ArasuCK09} studied the problem of learning a set of transformation rules given a set of example matches. In their problem, they assume that a transformation rule maps a sequence of tokens to another sequence of tokens (e.g.\ $1st\ Ave.\ \to First\ Avenue$). These mappings or associations are then used to transform a string into another. A limitation of their approach is that rules cannot be applied to unseen tokens. For instance, the rule $North \to N$ cannot be used to transform ``South" into ``S". Moreover, their algorithm needs a large number of examples to generate useful rules. Importantly, they showed that string transformation can be used to normalize strings in record matching tasks, which improves the quality of the matches because it reduces the dissimilarity among the strings. Michelson et al.\ \cite{DBLP:conf/icai/MichelsonK09} studied the problem of heterogeneous transformations, which are translations between strings that are not characterized by a single function, e.g.\ abbreviations, synonyms and acronyms. Addressing the problem of record linkage, Patro et al.\ \cite{DBLP:conf/dexa/PatroW11} proposed an automatic method to extract the top-k high-quality transformation rules given a set of possibly co-referent record pairs. Tejada et al.\ \cite{DBLP:conf/kdd/TejadaKM02} addressed a similar problem. Although relevant, these transformations are complementary to the class of transformations that we looked at in our work: we target transformations that change the formatting of a string, rather than mapping-based transformations.

\textbf{Learning Candidate Transformations}. Okazaki et al.\ \cite{DBLP:conf/emnlp/OkazakiTAT08} studied the problem of generating candidate strings into which a given string $s$ is likely to be transformed. They propose a supervised approach that uses substring substitution rules as features and scores them using an L1-regularized logistic regression model. Their model then selects the best target string $t$ based on the probability of $t$ being a transformation of $s$. As they use a discriminative model, they require a large number of both positive and negative examples. Moreover, the authors state that their model cannot handle changes at the phrase/term level, e.g., ``Michael Jackson" and ``Jackson Michael", which we address in our work.

\textbf{Learning String Transformations from Examples}. Gulwani \cite{DBLP:conf/popl/Gulwani11} proposed a grammar-based string transformation language to express syntactic transformations. The method aims to synthesize a desired program, including loops and conditions, which together with other functions can express a transformation. Although the algorithm is designed to be efficient, the method has performance issues due to the exponential space of transformations that it has to explore. As stated by the author, the algorithm works in practice, but it is not guaranteed to work for all cases. In particular, we observed in our experiments that the method cannot process strings larger than 255 characters. This method was extended by Singh et al.\ \cite{DBLP:journals/pvldb/SinghG12} to support semantic-based transformations. Recently, Wu et al.\ \cite{DBLP:conf/aaai/WuSK12} also proposed a gram-based string transformation learner. Compared to these systems, our approach is simpler and can express all transformations listed in their papers, when the right number of examples is given. 

%It has the advantage to be based on only four solid string operations that can be implemented using the combination of variations of well-known and widely studied string manipulation algorithms (e.g.\ sorting, longest common substring). Apart from that, it does not suffer of any performance issue, it is linear with the input training size; consequently, it can scale for large  string transformation tasks.

String transformations have been studied in many other domains as well. For instance, Satta et al.\ \cite{conf/acl/SattaH97} introduce an original data structure and efficient algorithms that learn some families of transformations relevant for part-of-speech tagging and phonological rule systems. Potter's Wheel \cite{DBLP:conf/vldb/RamanH01} is a system that proposes an interactive transformation strategy for data cleaning, showing the evident need for user interaction in some transformation tasks. Our algorithm can be easily integrated into more complex transformation workflows, such as the process proposed by Potter's Wheel.

\section{Conclusions} 
We have presented a novel algorithm to learn string transformation rules from examples. The algorithm is especially useful for non-programmers who, in preparation for their data analysis, spend considerable effort on string transformations (e.g.\ data cleaning). Here, it is presented as a standalone algorithm that can be integrated into data processing tools that support the programming-by-example paradigm, such as Microsoft Excel. The empirical investigation indicates that this algorithm can learn transformation rules that generalize to a large number of strings, even when a limited number of training examples is given. Additionally, the comparison against a state-of-the-art string transformation algorithm shows a 30\% improvement in accuracy (on average), indicating that the proposed algorithm is more effective in learning a transformation from a single example in the majority of cases. 

As future research, we will investigate alternative machine learning approaches to select the rules when a limited set of features and examples is available. The rule selector proposed in this work is satisfactory and ready to be deployed in real applications; however, it may be improved by incorporating state-of-the-art feature selection techniques. Overall, the results achieved in this work can facilitate the data processing tasks of the millions of non-programmers (and programmers) who need to perform string transformations on a daily basis.  

%It has a simple but strong foundation in four basic string operations: permutation, insertion, deletion and updates. %We demonstrated that these operations can express any string transformation. 
%The proposed algorithm is based on a combination of variations of well-known algorithms for string manipulation.  
%The algorithm   can be employed in widely range of string transformation tasks. 
%We provided an implementation in Ruby programming language, which is available for download at GitHub.  

% ensure same length columns on last page (might need two sub-sequent latex runs)
\balance