\documentclass[a4paper,UKenglish]{lipics}
  %for A4 paper format use option "a4paper", for US-letter use option "letterpaper"
  %for british hyphenation rules use option "UKenglish", for american hyphenation rules use option "USenglish"
 % for section-numbered lemmas etc., use "numberwithinsect"
 
\usepackage{microtype}%if unwanted, comment out or use option "draft"

%\graphicspath{{./graphics/}}%helpful if your graphic files are in another directory

\bibliographystyle{plain}% the recommended bibstyle

% Author macros %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{TAL User Guide}
\titlerunning{TAL User Guide}%optional

\author[1]{}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%




%\documentclass[a4paper,12pt]{report}
%\usepackage{geometry} % see geometry.pdf on how to lay out the page. There's lots.
\usepackage{listings}
%\usepackage[usenames,dvipsnames]{color}
\definecolor{cus}{rgb}{0.9,0.9,0.9}
%\geometry{a4paper} % or letter or a5paper or ... etc
% \geometry{landscape} % rotated page geometry
\lstset{language=Prolog, frame=lines, backgroundcolor=\color{cus}}
% See the ``Article customise'' template for come common customisations

%\title{}
%\author{}
%\date{} % delete this line to display the current date
%
%%% BEGIN DOCUMENT
\begin{document}

\maketitle
\tableofcontents


\section{About TAL}
TAL is an Inductive Logic Programming (ILP) system. ILP is used over a wide range of problems to derive logic rules that codify a concept described extensionally by a set of positive and negative examples. A background knowledge theory defines concepts that can be used within the learning.
Compared with other ILP tools, TAL provides the following notable features:
\begin{itemize}
\item It supports non-monotonic learning
\item It can be used to perform theory revision
\item It uses configurable heuristics in the search
\item It is complete, i.e. it is able to find a solution if one exists. This implies that TAL is able to learn recursive and multi-predicate concepts, perform non-observational learning and perform predicate invention
\item It is easily debuggable, providing a graphical representation of the search
\item It supports constraint solvers over finite domains
\end{itemize}

Though most of the available ILP systems are tailored to specific tasks, TAL was developed to support the largest class of ILP problems, from deriving rules from large datasets to performing a complete search over a large search space and infinite domains.
In this guide we assume the reader is familiar with Inductive Logic Programming and with the TAL algorithm.

\section{Quick guide}
In order to run a learning task a {\em learning file} must be provided. Learning files are described in Section \ref{section:learningfile}.
To execute the learning:
\begin{verbatim}
$ ./run.sh -f path_to_learning_file
\end{verbatim}
The following options can be specified:
\begin{description}
  \item[-d] \hfill \\
	Produces an HTML file and a graphML file that show the states of the learning derivation. Single seed learning is not supported, so we advise debugging over a standard learning task first.
  \item[-y] \hfill \\
	Runs the system with YAP. The command ``yap'' must be in the system path.
  \item[-i] \hfill \\
	Calls the Prolog system with the ``-i'' option (interactive mode).
  \item[-s MS] \hfill \\
	Sets a timeout of MS seconds. For the single seed case the timeout refers to each seed.
  \item[-n N]\hfill \\
	Sets the number of solutions requested to N.
  \item[-h] \hfill \\
	Prints a help message.
\end{description}
These options override those specified in the learning file.
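
For instance, a hypothetical invocation that runs the learner under YAP with a timeout and three requested solutions might look as follows (the learning file path is illustrative):
\begin{verbatim}
$ ./run.sh -f learning/penguin.pl -y -s 60 -n 3
\end{verbatim}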

The system provides information about the search on the standard output, showing the explored hypotheses. 
The output of the system is given in the file \texttt{temp/solution.txt}, where N solutions are provided ordered by score (N is the value of the option \texttt{solution\_pool}). The file \texttt{temp/output} contains the outcome of the cross-validation. Both files can be parsed by a Prolog system.

%
%\section{Project guidelines}
%\begin{itemize}
%\item Get to a hypothesis as quickly as possible. Instead of applying resolution on current goals prefer to build new hypotheses.
%\item It may be useful to consider whether during a derivation we already encountered a certain hypothesis. In that case we could prune, or better give lower priority (that branch is likely to result in similar hypotheses as another).
%\item For tasks where performance is critical and completeness is not, use only one example as a goal and execute more than once. Then combine the hypotheses.
%\end{itemize}
%
%
\section{Learning files}
\label{section:learningfile}
To set up a learning task a {\em learning file} is needed. The file can be split across multiple files (e.g. when the number of examples is large).
It must specify
\begin{itemize}
\item (optionally) a set of options $O$
\item a background theory $B$
\item a set of examples $E$
\item a set of mode declarations $M$
\end{itemize}

\noindent For example:

{\footnotesize
\begin{lstlisting}
/*
*	O
*/

%Maximum number of body literals
option(max_body_literals, 2).
%Maximum number of rules
option(max_num_rules, 2).	
%Maximum depth of the proof
option(max_depth, 200).

/*
*	B
*/

bird(a).
bird(b).
canfly(a).

/*
*	M
*/

modeh(penguin(+bird)).
modeh(abird(+bird), [name(a)]).
modeb(abird(+bird), [name(ab)]).
modeb(\+ canfly(+bird)).

/*
*	E
*/

example(penguin(b), 1).
example(penguin(a), -1).
\end{lstlisting}}

\subsection{Options}
TAL uses a set of options that are specified in the file \texttt{./prolog/default.txt}. They can be overridden in the learning file specifying facts of the type \texttt{option(name, value)}.
\begin{description}
  \item[option(max\_body\_literals, N)] \hfill \\
Specifies $N \in [0, inf[$ the maximum number of body literals allowed in a rule.
  \item[option(max\_num\_rules, N)] \hfill \\
Specifies $N \in [1, inf[$ the maximum number of rules allowed in a solution.
  \item[option(max\_depth, N)] \hfill \\
Specifies $N \in [1, inf[$ the maximum depth of the proof procedure.
  \item[option(debug, B)] \hfill \\
Specifies $B \in \{true, false\}$, whether debugging is active.
  \item[option(timeout, MS)] \hfill \\
Specifies $MS \in \{false\} \cup [1, inf[$ the timeout for the learning task in milliseconds. $false$ disables the timeout.
  \item[option(number\_of\_solutions, N)] \hfill \\
Specifies $N \in [1, inf[$ the number of solutions required before the task terminates. This parameter is used in the termination component of the heuristics, so the effect is dependent on the specific implementation.
  \item[option(timeout\_for\_test, MS)] \hfill \\
During the derivation partial candidate hypotheses are tested over all the examples to calculate the score. $MS \in [0, inf[$ is the maximum time in milliseconds allowed for the test on one example (otherwise the test fails).
  \item[option(strategy, ST)] \hfill \\
Specifies $ST \in \{progol, breadth, full\_breadth, ...\}$ the strategy used. Strategies are described in Section \ref{section:strategies}. Custom strategies can be used.
  \item[option(single\_seed, B)] \hfill \\
Specifies $B \in \{true, false\}$ if the single seed strategy is used or not. The single seed strategy is described in Section \ref{section:singleseed}.
  \item[option(single\_seed\_ratio, N)] \hfill \\
Specifies $N \in [1, inf[$ the ratio for the single seed case. Intuitively each seed is taken to represent $N$ examples.
  \item[option(solution\_pool, N)] \hfill \\
Specifies $N \in [1, inf[$ the number of solutions that are traced in the learning process. This is the number of solutions that appear in the final solution file and also the solutions that are considered in the post learning phase of the single seed case.
  \item[option(xvalidation\_folds, N)] \hfill \\
Specifies $N \in ]-inf, inf[$ the number of folds used for cross validation. If $N$ is less than $1$, cross validation is not performed.
 \item[option(ic\_check, B)] \hfill \\
Specifies $B \in \{true, false\}$, whether the integrity constraints are checked for each hypothesis. See Section \ref{section:ic} for details.
\end{description}
All of the options refer to the learning over a seed in the case of single seed learning.
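
As a minimal sketch, a learning file could override a few of the defaults above as follows (the specific values are illustrative):
{\footnotesize
\begin{lstlisting}
%Use the breadth first strategy
option(strategy, breadth).
%Stop the whole task after 60 seconds
option(timeout, 60000).
%Keep the five best solutions
option(solution_pool, 5).
\end{lstlisting}}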

\subsection{Background knowledge}
The background knowledge is specified as a Prolog theory. 
Predicates that are not defined in the theory must be declared as builtin. For example the predicates $</2$ and $length/2$ cannot be used unless declared as follows in the learning file:
{\footnotesize
\begin{lstlisting}
builtin(<(_,_)).
builtin(length(_,_)).
\end{lstlisting}}
Constraint solvers over finite domains, such as the one provided by the \texttt{clpfd} library, can be used.
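
As a sketch, a learning file could load the solver and declare its constraint predicates as builtin (assuming the underlying Prolog system provides \texttt{library(clpfd)}):
{\footnotesize
\begin{lstlisting}
:- use_module(library(clpfd)).
builtin(#<(_,_)).
builtin(#=(_,_)).
\end{lstlisting}}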

\subsubsection{Integrity constraints}
\label{section:ic}
Integrity constraints are by default used at the abductive level, so the user can declare integrity constraints over the meta abducibles $pr$ that represent rules as logic atoms.
When the option {\em ic\_check} is enabled, each hypothesis is checked before the score is calculated. If it entails the integrity constraints then it is considered as a possible solution; otherwise it is discarded and the proof continues. Note that the check does not control floundering, so the result may not be correct (some hypotheses that do not entail the integrity constraints may be considered as possible).

\subsection{Mode declarations}
Mode declarations have a schema argument and, optionally, a second argument where options can be specified. In the example learning file above, the name used in the debugging file is set for two of the mode declarations.

\subsection{Examples}
Examples are declared as \texttt{example(e, pn)}, where \texttt{e} is the actual example and $pn$ specifies whether it is positive ($pn > 0$) or negative ($pn < 0$).
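
For instance, reusing the learning file above, additional examples are simply further facts (the extra bird \texttt{c} is illustrative):
{\footnotesize
\begin{lstlisting}
%positive examples
example(penguin(b), 1).
example(penguin(c), 1).
%negative example
example(penguin(a), -1).
\end{lstlisting}}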


\section{Heuristics}
The search in TAL is performed by an abductive logic programming system, so each state in the derivation is associated with a full abductive state, including a set of goals, a set of constraints and a set of abducibles. Some of these states correspond to the derivation of an inductive solution, in some cases one already derived in other parts of the tree and in other cases a brand new one. These states are called {\em partial hypothesis states}. All the states are associated with a score, made of a pair $(phs, s)$. 

$phs$ is $0$ for all the states that are not partial hypothesis states and is set to a different number $phs_{strategy}$ otherwise. This number depends on the particular strategy adopted and is usually set to $1$. $s$ depends on the strategy and only changes in partial hypothesis states. A pair is used to allow strategies to evaluate the score only when all the current open states are partial hypothesis states, thus choosing which solution to refine whenever no abductive steps can be performed. 

\subsection{Strategies}
\label{section:strategies}
\begin{sloppypar}
Strategies are fully defined by the following three predicates: \texttt{solution\_score/1}, \texttt{heuristic\_score/4} and \texttt{termination/1}.
\texttt{solution\_score(phs)} sets {\em phs}. \texttt{heuristic\_score(solution, evaluation, info, hs)} produces the score {\em hs} given the current solution, an {\em evaluation} list and a set of additional information (such as the depth of the solution). The evaluation list is of the type \texttt{[out\_example(fly(tweety), -1, 1), ...]}, where the second argument of $out\_example$ is $1$ if the example is positive and $-1$ if negative, and the third is $1$ if the example is entailed and $-1$ if it is not.
\texttt{termination(solutions)} is defined as true if, given the pool of solutions in {\em solutions}, the learning should terminate.

\end{sloppypar}

\subsubsection{Progol style}

{\em option(strategy, progol).} Implements a strategy inspired by Progol, where the score is given by \#(number of positive examples entailed) - \#(number of negative examples entailed) - (complexity of the solution). The learning terminates after a complete search, so it is advised to set a timeout for large problems.

\subsubsection{Breadth first}
{\em option(strategy, breadth).} It considers the depth as a score. The learning terminates when $n$ solutions are found, where $n$ is defined in the options.

\subsubsection{Custom}
The file \texttt{user/custom\_strategies.pl} can be used to implement custom strategies (a template is provided). 
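
As a minimal sketch of such a custom strategy, the following hypothetical definitions score a hypothesis by the number of misclassified examples in the evaluation list (lower is better) and terminate after the first solution; they are not part of the provided template:
{\footnotesize
\begin{lstlisting}
solution_score(1).

%count positives not entailed and negatives entailed
heuristic_score(_Solution, Evaluation, _Info, HS) :-
    findall(x, member(out_example(_, 1, -1),
                      Evaluation), FNs),
    findall(x, member(out_example(_, -1, 1),
                      Evaluation), FPs),
    length(FNs, FN), length(FPs, FP),
    HS is FN + FP.

%terminate as soon as the solution pool is non-empty
termination(Solutions) :- Solutions = [_|_].
\end{lstlisting}}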

%\section{Single seed strategy}
%\label{section:singleseed}

\section{Cover loop approach}
\label{section:coverloop}
The strategy implemented is the standard cover loop approach. It can also be used for nonmonotonic problems, but it performs best on monotonic ones.
At each iteration a current set of examples is used. From this set a positive example is chosen as seed. Then the standard learning task starts and it terminates if one of the following conditions is met:
\begin{itemize}
\item A timeout occurs
\item The termination condition of the strategy is met
\end{itemize}
Another iteration is started if the score is below a certain threshold $t$ from \texttt{option(loop\_threshold, t)} (note: all the scores are implemented such that lower is better). In this case the current solution is added to the background knowledge and the examples entailed by the current solution are subtracted to obtain the new list of examples considered. Otherwise the learning terminates.
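
The loop can be summarised in Prolog-like pseudocode (all predicate names here are illustrative, not part of the TAL API):
{\footnotesize
\begin{lstlisting}
cover_loop(B, [], B).
cover_loop(B, Examples, Final) :-
    select_seed(Examples, Seed),
    learn(B, Seed, Hypothesis, Score),
    option(loop_threshold, T),
    (   Score < T
    ->  remove_entailed(Hypothesis, Examples, Rest),
        cover_loop([Hypothesis|B], Rest, Final)
    ;   Final = B
    ).
\end{lstlisting}}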


\section{Use notes}
When using {\em ordering mode}, outputs in the head are not supported (they can be used, but the ordering is not defined for conditions that use head outputs). Furthermore, all the type declarations must ground the argument.


\section{Implementation notes}
Regardless of the execution options, the interesting solutions are asserted as 
\texttt{saved\_solution(solution, info, score)}.
\texttt{run(file, solution)} produces the saved solutions. The second argument is one such solution (its use is deprecated). 
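
As a sketch, after a run the saved solutions can be inspected from the Prolog top level (the learning file path is illustrative):
{\footnotesize
\begin{lstlisting}
?- run('learning/penguin.pl', S).
?- saved_solution(Solution, Info, Score).
\end{lstlisting}}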

%\section{APIs}

%
%\subsection{Executing the learning}
%
%
%
%\section{Guidelines}
%
%\section{Heuristics}
%
%
%\section{Internals}
%
%
%
%\subsection{Scoring}
%Each state is assigned a score. The score is internally represented as a pair $(a, b)$ where $a$ is 0 if the state does not correspond to a new inductive solution (the set of abducibles contains a new abducible that makes the current inductive solution differ wrt the father state). $b$ is application dependant and contains the real score. $b$ is thus evaluated when the proof is in a situation where we have alternative hypotheses and we need to choose one. In the case where some branches are performing other steps of the abductive proof procedure, those have priority. Note that this strategy does not help with infinite recursion.
%Not that lowest score is considered better.
%
%\subsection{Hypothesis generation}
%TAL goes through a top theory rule whenever a new condition is generated.
%{\footnotesize
%\begin{lstlisting}
%...
%        pr(B, H),
%        - cond -
%        - type checking -
%        gpr(B, H),
%...
%\end{lstlisting}}
%Considering a left to right evaluation of the rules, the abducible $pr$ is selected first. At this point, the link part of the rule has been formed but we don't have the constants. These can be generated only by checking the condition $-cond-$. 
%$-cond-$ can lead (a) to immediate failure, (b) to immediate success, (c) to the production of a new rule, (d) to the use or rules derived so far, (e) to recursion.
%(a) and (b) happen in the easiest learning scenario. 
%
\subsection{The output list}
The output list is a list of lists that is carried as an argument of body predicates and updated whenever output arguments are produced. The internal lists are of the type $[type, element_{1}, ..., element_{n}]$, where $type$ represents the type of all the elements.
The flattened representation must be able to refer to each single element of the output list. Despite the explicit partitioning of the outputs into different types, it is still possible, given an ordered list of mode declarations, to refer unambiguously to the position of each single element.
Consider the following mode declarations
{\footnotesize
\begin{lstlisting}
modeh(penguin(+bird)).	% with label m1
modeb(eats(+bird, -animal)).	% with label m2
modeb(hates(+animal, -bird)).	% with label m3
\end{lstlisting}}
and the rule $penguin(X) :- eats(X, Y), hates(Y, W), eats(W, Z)$. We can associate each element of the nested list with an integer from $1$ to the number of outputs. The output list associated with the rule is $[[bird, X, W], [animal, Y, Z]]$. For example the output codified as $(1,2)$ points to the second element of the first list, i.e. $W$.
The flattening will then be $[(m1, [], []), (m2, [], [(1,1)]), (m3, [], [(2,1)]), (m2, [], [(1,2)])]$, but we also need to link the outputs. 

Consider the following mode declaration:
{\footnotesize
\begin{lstlisting}
modeh(mother_and_pet(+person, -person, -pet)).	% with label m1
modeb(pet(+person, -pet)).	% with label m2
modeb(mother(+person, -person)).	% with label m3
\end{lstlisting}}
and the rule $mother\_and\_pet(X, Z, Y) :- pet(X, Y), mother(X, Z)$. The associated output list will be $[[person, X, Z], [pet, Y]]$. In this case the codification of the rule is $[(m1, [], []), (m2, [], [(1,1)]), (m3, [], [(1,1)])]$. A new output-link list is derived, $[(1, 2),(2, 1)]$, which corresponds to the outputs codified as $[(person, Z), (pet, Y)]$. An underlying dictionary permits the reconstruction of the rule: $[X-(1,1), Z-(1,2), Y-(2,1)]$.
%
%
%\subsection{``Flattened'' representation}
%TAL uses internally abducibles of the type $r/3$, $pr/3$ and $gpr/3$, respectively {\em rule, partial rule} and {\em ground partial rule}.
%The three arguments are
%\begin{enumerate}
%\item $ID$: The id of the rule, a variable with a finite domain or a positive integer
%\item $L$: A level, a positive integer
%\item $R$: A representation of the rule similarly to what described in the reference paper
%\end{enumerate}
%
%\subsection{Order of execution}
%\subsubsection{Links before the abducible}
%By using the links before we ground the inference as much as possible and this results in easier inference. Also, grounding the abducible results in less dynamic integrity constraints being triggered.
%The link though produces as many branches as the allowed links. 
%The $pr/$ abducible is guaranteed to be ground, except for the list containing the constants that could be grounded during the inference.
%
%\subsection{Extensions to the abductive procedure}
%The current abductive solution is controlled by a predicate $checkAs(Delta)$ that checks that we are not in a state that will inevitably lead to an inductive solution that is not allowed. More specifically it checks that for all levels there are no more than $n$ different partial hypotheses, where $n$ is the maximum number of hypotheses allowed. 
%%
%%Another extension is for efficiency. We implement the following pruning rule ``whenever a partial rule contains two conditions that do not produce out consecutively, the first must come before the second in a full order between conditions''. The full order trivially consists in the alphabetical order of the names of the mode declaration plus the constants and linking. This definitely remove redundancies in the full setting. In the guided setting one may suspect some solutions may be lost. If we perform a greedy search for example, this may affect what solution is found first thus altering the final hypotheses. In full searches though the outcome is not affected since all solutions are still within the search space.
%%
%
%Failure goals are ordered according to their size. Smaller goals should be easier to fail.
%
%\subsection{Examples}
%Examples are processed to add a level of abstraction in the proof.
%For each $example(real\_example, positive\_or\_negative)$ declared, the system adds to the top theory a clause of the type $ex(real\_example, positive\_or\_negative) :- \{not \} real\_example$.
\section*{Known issues and current work}
\begin{itemize}
\item The current version doesn't check floundering when evaluating integrity constraints.
\item There are issues using YAP over large learning files.
\end{itemize}


\section*{Contacts}

For questions, bug reports and anything else please contact {\em dcorapi@imperial.ac.uk}.

\section*{Acknowledgements}
The abductive system used in TAL was developed by Jiefei Ma (jiefei.ma03@imperial.ac.uk). 

\end{document}