% \setcounter{secnumdepth}{-1}

\chapter*{Abstract}

The work presented in this report is focused on the design and development of a generic symbolic execution framework on top of the \K Framework~\cite{rosu-serbanuta-2010-jlap}. 
We propose a generic, language-independent symbolic execution approach for languages endowed with a formal operational semantics based on term rewriting. Starting from the definition of a language $\cal L$, a new definition  $\LSym$ is automatically generated, which has the same syntax, but whose   semantics extends  ${\cal L}$'s data domains  with symbolic values and  adapts the  semantical rules  of $\cal L$ to deal with the new domains.
Then, the symbolic execution of  ${\cal L}$ programs is the concrete  execution of the corresponding $\LSym$ programs, i.e., the application of the rewrite rules in the semantics of  $\LSym$.
 We prove that the symbolic execution thus defined has the properties normally expected of it, and we illustrate the approach, together with some of its applications to formal analysis and verification, on a simple imperative language defined in the \K framework. 
 

% \setcounter{secnumdepth}{-1}
\chapter{Research description}

\section{Short overview of the thesis - current status}
\label{rd:ov}
There are many frameworks for symbolic execution nowadays, but most of them are specialized and optimized for specific languages: Java PathFinder~\cite{DBLP:conf/kbse/PasareanuR10} for Java, PEX~\cite{pex} for C\#, and so on. Most of these frameworks share a common approach; what distinguishes them is the language on which they perform symbolic execution. All of them make use of {\it symbolic values}; the symbolic execution 
of a program generates an {\it execution tree}, where each branch corresponds to a {\it program path} and is characterized by a {\it path condition}. Often, existing frameworks provide interfaces to additional tools such as model checkers or satisfiability solvers.

Our purpose is to design a generic framework for symbolic execution which is parametric in the semantics of a programming language. We take the \K operational semantics of a programming language, given as a set of rules, and we design an algorithm which produces a symbolic semantics that can be used to run programs symbolically. Each symbolic execution of a program yields a path condition; all path conditions can be obtained by calling a model checker. From the implementation point of view, the symbolic execution framework is built on top of the \K framework, version 3.0. It also provides an interface to the Z3 SMT solver and to the \K model checker.

Most of the existing approaches to symbolic execution are focused mainly on program testing, especially on test-case generation. Our approach includes a connection to an SMT solver and a model checker, so it can be used for test-case generation as well. On the other hand, having a symbolic semantics of a programming language, the challenge is to use it for reasoning about programs, especially for program verification.


% \setcounter{secnumdepth}{-1}
\chapter{Progress report}

\section{Proposed approach}

The \K definition of a language consists of a set of rewrite rules. A program, together with its state, is a constituent of a special term called a configuration. The rewrite rules have the form $\varphi \Ra \varphi'$, where $\varphi$ and $\varphi'$ are patterns. If $\varphi$ matches the configuration, then the rule replaces the matched configuration with $\varphi'$. Rewrite rules apply to the configuration until none of them can be applied anymore. Such rewrite rules describe the concrete semantics of a programming language in \K, in the sense that the semantics can run programs with concrete inputs. Since symbolic execution works with symbolic inputs, as described in~\cite{DBLP:journals/cacm/King76}, we must adapt the semantics of the language so that it can run programs symbolically. Our approach is to find the transformation from the concrete to the symbolic semantics and to implement this transformation.
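As a toy illustration of this transformation (hypothetical Python names, not the actual \K implementation), consider the rule for division: the concrete rule evaluates its side condition $I_2 \neq 0$ eagerly, whereas the symbolic rule no longer evaluates it and instead records it in the path condition:

```python
# Toy sketch of the concrete-to-symbolic rule transformation
# (hypothetical encoding; the real transformation operates on \K rules).

def concrete_div(i1, i2):
    """Concrete rule: I1 / I2 => I1 /Int I2  if I2 =/= 0."""
    if i2 == 0:
        raise ValueError("rule does not apply: side condition I2 =/= 0 fails")
    return i1 // i2

def symbolic_div(e1, e2, path_cond):
    """Symbolic rule: the side condition is not evaluated;
    it is conjoined to the path condition instead."""
    return f"({e1} / {e2})", path_cond + [f"{e2} != 0"]

result, pc = symbolic_div("a", "b", ["a >= 0"])
# result is the symbolic value "(a / b)";
# pc additionally records the constraint "b != 0"
```

Applying the symbolic rule thus never blocks on data; feasibility of the accumulated constraints is checked separately (by an SMT solver, in our implementation).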


\section{Main achievements}
The main achievements regarding the proposed approach are listed below:
\begin{itemize}
\item The design of a mechanism for transforming a \K semantics into a symbolic \K semantics
\item A formal definition of the transformation
\item Proofs of two of the most important properties of symbolic execution, namely {\it Coverage} and {\it Precision}
\item The implementation and testing of the transformation
\end{itemize}
The steps above are fully described in this report.

\section{Current papers and publications}

The research carried out so far has also produced interesting results, which have been published in various workshops and conferences:
\begin{itemize}
\item The extended \K semantics of OCL has been submitted for publication as a regular paper entitled ``Towards a K Semantics for OCL'' in the post-proceedings of the \K'11 workshop, held at Cheile Gr{\u a}di{\c s}tei, Bra{\c s}ov, Romania.
\item In order to execute real programs, the \K framework offers a tool called {\tt krun}, which is able to execute programs directly using the \K semantics of a programming language. One of its limitations was that it could not handle real input/output operations. We added this feature directly to Maude and published the paper ``Making Maude Definitions more Interactive'' in the post-proceedings of WRLA'12 (Workshop on Rewriting Logic and Applications), held in Tallinn, Estonia.
\item Improving the implementation of the \K framework, changing the syntax of the \K language, and adding program execution features made the framework evolve into a stable release. We published the paper ``Executing Formal Semantics with the K Tool'' in the proceedings of the 18th International Symposium on Formal Methods (FM'12), held in Paris, France. Related to the stable release of the \K framework, we also submitted ``The K Primer (version 2.5)'' for publication in the \K'11 post-proceedings.

\item A more technical paper is ``Automating Abstract Syntax Tree construction for Context Free Grammars'', which aims to automate the construction of \K-specific abstract syntax trees directly from the context-free grammar of a language. It was submitted and accepted for publication in the post-proceedings of SYNASC 2012, published by Conference Publishing Services (CPS). The paper will be presented in Timi{\c s}oara, Romania, on September 28$^{th}$, 2012.

\item The results regarding symbolic execution are part of the technical report "A Generic Framework for Symbolic Execution", which was published at INRIA as Rapport de Recherche no. 8189 in December 2012.
\end{itemize}



% \setcounter{secnumdepth}{-1}
\chapter{Technical report}


\section{Introduction}
Symbolic execution is a well-known program analysis technique introduced in 1976 by James C. King~\cite{DBLP:journals/cacm/King76}. Since then, it has proved its usefulness for testing, verifying, and debugging programs. Symbolic execution consists in providing programs with symbolic inputs instead of concrete ones; the execution then proceeds by processing expressions involving the symbolic inputs~\cite{DBLP:conf/kbse/2010}. The main advantage of symbolic execution is that it allows reasoning about multiple concrete executions of a program at once, and its main disadvantage is the state-space explosion caused by decision statements and loops. Recently, the technique has found renewed interest in the formal methods community due to new algorithmic developments and progress in decision procedures.
Current applications of symbolic execution include automated test input generation~\cite{klover,Staats:2010:PSE:1831708.1831732}, invariant detection~\cite{Pasareanu04verificationof}, model checking~\cite{Khurshid03generalizedsymbolic}, and proving partial program correctness~\cite{Siegel:2006:UMC:1146238.1146256,Dillon:1990:VGS:78617.78622}.

The \emph{state} of a symbolic program execution typically contains the next  statement to be executed, symbolic values of program variables, and the {\it path condition}, which  constrains past and present values of those variables (i.e., constraints on the symbolic values are accumulated on the path taken by the execution for reaching the current instruction).
The states, and the transitions between them, generate a \emph{symbolic execution tree}. When the control flow of a program is 
determined by  symbolic values (e.g., the next instruction to be executed is an if-statement, whose Boolean condition depends on symbolic values), then there is a branching in the tree. The  path condition is then used to discriminate
among  branches.
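The construction of this tree can be sketched in a few lines (a toy Python encoding with hypothetical AST constructors, not the representation used later in the paper): each \texttt{if} splits the exploration in two, each branch extending the path condition with the branch's constraint.

```python
# Toy symbolic executor (hypothetical AST: ("assign", x, expr) and
# ("if", cond, then_branch, else_branch)); symbolic expressions and
# constraints are kept as strings.

def execute(stmts, env, path_cond):
    """Return the leaves of the symbolic execution tree as
    (path_condition, final_environment) pairs."""
    if not stmts:
        return [(path_cond, env)]
    stmt, rest = stmts[0], stmts[1:]
    if stmt[0] == "assign":
        _, var, expr = stmt
        return execute(rest, {**env, var: expr}, path_cond)
    if stmt[0] == "if":
        _, cond, then_b, else_b = stmt
        # Branching: both branches are explored, each under its own
        # extension of the path condition.
        return (execute(then_b + rest, env, path_cond + [cond]) +
                execute(else_b + rest, env, path_cond + [f"not({cond})"]))
    raise ValueError(stmt)

prog = [("if", "a <= b",
         [("assign", "min", "a")],
         [("assign", "min", "b")])]
leaves = execute(prog, {}, [])
# two leaves: (["a <= b"], {"min": "a"}) and (["not(a <= b)"], {"min": "b"})
```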

Two of the  most important properties expected of  symbolic execution are:\\
\textbf{\textit{Coverage}}: for every concrete execution there is a corresponding symbolic one;\\
\textbf{\textit{Precision}}: for every symbolic execution there is a corresponding concrete one;\\
\noindent where two executions are said to be corresponding if they take the same  path.



 In this paper we propose a generic, language-independent symbolic execution approach that, under reasonable conditions, has the above properties.

\paragraph*{Our contribution}
Most of the existing tools and methodologies have been developed for specific programming languages, and most of them are not based on formal semantics.
The main contribution of this paper is a general, language-independent approach to symbolic execution, based on a language's formal semantics defined using term rewriting. Most existing operational semantics styles (small-step, big-step, reduction with evaluation contexts, \ldots) have been shown to be representable in this way~\cite{DBLP:journals/iandc/SerbanutaRM09}. The proposed framework has the desired properties and provides a good basis for analysis tools grounded in the formal semantics of programming languages.


We start by identifying the main ingredients for defining a programming language in an algebraic and term-rewriting setting: a signature, a model of that signature (including interpretations of data), and a set of rewrite rules. We distinguish between data, which are used by, but are not part of, programming languages, and non-data (e.g., statements), which are part of a language's definition. If the data are specified equationally, then our definitions are rewriting-logic specifications~\cite{DBLP:conf/rta/Meseguer00}; but unlike rewriting logic, we do not assume anything about how data are defined. This allows us to focus on the language definition itself and saves us many technical complications.
  Then, starting from the definition
 of a language  $\cal L$,  a new language definition $\LSym$ is automatically generated, with the same syntax as  $\cal L$, but whose   semantics
 extends  ${\cal L}$'s datatypes  with symbolic values and  adapts the  semantical rules  of $\cal L$ to handle the symbolic values.
 
  By definition, the symbolic semantics of ${\cal L}$ is  the semantics of  $\LSym$, and symbolic execution of programs  in
  $\cal L$ is the (usual) execution of  $\LSym$, i.e., the application of  semantical rules of $\LSym$ to the corresponding symbolic programs.
  
 We prove that  symbolic execution  has  the (coverage and precision) properties presented above. 
  We illustrate the approach on a simple imperative language \IMP, whose operational semantics is given in the \K semantic framework~\cite{rosu-serbanuta-2010-jlap}. We show, on the \IMP example, how the symbolic semantics 
can be extended in order to produce tools for program analysis.

\paragraph*{Related work}
There are many tools  for performing symbolic execution for specific programming languages. Java PathFinder~\cite{DBLP:conf/kbse/2010} is a complex symbolic execution tool which uses a model checker to explore different symbolic execution paths. The approach is applied to Java programs and it can handle recursive input data structures, arrays, preconditions, and multithreading. Java PathFinder can access through an interface several  Satisfiability Modulo Theories (SMT) solvers, and the user can also choose between multiple decision procedures.

One interesting approach combines concrete and symbolic execution, and is known as {\it concolic} execution. 
First, some concrete values are given as input and these determine an execution path. When the program encounters a decision point,
the paths not taken by concrete execution   are explored symbolically.
This type of  analysis has been implemented by several tools for performing dynamic test generation: DART~\cite{Godefroid:2005:DDA:1065010.1065036}, CUTE~\cite{Sen:2005:CCU:1081706.1081750}, EXE~\cite{Cadar06exe:automatically}, PEX~\cite{pex},~\cite{HalleuxT08}. 

Symbolic execution was initially used in automated test generation~\cite{DBLP:journals/cacm/King76}. The main goal of testing is to achieve large code coverage, meaning the exploration of as many statements and code branches as possible. Symbolic execution is well
suited for this since it is driven by the control flow of a program. 
Test-sequence generation is another application of symbolic execution, consisting in generating all code sequences which explore different paths~\cite{Lee2003523}. Symbolic execution is mainly a testing methodology, but it can also be used for proving program correctness when there is an upper bound on the number of executions of each loop. Otherwise, loops must be annotated with invariants. There are several tools (e.g., Smallfoot~\cite{DBLP:conf/aplas/BerdineCO05,DBLP:conf/aplas/2005}) which use symbolic execution together with separation logic to prove Hoare triples. There are also approaches which attempt to automatically detect invariants in programs~\cite{Pasareanu04verificationof,schmitt07}. Another useful application of symbolic execution is the static detection of runtime errors. The main idea is to perform symbolic execution on a program until a state is reached where an error occurs: a null-pointer dereference, a division by zero, etc. 

Another body of related work is symbolic execution in term-rewriting systems. The technique called \emph{narrowing}, initially used for solving equation systems in abstract data\-types, has been extended for solving reachability problems in term-rewriting systems
and has successfully been applied to the analysis of security protocols~\cite{DBLP:journals/lisp/MeseguerT07}. Such analyses rely on powerful unification-modulo-theories algorithms~\cite{DBLP:journals/entcs/EscobarMS09}, which work well for security protocols since there are unification algorithms modulo the theories involved there (exclusive-or, exponentiation,\,\ldots). This is not suitable for general programming languages, where datatypes can be arbitrary.
In our approach we replace unification by a combination of matching (with possibly altered, yet equivalent, rewrite rules) and calls to SMT solvers.
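The role of the SMT solver is only to decide feasibility of accumulated path conditions. The following toy feasibility check (brute-force search over a small input range, standing in for a call to a solver such as Z3, which the actual framework uses) illustrates the interface; all names are hypothetical:

```python
# Toy stand-in for an SMT feasibility query on a path condition.
from itertools import product

def feasible(path_cond, variables, lo=-5, hi=5):
    """Return True iff some assignment of the variables in [lo, hi]
    satisfies every constraint (constraints are Python expressions
    over the listed variables)."""
    for values in product(range(lo, hi + 1), repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(eval(c, {}, env) for c in path_cond):
            return True
    return False

assert feasible(["a <= b", "b != 0"], ["a", "b"])
assert not feasible(["a <= b", "b < a"], ["a", "b"])  # contradictory path
```

Branches whose path condition is reported infeasible are pruned from the symbolic execution tree.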

There are also various approaches that refer to applications of symbolic execution (see e.g.,~\cite{DBLP:journals/sttt/PasareanuV09}). Since our paper deals with the foundations of symbolic execution, we do not mention those approaches here. 

\vspace*{-3ex}
\paragraph*{Structure of the paper} Section~\ref{sec:example} introduces our running example (a simple imperative language \IMP) and its definition in \K. Since we want our approach to be generic both in the framework in which the operational semantics is defined and in the language, Section~\ref{sec:prelim} introduces our framework for language definitions; \K and \IMP are used only as running examples.  
Section~\ref{sec:sym} shows how the definition of a language $\cal L$ can be automatically extended to that of a language
 $\LSym$ by extending the data of $\cal L$ with symbolic values, and the rules of $\cal L$ with means to handle those symbolic values. We also relate our matching-based approach to symbolic execution with related work based on unification. Section~\ref{sec:rel} deals with the symbolic semantics and with its relation to the concrete semantics, establishing the coverage and precision results stated in this introduction. Section~\ref{sec:experiments} describes
how our approach is applied to the \K definition of \IMP and presents some applications of our symbolic execution framework to formal program analysis and verification. Conclusions and future work are given in Section~\ref{sec:conclusions}.

\begin{figure}[t]
\fontsize{8}{10}
\selectfont
\begin{center}
\begin{tabular}{ll}
\multicolumn{2}{l}{
$\begin{aligned}
\sort{Id} &::= \textrm{domain of identifiers}\\
\sort{Int} &::= \textrm{domain of integer numbers (including operations)}\\
\sort{~Bool} &::= \textrm{domain of boolean constants (including operations)}
\end{aligned}$
}\\
$
\begin{aligned}
{\nonTerminal{\sort{AExp}}} ::&\!={{\nonTerminal{\sort{Int}}}}&&\mid{{{\nonTerminal{\sort{AExp}}}}\terminal{/}{{\nonTerminal{\sort{AExp}}}}}~[\textrm{strict}]\\
&\mid{{\nonTerminal{\sort{Id}}}}{}&&\mid{{{\nonTerminal{\sort{AExp}}}}\terminal{*}{{\nonTerminal{\sort{AExp}}}}}~[\textrm{strict}]\\
&\mid{({\nonTerminal{\sort{AExp}}})}{}\hspace{-2ex}&&\mid{{{\nonTerminal{\sort{AExp}}}}\terminal{+}{{\nonTerminal{\sort{AExp}}}}}~[\textrm{strict}]\\
\end{aligned}
$&$
%\begin{aligned}
%&\mid{{{\nonTerminal{\sort{AExp}}}}\terminal{/}{{\nonTerminal{\sort{AExp}}}}}~[\textrm{strict}]\\
%&\mid{{{\nonTerminal{\sort{AExp}}}}\terminal{*}{{\nonTerminal{\sort{AExp}}}}}~[\textrm{strict}]\\
%&\mid{{{\nonTerminal{\sort{AExp}}}}\terminal{+}{{\nonTerminal{\sort{AExp}}}}}~[\textrm{strict}]\\
%\end{aligned}
$
\\
$
\begin{aligned}
{\sort{BExp}}::&\!={{\nonTerminal{\sort{Bool}}}}{}\\
&\mid {({\nonTerminal{\sort{BExp}}})}{}&&\mid {{{\nonTerminal{\sort{AExp}}}}\terminal{<=}{{\nonTerminal{\sort{AExp}}}}}~[\textrm{strict}]\\
&\mid {\terminal{not}{{\nonTerminal{\sort{BExp}}}}}~[\textrm{strict}]\hspace{-2ex}&&\mid {{{\nonTerminal{\sort{BExp}}}}\terminal{and}{{\nonTerminal{\sort{BExp}}}}}~[\textrm{strict}(1)]\\
\end{aligned}
$&$
%\begin{aligned}
%&\\
%&\mid {{{\nonTerminal{\sort{AExp}}}}\terminal{<=}{{\nonTerminal{\sort{AExp}}}}}~[\textrm{strict}]\\
%&\mid {{{\nonTerminal{\sort{BExp}}}}\terminal{and}{{\nonTerminal{\sort{BExp}}}}}~[\textrm{strict}(1)]\\
%\end{aligned}
$
\\
\multicolumn{2}{l}{
$
\begin{aligned}
{\nonTerminal{\sort{Stmt}}}::&\!={\terminal{skip}}{}
\mid{\terminal{\{}{\nonTerminal{\sort{Stmt}}}\terminal{\}}}
\mid{{{\nonTerminal{\sort{Stmt}}}}\terminal{;}{{\nonTerminal{\sort{Stmt}}}}}
\mid{{{\nonTerminal{\sort{Id}}}}\terminal{:=}{{\nonTerminal{\sort{AExp}}}}}\\
&\mid{\terminal{while}{{\nonTerminal{\sort{BExp}}}}\terminal{do}{{\nonTerminal{\sort{Stmt}}}}}{}\\
&\mid{\terminal{if}{{\nonTerminal{\sort{BExp}}}}\terminal{then}{{\nonTerminal{\sort{Stmt}}}}\terminal{else}{{\nonTerminal{\sort{Stmt}}}}}~[\textrm{strict}(1)]
\end{aligned}
$}\\
\multicolumn{2}{l}{
$\sort{Code}::=\sort{Id}\mid\sort{Int}\mid\sort{Bool}\mid\sort{AExp}\mid\sort{BExp}\mid\sort{Stmt}\mid \sort{Code}\kra\sort{Code}$}
\end{tabular}
\end{center}
\vspace{-2ex}
\caption{\label{fig:impsyn}\K Syntax of IMP}
\end{figure}


%\vspace*{-17ex}
\section{A Simple Imperative Language\\ and its Definition in \mbox{\large{\K}}}
\label{sec:example}

Our running example is \IMP, a simple imperative language intensively used in research papers. The syntax of \IMP is described in Figure~\ref{fig:impsyn} and is mostly self-explanatory since it uses BNF notation. The statements of the language are either assignments, {\it if} statements, {\it while} loops, {\it skip} (i.e., the empty statement), or blocks of statements. The attribute \textit{strict} in some production rules means that the arguments of the annotated expression/statement are evaluated before the expression/statement itself. If \textit{strict} is followed by a list of natural numbers, then it only concerns the arguments whose positions occur in the list.


\begin{figure}[H]
\centering
\begin{tabular}{lll} 
\begin{minipage}{6cm}
\begin{verbatim}
if ( a <= b )
  then if (a <= c)
    then min := a
    else min := b
  else if (b <= c)
    then min := b
    else min := c;
if (a <= b and 1 <= c)
  then x := a / (c / min)
  else skip
\end{verbatim}\vspace{-3ex}
\caption{An IMP program: computing the minimum of three numbers\label{fg:minimum}}
\end{minipage}
\end{tabular}
\end{figure}

\begin{figure}[b]
\fontsize{8}{10}
\selectfont
\centering
\begin{tabular}{lll} 
\begin{minipage}{5cm}
$
\begin{array}{ll}
\sort{Cfg}&::=\kall[black]{cfg}{\kall[black]{k}{\sort{Code}}\kall[black]{env}{\sort{Map}_{\mathit{Id},\mathit{Int}}}}
\end{array}
$
\caption{\label{impcfg}\K Configuration of IMP}
\end{minipage}
\end{tabular}
\end{figure}


%The example shown in 
Figure~\ref{fg:minimum} includes a simple \IMP program for which we detect a bug using symbolic execution (see Section~\ref{sec:experiments}).
%computes the minimum of the values stored in the variables {\tt a}, {\tt b},  {\tt c}. The program is intentionally incorrect  when {\tt a} $\leq$ {\tt b} and {\tt a} $>$ {\tt c};  in this case,  {\tt min} is set to {\tt b} instead of {\tt c}.  
%we  show later in the paper, using symbolic execution, that it includes a division by zero.



The operational semantics of \IMP is given as a set of (possibly conditional) rewrite rules.   The terms to which rules apply are called \textit{configurations}. Configurations typically contain the program to be executed, together with any additional information required for
program execution.
The structure of a configuration depends on the language being defined; for \IMP, it consists only of the program code to be executed and an environment mapping variables to values.
 Configurations are written in \K as nested structures of \textit{cells}: for  \IMP, a top cell  $\terminal{cfg}$, having 
 a subcell $\terminal{k}$ containing the code and a subcell $\terminal{env}$ containing the environment (cf.\ Figure~\ref{impcfg}). The code inside the  $\terminal{k}$ cell is represented as a list of computation tasks $C_1\kra C_2\kra\ldots$ to be executed in the given order. Computation tasks are typically statements and expressions.
The environment in the $\terminal{env}$ cell is a multiset of bindings of variables to values, e.g., 
$\terminal{a}{}\mapsto 3$.
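To fix intuitions, the cell structure and two of the semantical rules shown later in Figure~\ref{impsem} (variable lookup and assignment) can be rendered in a toy Python encoding (hypothetical: the $\terminal{k}$ cell as a list of tasks, the $\terminal{env}$ cell as a dictionary):

```python
# Toy encoding of IMP configurations: cfg = (k_cell, env_cell),
# where k_cell is a list of computation tasks and env_cell a dict.

def lookup(cfg):
    """<k> X ...</k> <env> X |-> I ...</env>
       => <k> I ...</k> <env> X |-> I ...</env>"""
    k, env = cfg
    x = k[0]                       # first task: a variable name
    return [env[x]] + k[1:], env

def assign(cfg):
    """<k> X := I ...</k> <env> X |-> _ ...</env>
       => <k> ...</k> <env> X |-> I ...</env>"""
    k, env = cfg
    _, x, i = k[0]                 # first task: (":=", X, I)
    return k[1:], {**env, x: i}

cfg = ([(":=", "a", 3), "a"], {"a": 0})
cfg = assign(cfg)   # the assignment task is consumed, env updated
cfg = lookup(cfg)   # the variable "a" is replaced by its value 3
```

Each function consumes the first computation task of the $\terminal{k}$ cell, exactly as the corresponding rewrite rule does on the matched configuration.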

\begin{figure}[h]
\begin{center}
\fontsize{8}{10}
\selectfont

$
\begin{aligned}
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{I_1\terminal{+}I_2}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{I_1+_{\mathit{Int}}I_2}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{I_1\terminal{*}I_2}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{I_1*_{\mathit{Int}}I_2}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{I_1\terminal{/}I_2}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{I_1/_{\mathit{Int}}I_2}}}{I_2\not=0}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{I_1\terminal{<=}I_2}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{I_1\le_{\mathit{Int}}I_2}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{\true\terminal{and}B}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{B}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{\false\terminal{and}B}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{\false}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{\terminal{not}B}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{\neg B}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{\terminal{skip}\;}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{S_1;S_2}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{S_1\kra S_2}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{\terminal{\{}S\terminal{\}}}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{S}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{\terminal{if}\true\terminal{then}S_1\terminal{else}S_2}}}
{\kprefix[black]{cfg}{\kall[black]{k}{S_1}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kall[black]{k}{\terminal{if}\false\terminal{then}S_1\terminal{else}S_2}}}
{\kprefix[black]{cfg}{\kall[black]{k}{S_2}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{\terminal{while}B\terminal{do}S}}}{}{}\\
&{}\quad{\kprefix[black]{cfg}{\kprefix[black]{k}{\terminal{if}B\terminal{then}\terminal{\{}S\terminal{;}
  \terminal{while}B\terminal{do}S\terminal{\}}\terminal{else}\ \terminal{skip}}}}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{X}\kprefix[black]{env}{X\mapsto I}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{I}\kprefix[black]{env}{X\mapsto I}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{X\terminal{:=}I}\kprefix[black]{env}{X\mapsto \_}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{}\kprefix[black]{env}{X\mapsto I}}}{}
\end{aligned}
$
\end{center}
\caption{\label{impsem}\K Semantics of IMP}
\end{figure}
The semantics of \IMP is shown in Figure~\ref{impsem}. Each rewrite rule in the semantics specifies how the configuration evolves when the first computation task in the $\terminal{k}$ cell is executed. Dots in a cell mean that the rest of the cell remains unchanged. Most syntactical constructions require one semantical rule. The exceptions are the conjunction operation and the $\terminal{if}$ statement, 
which have Boolean arguments and require two rules each (one rule per Boolean value).

In addition to the rules shown in Figure~\ref{impsem}, the semantics of \IMP includes additional rules induced by the \textit{strict} attribute. We show only the case of the $\terminal{if}$ statement, which is strict in its first argument. The evaluation of this argument is achieved by executing the following rules:\\

\centerline{
\fontsize{8}{10}
\selectfont
$\begin{aligned}
&\hspace{-0.5cm}\kprefix[black]{cfg}{\kall[black]{k}{\terminal{if} \mathit{BE} \terminal{then} S_1\terminal{else} S_2\kra C}} \pmb{\Rightarrow}\\
&\hspace{1cm}\kprefix[black]{cfg}{\kall[black]{k}{\mathit{BE}\kra\terminal{if} \square \terminal{then} S_1\terminal{else} S_2\kra C}}\\
&\hspace{-0.5cm}\kprefix[black]{cfg}{\kall[black]{k}{B\kra\terminal{if} \square \terminal{then} S_1\terminal{else} S_2\kra C}}\pmb{\Rightarrow} \\
&\hspace{1cm}\kprefix[black]{cfg}{\kall[black]{k}{\terminal{if} B \terminal{then} S_1\terminal{else} S_2\kra C}}
\end{aligned}
$}

Here,  $BE$ ranges over \sort{BExp} $\setminus\{\false,\true\}$, $B$ ranges over the Boolean values $\{\false,\true\}$, and 
$\square$ is a special variable, intended to receive the value of $BE$ once it is computed, typically by the other rules in the semantics. 
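These two rules (often called heating and cooling) can be mimicked in the toy list encoding of the $\terminal{k}$ cell used informally above (hypothetical Python names; \texttt{HOLE} plays the role of $\square$):

```python
# Toy heating/cooling for `if` on a list-encoded <k> cell.
HOLE = "[]"

def heat_if(k):
    """if BE then S1 else S2 ~> C  =>  BE ~> if [] then S1 else S2 ~> C"""
    (_, be, s1, s2), rest = k[0], k[1:]
    return [be, ("if", HOLE, s1, s2)] + rest

def cool_if(k):
    """B ~> if [] then S1 else S2 ~> C  =>  if B then S1 else S2 ~> C"""
    b, (_, _, s1, s2) = k[0], k[1]
    return [("if", b, s1, s2)] + k[2:]

k = [("if", "x <= 0", "skip", "skip")]
k = heat_if(k)                  # the condition is scheduled first
k = cool_if(["True"] + k[1:])   # its (elided) evaluation plugged back
# k is now [("if", "True", "skip", "skip")]
```

Heating schedules the strict argument for evaluation; cooling plugs the computed value back into the $\square$ position.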



\section{The Ingredients of a Language Definition}
\label{sec:prelim}
In this section we identify the  ingredients of a formal language definition in an algebraic and term-rewriting setting.
The concepts are then explained on the \K definition of \IMP. 
We assume the reader is familiar with  the basics of algebraic specification, rewriting, and First-Order Logic (abbreviated FOL in this paper).
A programming language $\cal L$ can be  defined as a triple $(\Sigma,\T, \S)$, consisting of:
 \begin{enumerate}
\item A many-sorted algebraic signature $\Sigma$, which includes  at least a  sort $\Cfg$ for configurations and a subsignature  $\Sigma^{\mathsf{Bool}}$ for Booleans with their usual  constants and operations.    $\Sigma$ may also include other subsignatures for other data sorts, depending on the language $\cal L$ (e.g., integers,  identifiers, lists, maps,\ldots). Let $\Sigma^\mathsf{Data}$ denote the subsignature of $\Sigma$ consisting of all data sorts and their operations. 
We assume that the sort $\Cfg$ and the syntax of  $\cal L$ are not data, i.e., they are  defined in  $\Sigma \setminus \Sigma^\mathsf{Data}$.
Let $T_\Sigma$ denote the  $\Sigma$-algebra of ground terms and $T_{\Sigma,s}$ denote the set of ground terms of  sort\;$s$. 
Given a sort-wise infinite set of variables $\Var$, let $T_\Sigma(\Var)$ denote the free $\Sigma$-algebra of terms with variables, $T_{\Sigma,s}(\Var)$ denote the set of terms of sort $s$ with variables, and $\var(t)$ denote the set of variables occurring in the term\;$t$.


\item A $\Sigma$-algebra $\T$ that gives interpretation to data and their operations.
Let  $\T_s$ denote the elements of~$\T$ that have the sort $s$; the elements of $\T_\Cfg$  are called \emph{configurations}.
 $\T$ interprets the data sorts (those included in the subsignature $\Sigma^\mathsf{Data}$) according to some  $\Sigma^\mathsf{Data}$-algebra~$\cal D$\footnote{A possible definition for ${\cal D}$  is the following one. Assume the data are defined equationally, i.e., there is a finite set of equations
$E^\mathsf{Data}$ defining the data sorts and the operations on them. Then, ${\cal D}$ can be defined as  the initial algebra of the equational specification $(\Sigma^\mathsf{Data},E^\mathsf{Data})$. We chose not to impose that data be defined in any particular way, since they are not part of the
programming  language's definition.}.
 
 $\T$ interprets the non-data sorts as  sets of ground terms over the signature 
\begin{equation}(\Sigma \setminus \Sigma^\mathsf{Data})\cup \bigcup_{d\in\it Data}{\cal D}_d\label{eq:T}
\end{equation}
where ${\cal D}_d$ denotes the carrier set of the sort $d$ in the algebra ${\cal D}$, and the elements of ${\cal D}_d$ are added to the signature $\Sigma \setminus \Sigma^\mathsf{Data}$ as constants of sort $d$\footnote{If data were defined equationally (cf.\ the previous footnote), then $\T$ would be defined as the initial algebra of the equational specification $(\Sigma,E^\mathsf{Data})$. Again, we did not choose this approach, in order to avoid assumptions about how the data are defined.}. 

Any \emph{valuation} $\rho:\Var\to\T$ is extended to a (homonymous) $\Sigma$-algebra morphism $\rho : T_\Sigma(\Var) \to \T$. The interpretation of a ground term $t$ in $\T$ is denoted by $\T_t$. If $b\in T_{\Sigma,\Bool}(\Var)$ then we write $\rho\models b$ iff $\rho(b)= {\cal D}_\true$.  For simplicity, we often write in the sequel $\true,\false$ instead of ${\cal D}_\true, {\cal D}_\false$.

\item A set $\S$ of rewrite rules, whose definition is given later in the section.
\end{enumerate}
We explain  these concepts on the \IMP example.  Nonterminals from the syntax ($\sort{Int}, \sort{Bool}, \sort{AExp}, \ldots)$ are sorts in $\Sigma$. Each production from the syntax defines an operation in $\Sigma$; for instance, the production $\sort{AExp}::=\sort{AExp}\terminal{+}\sort{AExp}$ defines the operation $\_\texttt{+}\_:\sort{AExp}\times \sort{AExp}\to \sort{AExp}$. These operations define the constructors of the result sort. For the configuration sort $\Cfg$, the only constructor is $\kall[black]{cfg}{\kall[black]{k}{\_}\kall[black]{env}{\_}}:\sort{Code}\times\mathit{Map}_{\it Id,Int}\to\Cfg$.\newline
The expression $\kall[black]{cfg}{\kall[black]{k}{X:=I\kra C}\kall[black]{env}{X\mapsto 0\  \mathit{Env}}}$ is a term of $T_\Cfg(\Var)$, where $X$ is a variable of sort \sort{Id},
$I$ is a variable of sort \sort{Int}, $C$ is a variable of sort \sort{Code} (the rest of the computation), and $\it Env$ is a variable of sort $\mathit{Map}_{\it Id,Int}$ (the rest of the environment). The data algebra ${\cal D}$ interprets \sort{Int} as the set of integers, operations like $+_{\it Int}$ (cf.\ Figure~\ref{impsem}) as the corresponding usual operations on integers, \sort{Bool} as the set of Boolean values $\{\false,\true\}$, operations like $\land$ as the usual Boolean operations, and the sort $\mathit{Map}_{\it Id,Int}$ as the multiset of maps $X\mapsto I$, where $X$ ranges over identifiers \sort{Id} and $I$ over the integers.
The other sorts, \sort{AExp}, \sort{BExp}, \sort{Stmt}, and \sort{Code}, are interpreted in the  algebra $\T$ as ground terms over a modification of the form~(\ref{eq:T}) of the signature $\Sigma$, in which 
data subterms are replaced by their interpretations in ${\cal D}$. For instance, the term $\terminal{if}\ 1 >_{Int} 0\ \terminal{then} \ 
\terminal{skip} \ \terminal{else} \ \terminal{skip}$ is interpreted as $\terminal{if}\ \true\ \terminal{then} \ 
\terminal{skip} \ \terminal{else} \ \terminal{skip}$, provided ${\cal D}_{1 >_{Int} 0} = {\cal D}_\true (=\true)$.

\medskip

\noindent We now formally introduce the notions required for defining semantical  rules.


\begin{definition}[pattern~\cite{rosu-stefanescu-2012-oopsla}]
\label{def:pattern}
A \emph{pattern} is an expression of the form $\pattern{\pi}{b}$, where  $\pi\in T_{\Sigma,\Cfg}(\Var)$ is a \emph{basic pattern}, $b\in T_{\Sigma,\Bool}(\Var)$, and $\var(b)\subseteq\var(\pi)$.
If $\gamma\!\in\!T_\Cfg$ and $\rho\,{:}\Var\!\to\!\T\!$, we write  $(\gamma,\rho)\!\models\!\pattern{\pi}{b}$ for
$\gamma{=}\rho(\pi)$ and $\rho\models b$.
\end{definition}
A basic pattern $\pi$ defines  a set of (concrete) configurations, and the condition~$b$  gives additional constraints these configurations must satisfy. In~\cite{rosu-stefanescu-2012-oopsla} patterns are encoded as FOL formulas, where $\pi$ is seen as a special predicate and hence the conjunction notation $\pattern{\pi}{b}$ (here $\foland$ is the conjunction in FOL). In this paper we  keep the  notation but separate basic patterns from constraining formulas.
We  identify   basic patterns $\pi$ with patterns $\pattern{\pi}{\true}$. 
Sample patterns are $\pattern{\kall[black]{cfg}{\kall[black]{k}{I_1\terminal{+}I_2\kra C}\kall[black]{env}{\mathit{Env}}}}{}$ and $\pattern{\kall[black]{cfg}{\kall[black]{k}{I_1\terminal{/}I_2\kra C}\kall[black]{env}{\mathit{Env}}}}{I_2\not=0}$. 

\begin{definition}[rule, transition system]
\label{def:sem}
A \emph{rule} is a pair of patterns of the form $\rrule{l}{r}{b}$ (note that $r$ is in fact the pattern $\pattern{r}{\true}$). Any set  $\S$  of rules defines a labelled transition system $(\T_\Cfg, \tran{\T}{\S})$ such that $\gamma\ltran{\alpha}{\T}{\S}\gamma'$ iff  $\alpha \eqbydef (\rrule{l}{r}{b}) \in \S$ and there is $\rho:\Var\to\T$ such that $(\gamma,\rho)\models \pattern{l}{b}$ and
$(\gamma',\rho)\models r$.
\end{definition}
Note that if data were defined as initial models of certain equational specifications, then the transition system on configurations \mbox{$(\T_\Cfg, \tran{\T}{\S})$}
could be defined as  the initial model of the rewriting-logic specification obtained by adding the rules $\S$ to those equational specifications. Not defining the data in this way gives us some freedom and saves us some technical difficulties, as  will be shown  in the next section.
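The matching-based transition of Definition~\ref{def:sem} can be sketched operationally. The following is an illustrative Python encoding only (terms as nested tuples, a hand-written matcher, and a toy \terminal{if} rule); it is not the actual \K implementation, whose rule application is far more general.

```python
# Sketch of one transition gamma --alpha--> gamma' of (T_Cfg, =>_S):
# match the rule's left-hand side against gamma (yielding a valuation rho),
# check the condition under rho, and rewrite to rho(r).
# Terms are nested tuples ("op", args...); Var wraps rule variables.

class Var:
    def __init__(self, name): self.name = name
    def __repr__(self): return self.name

def match(pat, term, rho=None):
    """Syntactic matching: returns a valuation rho with rho(pat) == term, or None."""
    rho = dict(rho or {})
    if isinstance(pat, Var):
        if pat.name in rho and rho[pat.name] != term:
            return None            # repeated variable: same value required
        rho[pat.name] = term
        return rho
    if isinstance(pat, tuple) and isinstance(term, tuple) \
            and pat[0] == term[0] and len(pat) == len(term):
        for p, t in zip(pat[1:], term[1:]):
            rho = match(p, t, rho)
            if rho is None:
                return None
        return rho
    return rho if pat == term else None

def subst(t, rho):
    """Apply a valuation to a term."""
    if isinstance(t, Var):
        return rho[t.name]
    if isinstance(t, tuple):
        return (t[0],) + tuple(subst(a, rho) for a in t[1:])
    return t

def step(rule, gamma):
    """One transition: match, check condition, rewrite."""
    l, r, cond = rule
    rho = match(l, gamma)
    if rho is not None and cond(rho):
        return subst(r, rho)
    return None

# Toy rule: if true then S1 else S2 => S1 (condition trivially true).
S1, S2 = Var("S1"), Var("S2")
if_true = (("if", True, S1, S2), S1, lambda rho: True)

gamma = ("if", True, ("skip",), ("assign", "x", 1))
print(step(if_true, gamma))   # -> ('skip',)
```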

\section{Symbolic Semantics by Data  Extension}
\label{sec:sym}
We show in this section how a new definition $(\SigmaSym,\TSym{},\SSym)$ for a language $\LSym$ is automatically generated from a given definition $(\Sigma, \T, \S)$ of a language $\cal L$. The new language $\LSym$ has the same syntax, and its semantics  extends  ${\cal L}$'s data domains  with symbolic values and  adapts the  semantical rules  of $\cal L$ to deal with the new domains.
Then, the symbolic execution of  ${\cal L}$ programs is the concrete  execution of the corresponding $\LSym$ programs, i.e., 
the application of the rewrite rules in the semantics of  $\LSym$. Building the definition  of  $\LSym$ amounts to:
\begin{enumerate}
\item extending the signature $\Sigma$ to  a symbolic signature $\SigmaSym$;
\item extending the $\Sigma$-algebra $\cal T$ to a $\SigmaSym$-algebra $\TSym$;
\item turning the concrete rules $\S$ into symbolic rules $\SSym$.
\end{enumerate}
We then obtain  the symbolic transition system $(\TSym{\CfgSym},\tran{\TSym}{\SSym})$  by using Definitions~\ref{def:pattern} and~\ref{def:sem} for $\LSym$, just like the  transition system $(\T_\Cfg, \tran{\T}{\S})$ was defined for $\cal L$.
Section~\ref{sec:rel} then deals with the relations between the two transition systems.

\subsection{Extending the Signature \mbox{\large{$\Sigma$}} to\,  a Symbolic Signature \mbox{\large{$\SigmaSym$}}}
\label{sec:sigsym}

We fix a sort-wise set of \emph{symbolic values} $\SymVar$, which are fresh with respect to $\Sigma$,
and let $\Sigma(\SymVar)$ denote the signature  $\Sigma$ enriched with $\SymVar$  as constant declarations of the corresponding sorts.
We call any (data) sort $s$ for which there is a symbolic value of sort $s$ a \emph{symbolically extensible sort}. 


\paragraph*{Assumption}
The symbolically extensible sorts are among the  data sorts.
Non-data symbolically extensible sorts are left for study in future work.

\smallskip

The symbolic signature $\SigmaSym$ includes $\Sigma(\SymVar)$ enriched with  a  new sort $\it Fol$ for FOL formulas, 
together with all the required operations that allow FOL formulas to be written as ground terms of sort  $\it Fol$.

\begin{example}
For the \IMP example this extension amounts to declaring
$$\sort{Fol}::=\sort{Bool}\mid\!(\pmb{\forall} \SymVar)\sort{Fol}\mid\!(\pmb{\exists} \SymVar) \sort{Fol} \mid\! \sort{Fol}\foland\!\sort{Fol}\mid\folneg\sort{Fol}$$
\label{fol}
\end{example}
Then,  we enrich the signature with a new operation symbol $\unsat:\sort{Fol}\to\sort{Bool}$, whose intended meaning is to identify a set of unsatisfiable formulas.  
Next, we  extend $\SigmaSym$ with a sort $\CfgSym$ and a constructor $\langle\_, \_ \rangle: \Cfg \times \sort{Fol} \to \CfgSym$, for building symbolic configurations that are pairs consisting of configurations over symbolic data and a FOL formula denoting  path conditions. Finally,  we extend the   set of variables $\Var$ with infinitely many variables of sort  $\sort{Fol}$. 


\begin{example}
 For the \IMP example we   enrich the configuration with a new cell:\\[1ex]
\centerline{
\fontsize{8}{10}
\selectfont
$
\CfgSym::=\kall[black]{cfg}{\kall[black]{k}{\sort{Code}}\kall[black]{env}{\sort{Map}_{\mathit{Id},\mathit{Int}}}\kall[black]{cnd}{\sort{Fol}}}
$}\\[1ex]
where the new cell $\terminal{cnd}$ includes a formula meant to express the path condition.
This technique of encoding the path condition in the configuration is often used in verification and program analysis tools. 
\end{example}



\subsection{Extending the Model \mbox{\large{$\T$}} to\, a Symbolic Model \mbox{\large{$\TSym$}}}
\label{sec:modelsym}
We first deal with  the \emph{symbolic domain} $\DSym$, a $\Sigma^\mathsf{Data}$-algebra with the following properties: 1) the $\Sigma^\mathsf{Data}$-algebra $\cal D$ is a sub-algebra of  $\DSym$, 2) there is an injection $\iota_{
\DSym}:\SymVar\to \DSym$, and 3) for any valuation $\vartheta_{\cal D}:\SymVar\to{\cal D}$ there is a unique algebra morphism $\vartheta^s_{{\cal D}}:\DSym\to{\cal D}$, such that the  diagram in Figure~\ref{fig:diagram}
\begin{figure}[t]
\centerline{
\begin{tikzpicture}[scale=1.5]
\node (A) at (0,1) {$\SymVar$};
\node (B) at (2,1) {$\DSym$};
\node (C) at (1,0) {${\cal D}$};
\path[->,font=\scriptsize,>=angle 60]
%(A) edge node[above]{$\iota_{\DSym}$} (B)
(B) edge node[right]{$\vartheta^s_{{\cal D}}$} (C)
(A) edge node[left]{$\vartheta_{{\cal D}}$} (C);
\path[right hook->,font=\scriptsize,>=angle 60]
(A) edge node[above]{$\iota_{\DSym}$} (B);
\end{tikzpicture}
}
\caption{\label{fig:diagram} Diagram Characterising $\vartheta^s_{{\cal D}}:\DSym\to{\cal D}$}
\end{figure}
commutes. 
The diagram says that  symbolic values are   data in $\DSym$ via the injection $\iota_{\DSym}$, and that any interpretation $\vartheta_{{\cal D}}$ of the symbolic values as concrete data is uniquely extended to an algebra  morphism $\vartheta^s_{{\cal D}}$ that  assigns concrete values to symbolic values.
For instance, $\DSym$ can be the $\Sigma^\mathsf{Data}$-algebra of expressions built over $\cal D$, where $\SymVar$ play the role of variables, or the quotient of this algebra modulo the congruence defined by some set of equations $E^{\mathfrak{s}}$ (which can be used in practice as simplification rules for symbolic expressions).

 We  leave  some freedom in choosing the symbolic domain, to allow the use of decision procedures or other efficient means for handling symbolic artefacts.

We now give the interpretation of the syntax that $\SigmaSym$ introduces with respect to
$\Sigma$.
Ground terms of sort $\sort{Fol}$ are interpreted as the 
corresponding FOL formulas. The operation symbol $\unsat$ is interpreted as a homonymous predicate and is assumed to be \emph{sound}: for any FOL formula $\phi$,  if 
$\unsat(\phi) = \true$  then $\phi$  is  unsatisfiable. 
The converse  property (if $\phi$ is unsatisfiable then $\unsat(\phi) = \true$)
 is called  \emph{completeness}\footnote{Note that soundness and completeness are relative to the (usual) semantics of FOL.} and is only required for the (essentially theoretical) precision result regarding symbolic execution\footnote{This is where defining data  as initial algebras  of equational specifications would be   too restrictive: since $\unsat$ returns a Boolean it
needs to be fully defined, as required by initial algebras. Thus, a sound and complete interpretation of $\unsat$ in $\TSym{}$ is  impossible if $\TSym{}$ was an initial algebra, since satisfiability in FOL is undecidable. 
We did not impose the definition of data in this way, and   are free  to
interpret  $\unsat$ as a decision oracle for FOL when   needed later in the paper for theoretical reasons.}.
The $\Sigma$-terms of sort $\Cfg$  are interpreted in $\TSym{}$  like in  $\T$, and the
  terms of sort $\CfgSym$ are interpreted as pairs $\langle \gamma^s,\phi\rangle$, where
  $\langle\_, \_ \rangle: \TSym_\Cfg \times \TSym_\sort{Fol} \to \TSym_{\CfgSym}$ is the interpretation of
  the 
operation symbol $\langle\_, \_ \rangle: \Cfg \times \sort{Fol} \to \CfgSym$.
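The soundness-without-completeness requirement on $\unsat$ can be illustrated by a tiny sketch. The encoding below (formulas as sets of signed atoms) is an assumption made purely for illustration; in the implementation this role is played by the Z3 SMT solver, which is likewise sound but, for full FOL, necessarily incomplete.

```python
# A sound but incomplete unsatisfiability check on conjunctions of literals:
# it answers True only when a literal and its negation occur together, and
# conservatively answers False ("don't know") otherwise. Soundness: whenever
# unsat(phi) is True, phi really is unsatisfiable. Completeness fails: some
# unsatisfiable formulas are not detected.

def unsat(literals):
    """literals: set of (atom, polarity) pairs standing for a conjunction."""
    polarity = {}
    for atom, pol in literals:
        if atom in polarity and polarity[atom] != pol:
            return True        # phi contains both a and ~a: surely unsat
        polarity[atom] = pol
    return False               # unknown cases are reported as satisfiable

print(unsat({("b", True), ("b", False)}))     # -> True   (b /\ ~b)
print(unsat({("x>0", True), ("x<0", True)}))  # -> False  (actually unsat,
                                              #    but not detected)
```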



\begin{definition}[Satisfaction Relation]
\label{def:symsat}~\\
A concrete configuration $\gamma\in\T_\Cfg$ \emph{satisfies}  a symbolic configuration $\langle \gamma^s ,\phi\rangle\in\TSym{\CfgSym}$, written  $\gamma\models \langle\gamma^s,\phi\rangle$, if there is $\vartheta:\SymVar\to \T$ such that $\gamma=\vartheta(\gamma^s)$ and $\vartheta\models \phi$. 
\end{definition}
In the above definition, the notation $\vartheta \models \phi$ means that the valuation $\vartheta$ of $\SymVar$ satisfies the FOL formula $\phi$
according to the usual satisfaction relation of FOL.

\begin{example} The concrete configuration\\
\centerline{
$\gamma \eqbydef \kall[black]{cfg}{\kall[black]{k}{\terminal{if} \true \terminal{then} \ \terminal{skip} \ \terminal{else} \ \terminal{skip}} \kall[black]{env}{.}}$}\\
satisfies the symbolic configuration\\[1ex]
$\langle \gamma^s, \phi \rangle \eqbydef \kall[black]{cfg}{\kall[black]{k}{\terminal{if} b^s \!\terminal{then}\;  \terminal{skip}\; \terminal{else} \; \terminal{skip}} \kall[black]{env}{.} \kall[black]{cnd}{b^s {=} \true}}$\\[1ex]
where the valuation $\vartheta$ is $b^s \mapsto \true$, and in this case it is directly computed from the formula $\phi \eqbydef (b^s = \true)$ included in the additional $\terminal{cnd}$ cell. Here $b^s$ is a symbolic value of sort $\sort{Bool}$.
\end{example}

\subsection{Turning the Concrete Rules \mbox{\large{$\S$}} into Symbolic Rules \mbox{\large{$\SSym$}}}
\label{sec:semsym}


We show how to automatically build the symbolic-semantics rules  $\SSym$ from the concrete semantics-rules $\S$,  by applying the three steps described below.

We first make the following \emph{assumption}: the left-hand sides of rules do not contain operations on symbolically extensible  sorts. For example, if the data sort $\sort{Map}$ is symbolically extensible, then
 the last two rules from the \IMP semantics (cf.\ Figure~\ref{impsem}):\\[1ex]
\centerline{
\fontsize{8}{10}
\selectfont
$\begin{aligned}
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{X}\kprefix[black]{env}{X\mapsto I}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{I}\kprefix[black]{env}{X\mapsto I}}}{}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{X\terminal{:=}I}\kprefix[black]{env}{X\mapsto \_}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{}\kprefix[black]{env}{X\mapsto I}}}{}
\end{aligned}$}\\[1ex]
violate the assumption, because $\kprefix[black]{env}{X\mapsto I}$
is  a shortcut for $\kall[black]{env}{(X\mapsto I) M}$ for some map variable $M$, where the juxtaposition operation is  map composition.

The assumption can sometimes be made to hold by transforming the rules into equivalent ones; for example, the two problematic  rules can be rewritten as\\[1ex]
\centerline{ \fontsize{8}{10}\selectfont
$\begin{aligned}
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{X}\kall[black]{env}{M}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{I}\kall[black]{env}{M}}}{(\mathit{lookup}(M,X) = I)}\\
&\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{X\terminal{:=}I}\kall[black]{env}{M}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{}\kall[black]{env}{\mathit{update(M,X,I)}}}}{}
\end{aligned}$
}\\[1ex]
where $\_{=}\_$ is a function symbol that returns a Boolean, assumed to exist for all data sorts, and interpreted as returning $\true$ iff its arguments are equal, and $\mathit{lookup}()$, $\mathit{update}()$ are the usual lookup and update functions for maps, which then have to be defined in a subsignature $\Sigma^{\mathsf{Map}}$ of  $\Sigma^\mathsf{Data}$. If $\sort{Map}$ is not symbolically extensible then the original   semantics  satisfies our assumption.

\paragraph{1. Linearising Rules}
\label{subsec:step1}
A rule is (left) linear if every variable occurs at most once in its left-hand side.
A nonlinear rule can always be turned into an equivalent linear one, by renaming the variables that occur more than once and adding equalities between the renamed variables and the original ones to the rule's condition.
Recall the last two rules from the original  \IMP semantics (Fig.~\ref{impsem}). These rules are non-linear because variable $X$ appears twice in the left-hand side, once in the $\kall[black]{k}{...}$ cell and once in the $\kall[black]{env}{...}$ cell. To linearise  them  we just add a new variable, say $X'$, and a condition, $X=X'$:\\[1ex]
%\begin{center}
\centerline{
\fontsize{8}{10}
\selectfont
$
\begin{aligned}
&\kprefix[black]{cfg}{\kprefix[black]{k}{X}\kprefix[black]{env}{X'\mapsto I}}\foland{X=X'}\; \pmb{\Rightarrow}\\
&\qquad\kprefix[black]{cfg}{\kprefix[black]{k}{I}\kprefix[black]{env}{X\mapsto I}}\\[0.5ex]
&\kprefix[black]{cfg}{\kprefix[black]{k}{X\terminal{:=}I}\kprefix[black]{env}{X'\mapsto \_}}\foland{X=X'}\; \pmb{\Rightarrow}\\
&\qquad\kprefix[black]{cfg}{\kprefix[black]{k}{}\kprefix[black]{env}{X\mapsto I}}
\end{aligned}
$
%\end{center}
}\\[1ex]
This process is entirely automatic. Of course, there are other ways of linearising rules, e.g., the transformation using
$\mathit{lookup}()$ and $\mathit{update}()$ shown above.
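The linearisation step can be sketched as a term traversal. The encoding below (terms as nested tuples, variables as capitalised strings, primed names for fresh variables) is an illustrative assumption, not the \K implementation.

```python
# Step 1 (linearising rules): walk the left-hand side; whenever a variable
# reappears, replace the new occurrence by a fresh variable and record an
# equality for the rule's condition.

def linearise(lhs, seen=None, eqs=None):
    """Returns (linear_lhs, equalities) with fresh primed variables."""
    seen = seen if seen is not None else {}
    eqs = eqs if eqs is not None else []
    if isinstance(lhs, str) and lhs[:1].isupper():      # a variable
        if lhs not in seen:
            seen[lhs] = 0                               # first occurrence: keep
            return lhs, eqs
        seen[lhs] += 1
        fresh = f"{lhs}'{seen[lhs]}"                    # e.g. X -> X'1
        eqs.append((lhs, fresh))                        # condition X = X'1
        return fresh, eqs
    if isinstance(lhs, tuple):                          # an operation symbol
        args = []
        for a in lhs[1:]:
            a2, eqs = linearise(a, seen, eqs)
            args.append(a2)
        return (lhs[0],) + tuple(args), eqs
    return lhs, eqs                                     # a constant

# <k> X </k> <env> X |-> I </env>  -- X occurs twice:
lhs = ("cfg", ("k", "X"), ("env", ("maplet", "X", "I")))
linear, eqs = linearise(lhs)
print(linear)   # -> ('cfg', ('k', 'X'), ('env', ('maplet', "X'1", 'I')))
print(eqs)      # -> [('X', "X'1")]
```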

 
\paragraph{2. Replacing Constants by Variables}
\label{subsec:step2}

Let $Cpos(l)$ be the set of positions $\omega$\footnote{For the  notion  of position in a term and other rewriting-related notions, see, e.g.,~\cite{Baader:1998:TR:280474}.}  of the term $l$ such that $l_\omega$ is a constant of a symbolically 
extensible sort.
The next step of our rule transformation consists in replacing all the constants of symbolically extensible sorts by fresh variables. The purpose of this step is to make rules  match any configuration, including  the symbolic ones.

Thus, we transform each rule $ \rrule{l}{r}{b}$ into the rule\\[.5ex]
\centerline{$\rrule{[l_\omega/ X_\omega]_{\omega \in Cpos(l)}}{r}{(b \wedge \bigwedge_{\omega \in Cpos(l)}  (X_\omega = l_\omega))}$,}\\[.75ex] 
where each $X_\omega$ is a new variable of the same sort as $l_\omega$.

\begin{example}
Consider the following rule for \emph{if} from the \IMP semantics:\\[1ex]
\centerline{
\fontsize{8}{10}
\selectfont
$\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{\terminal{if}\true\terminal{then}S_1\terminal{else}S_2}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{S_1}}}{}$
}\\[1ex]
Assuming Boolean is a symbolically extensible sort, we replace the constant $\true$  with a Boolean variable $B$, and add the condition $B=\true$:\\[1ex]
\centerline{
\fontsize{8}{10}
\selectfont
$\rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{\terminal{if} B\terminal{then}S_1\terminal{else}S_2}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{S_1}}}{(B{=}\true)}$
}
\end{example}
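Step 2 can likewise be sketched as a traversal. The choice of which values count as constants of symbolically extensible sorts (Booleans and integers below) is an assumption of this illustrative sketch.

```python
# Step 2: replace every constant of a symbolically extensible sort in the
# left-hand side by a fresh variable X_w, collecting the equalities X_w = l_w
# for the condition, so that the rule also matches symbolic configurations.

def abstract_constants(lhs, counter, eqs=None):
    """Returns (new_lhs, equalities); counter is a one-element mutable list."""
    eqs = eqs if eqs is not None else []
    if isinstance(lhs, (bool, int)):        # constant of an extensible sort
        counter[0] += 1
        fresh = f"C{counter[0]}"            # fresh variable of the same sort
        eqs.append((fresh, lhs))            # condition C_i = constant
        return fresh, eqs
    if isinstance(lhs, tuple):
        args = []
        for a in lhs[1:]:
            a2, eqs = abstract_constants(a, counter, eqs)
            args.append(a2)
        return (lhs[0],) + tuple(args), eqs
    return lhs, eqs

# if true then S1 else S2  ~~>  if C1 then S1 else S2  with condition C1 = true
lhs = ("if", True, "S1", "S2")
new_lhs, eqs = abstract_constants(lhs, counter=[0])
print(new_lhs)  # -> ('if', 'C1', 'S1', 'S2')
print(eqs)      # -> [('C1', True)]
```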

 
\paragraph{3. Adding Formulas to Configurations and Rules}
%\label{subsec:step3}
Let $\unsat$ be the unsatisfiability predicate in $\SigmaSym$. The last transformation step consists in transforming  each rule $\rrule{l}{r}{b}$ in $\S$ obtained after the  previous steps, into the following one:
\begin{eqnarray}
&& \rrule{\langle l,\psi\rangle}{\langle r,\psi \foland b \rangle}{\folneg \unsat(\psi\foland b)} \label{eq:transform}
\end{eqnarray}
where $\psi \in \Var$ is a variable of sort $\sort{Fol}$  and $\langle\_,\_ \rangle$
is the operation in  $\SigmaSym$. Intuitively, this means that a symbolic transition is performed on a symbolic configuration
if the conjunction of the symbolic configuration's path condition and the concrete rule's condition is not unsatisfiable. Indeed, this is what happens in the transition system since we chose to interpret $\unsat$ as a sound predicate.

\begin{example}
The last   rule for $\terminal{if}$ from the (already transformed) \IMP semantics is further transformed into the following rule in $\SSym$:
\begin{align*}
\fontsize{8}{10}
\selectfont
&\hspace{-6cm}\kprefix[black]{cfg}{\kprefix[black]{k}{\terminal{if}\!B\terminal{then}S_1\terminal{else}S_2\!\!}\kall[black]{cnd}{\psi}\!\!} \pmb{\Rightarrow}\\
\kprefix[black]{cfg}{\kprefix[black]{k}{S_1\!\!}\kall[black]{cnd}{\psi{\foland}B}\!\!}{\folneg\unsat(\psi{\foland}B)}
\end{align*}
\end{example}
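The effect of the transformation~(\ref{eq:transform}) can be sketched on the \terminal{if} rule: a symbolic step extends the path condition and is pruned only when the (sound, incomplete) $\unsat$ check succeeds. Formulas are lists of signed literals here, an illustrative encoding only.

```python
# One symbolic step on <if B then S1 else S2, psi>: both branches are
# explored; each successor's path condition is psi /\ B resp. psi /\ ~B,
# and a branch is dropped when unsat detects a contradiction.

def unsat(lits):
    """Sound, incomplete: detects only a literal together with its negation."""
    return any((a, not p) in lits for (a, p) in lits)

def symbolic_if_step(cfg):
    """cfg = (('if', B, S1, S2), path_condition) with B a symbolic Boolean."""
    (_, b, s1, s2), psi = cfg
    succs = []
    for stmt, lit in ((s1, (b, True)), (s2, (b, False))):
        phi = psi + [lit]                 # psi /\ b   resp.   psi /\ ~b
        if not unsat(phi):                # rule condition: ~unsat(psi /\ b)
            succs.append((stmt, phi))
    return succs

# Empty path condition: both branches are feasible.
print(symbolic_if_step((("if", "bs", "skip1", "skip2"), [])))
# -> [('skip1', [('bs', True)]), ('skip2', [('bs', False)])]

# Path condition already contains bs = true: the else-branch is pruned.
print(symbolic_if_step((("if", "bs", "skip1", "skip2"), [("bs", True)])))
# -> [('skip1', [('bs', True), ('bs', True)])]
```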



\subsection{Defining the Symbolic Transition System 
%\mbox{\large{$(\TSym{\CfgSym},\tran{\TSym}{\SSym})$}}
}
%\label{subsec:step3}
The triple ($\SigmaSym$, $\TSym$,   $\SSym$)  defines a language $\LSym$. Then, the
 transition system  $(\TSym_{\CfgSym},\tran{\TSym}{\SSym})$ can be defined using Definitions~\ref{def:pattern} and~\ref{def:sem} applied to $\LSym$.

For this, we    note that the
left-hand sides of the rules of $\LSym$, of the form $\langle l, \psi \rangle$, are terms in $T_{\SigmaSym, \CfgSym}(\Var)$, and  that  the conditions of the rules of~$\LSym$, of the form
$\folneg \unsat(\psi\land b)$, are terms in $T_{\SigmaSym, {\it Fol}}(\Var)$
(recall that $\Var$ has been extended to include variables of  sort $\sort{Fol}$). Since $\pattern{l}{b}$ is a pattern of $\cal L$,
 using Definition~\ref{def:pattern} applied to  $\cal L$
  we get $\var(b) \subseteq \var(l)$, and then $\var(\psi \land b) = \var(b) \cup  \{\psi\}\subseteq \var(l) \cup \{\psi\} = \var(\langle l,\psi\rangle)$. It follows,  according to   Definition~\ref{def:pattern} applied to  $\LSym$, that  the expression $ \pattern{\langle l,\psi\rangle}{\folneg \unsat(\psi\land b)}$ is  a  pattern of   $\LSym$, and  then Definition~\ref{def:sem}  for $\LSym$ gives us
  the transition system $(\TSym_{\CfgSym},\tran{\TSym}{\SSym})$.



\section{Relating the Concrete and \\ Symbolic Semantics of \mbox{\large{$\cal L$}}}
\label{sec:rel}
We now relate the concrete and symbolic semantics of $\cal L$, i.e., the transition systems  $(\T_\Cfg,\tran{\T}{\S})$ and $(\TSym_{\CfgSym},\tran{\TSym}{\SSym})$. We prove certain simulation relations between them and obtain the coverage and precision properties as corollaries.

The following technical lemma is essential for obtaining a match between the left-hand side $l$ of a rule and a symbolic configuration,
provided there is a match between $l$ and a concrete configuration that satisfies that symbolic configuration.

We denote by $f|_{A'}$ the restriction of a function $f:A\to B$ to $A' \subseteq A$. 

\begin{lemma}
\label{l:matchs}
Let $l\in T_{\Sigma,\Cfg}(\Var)$ be the left-hand side of a rule in $\S$, $\rho:\Var\to\T$ a valuation, and $\gamma^s\in\TSym_{\Cfg}$. If  $\vartheta:\SymVar\to \T$ satisfies $\vartheta(\gamma^s)=\rho(l)$ 
then there is a valuation $\rho^s:\Var\to\TSym{}$ such that $\gamma^s= \rho^s(l)$  and $\rho=\vartheta\circ\rho^s$. 
\end{lemma}
\begin{proof}
Note first that by notation abuse   we are using the valuation $\vartheta : \SymVar\to \T$ in  $\vartheta(\gamma^s)= \rho(l)$  and $\rho=\vartheta\circ\rho^s$, where we  should be using~$\vartheta^s$, the homomorphic extension of $\vartheta$ 
to terms in $\TSym$. (Remember that, by definition, $\TSym$ consists  of ground terms over a signature of the form (\ref{eq:T}) where $\DSym$ replaces $\cal D$). 

We also make the  following remark: ($\spadesuit$) for all $n,n' \in \mathbb{N}$, for all operation symbols $f,f'$ such that the result sort of $f$ is not a data sort,  and for all elements 
$\tau_1,\ldots,\tau_n, \tau'_1,\ldots,\tau'_{n'} \in \T$, if  $\T_{f}(\tau_1,\ldots,\tau_n) =\T_{f'}(\tau'_1,\ldots, \tau'_{n'})$
then $f=f'$, $n=n'$, and $\tau_i = \tau'_i$ for all $i = 1, \ldots, n$. This is because $\T_{f}(\tau_1,\ldots,\tau_n)$ is the interpretation of some ground term of a non-data sort, and such terms are interpreted as ground terms over a certain signature (of the form~(\ref{eq:T})). 
%Hence, the only way such a term  can be equal to some other term  is by being syntactically equal to it. 
We also assume without loss of generality that for each variable $y\in\Var\setminus\var(l)$ there is a symbolic value $y^s$  that does not occur in $\gamma^s$, such that $\rho(y)=\vartheta(y^s)$.

We prove the lemma by establishing a more general result, where $l$ can be any subterm of a left-hand side of a rule in $\S$. The proof goes by structural induction. There are three cases:\\[1ex]
1. $l$ is a variable $x$. Then we take $\rho^s$ such that $\rho^s(x)=\gamma^s$, and $\rho^s(y)=y^s$ for $y\not=x$.  First, $\rho^s(x)=\gamma^s$ is just
$\rho^s(l)=\gamma^s$, which proves the first conclusion of the lemma in this case. Moreover 
$\rho(x) = \rho(l)  =  \vartheta(\gamma^s) = \vartheta(\rho^s(x))$, and  for all $y \in \Var \setminus \{ x\}$,
 $\rho(y) = \vartheta(y^s) = \vartheta(\rho^s(y))$,  which proves the second conclusion.
\\[1ex]
2. $l$ is a constant $c$. 
Since the sort of $c$ is not symbolically extensible,  $\rho(c)\!=\T_c\!=\!\TSym_c\!=\!\vartheta(\gamma^s)$. Let 
$\gamma^s\!=\!f'(\gamma'^s_1,\ldots,\gamma'^s_{n'})$, thus, $\vartheta(\gamma^s)\!=\!\T_{f'}(\vartheta(\gamma'^s_1),\ldots,\vartheta(\gamma'^s_{n'}))$. Using the  remark ($\spadesuit$), this
means $f'=c$ and $n'=0$, and then $\vartheta(\gamma^s) = \gamma^s = \T_c$.
We take $\rho^s$ such that $\rho^s(y)=y^s$ for any $y$. Then, $\rho^s(l) = \rho^s(c) = \T_c = \gamma^s$, and for all $y \in \Var$,
 $\rho(y) = \vartheta(y^s) = (\vartheta\circ\rho^s)(y)$, which proves this case as well.
\\[1ex]
3. $l=f(t_1,\ldots,t_n)$. Let $\gamma^s\!=\!f'(\gamma'^s_1,\ldots,\gamma'^s_{n'})$. Thus, we have 
$\rho(l)=\T_f(\rho(t_1),\ldots,\rho(t_n))$ $=\T_{f'}(\vartheta(\gamma'^s_1),\ldots,\vartheta(\gamma'^s_{n'}))=\vartheta(\gamma'^s)$ that implies
$f=f'$, $n=n'$, and\,$\rho(t_1)=\vartheta(\gamma'^s_1)$, \ldots, $\rho(t_n)=\vartheta(\gamma'^s_n)$ by  the  remark ($\spadesuit$).
By the inductive hypothesis there are $\rho^s_i:\Var\to\TSym{}$ with $\rho^s_i(t_i)=\gamma'^s_i$ and $\rho=\vartheta\circ\rho^s_i$.
Since $l$ is linear, it follows that $\var(t_1),\ldots,\var(t_n)$ are pairwise disjoint and we can build a valuation $\rho^s:\Var\to\TSym{}$ with the following properties: (1) $\rho^s|_{\var(t_i)}=\rho^s_i|_{\var(t_i)}$ for each $i=1,\ldots,n$, and (2) $\rho^s$ equals  some arbitrarily chosen $\rho^s_i$ elsewhere. 
We have $\rho^s(l)=f(\rho^s(t_1),\ldots,\rho^s(t_n))= f(\rho^s_1(t_1),\ldots,\rho^s_n(t_n))=f(\gamma'^s_1,\ldots,\gamma'^s_n)=\gamma^s$.  The equality $\rho=\vartheta\circ\rho^s$ follows by the construction of $\rho^s$ and the inductive hypotheses.
\end{proof}

\begin{corollary}
\label{for:unique}
$\rho^s|_{\var(l)}$, for $\rho^s$ given by Lemma~\ref{l:matchs}, is unique for given $\gamma^s$ and $l$.
\end{corollary}
\begin{proof} By structural induction over $l$ and using the observation ($\spadesuit$) from above.
\end{proof}

Lemma~\ref{l:matchs}  gives us the formal ground for relating our symbolic execution with symbolic execution via unification as done in related works, e.g.,~\cite{DBLP:journals/entcs/EscobarMS09}. The main idea is that  the  substitution $\rho^s$ 
from Lemma~\ref{l:matchs}, which depends essentially only on $\gamma^s$ and $l$, generates a \textit{symbolic unifier} that \textit{subsumes}
all their \textit{concrete unifiers}. 

\begin{figure}[t]
\centerline{
\begin{tikzpicture}[scale=1.5]
\node (A) at (0,1) {$\SymVar$};
\node (B) at (2,1) {$\DSym$};
\node (C) at (2.75,0.25) {${\cal D}$};
\node (D) at (4,1) {$\TSym$};
\node (E) at (4,-1) {$\T$};
\path[right hook->,font=\scriptsize,>=angle 90]
(A) edge node[above]{$\iota_{\DSym}$} (B)
(B) edge node[above]{$\iota_{\TSym}$} (D)
(C) edge node[right]{$\iota_{\T}$} (E);
\path[->,font=\scriptsize,>=angle 90]
(B) edge node[right]{$\vartheta^s_{{\cal D}}$} (C)
(A) edge node[below]{$\vartheta_{{\cal D}}$} (C)
(A) edge node[left]{$\vartheta$} (E)
(D) edge node[right]{$\vartheta^s$} (E);
\end{tikzpicture}
}
\caption{\label{fig:bigdiagram} Diagram Characterising $\vartheta^s: \TSym \to \T$. Contains Fig.~\ref{fig:diagram} as subdiagram.}
\end{figure}

We first note that, as stated at the beginning of the proof of  Lemma~\ref{l:matchs}, the actual last statement of Lemma~\ref{l:matchs} is $\rho = \vartheta^{s} \circ \rho^s$, where  $\vartheta^{s}$  is the unique morphism from $\TSym$ to $\T$ making the outermost diagram in Figure~\ref{fig:bigdiagram} commute. 
Note that this diagram contains as a subdiagram the one in Figure~\ref{fig:diagram} (top left corner).

\begin{figure}[b]
\centerline{
\begin{tikzpicture}[scale=1.5]
%\node (A) at (0,1) {$\SymVar\uplus\var(l)$};
%\node (B) at (4,1) {$\TSym$};
%\node (D) at (4,0) {$\T$};
%\path[->,font=\scriptsize,>=angle 90]
%(A) edge node[above]{$\vartheta^s\uplus\rho^s$} (B)
%(B) edge node[right]{$\eta$} (D)
%(A) edge node[above]{$\vartheta\uplus\rho$} (D);
\node (A) at (0,2) {$\SymVar$};
\node (B) at (4,2) {$\TSym$};
\node (C) at (0,0) {$\var(l)$};
\node (D) at (4,0) {$\T$};
\node (E) at (0,1) {$\SymVar\uplus\var(l)$};
\path[->,font=\scriptsize,>=angle 60]
(A) edge node[above]{$\vartheta^s$} (B)
(A) edge node[below]{$~~~\vartheta$} (D)%$\vartheta$
(C) edge node[above]{$~~~\rho^s$} (B)%$\rho^s$
(C) edge node[above]{$\rho$} (D)
;
\path[->, thick, font=\scriptsize,>=angle 60]
(E) edge node[above]{$\vartheta^s \uplus \rho^s$} (B)
(E) edge node[above]{$\vartheta \uplus \rho$} (D)
;
\path[->, dashed, font=\scriptsize,>=angle 60]
(B) edge node[right]{$\eta$} (D)
;
\path[right hook->, font=\scriptsize,>=angle 60]
%(A) edge node[right]{} (E)
(C) edge node[right]{} (E)
;
\path[right hook->, font=\scriptsize,>=angle 60]
(A) edge node[right]{} (E)
%(C) edge node[right]{} (E)
;
\end{tikzpicture}
}
\caption{\label{fig:subsum} Subsumption Relation.}
\end{figure}
 
Next, we  need to adapt some  definitions regarding unification. For $\gamma^s \in \TSym{\Cfg}$ and $l \in T_{\Sigma,\Cfg}(\Var)$,
we say that $\vartheta \uplus \rho: \SymVar \uplus \var(l) \to \T$ is a  \emph{concrete unifier} of $\gamma^s$ and~$l$
if $\vartheta(\gamma^s) =_{\T}  \rho(l)$, and that $\vartheta^s \uplus \rho^s: \SymVar \uplus \var(l) \to \TSym$ is a \emph{symbolic unifier}
of $\gamma^s$ and~$l$ if $\vartheta^s(\gamma^s) =_{\TSym}  \rho^s(l)$. Note that the former equality holds in $\T$, while the latter
holds in $\TSym$, as emphasised by the subscripts of the equality operator. Note also that (for simplicity) the domains of   $\rho$ and $\rho^s$ were chosen to be  $\var(l)$
since the values to which they evaluate the variables outside $\var(l)$ do not matter.

We say that a symbolic unifier  $\vartheta^s \uplus \rho^s$
\emph{subsumes} a concrete unifier $\vartheta \uplus \rho$ if there is $\eta : \TSym \to \T$ such that $\vartheta = \eta \circ \vartheta^s$ and
$\rho = \eta \circ \rho^s$. 
This  is illustrated by the diagram in Figure~\ref{fig:subsum}.
What Lemma~\ref{l:matchs} tells us is that the symbolic unifier $(\iota_{\TSym} \circ \iota_{\DSym}) \uplus \rho^s : \SymVar \uplus \var(l) \to \TSym$, where $\iota_{\TSym}$ and $\iota_{\DSym}$ are the injections shown  in the diagram in Figure~\ref{fig:bigdiagram},  subsumes the concrete
unifier $\vartheta \uplus \rho$ from which $\rho^s$ can be obtained by using Lemma~\ref{l:matchs} (this can be easily seen by superposing the diagram from Figure~\ref{fig:bigdiagram} over the diagram from Figure~\ref{fig:subsum}). 
\\[1ex]
Indeed, we have
$\vartheta = \vartheta^{s} \circ (\iota_{\TSym} \circ \iota_{\DSym})$, which is given by the diagram in Figure~\ref{fig:bigdiagram}, and $\rho = \vartheta^{s} \circ \rho^s$, which is implied by Lemma~\ref{l:matchs}, thus,  $(\iota_{\TSym} \circ \iota_{\DSym}) \uplus \rho^s$ subsumes  $\vartheta \uplus \rho$. But by Corollary~\ref{for:unique}, $\rho^s = \rho^s|_{\var(l)}$ is unique for  given  $l$ and $\gamma^s$, thus, the  above reasoning   could be made for any  concrete unifier of  $l$ and $\gamma^s$, and still obtain  the same symbolic unifier $(\iota_{\TSym} \circ \iota_{\DSym}) \uplus \rho^s$  to subsume it. 
 Hence, 
this symbolic unifier of $\gamma^s$ and~$l$ subsumes all  concrete unifiers of $\gamma^s$ and~$l$.
\\[1ex]
Thus, we do not need to compute those concrete unifiers, or most general
unifiers that subsume all of them as in~\cite{DBLP:journals/entcs/EscobarMS09}, which is an advantage for us, since
there are few theories with adequate (i.e., finitary and complete) unification algorithms. On the other hand, we rely on SMT solvers for 
checking the parts of the rules' conditions that  unification algorithms check directly, namely, those parts introduced by the transformations of  $\S$ into $\SSym$ described earlier in this section.  The bottom line is: we  postpone the  inherent incompleteness issues  to  SMT solving.


%\medskip

The next lemma shows that the symbolic transition system forward-simulates the concrete transition system.  The notion of
forward simulation (and of backward simulation, used later) is borrowed from Lynch and Vaandrager~\cite{DBLP:journals/iandc/LynchV95}.
We denote by $\alphaSym \in \SSym$  the rule  obtained by transforming $\alpha \in \S$ (Section~\ref{sec:semsym}).

\begin{lemma}[For\-ward Simulation] 
\label{lem:oneside}
$(\TSym_{\CfgSym},\tran{\TSym}{\SSym})$ forward simulates $(\T_\Cfg,\tran{\T}{\S})$:
for all configurations $\gamma$,  symbolic configurations $\langle \gamma^s , \phi \rangle$ and rules $\alpha\in \S$,
if $\gamma\!\models\!\langle \gamma^s , \phi \rangle$ and $\gamma\!\ltran{\alpha}{\T}{\S}\!\gamma'$ then $ \langle \gamma^s, \phi \rangle \ltran{\alphaSym}{\TSym}{\SSym} \langle  \gamma'^s, \phi' \rangle$ and $\gamma' \models \langle \gamma'^s , \phi' \rangle$, for some $\langle \gamma'^s, \phi'\rangle$.
 \end{lemma}
\begin{proof}
Let $\alpha\eqbydef\rrule{l}{r}{b}$.
The transition  $\gamma \ltran{\alpha}{\T}{\S} \gamma'$  implies that  there is  $\rho:\Var\to\T$ such that $(\gamma,\rho)\models \pattern{l}{b}$ and
$(\gamma',\rho)\models r$. The satisfaction $\gamma \models  \langle \gamma^s , \phi \rangle$  implies that there is $\vartheta:\SymVar\to \T$ such that $\gamma=\vartheta(\gamma^s)$ and $\vartheta\models\phi$. By Lemma~\ref{l:matchs} there is  $\rho^s:\Var\to\TSym{}$ such that $\gamma^s=\rho^s(l)$ and $\rho=\vartheta\circ\rho^s$. We have  two cases:\\
%\begin{enumerate}
%\item 
1. $\unsat(\phi \foland \rho^s(b))\!=\!\false$\footnote{Recall that we guarantee only the soundness of $\unsat$ and therefore we cannot deduce here that $\phi \foland \rho^s(b)$ is satisfiable.}. Then,\,the rule $\alphaSym\!\eqbydef\!( \rrule{\langle l,\psi\rangle}{\langle r,\psi \foland b \rangle}{\folneg \unsat(\psi\foland b)})$ can be applied to the symbolic configuration $\langle \gamma^s , \phi \rangle$, yielding the transition
$\langle \gamma^s,\phi \rangle \ltran{\alphaSym}{\;\TSym{}}{\SSym} \langle \rho^s(r), \phi\foland\rho^s(b) \rangle$. Let   $\gamma'^s\eqbydef\rho^s(r)$, $\phi' \eqbydef\phi\foland\rho^s(b)$. \\
We prove  $\gamma'\models \langle \gamma'^s,\phi' \rangle$. From the hypothesis, $\gamma'=\rho(r)$, but $\rho(r)=(\vartheta\circ\rho^s)(r)=\vartheta(\rho^s(r))=\vartheta(\gamma'^s)$ which means  $\gamma'=\vartheta(\gamma'^s)$. On the other hand, $\rho\models b$ means  $\rho(b) = \true$,  which implies  $(\vartheta\circ\rho^s)(b) = \true$. Thus, $\vartheta(\rho^s(b)) = \true$, i.e.,  $\vartheta\models\rho^s(b)$. Using  $\vartheta\models\phi$, we get $\vartheta\models\phi\foland\rho^s(b)$, i.e.,  $\vartheta\models\phi'$. By Definition~\ref{def:symsat} this means $\gamma'\models \langle \gamma'^s,\phi' \rangle$. This proves the lemma in this case.\\
%\item 
2. $\unsat(\phi\foland\rho^s(b))=\true$. This is impossible, since the soundness of $\unsat$ implies   $\phi\foland\rho^s(b)$ is unsatisfiable,  in 
contradiction with $\vartheta\models\phi\foland\rho^s(b)$ that we deduce as above  from the hypotheses of the lemma. This concludes the proof. 
%\end{enumerate}
\end{proof}
\begin{remark}
Note that only the soundness of  $\unsat$ was used in this proof. This is important for implementation purposes:
since satisfiability in FOL is undecidable, sound \emph{and} complete unsatisfiability predicates cannot be implemented, whereas
sound ones are implemented by all provers.
\end{remark}
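The distinction matters in practice. The following Python sketch (ours, purely illustrative) shows a sound but incomplete $\unsat$ predicate: it answers true only when it can exhibit a syntactic contradiction, and conservatively answers false otherwise, which is exactly the direction of soundness the proof above relies on.

```python
def unsat(literals):
    """literals: a conjunction, given as a set of strings such as 'p' or '!p'.
    Sound: True is returned only for definitely unsatisfiable conjunctions.
    Incomplete: some unsatisfiable inputs may still yield False."""
    atoms = {lit.lstrip('!') for lit in literals}
    for a in atoms:
        if a in literals and '!' + a in literals:
            return True   # p and !p occur together: a definite contradiction
    return False          # "don't know" is conservatively reported as False

print(unsat({'p', '!p'}))       # True
print(unsat({'p', 'q'}))        # False
print(unsat({'p', '!q', 'q'}))  # True
```

A symbolic rule guarded by $\folneg\unsat(\cdot)$ may thus fire on a path whose condition is in fact unsatisfiable, which is precisely why only the weak precision result holds without completeness.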




For $\beta \eqbydef \beta_1 \cdots \beta_n \in \S^*$ we write  $\gamma_0 \ltran{\beta}{\T}{\S} \gamma_n$ for 
$\gamma_i \ltran{\beta_{i+1}}{\T}{\S} \gamma_{i+1}$ for all $i = 0,\ldots, n-1$, and use a similar notation for sequences of transitions
in the symbolic transition system, where we denote by $\betaSym$ the sequence $ \betaSym_1 \cdots \betaSym_n \in {\SSym}^{*}$.

 We can now state the coverage theorem as a corollary to the above lemma:

\begin{theorem}[Coverage]
With the above notations, if  $\gamma\!\ltran{\beta}{\T}{\S}\!\gamma'$ and  $\gamma\models \langle \gamma^s,\phi \rangle$ 
then there exists $\langle \gamma'^s,\phi' \rangle$ such that $\gamma'\models \langle \gamma'^s,\phi' \rangle$
and \mbox{$\langle \gamma^s,\phi \rangle\!\ltran{\betaSym}{\TSym{}}{\SSym}\!\!\langle \gamma'^s,\phi' \rangle$}.
\end{theorem}
\begin{proof} By induction on the length of $\beta$, using Lemma~\ref{lem:oneside} for the induction step.
\end{proof}
The coverage theorem  says that if a sequence $\beta$ of rewrite rules can be executed starting in some initial configuration,
then the corresponding sequence of symbolic rules can be fired as well. That is, if a program  can execute a certain control-flow path concretely,  then it can also execute that path symbolically.
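As a small illustration of coverage outside the formal setting, consider a toy two-branch program; the sketch below (in Python, with names of our choosing) checks that the symbolic path whose path condition is satisfied by the concrete input reproduces the concrete run.

```python
# toy program:  if x > 3 then y := 1 else y := 2

def concrete_run(x):
    # a concrete execution follows exactly one branch
    return ('then', 1) if x > 3 else ('else', 2)

def symbolic_paths():
    # symbolic execution yields every branch with its path condition on x
    return [('then', 1, lambda x: x > 3),
            ('else', 2, lambda x: not x > 3)]

x0 = 5
branch, y = concrete_run(x0)
covering = [(b, v) for b, v, pc in symbolic_paths() if pc(x0)]
print(covering)                  # [('then', 1)]
print((branch, y) in covering)   # True
```

Exactly one symbolic path covers the concrete run, and it agrees with it on both the branch taken and the resulting state, mirroring the statement of the theorem.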

We would like, naturally, to prove the converse  result (precision) based on a simulation result  similar to Lemma~\ref{lem:oneside}:
\emph{for all configurations $\gamma$ and symbolic configuration $\langle \gamma^s ,\phi \rangle$, if
$\gamma \models \langle \gamma^s, \phi\rangle $ and $\langle \gamma^s,\phi \rangle \ltran{\alphaSym}{\TSym{}}{\SSym}
\langle \gamma'^s,\phi' \rangle$ then there is a configuration
$\gamma'$ such that 
 $\gamma \ltran{\alpha}{\T}{\S} \gamma'$ and $\gamma' \models \langle \gamma'^s , \phi' \rangle$}.
But this \emph{is  false}.




\begin{example}
 Consider the following  (symbolic) configurations and (symbolic) rules:\\[0.5ex]
\fontsize{8}{10}
\selectfont
 $\gamma \eqbydef \kall[black]{cfg}{\kall[black]{k}{\terminal{if} \true \terminal{then} \ \terminal{x:=1} \ \terminal{else} \ \terminal{skip}} \kall[black]{env}{\texttt{y} \mapsto 5}}$,
 \\[0.5ex]
 $\langle \gamma^s, \phi \rangle \eqbydef \\ \kall[black]{cfg}{\kall[black]{k}{\terminal{if} y^s >_{\it Int} 3 \terminal{then} \ \terminal{x:=1}\ \terminal{else} \ \terminal{skip}} \kall[black]{env}{\texttt{y}\!\mapsto \!y^s} \kall[black]{cnd}{y^s >_{Int}0}}$,
 \\[0.75ex]
$\langle \gamma'^s, \phi' \rangle \eqbydef \kall[black]{cfg}{\kall[black]{k}{\terminal{skip} } \kall[black]{env}{\texttt{y}\mapsto y^s} \kall[black]{cnd}{y^s >_{Int}0\land \neg(y^s >_{\it Int}3)}}$,
\\[0.75ex]
$\alpha \eqbydef \rrule{\kprefix[black]{cfg}{\kprefix[black]{k}{\terminal{if} B\terminal{then}S_1\terminal{else}S_2}}}
{\kprefix[black]{cfg}{\kprefix[black]{k}{S_2}}}{\neg B }$,\quad and $\alphaSym \eqbydef$
\\[0.5ex]
$\kprefix[black]{cfg}{\kprefix[black]{k}{\terminal{if}B\terminal{then}S_1 \terminal{else}S_2\!\!}\!\kall[black]{cnd}{\psi}}\foland{\neg\unsat(\psi\land\neg B)}~\pmb{\Rightarrow}$\\
\indent\qquad$\kprefix[black]{cfg}{\kprefix[black]{k}{S_2\!\!}\!\kall[black]{cnd}{\psi\!\land\!\neg B}\!\!}$\vspace{-1ex}
\\[0.5ex]
Then, $\gamma \models \langle \gamma^s ,\phi\rangle$ with $y^s\mapsto 5$,  $\langle \gamma^s,\phi \rangle \ltran{\alphaSym}{\TSym{}}{\SSym}
\langle \gamma'^s,\phi' \rangle$ since $\neg(y^s>_{\it Int} 3) \land y^s>_{\it Int}0$ is satisfiable (e.g., $y^s\mapsto 2$), but the only transition starting from $\gamma$ is $\gamma\ltran{\alpha'}{\T}{\S}{\kall[black]{k}{\terminal{x := 1} } \kall[black]{env}{\texttt{y}\mapsto 5}}$, whose destination clearly  does not satisfy $\langle \gamma'^s, \phi' \rangle$.
\end{example}

Thus, we need another way of proving the precision result. The next lemma says that the concrete semantics backwards-simulates the
symbolic one:



\begin{lemma}[Backward Simulation]
\label{lem:othersidealt}
$(\T_\Cfg,\tran{\T}{\S})$ backward simulates $(\TSym_{\CfgSym},\tran{\TSym}{\SSym})$:
for all concrete configurations $\gamma'$ and all symbolic configurations $\langle \gamma^s , \phi \rangle$ and $\langle {\gamma'}^s ,\phi' \rangle$, if  $\langle \gamma^s , \phi \rangle \ltran{\alphaSym}{\TSym}{\SSym}\langle {\gamma'}^s ,\phi' \rangle$ and $\gamma'\!\models\!\langle \gamma'^s , \phi' \rangle$  then there exists $\gamma \in \T_\Cfg$ such that $\gamma \models \langle \gamma^s , \phi\rangle$ and $\gamma \ltran{\alpha}{\T}{\S} \gamma'$.
\end{lemma}
\begin{proof}
 The transition $\langle \gamma^s , \phi \rangle \ltran{\alphaSym}{\TSym}{\SSym}\langle {\gamma'}^s ,\phi' \rangle$ is
 obtained by applying the rule $\alphaSym\!\eqbydef\!(\rrule{\langle l,\psi\rangle}{\langle r,\psi\!\land\!b \rangle}{\folneg\unsat(\psi\!\foland\!b)})$. Thus,  there are: a valuation $\rho^s : \Var \to \TSym$ such that $\rho^s(l) = \gamma^s$, $\rho^s \models \folneg\unsat(\psi \foland b)$, $\rho^s(\psi)=\phi$, $\rho^s(\psi \foland b)=\phi'$,  $\rho^s(r)={\gamma'}^s$; and a valuation $\vartheta:\SymVar \to \T$ such that $\vartheta({\gamma'}^s) = \gamma'$ and $\vartheta \models \phi'$. \\[1ex]
 From the previous statement, we have $\phi'=\rho^s(\psi \foland b) = \rho^s(\psi) \foland \rho^s(b)$. Since $\vartheta \models \phi'$ we  deduce   $\vartheta \models \rho^s(\psi)$, thus,  $\vartheta \models \phi$. Let us consider $\gamma \eqbydef \vartheta(\rho^s(l)) = \vartheta(\gamma^s)$. The last two statements ensure $\gamma \models \langle \gamma^s, \phi\rangle$. There remains to prove $\gamma \ltran{\alpha}{\T}{\S} \gamma'$. 
\\[1ex]
For this, consider the valuation $\rho \eqbydef \vartheta \circ \rho^s$. From   $\vartheta \models \phi'$ we obtain
  $\vartheta \models \rho^s(b)$, which is  $(\vartheta \circ \rho^s)(b) = \true$, i.e., $\rho(b) = \true$. 
Finally, $\rho(r) = (\vartheta \circ \rho^s)(r) = \vartheta(\rho^s(r)) = \vartheta({\gamma'}^s) = \gamma'$, which proves $\gamma \ltran{\alpha}{\T}{\S} \gamma'$ and completes the proof. 
\end{proof}

A consequence of this lemma is the \emph{weak precision} theorem; it   says that if a sequence $\betaSym$ of symbolic  rules can be executed starting in some initial symbolic configuration \emph{and the final symbolic configuration that the sequence reaches is satisfiable} (by some concrete configuration), then the corresponding sequence of  concrete rules can be fired as well. A strong version of precision is proved after this one.

\begin{theorem}[Weak Precision]
With the above notations,  \mbox{$\langle \gamma^s\!\!,\!\phi \rangle\!\ltran{\betaSym}{\TSym{}}{\SSym}\!\!\langle \gamma'^s\!\!,\!\phi' \rangle$}  and  $\gamma'\models \langle \gamma'^s,\phi' \rangle$   implies that there exists $\gamma$ such that
$\gamma\!\ltran{\beta}{\T}{\S}\!\gamma'$ and 
$\gamma\models \langle \gamma^s,\phi \rangle$.\vspace*{-2ex}
\end{theorem}
\begin{proof} By induction on the length of $\betaSym$, using Lemma~\ref{lem:othersidealt} in the induction step.
\end{proof}

The strong version of precision (simply called precision), shown below, is based on the following lemma, which  assumes the completeness of the $\unsat$
predicate.

\begin{lemma}
\label{lem:otherside}
Assume that the predicate $\unsat$ is complete.
If $\langle \gamma^s , \phi \rangle \ltran{\alphaSym}{\TSym}{\SSym}\langle {\gamma'}^s ,\phi' \rangle$ then there are concrete configurations $\gamma,\gamma'$\!
such that  $\gamma \models \langle \gamma^s , \phi\rangle$, $\gamma'\!\models\!\langle \gamma'^s , \phi' \rangle$\!  and $\gamma \ltran{\alpha}{\T}{\S} \gamma'$.
\end{lemma}
\begin{proof}
The transition $\langle \gamma^s , \phi \rangle \ltran{\alphaSym}{\TSym}{\SSym}\langle {\gamma'}^s ,\phi' \rangle$ is
 obtained by applying the rule $\alphaSym \eqbydef( \rrule{\langle l,\psi\rangle}{\langle r,\psi \foland b \rangle}{\folneg \unsat(\psi\foland b) })$. Thus,  there is  $\rho^s : \Var \to \TSym$ such that $\rho^s(l)\!=\!\gamma^s$, $\rho^s\!\models\!\folneg\unsat(\psi\foland b)$, $\rho^s(\psi)\!=\!\phi$, $\rho^s(\psi\foland b)\!=\!\phi'$,  $\rho^s(r)\!=\!{\gamma'}^s$\!. From the above  we get $\unsat(\phi \foland \rho^s(b)) = \false$, and by completeness of
 $\unsat$, $\phi \foland \rho^s(b)$ is satisfiable, thus, there is $\vartheta\!:\!\SymVar\!\to\!\T$ such that $\vartheta \models \phi\foland  \rho^s(b)$. With $\gamma \eqbydef (\vartheta \circ \rho^s)(l)$, $\gamma' \eqbydef (\vartheta \circ \rho^s)(r)$ we get $\gamma \models \langle \gamma^s , \phi\rangle$, $\gamma'\!\models\!\langle \gamma'^s , \phi' \rangle$, $\gamma \ltran{\alpha}{\T}{\S} \gamma'$.
 \end{proof}

\begin{theorem}[Precision]
Assume that $\unsat$ is complete. Then
 \mbox{$\langle \gamma^s,\phi \rangle\!\ltran{\betaSym}{\TSym{}}{\SSym}\!\!\langle \gamma'^s,\phi' \rangle$}   implies 
that there exist concrete configurations $\gamma,\gamma'$ such that 
$\gamma\!\ltran{\beta}{\T}{\S}\!\gamma'$ and 
$\gamma\models \langle \gamma^s,\phi \rangle$ and $\gamma'\models \langle \gamma'^s,\phi' \rangle$.\vspace*{-2ex}
\end{theorem}
\begin{proof} By\:induction\:on\:the\:length\:of\:$\betaSym$\!, using Lemma~\ref{lem:othersidealt} and Lemma~\ref{lem:otherside} in the induction step.
\end{proof}
The above precision theorem now captures the intuition of what precision means, informally: each symbolically executable path can also be executed concretely. It is essentially a theoretical result since it assumes an oracle deciding unsatisfiability in FOL, but it is nonetheless important since it shows that symbolic execution may ``diverge'' from concrete execution only because of intrinsic undecidability results (i.e., not because of how we have defined it in this paper).



\section{Experiments}
\label{sec:experiments}
In order to show that our theoretical framework for symbolic execution is a good basis for developing symbolic analysers and verifiers grounded in the formal semantics of a language, we made the following experiments\footnote{The experiments presented in this section can be tested online at \url{http://fmse.info.uaic.ro/tools/Symbolic-IMP}.} in \K: we turned the \K definition of \IMP into a symbolic definition \IMPSym that is able to execute symbolic configurations given as input programs; we used \IMPSym together with the \K support for bounded model-checking for the static analysis of several programs; we extended \IMPSym to a new symbolic definition \IMPRL that is able to prove certain reachability formulas by transforming them into symbolic configurations to be executed; and we extended \IMPRL to a new symbolic definition \IMPHL that is able to prove Hoare triples by transforming them into symbolic configurations to be executed.

\subsection{Extending \mbox{\large{\IMP}} to \mbox{\large{\IMPSym}}}
First we added some symbolic calculus support to the \K tool. For instance, the symbolic values of sort $\sort{Int}$ (resp.\ $\sort{Bool}$) are expressions of the form $\terminal{\#symInt}\terminal{(X)}$ (resp.\ $\terminal{\#symBool}\terminal{(X)}$), where $\terminal{X}$ can be an identifier or an integer. These values can be automatically generated on request. The algebra of integers was extended with symbolic expressions, and the algebra of Booleans was extended to first-order logic.
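To give an intuition of how such a symbolic domain behaves, the following Python sketch (our illustration, not the \K implementation) builds symbolic integer terms by operator overloading: an operation with a symbolic operand constructs an expression instead of computing a number.

```python
class SymInt:
    """A symbolic integer: operations build terms rather than values."""
    def __init__(self, term):
        self.term = term                      # e.g. '#symInt(X)'
    def __add__(self, other):
        o = other.term if isinstance(other, SymInt) else str(other)
        return SymInt(f'({self.term} +Int {o})')
    def __gt__(self, other):
        o = other.term if isinstance(other, SymInt) else str(other)
        return f'({self.term} >Int {o})'      # a Boolean *formula*, not a value
    def __repr__(self):
        return self.term

x = SymInt('#symInt(X)')
print(x + 1)        # (#symInt(X) +Int 1)
print(x + 1 > 3)    # ((#symInt(X) +Int 1) >Int 3)
```

Comparisons thus yield first-order formulas that can be accumulated as path conditions, matching the extension of the Boolean algebra described above.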

We also added an interface to the Z3 SMT solver~\cite{DBLP:conf/tacas/MouraB08}, which consists of a function  {\it checkSat} that, for a given formula, returns {\it sat} if the solver finds the formula satisfiable, {\it unsat} if it finds the formula  unsatisfiable,  and  {\it unknown} if it cannot decide the (un)satisfiability.
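The contract of {\it checkSat} can be illustrated by the following toy three-valued procedure (ours, not the actual Z3 binding): it decides conjunctions of bounds on a single integer variable and answers {\it unknown} outside that fragment.

```python
def check_sat(constraints):
    """constraints: list of ('>=', n) / ('<=', n) bounds on one variable.
    Returns 'sat', 'unsat', or 'unknown', mimicking the solver interface."""
    lo, hi = float('-inf'), float('inf')
    for op, n in constraints:
        if op == '>=':
            lo = max(lo, n)
        elif op == '<=':
            hi = min(hi, n)
        else:
            return 'unknown'     # outside the fragment this toy can decide
    return 'sat' if lo <= hi else 'unsat'

print(check_sat([('>=', 1), ('<=', 5)]))   # sat
print(check_sat([('>=', 6), ('<=', 5)]))   # unsat
print(check_sat([('mod', 2)]))             # unknown
```

Both definite answers are trustworthy, while {\it unknown} is treated conservatively by the symbolic semantics, exactly as in the sound-$\unsat$ setting of the previous section.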

Then, the \K definition of \IMP is transformed into the symbolic definition  \IMPSym by performing the transformations discussed in Section~\ref{sec:sym}: linearising rules, replacing constants by variables, and adding formulas to configurations and rules.
Finally, we extended the syntax such that the input programs are symbolic configurations.

\subsection{Bounded symbolic model-checking}
We can use \IMPSym to perform static analysis of \IMP programs by bounded model-checking. Since the \K tool is based on the Maude system, we have access to Maude's specific analysis tools. One of them is the \mytt{search} command, which is used for performing bounded model-checking.

For instance, by searching all symbolic executions of the program shown in Figure~\ref{fg:minimum}, we found an execution path that ends with a division by zero. The reason is that the first part of the program wrongly computes the minimum when  \mytt{a} $\leq$ \mytt{b} and \mytt{a} $>$ \mytt{c}: in this case, it sets \mytt{min} to~\mytt{b} instead  of \mytt{c}. The denominator \mytt{c} $/$ \mytt{min} then evaluates to zero because $/$ is integer division and \mytt{min} $=$ \mytt{b}, \mytt{a} $>$ \mytt{c}, and \mytt{a} $\leq$ \mytt{b} imply \mytt{c} $<$ \mytt{min}.
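Since Figure~\ref{fg:minimum} is not reproduced here, the following Python sketch uses a hypothetical reconstruction of the flawed minimum computation, consistent with the description above; a brute-force search over small inputs plays the role of the bounded symbolic search and recovers the failing path condition.

```python
def buggy_min(a, b, c):
    # hypothetical reconstruction of the first part of the program in the
    # figure (not reproduced here): when a <= b and a > c it returns b
    # instead of c, as described in the text
    if a <= b:
        return a if a <= c else b   # bug: the else branch should return c
    return b if b <= c else c

# exhaustive search over small positive inputs, standing in for the
# bounded symbolic search
counterexamples = [(a, b, c)
                   for a in range(1, 5) for b in range(1, 5)
                   for c in range(1, 5)
                   if buggy_min(a, b, c) != min(a, b, c)]

for a, b, c in counterexamples:
    assert a <= b and a > c              # the reported path condition
    assert c // buggy_min(a, b, c) == 0  # the denominator c / min is 0

print(counterexamples[0])   # (2, 2, 1): min is set to 2 instead of 1
```

Every failure lies on exactly the path \mytt{a} $\leq$ \mytt{b}, \mytt{a} $>$ \mytt{c}, and on each such input the denominator \mytt{c}~$/$~\mytt{min} is indeed zero.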

Infinite symbolic executions can be bounded either by using the depth parameter of the \mytt{search} command or by including constraints on some variables in the input symbolic configuration.

\subsection{Extending \mbox{\large{\IMPSym}} to \mbox{\large{\IMPRL}}}
Reachability Logic~\cite{rosu-stefanescu-2012-oopsla} is a language-independent, eight-rule proof system for deriving reachability properties of systems. The formulas of reachability logic are reachability rules, which generalise the transitions of operational semantics.  If $\it MIN$ denotes the first part of the program from Figure~\ref{fg:minimum}, then we can specify that it should compute the minimum of \mytt{a}, \mytt{b}, and \mytt{c}  by the following reachability rule:
\begin{align}
&\kall[black]{cfg}{\kall[black]{k}{\it MIN} \kall[black]{env}{\terminal{a}\;\mapsto a^s \terminal{b}\;\mapsto b^s \terminal{c}\;\mapsto c^s \terminal{min}\;\mapsto \AnyVar}} \;\foland \true \;\pmb{\Rightarrow}\notag\\
&\qquad\kall[black]{cfg}{\kall[black]{k}{\kdot} \kall[black]{env}{\terminal{a}\;\mapsto {a'}^s \terminal{b}\;\mapsto {b'}^s \terminal{c}\;\mapsto {c'}^s \terminal{min}\;\mapsto {\it min'}}}\; \foland\notag\\
&\qquad({\it min'} \leq {a'}^s \land {\it min'} \leq {b'}^s \land {\it min'} \leq {c'}^s)
\label{eq:min}\vspace*{-1ex}
\end{align}

\noindent In order to check such formulas, we define a new symbolic semantics \IMPRL by enriching \IMPSym with two new statements: $\terminal{assume($\varphi$)}$  and $\terminal{match($\varphi'$)}$.
 The execution of $\terminal{assume($\varphi$)}$ creates an initial~symbolic configuration representing $\varphi$ and the execution of $\terminal{match($\varphi'$)}$ checks whether the current symbolic configuration, seen as a FOL formula, implies $\varphi'$. A reachability formula $\rrule{\varphi}{\varphi'}{}$ is transformed into a symbolic configuration\\[1ex]
 \centerline{$\cfgallimps{\terminal{assume($\varphi$)}\kra \terminal{match($\varphi'$)}}{\kdot}{\true}$}\\[1ex]
If all the execution paths are finite and end with the empty {\sf k} cell, then the reachability formula holds. Obviously, this implements only a small part of the reachability logic proof system presented in~\cite{rosu-stefanescu-2012-oopsla}.
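The assume/match mechanism can be schematised as follows (a Python sketch with interfaces of our choosing): the left-hand side seeds the symbolic states, the program is executed symbolically, and every final state must imply the right-hand side.

```python
def prove_reachability(lhs_init, program, rhs_check):
    """lhs_init(): initial symbolic states built by 'assume';
    program: state -> list of final symbolic states;
    rhs_check(state) -> bool: does the state imply the rhs formula ('match')?"""
    for init in lhs_init():
        for final in program(init):
            if not rhs_check(final):
                return False     # some path ends without matching phi'
    return True                  # all paths finite and matched: formula holds

# toy instance of {x = xs} x := x + 1 {x > xs}; a state maps x to (xs, offset)
ok = prove_reachability(
    lambda: [{'x': ('xs', 0)}],                      # assume: x holds xs + 0
    lambda s: [{'x': (s['x'][0], s['x'][1] + 1)}],   # symbolic x := x + 1
    lambda s: s['x'][1] > 0)                         # xs + k > xs iff k > 0

# a skip program does not establish the postcondition
bad = prove_reachability(
    lambda: [{'x': ('xs', 0)}],
    lambda s: [s],
    lambda s: s['x'][1] > 0)

print(ok)    # True
print(bad)   # False
```

As in the \K encoding, success on all paths establishes the reachability formula, while any path that fails the final check refutes the proof attempt.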

\subsection{Extending \mbox{\large{\IMPRL}} to \mbox{\large{\IMPHL}}}
Hoare Logic~\cite{DBLP:journals/cacm/Hoare69} is a well-known formal system designed for reasoning about the correctness of programs. It uses Hoare triples written like $\htriple{P}{C}{Q}$, where $P$ is the precondition, $C$ is a piece of code, and $Q$ is the postcondition. A Hoare triple can be encoded as a reachability rule~\cite{rosu-stefanescu-2012-oopsla}. For example, the following Hoare triple:\\[1ex]
\centerline{
$\htriple{\true}{\it MIN}{\tt min \leq a \land min \leq b \land min \leq c}$
}\\[1ex]
is encoded by the reachability rule~(\ref{eq:min}).
So, we can verify Hoare triples using \IMPRL. For that, we enriched \IMPRL with additional syntax for annotated programs and with rules transforming Hoare triples into reachability rules. An annotated program is an \IMP program enriched with pre/post conditions and invariants for $\terminal{while}$ loops. We obtain a new symbolic definition \IMPHL, whose successful execution on an annotated program AP amounts to verifying that the Hoare triple denoted by AP holds.


\section{Conclusion and Future Work}
\label{sec:conclusions}
We have presented a generic framework for the symbolic execution of programs in languages having  operational semantics defined by term-rewriting. Starting from the formal definition of a language $\cal L$, the symbolic version $\LSym$
of the language is automatically constructed, by extending (some of)  the datatypes used in $\cal L$ with symbolic values, and by modifying the semantical rules of $\cal L$ in order to make them operate on symbolic values appropriately. The symbolic semantics of $\cal L$  is then the (usual) semantics of 
$\LSym$, and symbolic execution of programs in $\cal L$ is  the (usual) execution of the corresponding programs in $\LSym$, which is  the application of the rewrite rules of the semantics of $\LSym$ to programs.

Assuming a sound unsatisfiability predicate for  first-order logic, our symbolic execution has the expected properties of \emph{coverage}, meaning that to each concrete execution there is a symbolic one that corresponds  to it, and of \emph{weak precision}, meaning that each symbolic execution that ends in a
satisfiable symbolic configuration has a concrete execution that corresponds to it. Here, correspondence means executing the same path in the control flow of the program. By assuming also a complete unsatisfiability predicate
for first-order logic one also gets the theoretical  \emph{precision} result, meaning that  each symbolic execution has a concrete execution  that corresponds to it. The latter result is essentially theoretical since first-order logic is undecidable, but it means that any ``imprecision'' is not
due to the way we defined symbolic simulation but to inherent undecidability results. 

The results obtained are the expected ones; however, they were obtained thanks to carefully constructed definitions of what the essentials of a programming language are, in an
algebraic and term-rewriting based setting. The crucial observation that  led us to the appropriate definitions is that datatypes are used by, but are not part of, a language definition, and thus should be treated differently.

Finally, we have illustrated the framework on a simple imperative language defined in the $\K$ framework by implementing some known program analysis techniques over symbolic semantics.

\paragraph*{Future Work} We are planning to use symbolic execution as the basic mechanism on which the deductive systems of program logics also developed in the  $\K$ framework (such as  reachability logic~\cite{rosu-stefanescu-2012-oopsla} and  our own  circular equivalence logic~\cite{lucanu:hal-00744374}) are built. More generally, our symbolic execution can be used for program testing, debugging, and verification, following the ideas presented in related work, but with the added value of
being  grounded in a formal operational semantics.
