
\documentclass{IEEEtran}
\usepackage{graphicx}
\usepackage{paralist}
\usepackage{lipsum}
\usepackage{amsmath}
\usepackage{amsfonts}
%\usepackage{natbib}
\usepackage{changebar}
\usepackage{rotating}
\usepackage{mdwlist}
\usepackage{tikz}
%%% algorithm 
\usepackage[linesnumbered]{algorithm2e} 
\newcommand{\Cla}{\textit{Cla}}
\newcommand{\C}{{\cal C}}
\newcommand{\M}{{\cal M}}
\renewcommand{\L}{{\cal L}}
\renewcommand{\l}{\ell}
\renewcommand{\d}{\delta}
\newcommand{\m}{\mu}
\newcommand{\F}{{\cal F}}
\newcommand{\D}{{\cal D}}
\renewcommand{\S}{{\cal S}}
\newcommand{\ins}{\textit{Ins}}
%% algorithm

%
% in the preamble
%
\newcommand{\specialcell}[2][c]{%
  \begin{tabular}[#1]{@{}c@{}}#2\end{tabular}}
\newcommand{\eat}[1]{}




\begin{document}





\title{Towards the Formal Analysis of the Smart-Card Application Protocol Data Unit}







% author names and affiliations
% use a multiple column layout for up to three different
% affiliations
\author{\IEEEauthorblockN{Andriana Gkaniatsou}
\IEEEauthorblockA{School of Informatics\\
University of Edinburgh\\
Edinburgh, UK\\
Email: a.e.gkaniatsou@sms.ed.ac.uk}
\and
\IEEEauthorblockN{Fiona McNeill}
\IEEEauthorblockA{Department of Computer Science\\
Heriot-Watt University\\
Edinburgh, UK \\
Email: f.mcneill@hw.ac.uk}
\and
\IEEEauthorblockN{Alan Bundy}
\IEEEauthorblockA{School of Informatics\\
University of Edinburgh\\
Edinburgh, UK\\
Email: a.bundy@ed.ac.uk}}

\maketitle








\begin{abstract}

A smart-card, or integrated circuit card, is advertised as one of the most secure, tamper-proof, and trusted devices for implementing confidential operations such as identification, authentication, data storage and application processing. Smart-cards are widely used for financial, communication, security and data-management purposes. One of the most commonly used standards is RSA PKCS\#11, which defines the Application Programming Interface for cryptographic devices such as smart-cards. Although much effort has been devoted to formal techniques for analysing and verifying implementations of RSA PKCS\#11, little attention has been paid to the low-level cryptographic protocols that implement them. In this paper we propose the automated analysis of the low-level communication between a smart-card and the outside world, and of the interconnection of this communication with specific RSA PKCS\#11 functions. We present our system REPROVE, which aims at analysing, and providing insight into, how specific cryptographic functions of the RSA PKCS\#11 standard are translated into the low-level communication of the smart-card. REPROVE does not require access to the card, as it bases its analysis on the communication traces, and it deals with both inter-industry and proprietary implementations. We show how REPROVE successfully analyses the implementation of the $C\_Login$ function to discover security vulnerabilities of the card.




\end{abstract}

\section{Introduction}


 A smart-card or integrated circuit card is advertised as one of the most secure, tamper-proof, and trusted devices for implementing confidential operations such as identification, authentication, data storage and application processing.
 Such operations involve communication between smart-cards and third-party systems, which can make them vulnerable to attacks.
In this paper we propose the automated analysis, based on communication traces, of the low-level cryptographic protocol that implements this communication.
Such analysis contributes a formal method for discovering security vulnerabilities in faulty implementations and for proving security properties of correct ones.

A high-level description of our proposal is shown in Figure~\ref{cardvssystem}: the card communicates with the reader, generating a communication trace that we capture and analyse. For that purpose we have an analysis module that accepts as parameters the trace and the abstract models of the cryptographic protocols. The output of the analysis module describes how the card implements specific cryptographic functions through this communication.

\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{systemvscard}
\caption{High-level overview of our technique.}
\label{cardvssystem}

\end{figure}


A smart-card has to implement all cryptography-relevant operations on the token\footnote{A token is a device that stores objects (data, keys and certificates), which can be accessed via session handles, and can perform cryptographic functions.} side, \textit{i.e.} on-card. The most commonly used cryptographic standard is RSA PKCS\#$11$~\cite{pkcs},
which defines an Application Programming Interface (API) for cryptographic devices. It specifies functions such as signing, encryption, decryption, PIN handling, \textit{etc}. API-related attacks were first introduced in~\cite{Longley} in the early
$1980$s; the vulnerability of PKCS\#$11$ to such attacks was later exposed in~\cite{Clulow03onthe, steel10}. Ever since, formally analysing properties of security APIs and reasoning about possible attacks has attracted a lot of attention~\cite{robbank,bakproof,tsalapati, steelxor, steelformalanalysis}, and many approaches have been proposed, such as model checking, theorem proving, customised decision procedures and reverse-engineering techniques~\cite{cortier,courant,youn, steelformalanalysis, steel10}.
However, security analysis has mostly focused on the
PKCS\#$11$ level rather than on implementations closely connected with the standard, such as the low-level communication between the on-card and off-card applications,
defined by the Application Protocol Data Unit (APDU).





The basic principles of the Application Protocol Data Unit are specified by ISO 7816, e.g., the available
inter-industry commands, the structure and contents of the exchanged messages, \textit{etc}. It is up to the card manufacturer whether the implementation of the standard is inter-industry or proprietary.
\eat{ However, following exactly the standard is not compulsory, thus, many smart-card implementations are based on  proprietary practises. }
%%Smart Card Attacks
If an attacker compromises the APDU level and the communication is not implemented in a secure way, he can gain access to the card's operations and consequently to sensitive information, \textit{e.g.}, keys or data that should be encrypted but is not. \cite{Murdoch-chp-PIN}, for instance, proposes a man-in-the-middle device that allows authentication without knowing the card's PIN. The proposed system intercepts and modifies the communication between the card and the terminal. Under the assumption that the APDUs are inter-industry, it sends a $success$ response to a $verify$ command that carries the wrong PIN, and thereby allows the authentication of untrustworthy parties. In \cite{barbu-eavesdropPINg-apdu-buffer} a different approach is adopted: confidentiality guarantees are bypassed under the assumption that an attacker has access to the APDU buffer\footnote{The buffer that is used to exchange data.}. \cite{smartlogig-koning} presents the SmartLogic tool, which provides full control over the smart-card communication channel for eavesdropping and man-in-the-middle attacks; the channel has to be implemented in an inter-industry way for SmartLogic to succeed. All these efforts to demonstrate security vulnerabilities of smart-cards create the need for a formal way of analysing and reasoning about their security. The above approaches deal only with inter-industry implementations, whereas proprietary ones should also be considered as they are very common. What aggravates the situation with proprietary commands is that they effectively look like a random sequence of bits whose semantics must be deciphered.




%% reverse engineering
 Our proposal is to reverse-engineer the semantics of the low-level cryptographic protocol that enables the communication between a smart-card and the external world, and to infer its interconnection with the PKCS\#11 functions. A lot of research has been conducted in the area of protocol reverse-engineering. For instance, the Polygot system \cite{polygot} automatically extracts protocol messages through binary analysis, and the Prospex system \cite{prospex} infers the protocol format and the corresponding state machines. \cite{reformat} presents ReFormat, a system for reverse-engineering encrypted messages. \cite{cho2010} proposes the inference of protocol state machines based on abstractions that the end users provide. However, all these systems either require software access, assume known message semantics, or derive only the protocol message format. None of these methodologies suits our problem, as we assume no software access and unknown message semantics, and we also target the interconnection between the communication and PKCS\#11 itself.

\paragraph*{Contributions and roadmap}  The contributions of our
work and a roadmap for the rest of this paper are as follows:
\begin{compactitem}
  \item We give an overview of how the PKCS\#11 security protocol works
  and the relevant part of the ISO/IEC 7816 standard that describes the
  commands used for the implementation of the protocol.  We then describe the
  discrepancies that arise between the inter-industry and proprietary
  definitions of the commands covered by the standard, and how these
  discrepancies aggravate the problem of analysing activity traces for
  security flaws (Section~\ref{sec:background}).
  \item We present an abstract model of the APDU layer and its interconnection with two  PKCS\#11 functions, which is 
  powerful enough to capture the semantics of both the inter-industry
  commands of the standard and their potential proprietary implementations. This model is based on decomposing the various functionalities of the API
  into finer-grained sub-functionalities and analysing how the commands of
  the standard can be used to implement these functionalities.  Based on this
  representation we present an analysis algorithm that can be used to
  automatically analyse a trace of commands and group commands according to
  their intended functionality as this has been captured by our model
  (Section~\ref{methodology}).
  \item In addition to presenting the high-level analysis algorithm, we apply
  it on a simplified example (Section~\ref{sec:example}) to aid with its
  understanding.
  \item We present some preliminary findings of manually applying our analysis
  algorithm on existing protocol implementations and show how we have been
  able to successfully detect the semantics of these implementations
  (Section~\ref{sec:evaluation}).  We believe these results pave the way for
  applying our methodology to automatically reverse engineer and analyse
  command traces for the purposes of detecting security flaws.
\end{compactitem}
Finally, we draw our conclusions and identify future work directions in
Section~\ref{sec:conclusions}.
 
 
Our proposed methodology is APDU analysis, for both inter-industry and proprietary implementations, based on communication traces. We consider the analysis as a semantic-interpretation and knowledge-reasoning problem, and we use inference mechanisms to solve it.
We model, in first-order logic, abstractions of the APDU protocol based on ISO 7816 and on assumptions about card behaviour. We use these models as background knowledge to infer possible mappings of the implementation, and then we test and refine the model continuously until
it captures the real implementation. To the best of our knowledge, this is the first formal proposal for APDU analysis and for the inference of its interconnection with the PKCS\#11 standard. The novelty of our approach is that we do not require access to the card's software and that we deal with both inter-industry and proprietary implementations.








\section{Background}
\label{sec:background}

%%% pkcs 11
\paragraph*{RSA PKCS\#11}
Security APIs are intended to allow access
to sensitive resources in a secure way. The design of such APIs is critical, as
they have to ensure the secure creation, deletion, import and export of keys
on the device. They are also responsible for permitting the
use of these keys for encryption, decryption, signature and authentication, so that if a device is exposed to malicious software the keys remain secure.
PKCS\#$11$ specifies an ANSI C API for hardware devices that can perform cryptographic functions and store cryptography-related data. It aims to isolate an application from the details of the cryptographic device.

Each time an application connects to a token it initiates a session by authenticating itself; then the application can access the objects of the token. PKCS\#$11$ provides a set of functions for key and token management, session management, object management, encryption, decryption, message digesting, signing and MACing\footnote{MAC is a message authentication code.}, verifying signatures and MACs, random number generation, parallel function management, callbacks and dual-function cryptographic operations. Some example functions are presented in Table~\ref{pkcs:key-management}.


%ieee
The sessions that an application can open can be public or private. This defines the kind of objects the application can access and the types of operations that it can perform. In this paper we focus on the $C\_Login$ function, which performs the authentication required to access and operate on private objects within the token. After a successful authentication, the application can access all objects stored in the token and perform operations over them. Authentication takes place only once, at the initiation of the first session, and is not required for the subsequent sessions. In particular, RSA PKCS\#11 defines that when an application logs into a token, all of the application's sessions with that token are logged in. Logging in successfully to a token provides access to all administrative, object management and cryptographic operations.
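The shared login state described above can be illustrated with a toy model in Python. This is a sketch of the semantics only; the class and method names below are our own and are not the PKCS\#11 C API:

```python
class Token:
    """Toy model of the PKCS#11 rule that logging in once logs in
    every session the application holds with that token."""

    def __init__(self, pin: str):
        self._pin = pin
        self.logged_in = False  # authentication state shared by all sessions

    def open_session(self) -> "Session":
        return Session(self)

    def login(self, pin: str) -> bool:
        """Analogue of C_Login: one successful call affects all sessions."""
        self.logged_in = (pin == self._pin)
        return self.logged_in


class Session:
    def __init__(self, token: Token):
        self.token = token

    def can_access_private_objects(self) -> bool:
        # Access to private objects depends on the token-wide state,
        # not on per-session authentication.
        return self.token.logged_in


token = Token(pin="1234")
s1, s2 = token.open_session(), token.open_session()
token.login("1234")  # both s1 and s2 are now logged in
```

Note how a session opened after the login call is also authenticated, mirroring the standard's token-wide login semantics.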


\begin{table}[h!]
  \centering
  \begin{tabular}{| l | c |  }
\hline
\small Function & \small Meaning \\ \hline \hline
	\small C$\_$GenerateKey & \small generate a secret key \\ \hline
	\small C$\_$GenerateKeyPair & \small generate a public/private key pair \\ \hline
	\small C$\_$WrapKey &\small  wrap (encrypt) a private key\\ \hline
	\small C$\_$UnwrapKey & \small unwrap (decrypt) a private key\\ \hline
	\small C$\_$Login &\small  log a user to the token \\ \hline
 \end{tabular}
   \caption{RSA PKCS\#$11$ functions.}
  \label{pkcs:key-management}
\end{table}

%% ISO/IEC 7816
\paragraph*{ISO/IEC 7816}
ISO/IEC 7816 defines different aspects of smart-cards. In this paper we focus on Part 4 \cite{7816-4}, which specifies the organization, security and commands for interchange. The APDU protocol consists of command-response pairs: a \textit{command} is sent by the outside world to the card, and a \textit{response} is the card's answer to that command. A command consists of
a compulsory 4-byte header (Cla, Ins, P1-P2) and an optional body (Lc, Data Field, Le). Cla defines the type of the command, i.e., inter-industry or proprietary; Ins indicates the specific command, e.g., select-file; P1-P2 are the instruction parameters for the command, e.g., the offset at which to write data into a file; Lc is the number of bytes of the Data Field; the Data Field is the data that is sent to the card; and Le is the number of expected (if any) response bytes.
A response consists of an optional body, the Response Data, which is the data that the card sends, and a compulsory 2-byte trailer, SW1-SW2, which encodes the status of the card after processing the command (the processing state of the card).
A command can (i) send data to the card, (ii) expect data from the card, (iii) both send and expect data, or (iv) none of the above. The length of the response depends on the sent command. ISO 7816 specifies the inter-industry command class (CLA), INS codes, P1-P2 and SW1-SW2 for all inter-industry commands/responses.
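To make the field layout concrete, the following Python sketch parses a command APDU into the fields described above. It is illustrative only: it assumes the short (one-byte) Lc/Le encodings and does not handle extended-length APDUs or malformed bodies.

```python
def parse_command_apdu(apdu: bytes) -> dict:
    """Split a command APDU into its ISO 7816-4 fields.

    Simplified sketch: a compulsory 4-byte header (CLA, INS, P1, P2)
    followed by an optional body that is either a lone Le byte, or
    Lc + data + an optional trailing Le byte.
    """
    if len(apdu) < 4:
        raise ValueError("APDU header (CLA, INS, P1, P2) requires 4 bytes")
    cla, ins, p1, p2 = apdu[:4]
    fields = {"CLA": cla, "INS": ins, "P1": p1, "P2": p2,
              "Lc": None, "Data": None, "Le": None}
    body = apdu[4:]
    if len(body) == 1:                  # Le only: no data sent, data expected
        fields["Le"] = body[0]
    elif len(body) > 1:                 # Lc, data field, optional Le
        lc = body[0]
        fields["Lc"] = lc
        fields["Data"] = body[1:1 + lc]
        if len(body) == lc + 2:         # a trailing Le byte is present
            fields["Le"] = body[1 + lc]
    return fields


# Inter-industry get-challenge (CLA=00, INS=84): no data sent, 8 bytes expected
challenge = parse_command_apdu(bytes([0x00, 0x84, 0x00, 0x00, 0x08]))
```

Running the parser on the inter-industry get-challenge command of Table~\ref{ISO-getChallenge} yields an empty body with Le set to 8, matching case (ii) above.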


An APDU implementation always follows ISO 7816 and can be either inter-industry, in which case the command codings are defined by ISO 7816, or proprietary, in which case the developers define their own command codings.
An example of an inter-industry and a proprietary command is presented in Table~\ref{ISO-getChallenge}. The meaning of each byte of the inter-industry command can be decoded, whereas the meaning of the corresponding proprietary bytes is unknown. As we will show, our analysis can infer mappings between the proprietary and the inter-industry commands.
\begin{table}[h!]
  \centering 
  \begin{tabular}{  l | l   l  l  l  l  l  l }

\small Type & \small CLA& \small INS&\small  P1 & \small P2& \small Lc& \small Data Field& \small Le \\ 
\hline 
\tiny inter-industry & \tiny 00 & \tiny 84& \tiny 00& \tiny 00& \tiny 00& \tiny 00& \tiny 08 \\
 
\tiny proprietary & \tiny 80 & \tiny 21& \tiny 00& \tiny 00 & \tiny 00&\tiny 00& \tiny 08 \\ 

\end{tabular}
  \caption{Get-challenge command: possible implementations.}
  \label{ISO-getChallenge}
\end{table}




 
\paragraph{Log-in attacks}
Compromising the authentication layer of a smart-card can give an attacker access to sensitive data and the ability to perform operations over it. In this paper our goal is to analyse the implementation of the log-in function in order to draw conclusions about the following security questions:
\begin{inparaenum}[(i)]
\item does log-in allow session handles to be used?
\item is it possible to perform log-in bypass attacks?
\item is a man-in-the-middle attack feasible?
\item does the communication allow PIN brute-force attacks?
\item is it possible to sniff the PIN?
\item does the communication protocol allow the injection of commands?
\item does the communication protocol allow the blind replay of a session?
\item does the communication protocol allow the replay of the communication trace, i.e., is it possible to bypass the communication?
\end{inparaenum}

 



\subsection{Analysis Goals}
From an APDU analysis we expect to infer how the
low-level communication implements the high-level abstractions, and to answer the following questions:
\begin{inparaenum}[(i)]
\item What are the semantics of the defined commands? For example, given a proprietary command $c$, we want to determine the
corresponding inter-industry command(s), the meaning of $c$, \textit{etc}.
\item How is the protocol connected with the RSA PKCS\#$11$ standard, \textit{i.e.}, how
is a high-level command implemented in the low-level communication?
For example, when we send a KEY$\_$GENERATE high-level command,
what are the APDU commands and responses that are sent to and from
the card?
\item Does the APDU configuration allow the execution of commands that
the RSA PKCS\#$11$ configuration does not, e.g., reading sensitive keys?
\item Are key operations performed in a secure way, \textit{i.e.}, on-card?
\item Is sensitive data encrypted, e.g., the token-library communication?
\item Is sensitive data exposed, e.g., sensitive keys?
\item Does sensitive data have secure configurations? For example, can key
attributes be altered?
\end{inparaenum}


\paragraph{Inferred Model}
REPROVE takes as input an APDU trace and produces the corresponding model that describes the card's implementation of the communication protocol. The analysis yields three different abstractions of the protocol: the exchanged commands, the executed on-card operations (with respect to the exchanged commands), and the interconnection with specific PKCS\#11 functions. Each abstraction can address different types of attacks. For example,
\begin{itemize}
\item[Exchanged commands:] stealing sensitive data in transit, injecting commands, blind replay of sessions.
\item[On-card operations:] performing operations over sensitive data.
\item[Interconnection with PKCS\#11 functions:] performing PKCS\#11 attacks at the APDU layer.
\end{itemize}


\section{Proposed Methodology}
\label{methodology}

We consider the APDU analysis as an inference problem. We model the background knowledge in first-order logic; it consists of abstract models, based on ISO 7816, that define:
\begin{inparaenum}[(i)]
\item the main properties, restrictions and requirements of the communication,
\item assumptions about the behaviour of a card, and
\item possible implementations of specific RSA PKCS\#11 functions.
\end{inparaenum}
We assume that the representation of a command is unique, and that all commands must satisfy their preconditions. Our analysis performs an initial abstraction, to detect patterns and capture the general properties, followed by a refinement. We chose first-order logic because it is machine readable, expressive enough to model the protocol's rules without constraining the scenario, and able to express the inference rules of the analysis process. Although other logics exist, we chose this one because of the failure-driven nature of our problem, which implies enhancing the analysis through a diagnosis-repair mechanism~\cite{bundymcneill06}.
Higher-order logic is an alternative, but we are currently only investigating first-order logic.


 The main idea of our methodology is:

 \begin{compactenum}
\item Categorise each command based on its data exchange properties.
\item Produce a set of mappings for each command.
\item Use the abstract models to narrow down the mappings.
\item Identify potential card functionalities and further narrow down the mappings.
\item Map the inferred model(s) to the RSA PKCS\#$11$ abstract models.
\item Test the model for errors and repair it (if needed).
\end{compactenum}
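As a runnable illustration of the first three steps, the sketch below assigns a command a data-exchange category and retrieves the inter-industry commands that share it. This reflects our own simplifying assumptions, not the REPROVE implementation, and the mini knowledge base of command categories is hypothetical:

```python
# Hypothetical mini knowledge base: a few ISO 7816-4 inter-industry
# commands with typical data-exchange categories
# (two letters: data sent? / data expected?).
ISO_CATEGORIES = {
    "get_challenge": "ny",   # sends no data, expects a random challenge
    "verify":        "yn",   # sends PIN data, expects only a status word
    "read_binary":   "ny",
    "update_binary": "yn",
}


def category(lc, data, le):
    """Step 1: categorise a command from its Lc, Data and Le fields
    (None models an absent field)."""
    sends = lc is not None and data is not None
    expects = le is not None
    return ("y" if sends else "n") + ("y" if expects else "n")


def candidate_mappings(lc, data, le):
    """Steps 2-3: the inter-industry commands sharing the command's
    category are the surviving candidate mappings."""
    cat = category(lc, data, le)
    return sorted(name for name, c in ISO_CATEGORIES.items() if c == cat)


# Proprietary command 80 21 00 00 with Le=08: sends nothing, expects 8 bytes
cands = candidate_mappings(None, None, 0x08)
```

In the full methodology the abstract models and the precondition checks would then prune this candidate set further (steps 4-6).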


\subsection{Modelling the APDU Layer}
For our initial experiments we have modelled only Part 4 of ISO 7816, which consists of the following commands:
 \begin{inparaenum}[(1)]
\item select, \item get data, \item read binary, \item read record, \item update binary, \item write data, \item write binary, \item put data,
\item write record, \item create file, \item get challenge, \item get response, \item verify, \item external authenticate,
\item general authenticate, \item mutual authenticate, \item internal authenticate.
\end{inparaenum}


In Figure~\ref{fig:decomposition} we show a high-level description of our
modelling approach. Each individual operation (\textit{functionality}) of the card is decomposed into a sequence of steps (\textit{sub-functionalities}). Each step is then implemented as a sequence
of APDU commands, proprietary or inter-industry.  The APDU commands
are further characterised depending on their data exchange properties (shown,
for example, as `\texttt{YY}' in the figure to indicate a command that both
sends and receives data) and their role within the sub-functionality in
question (core, additional, or dummy).  Note that the same command can have
different data exchange properties and different roles within different
sub-functionalities, as shown, for instance, for commands
$\textrm{command}_a$ and $\textrm{command}_x$ in the abstract example of
Figure~\ref{fig:decomposition}.

\begin{figure}[!tb]
  \centering
  \includegraphics[width=\linewidth]{decomposition}
  \caption{A single operation represents a specific functionality and it is
   modeled as a sequence of sub-functionalities. Each sub-functionality is
   further implemented as a sequence of commands. Commands are characterised
   by their data exchange properties and role within some particular
   sub-functionality.}
   \label{fig:decomposition}
\end{figure}
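The decomposition of Figure~\ref{fig:decomposition} can be captured with a few nested structures. The sub-functionality and command names below are illustrative placeholders, not part of our model:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CommandUse:
    """One use of a command inside a sub-functionality. The same command
    may carry different data-exchange properties and a different role
    (core, additional or dummy) in another sub-functionality."""
    name: str
    exchange: str   # 'nn', 'ny', 'yn' or 'yy'
    role: str       # 'core', 'additional' or 'dummy'


# A functionality is a sequence of sub-functionalities, each implemented
# as a sequence of command uses (names here are hypothetical).
functionality = [
    ("establish_context", [CommandUse("command_a", "ny", "core"),
                           CommandUse("command_x", "yy", "additional")]),
    ("transfer_data",     [CommandUse("command_a", "yy", "core"),
                           CommandUse("command_x", "yn", "dummy")]),
]


def roles_of(cmd: str, functionality):
    """Collect (sub-functionality, role) pairs for a given command,
    showing how its role can vary across sub-functionalities."""
    return [(sub, use.role) for sub, uses in functionality
            for use in uses if use.name == cmd]
```

Here `command_x` plays an additional role in one sub-functionality and a dummy role in another, mirroring the abstract example in the figure.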


\paragraph*{APDU commands} An APDU command is presented as \small $command(Cla, Ins, P1, P2, Lc, D, Le)$ \normalsize  where fields Cla, Ins, P1, P2, Lc, D are instantiated appropriately.
 We consider the following as valid commands: 
\begin{inparaenum}[(i)]
\item Any inter-industry command.
\item Any proprietary command that can be mapped to an inter-industry iff this inter-industry has not occurred before within the same implementation, and
	has all its preconditions satisfied. 
\end{inparaenum} We categorise the commands based on: (i) their data exchange properties, (ii) the card operations. 

\textit{Categorisation: Data Exchange Properties}
 Each category describes the possible data exchange properties and is defined by a 7-ary  predicate whose arguments are instantiated according to the command arguments:\\  
\begin{inparaenum}[(i)]
 \item \small{$command_{nn}(Cla, Ins, P1, P2, Lc, D, Le)$}: no data is sent - no data is expected,\\
 \item \small{$command_{ny}(Cla, Ins, P1, P2, Lc, D, Le)$}: no data is sent - data is expected,\\
 \item \small{$command_{yy}(Cla, Ins, P1, P2, Lc, D, Le)$}: data is sent - data is expected,\\
 \item \small{$command_{yn}(Cla, Ins, P1, P2, Lc, D, Le)$}: data is sent - no data is expected.\\
 \end{inparaenum}
The $Lc$, $D$, and $Le$ fields determine the category of a command.   \eat{If $Lc\neq 00$ and $D\neq 00$ then the command sends some data $D$ with length $Lc$ to 
the card. If $Le$ is not absent then the response will contain some data with length $Le$.}
 For example, the following rule:\\
 \small
 \begin{eqnarray*}
 \lefteqn{ \forall Cla, Ins, P1, P2, Lc, D, Le(} \\
 \lefteqn{(command(Cla, Ins, P1, P2, Lc, D, Le)}\\
 \lefteqn{ \land Lc=00 \land D=00 \land Le \neq null)} \\
&	\rightarrow (command_{ny}( Cla, Ins, P1, P2, Lc, D, Le))
\end{eqnarray*}
\normalsize assigns a command to the $command_{ny}$ category under the condition that $Lc$ and $D$ are instantiated to $00$ and $Le$ is not $null$\footnote{Null means absence of a field.}.
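The categorisation rules above translate naturally into executable form. The following minimal sketch uses our own encoding (a command is a 7-tuple; \texttt{None} models an absent field) to derive the data exchange category from the $Lc$, $D$, and $Le$ fields:

```python
# Sketch of the data-exchange categorisation rules. The 7-tuple
# encoding (Cla, Ins, P1, P2, Lc, D, Le) is an illustrative
# assumption; None models the absence (null) of a field.

def categorise(cmd):
    """Return 'nn', 'ny', 'yy' or 'yn' from the Lc, D and Le fields."""
    _cla, _ins, _p1, _p2, lc, d, le = cmd
    sends = not (lc == 0x00 and d == 0x00)   # data travels to the card
    expects = le is not None                 # a response body is expected
    return ('y' if sends else 'n') + ('y' if expects else 'n')
```

For instance, a command with $Lc = D = 00$ and $Le$ present falls into the $command_{ny}$ category, exactly as the rule above prescribes.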


\textit{Categorisation: Card Operations}  For each operation of the card we categorise the commands into:   
\begin{inparaenum}[(i)]
%%%% core and additional commands
\item \textit{Core} which are the basic commands that perform the operation, e.g., to create a new file,
 $create\_file$ is a core command. 
 \eat{\normalsize We have grouped the core commands that implement the same abstraction of different card operations, to create command sets that have a similar outcome. For example, to extract some data from the card one might send a \small $get\_data$ \normalsize command along with some meta-data. Another way to implement this (depending on the card's data organisation) is by reading the data from the card with a $read\_binary$ command. }
\item \textit{Additional} which are the commands that add extra properties to the operation without changing its meaning; the same operation can be 
implemented without them. For example, to create a file, $select$ is an additional command.
\eat{
For example, for the card reader authentication we consider 
 $verify$ to be a core command. During the authentication process,  a  path might be  selected, where the authentication will take place. 
Thus, 
we consider  $select$  as an additional command for this operation. This can also apply vice versa: for the selection of a file $select$  is a core command. However,
 before any operation is executed, authentication via a $verify$ command might take place. Thus, $verify$ is an additional command for this operation.}
%% dummy commands
 \item \textit{Dummy} which are the commands that 
neither send nor expect any data and 
usually just query, or check, the communication with the card. For example, a $verify$ command which 
does not send or expect any data to/from the card. Such commands may occur at any time during the communication and should not change the output of the analysis. 
\eat{
\small $\forall Cla, Ins, P1, P2, $ $(command(Cla, Ins, P1, P2, 00, 00, null)$\\$ \rightarrow$   $dummy(Cla, Ins, P1, P2, 00, 00, null))$}
\end{inparaenum}
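The role of a command is thus relative to the sub-functionality in question, while dummy status depends only on the data exchange fields. A minimal sketch, with an illustrative (assumed) role table:

```python
# Illustrative role table: the same command can play different roles in
# different sub-functionalities. The entries follow the paper's examples
# but the table contents are an assumption for illustration.
ROLE = {
    ('create_file', 'file_created'): 'core',
    ('select',      'file_created'): 'additional',
    ('select',      'selected'):     'core',
    ('verify',      'verified'):     'core',
}

def role(command, sub_functionality):
    """Role of a command within a given sub-functionality, if known."""
    return ROLE.get((command, sub_functionality))

def is_dummy(cmd):
    """Dummy commands neither send nor expect data (Lc = D = 00, Le null)."""
    _cla, _ins, _p1, _p2, lc, d, le = cmd
    return lc == 0x00 and d == 0x00 and le is None
```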






\textit{Command Preconditions} The preconditions of each command define (i) restrictions on the previously issued commands, (ii) the values of their arguments, (iii) different meanings, and (iv) data types and file structures.
 For example,  \small $read\_binary$ \normalsize  is usually sent to access the content of an EF file. However, if the value of $P1$ is between 128 and 160 then it also selects an EF  
file. This precondition is modelled as:\\
\small
\begin{eqnarray*}
 \lefteqn{  \forall Cla, P1, P2, Lc, D, Le( (}\\
  \lefteqn{  command(Cla, b0, P1, P2, Lc, D, Le)} \\
   \lefteqn{\land P1 \in [128, 160])} \\
 &\rightarrow (select(file, D) \land isa(D, ef)))
\end{eqnarray*}
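In executable form, this precondition rule can be sketched as follows; the tuple encoding and fact store are our own assumptions, and $b0$ is the $read\_binary$ instruction byte:

```python
# Sketch of the read_binary precondition above: when P1 lies in
# [128, 160], the command also selects an EF (the data field D is taken
# to name the file). The fact-store encoding is our own assumption.

def apply_read_binary(cmd, facts):
    """Add the implicit-selection facts derived from a read_binary."""
    _cla, ins, p1, _p2, _lc, d, _le = cmd
    if ins == 0xB0 and 128 <= p1 <= 160:   # read_binary, P1 in [128, 160]
        facts.add(('select', 'file', d))
        facts.add(('isa', d, 'ef'))
    return facts
```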

\paragraph*{Card Operations} \normalsize 
 We introduce a hierarchy of abstractions for the card operations. \textit{Functionality} models provide high-level views of the different operations of a card, and \textit{sub-functionality} models describe
the steps by which each operation is implemented. 
\eat{We introduce the term \textit{ functionality} to define a specific card operation that is implemented by a set of command-response pairs. 
We consider two levels of abstraction:
\begin{inparaenum}[(i)]
 \item \textit{Functionality}, which is the most abstract description of such operations.
\item  \textit{Sub-functionality}, which is a description of each operation with respect to the corresponding command-response pairs.
\end{inparaenum}}
A valid (sub-)/functionality:
\begin{inparaenum}[(i)]
\item has all its preconditions satisfied by the commands\footnote{Under the condition that the response is positive.} sent so far, or
\item has a subset of its preconditions satisfied by the commands sent so far,
 but the rest of them can be satisfied by the commands that will follow (partially satisfiable). 
\end{inparaenum}	

\eat{Each (sub-)/functionality is a predicate of arity three:\\
$sub$-$functionality(Name, Preconditions, Postconditions)$\\
$functionality(Name, Preconditions, Postconditions)$\\
where $Name$ is the name of the (sub-)/functionality,\\ $Preconditions$ is the set of commands that need to be sent and $Postconditions$ is the set of effects of that (sub-)/ functionality.
}


%%% edo
\textit{Sub-functionalities} Sub-functionalities model the steps by which a card performs specific operations and consist of one or more commands. The same sub-functionality may have more than one model. 
Table~\ref{tab:subfunctionalities} presents the defined sub-functionalities and the corresponding core commands.
For example, $external\_$\\$authenticate(RD, D_2)$  describes the authentication of the reader through the \textit{challenge-response} protocol. The card issues a challenge $RD$ and the reader authenticates itself by providing the corresponding  response $D_2$. The following rule describes this operation: \\ 
				\small
 \begin{eqnarray*}
 \lefteqn{\forall   RD, Le_1,  P1_2, P2_2, Lc_2, D_2 ( }\\ 
				\lefteqn{(command(00, 84, 0, 0, 0, 0, Le_1) \land response(RD)}\\
				\lefteqn{ \land command(00, 87, P1_2, P2_2, Lc_2, D_2, null)}\\
				\lefteqn{ \land P2_2 \in [128,256])}\\
				& \rightarrow external\_authenticate(RD, D_2))
				\end{eqnarray*}
				
\normalsize which says that if the command with $Ins = 84$, with a card response $RD$, is followed by the command with $Ins=87$, then the reader has authenticated itself via a challenge-response external authentication.
For the analysis of the RSA PKCS\#$11$ Log$\_$In and Generate$\_$Key functions, we categorise the sub-functionalities into:
%\begin{itemize}
%\item \textit{core} 
%\item \textit{additional} 
%\end{itemize}
\begin{inparaenum}[(i)]
\item  \textit{sensitiveOperation} which is any process that we expect to deal with sensitive data, e.g., the verification of a PIN, 
\item \textit{nonSensitiveOperation} which is any generic process over non-sensitive data, e.g., the selection of a file. 
\end{inparaenum}
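The external-authentication rule above amounts to matching a two-command pattern over the trace. A hedged sketch, assuming a trace of (command, response) pairs under our own tuple encoding:

```python
# Sketch of the external_authenticate rule: a GET CHALLENGE (Ins=84)
# whose response is RD, followed by an EXTERNAL AUTHENTICATE (Ins=87)
# with P2 in [128, 256]. Trace encoding is an illustrative assumption:
# a list of ((Cla, Ins, P1, P2, Lc, D, Le), response) pairs.

def match_external_authenticate(trace):
    """Yield (RD, D2) for each challenge/response pair in the trace."""
    for (cmd1, resp1), (cmd2, _resp2) in zip(trace, trace[1:]):
        if cmd1[1] == 0x84 and cmd2[1] == 0x87 and 128 <= cmd2[3] <= 256:
            yield resp1, cmd2[5]   # (challenge RD, reader's response D2)
```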


\eat{If a functionality is a \textit{sensitiveOperation}, then all core sub-functionalities inherit this property, while all additional sub-functionalities are \textit{nonSensitiveOperation} and \textit{vice versa}.}



\begin{table}[h!]
  \centering
  \begin{tabular}{|l|l|}
  \hline
    \small {sub-functionality} & \small core command set\\ \hline  \hline
	\tiny $selected$ & \tiny  \{$select$, $read\_binary$\} \\ \hline
	\tiny $read\_data\_sub$ & \tiny \specialcell {\{$get\_data$, $read\_binary$,\\ $get\_response$, $read\_record$\}}\\ \hline
	\tiny $data\_updated$ & \tiny \{$update\_binary$\}\\ \hline
	\tiny $file\_created$ & \tiny \{$create\_file$\}\\ \hline
	\tiny $data\_written$ & \tiny  \specialcell {\{$write\_binary$, $update\_binary$,\\ $write\_record$\}}\\ \hline
	\tiny $challenge\_sent$ & \tiny \{$get\_challenge$\}\\\hline
	\tiny $verified$ & \tiny \{$verify$\} \\\hline
	\tiny $external\_authenticated$ & \tiny \{$external\_authenticate$\}\\ \hline
	\tiny $internal\_authenticated$ & \tiny \{$internal\_authenticate$ \}\\ \hline
	\tiny $mutual\_authenticated$ & \tiny \{$mutual\_authenticate$\}\\ \hline
  \hline
  \end{tabular}
  \caption{Card sub-functionalities and the corresponding core commands.}
  \label{tab:subfunctionalities}
\end{table}



\textit{Functionalities} Functionalities model the operations of a card. The same operation may be performed in different ways; thus, each functionality consists of a set of possible sub-functionalities and possibly one or more dummy commands.
\eat{This categorisation allows us to substitute sub-functionalities when they implement the 
same functionality.} For example, consider two cards $Card\_x$ and $Card\_y$ which both store data ($store\_data$). $Card\_x$ performs this operation through a  $file\_created$ sub-functionality, 
while $Card\_y$ through a $data\_written$.
 Table~\ref{tab:corefunctionalities} presents the defined functionalities and the corresponding sub-functionality set. 



\begin{table}[h!]
  \centering
  \begin{tabular}{|l|c|l|}
  \hline
   \small functionality & \small core & \small additional \\ \hline  \hline
	\tiny $store\_data$ & \tiny \specialcell{ \{$file\_created$,\\ $data\_written$,\\ $data\_updated$\}} & \tiny  \specialcell{\{$selected$,\\ $retrieve\_data$\}} \\  \hline
	\tiny $read\_data$ & \tiny \{$read\_data\_sub$\}&\tiny   \{$selected$\}  \\ \hline
	 \tiny $authenticated$&  \tiny \specialcell{\{$challenge\_sent$, \\$verified$,\\ $external\_authenticated$, \\$internal\_authenticated$,\\ $mutual\_authenticated$\}   } & \tiny \specialcell{\{$selected$, \\ $retrieve\_data$,\\  $data\_written$\}} \\  \hline
	\hline
 
  \end{tabular}
  \caption{Card functionalities and the corresponding sub-functionality set.}
  \label{tab:corefunctionalities}
\end{table}

\eat{
\begin{table}[h!]
  \centering
  \begin{tabular}{|l|l| }
  \hline
   \small functionality &  \small additional sub-functionalities\\ \hline
	 \tiny store$\_$data &  \tiny  \{selected, retrieve$\_$data\} \\
	\tiny read$\_$data & \tiny   \{selected\} \\
	\tiny authenticated& \tiny \{selected, retrieve$\_$data,  data$\_$written\}  \\
	\hline
 
  \end{tabular}
  \caption{Card functionalities and the corresponding additional sub-functionalities command set.}
  \label{tab:addfunctionalities}
\end{table}
}

\eat{\textit{Properties Inheritance} For the analysis of the RSA PKCS\#$11$ Log$\_$In and Generate$\_$Key functions, we  assign the following properties:
%\begin{itemize}
%\item \textit{core} 
%\item \textit{additional} 
%\end{itemize}
\begin{inparaenum}[(i)]
\item  \textit{sensitiveOperation} which is any process that deals with sensitive data, e.g., the verification of a PIN, 
\item \textit{nonSensitiveOperation} which is any generic process over non-sensitive data, e.g., the selection of a file. 
\end{inparaenum}
If a functionality is a \textit{sensitiveOperation}, then all core sub-functionalities inherit this property, while all additional sub-functionalities are \textit{nonSensitiveOperation} and \textit{vice versa}.
}





\paragraph*{General Rules}
We define general predicates and the corresponding rules to describe communication restrictions, card responses, 
file specifications, and data types. 
 For instance, the following rule states that if some data $D$ of length $Le$ is expected, then the response should contain $D$ and the corresponding length should be $Le$.
\small
\begin{eqnarray*}
 \lefteqn{\forall Le, D (expected(data, Le, D)}\\ 
 &\rightarrow (response(D) \land length(D, Le)))
 \end{eqnarray*}
\normalsize 
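The rule can be read as a simple check on the reply; a minimal executable sketch under our own encoding of responses:

```python
# Sketch of the general response rule: expected(data, Le, D) implies
# response(D) and length(D, Le). The byte-string encoding of D is an
# illustrative assumption.

def satisfies_expected(le, d, response):
    """True iff response(D) and length(D, Le) hold for the reply."""
    return response == d and len(response) == le
```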











\subsection{RSA PKCS\#11 Models}

PKCS\#11 models present our assumptions on how specific cryptographic functions might be implemented at the APDU level, and are expressed in terms of functionalities. We have modelled the Generate$\_$Key and Log$\_$In functions. Because of space limitations, we provide a summary of our modelling assumptions and some example rules only for the Log$\_$In function.

\paragraph*{RSA PKCS$\#$11 Generate$\_$Key}
 For the Generate$\_$Key function we expect a file creation and/or one of the other operations for storing data. We also expect a possible $read\_data$ operation for \textit{key}-related information. 
Another possible operation is referencing a \textit{key-generation} algorithm. Finally, since Generate$\_$Key deals with sensitive data, a session authentication is also expected.

\eat{
\begin{itemize}
%% data-written, create-file klp already contain a selected state. so, either its gonna be data-written on its own or the extra selected state would be for the algorithm referencing
\item  \small
\begin{eqnarray*}
 \lefteqn{store\_data(Location, D\footnote{\label{Data-Key}Data is all key-related information, e.g., attributes, value})}\\
 & \rightarrow log\_in(D)
 \end{eqnarray*}
%	\begin{itemize}
	 \normalsize The card is asked to store some  key-related data $D$ (i) through a write-data related 
	series of commands, or (ii) through the generation of a new file and storing all key-related information in that file. 
%	\end{itemize}
\item $store\_data(F,null)$ \\$\land$ $store\_data(F, D)\rightarrow$\\ $log\_in(D)$
%	\begin{itemize}
	 First the card is asked to create a file $F$, while no data is stored ($null$). Then,  the card stores the key-related data $D$ in $F$. 
%	\end{itemize}
\item $read\_data(Location1, F, RD)$\\ $\land$ $store\_data(Location2, D)$
%	\begin{itemize}
	 First the card is asked for  $RD$ which is key-related data, or data essential for the generation of the key. Then,the card is asked to store all key-related data $D$.
%	\end{itemize}
%\item[Rule 5] selected(file, EF) $\land$ $data\_object\_retrieved(File, Data)$ $\land$ data$\_$written(Data, Offset)

%\item[Rule 6] selected(file, EF) $\land$ $data\_object\_retrieved(File, Data)$ selected(file, EF) $\land$ data$\_$written(Data, Offset)
\item $read\_data(Location1, F, RD)$\\ $\land$ $store\_data(F2, null)$\\ $\land$ $store\_data(F2, D)$
%	\begin{itemize}
 First the card is asked for  $RD$ which is key-related data or data for the key generation. Then, a new file $F2$ is created (no data is stored to that file at that point) and finally, all key-related data $D$ is 
	stored in $F2$.
%	\end{itemize}

\end{itemize}


\eat{we consider $authenticated(TypeA,TypeB)$\footnote{A and B vary depending the authentication process. For example it could be a A=Challenge and B=Response, or A=PIN  and B=PIN,  or A=[Chip Number, Challenge] and B=Response.} and file referencing, $selected(path/file, File)$ as additional functionalities.}
}


\paragraph*{RSA PKCS$\#$11 Log$\_$In}
For the Log$\_$In function we expect one of the $authenticated$ operations 
to take place. In more detail, we expect either a PIN/Pass-code verification or a challenge-response authentication. A $read\_data$ operation for authentication-related data is also possible. 
Finally, an additional authentication for the given session may also occur.
\eat{a two-level authentication:  
first the reader authenticates itself to the card for the given session, and then, 
it provides a PIN/Pass-code to log-in.} 
The following rules describe these assumptions:
\begin{itemize}
\item[Rule 1]\small  $authenticated(TypeA,TypeB)$\\
  $\indent \rightarrow  log\_in(TypeA, TypeB)$
	\begin{itemize}
	\item  \normalsize There are two possible types of authentication: (i) through a PIN/Password, and (ii) through the challenge-response protocol.

	\end{itemize}
\item[Rule 2]\small $(selected(path/file, DF)$ \\$\indent \land$ $authenticated(TypeA, TypeB))$ \\ $\indent \rightarrow log\_in(TypeA, TypeB)$
	\begin{itemize}
	\item \normalsize The file or path in which the authentication takes place is selected. Then, authentication takes place as described in Rule 1. 
	\end{itemize}
\item[Rule 3] \small  $(read\_data(Location,File, RD)$\\ $\indent \land$ $authenticated(TypeA, TypeB))$ \\ $\indent \rightarrow log\_in(TypeA, TypeB)$
		\begin{itemize}
		\item  \normalsize Authentication-related data is retrieved from the card and  authentication takes place as described in Rule 1.
		\end{itemize}
\item[Rule 4] \small $(read\_data(Location, File, RD)$\\ $\indent \land$ $selected(path/file, DF)$\\ $\indent \land authenticated(TypeA, TypeB))$ \\$\indent \rightarrow log\_in(TypeA, TypeB)$
			\begin{itemize}
			\item  \normalsize Authentication-related data is retrieved from the card, the file/path in which authentication takes place is selected, and authentication takes place as described in Rule 1.
			\end{itemize}
\end{itemize}
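Rules 1--4 share the $authenticated$ operation as their core pattern, with selection and data retrieval as optional extras. A minimal sketch of this matching, assuming functionalities are encoded as tuples (an illustrative encoding of our own):

```python
# Sketch of Rules 1-4: log_in holds whenever an authenticated operation
# occurs; selected and read_data (Rules 2-4) are optional additions that
# do not change the conclusion. Tuple encoding is an assumption.

def matches_log_in(functionalities):
    """Return (TypeA, TypeB) of the first authenticated match, or None."""
    auth = [f for f in functionalities if f[0] == 'authenticated']
    if not auth:
        return None            # the core pattern of Rule 1 is missing
    _, type_a, type_b = auth[0]
    return (type_a, type_b)
```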



\subsection{Analysis Algorithm}

First we map the input trace to abstract models. For each abstract model we produce mappings into low-level commands (refinement), which we afterwards map to functionalities. Finally, we match these functionalities to the RSA PKCS\#$11$ abstract models. 
Figure \ref{flow} presents a flowchart of our proposed methodology. During the analysis the low-level input (commands) evolves into abstract card operations. The transformations of the commands at each analysis step are presented in Figures~\ref{fig:step1}-\ref{fig:step5}.






\begin{figure}[!tb]
  \centering
  \includegraphics[width=0.45\textwidth]{flowchart}

  \caption{Command analysis methodology.} \label{flow} 
\end{figure}


\begin{figure}[!tb]
  \centering
  \includegraphics[width=0.50\textwidth]{step1}
  %[width=8.75cm,height=8cm]
  \caption{Step 1: the proprietary commands are categorised based on their data exchange properties and the inter-industry commands are mapped to sub-functionalities.} \label{fig:step1} 
\end{figure}


\begin{figure}[!tb]
  \centering
  \includegraphics[width=0.4\textwidth]{step2}
  %[width=8.75cm,height=8cm]
  \caption{Step 2: each command category is mapped to the corresponding inter-industry command set.} \label{fig:step2} 
\end{figure}


\begin{figure}[!tb]
  \centering
  \includegraphics[width=0.4\textwidth]{step3}
  %[width=8.75cm,height=8cm]
  \caption{Step 3: the inter-industry command mappings whose preconditions are not met are discarded.} \label{fig:step3} 
\end{figure}


\begin{figure}[!tb]
  \centering
  \includegraphics[width=0.4\textwidth]{step4}
  %[width=8.75cm,height=8cm]
  \caption{Step 4: the sets of inter-industry commands are mapped to sub-functionalities which are mapped to functionalities.} \label{fig:step4} 
\end{figure}


\begin{figure}[!tb]
  \centering
  \includegraphics[width=0.4\textwidth]{step5}
  %[width=8.75cm,height=8cm]
  \caption{Step 5: the sets of functionalities are mapped to the PKCS\#11 models.} \label{fig:step5} 
\end{figure}



\eat{
Consider a trace $S$ which is a set of all sent messages \{$command\_{i},...,command\_{i+n}$\} where $\forall c \in S \rightarrow (c \in Interindustry \lor c \in Proprietary$.}





\small{
\begin{algorithm}[!tb]
  \relsize{-1}{
    \SetKwInOut{Input}{input}
    \SetKwInOut{Output}{output}
    \Input{List $\C$ of commands to be analysed}
    \Output{Potential mappings and operation models $P$ for $\C$}

    \BlankLine
    $P = [[]]$\;
    $F=[[]]$\;
    \ForEach{$c(\Cla, \ins, P_1, P_2, \L_c, \D, \L_e) \in \C$}{
      \eIf{$\Cla$ indicates $c$ is proprietary}{
        \label{alg:analysis:prop-start}
        use $\l_c, d, \l_e$ to extract data exchange properties
        $\d$\;
        $\M = $ list of APDU commands $c$ maps to based on $\d$\;
        \label{alg:analysis:prop-end}
      }
      {
        \label{alg:analysis:ii-start}
        $n = $ inter-industry command $c$ maps to\;
        $\M = [n]$\;
        annotate each command with its \textit{sub-functionality}\;
          annotate  \textit{sub-functionalities} with \textit{functionalities}\;
          $F = F \oplus \textit{functionalities}$\;
        \label{alg:analysis:ii-end}
      }
      \ForEach{$p \in P$}{
        \label{alg:analysis:expand-start}
        remove $p$ from $P$\;
        \ForEach{$m \in \M$}{
          $s = p \oplus (c \mapsto m)$\;
          $P = P \oplus s$\;
          \label{alg:analysis:expand-end}
        }
      }
    }
    \ForEach{$p \in P$}{
      \label{alg:analysis:filter-start}
      \ForEach{$(c \mapsto m) \in p$}{
        $Z = \{z~|~(k \textrm{~precedes~} m \textrm{~in~} p) \wedge 
            (z \in \textrm{postconditions}(s_k))\}$\;
        \label{alg:analysis:filter-precond}
        \If{preconditions of $m$ are not satisfied by $Z$}{
          remove $p$ and move on to the next path in $P$\;
          \label{alg:analysis:filter-end}  
        }
      }
    }
    \ForEach{$p \in P$}{
      \label{alg:analysis:func-filter-start}
      \ForEach{$(c \mapsto m) \in p$, potential sub-functionality of $m$}{
        group \textit{sub-functionalities} into \textit{functionalities}\;
        \eIf{no such grouping can be found}{
          remove $p$ from $P$\;
        }
        {
          annotate each command with its \textit{sub-functionality}\;
          annotate command groups with \textit{functionalities}\;
          $F = F \oplus \textit{functionalities}$\;
          \label{alg:analysis:func-filter-end}
        }
      }
   
   \ForEach{$f \in F$}{
    		\label{alg:pkcs-matching:start}
			\If{ $f \notin \textit{PKCS models}$}{
			remove $f$  from F\;
			remove $p$ from P \;
			
			}    
			
			}
    		\label{alg:pkcs-matching:end}
   
    }
    		
    	
    \Return{$P$, $F$}\;
    
    
  }
  \caption{The analysis process for a trace of commands}
  \label{alg:analysis}
\end{algorithm}
}
\normalsize
The overall analysis process for a trace of commands is shown in
Algorithm~\ref{alg:analysis}.  The input to the algorithm is a list $\C$ of
commands representing the communication trace, whereas the output is a list
$P$ of potential mappings (each mapping is a list itself) and a list $F$ of card functionalities.  The list $P$ is
initialised to $[[]]$ which indicates that the first mapping is the empty
one.  We then analyse each command $c \in \C$, and depending on the value of its
$Cla$ field we see whether we are dealing with a proprietary or inter-industry
command.  In the former case (lines~\ref{alg:analysis:prop-start}
to~\ref{alg:analysis:prop-end}) we look at the values of its $\L_c$, $\D$, and
$\L_e$ parameters to categorise its data exchange properties and obtain a
list $\M$ of potential mappings.  If $c$ is an inter-industry command, there is
only one such mapping $n$, so $\M$ is a singleton list, and we search for
(sub-)/functionalities satisfiable by this command
(lines~\ref{alg:analysis:ii-start} to~\ref{alg:analysis:ii-end}).  For each
existing list of mappings and for each potential mapping for $c$ we expand
the list with the new mapping (lines~\ref{alg:analysis:expand-start}
to~\ref{alg:analysis:expand-end}).  After line~\ref{alg:analysis:expand-end},
list $P$ contains multiple lists, each mapping every command in the original
trace to a potential inter-industry command.  We then narrow down $P$ in a simple
way (lines~\ref{alg:analysis:filter-start} to~\ref{alg:analysis:filter-end}):
for each potential mapping to an inter-industry command, we check that the
preconditions of the inter-industry command are met by computing the union of the
postconditions of all commands that precede it
(line~\ref{alg:analysis:filter-precond}).  If the preconditions of an
inter-industry command are not met, the erroneous
mapping is removed from $P$ and we move on to the next candidate mapping 
(line~\ref{alg:analysis:filter-end}).  Then, we narrow down $P$ by looking at our sub-functionality and functionality models 
%models of PKCS\#11 and ISO-7816 that are
%based on sub-functionalities and functionalities
(lines~\ref{alg:analysis:func-filter-start}
to~\ref{alg:analysis:func-filter-end}).  To that end, we iterate over the
remaining candidate mappings once more and look at our categorisation of
commands based on their role.  Using this role we attempt to group commands
into sub-functionalities and further group the resulting sub-functionalities
into higher-level functionalities which we add to $F$, all in the context of our models.  If no
such grouping is found for all mappings of a candidate trace, we remove the
trace from $P$. If a grouping is found, its constituent mappings
are annotated accordingly.  
The final step of the algorithm is to further narrow-down $P$ by matching the resulting functionalities in $F$ with the PKCS\#11 models. 
In the end, $P$ will contain zero or
more traces of candidate mappings.  If $P$ is empty, our analysis has failed to
produce a mapping.  If there is only one trace in $P$ we say that the mapping
is unique.  If there is more than one trace, the analysis is successful, but
we have only identified an abstraction of the correct mapping, as there is
more than one candidate trace.
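The expansion and precondition-filtering phases of this process can be sketched compactly as follows; the candidate and (post)condition tables below are small illustrative assumptions, not the full models:

```python
# Sketch of the expansion and filtering phases of the analysis. The
# tables are illustrative assumptions, not the paper's full models.

CANDIDATES = {                # data-exchange category -> possible mappings
    'yn': ['select', 'verify', 'create_file'],
    'nn': ['select', 'verify'],
}
PRECONDS = {'select': set(), 'verify': set(),
            'create_file': {'selected'}}
POSTCONDS = {'select': {'selected'}, 'verify': {'verified'},
             'create_file': {'file_created'}}

def analyse(categories):
    """Expand each command's category into candidate mappings (list P),
    then discard paths whose preconditions are never satisfied."""
    paths = [[]]
    for cat in categories:                    # expansion phase
        paths = [p + [m] for p in paths for m in CANDIDATES[cat]]
    surviving = []
    for p in paths:                           # filtering phase
        facts = set()
        for m in p:
            if not PRECONDS[m] <= facts:      # precondition unmet: drop p
                break
            facts |= POSTCONDS[m]
        else:
            surviving.append(p)
    return surviving
```

For example, with a single $command_{yn}$ the $create\_file$ mapping is filtered out, since no preceding command establishes $selected$; with two such commands, a $select$ followed by $create\_file$ survives.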

Note that in Algorithm~\ref{alg:analysis} we only show the conceptual
analysis process to aid the presentation.  An actual implementation of the
algorithm in a declarative language like Prolog will leverage the analysis
algorithm of the language; potentially be implemented using a constraint
satisfaction paradigm; reuse computed partial results efficiently through
memoisation and/or asserting and retracting facts; and so on.  This being a
preliminary work, it is beyond our scope here to discuss efficient
implementations of our analysis algorithm.  We merely want to showcase its
properties and its conceptual progression.
\section{Worked Example}
\label{sec:example}
We present a manually worked example based on an APDU trace from a commercially available smart-card. The PKCS\#11 function that we analyse is Log$\_$In. Because of a non-disclosure agreement we cannot provide full details of the card implementation. We explain each analysis step in detail, and because of space limitations we provide an example of the applied rules only for the first step.  
During the analysis we refer only to predicate names, to make this document easier to read.


\eat{

\subsection{RSA PKCS\#11 Generate$\_$Key}
For  the Generate$\_$Key function we applied our methodology to three commercially available  smart-cards. We present an example of one of the three smart-cards.
% a simple implementation of a smart card. For a more complicated example please refer to appendix, Section \ref{gkexample}.







\paragraph*{Card 1} The implementation of the  RSA PKCS\#11 Generate$\_$Key function is :\\
\small 1.  command(00, a4, p1$_{1}$, 09, lc$_{1}$, d$_{1}$, le$_{1}$)\\
2.  command(cla$_{2}$, ins$_{2}$, p1$_{2}$, p2$_{2}$, lc$_{2}$, d$_{2}$, 00)\\

\begin{enumerate} 
\normalsize \item[Step 1.] The first command has $cla=00$, which means that it is an inter-industry command. We do not assign any category to it, instead we search for partial/satisfiable (sub-)/functionalities. Sub-functionality \small $selected$ \normalsize is fully satisfied. The second command has $cla_{2}$ which defines a proprietary class. According its data exchange properties we assign the category \small $command_{yn}$ \normalsize(data is sent - no data is expected back) to this command.     \eat{Using the following rule:\\
\small $\forall Cla, Ins, P1, P2, Lc, D, Le( (command(Cla, Ins, P1, P2, Lc, D, Le)  \land Lc \neq 00 \land D \neq 00 )  ) 
	\rightarrow (command_{yn}( Cla, Ins, P1, P2, Lc, D))$\\}
\normalsize The output of this step is the following model:\\
  \indent \small $selected(path, d_{1})$ \\ \small \indent $\land$ $command_{yn}(cla$\_2$, ins$\_2$, p1$\_2$, p2$\_2$, lc$\_2$, d$\_2$, 00)$\\


\normalsize \item[Step 2.] We produce all possible mappings for $ins_{2}$ based on the category it belongs ($command_{yn}$).The set of possible command mappings is:  \small $\{select, verify, create\_file, update\_binary, write\_binary, erase\_binary, external\_authenticate, write\_record, append\_record, update\_record, put\_data\}$. 



\normalsize \item[Step 3.] We narrow down the mappings by applying the rules that define the preconditions for each command:
The options for $ins_{2}$ are:
\begin{itemize}

\item \small $select$, \normalsize which is valid as this command does not have a special precondition.
\item \small $verify$, \normalsize which is valid as all preconditions of this command are satisfied.
\item \small $create\_file$, \normalsize which is valid as all preconditions of this command are satisfied.
\item \small $update\_binary$,\normalsize  which is not valid as \small $selected(file, EF)$ \normalsize is not satisfied.
\item \small $write\_binary$, \normalsize which is not valid as \small $selected(file, EF)$ \normalsize  is not satisfied.
\item \small $erase\_binary$, \normalsize which is not valid as \small $selected(file, EF)$ \normalsize is not satisfied.
\item \small $external\_authenticate$, \normalsize which is not valid as \small $challenge$\_$sent(L, Ch)$ \normalsize is not satisfied.
\item \small $write\_record$, \normalsize which is not valid as \small $selected(file, EF)$ \normalsize is not satisfied.
\item \small $append\_record$, \normalsize which is not valid as \small $selected(file, EF)$ \normalsize is not satisfied.
\item \small $update\_record$, \normalsize which is not valid as \small $selected(file, EF)$ \normalsize is not satisfied.
\item \small $put\_data$, \normalsize which is not valid as \small $selected(file, EF)$ \normalsize is not satisfied.
\end{itemize}
The set of applicable mappings is: \small $\{select, verify, create$\_$file\}$.


\item[ Step 4.]\normalsize  We search for partially-/satisfiable card (sub-)/functionalities. We conclude with the following:
		\begin{itemize}
	
 \item \small $verified(pin, D)$  \normalsize is satisfied by:\\ \small $selected(path, d_{1})\land verify(d_{2})$\footnote{\small $selected(path, d_{1})$ \normalsize is an additional sub-functionality.} \normalsize with:
			\begin{itemize}
			\item \small $d_{2} \mapsto D$ 
			\end{itemize}			 
\small $verified(pin, D)$ \normalsize is a sub-functionality of \small $authenticated(pin, D)$.
\item $file\_created(DF, D)$ \normalsize is satisfied by: \\\small $selected(path, d_{1})\land create$\_$file(d_{2})$,\normalsize with:
		\begin{itemize}
		\item $DF \mapsto d_{1}$
		\item $D \mapsto d_{2}$
		\end{itemize}

 $file\_created(d_{1},d_{2})$ \normalsize is a a sub-functionality of \small $store\_data(DF, D)$ \normalsize (same mappings as before). 
		\end{itemize}
\item[ Step 5.] We map the inferred functionalities from Step 4., to the  Generate$\_$Key abstract models. We discard the \small $authenticated(pin, D)$ \normalsize card functionality as it does not match.  \small $store\_data(d_{1},d_{2})$   \normalsize matches exactly with our abstract model:\\
\small $store\_data(Location, Data)$ \normalsize with  \small $Location\mapsto d_{1}$ \normalsize and \small $D \mapsto d_{2}$.
\end{enumerate}
 Indeed, the card implements the Generate$\_$Key function in the same way as the result of our analysis.


}










%30/4

\paragraph*{APDU Trace}
The APDU trace for the  RSA PKCS\#11 Log$\_$In function is the following:\\
\small 1.  $command(cla_{1}, ins_{1}, p1_{1}, p2_{1}, 00, 00, null)$\\
2. $command(00, a4, p1_{2}, 09, lc_{2}, d_{2}, null)$\\
3. $command(cla_{3}, ins_{3}, p1_{3}, p2_{3}, 00, 00, Le_{3})$\\
4. $command(cla_{4}, ins_{4}, p1_{4}, p2_{4}, lc_{4}, d_{4}, null)$\\ 
5. $command(00, a4, p1_{5}, p2_{5}, 00, 00, 00)$\\


\normalsize Step 1. Two of the commands are inter-industry ($cla=00$) and three are unknown ($cla_{1}$, $cla_{3}$, $cla_{4}$). We categorise the unknown commands based on their data exchange properties and we assign card (sub-)/functionalities to the inter-industry commands. 
After the application of the  following rules:
\begin{eqnarray*}
1) \lefteqn{\forall     P1, Lc, D ( }\\ 
				\lefteqn{command(00, a4, P1, 09, Lc, D, null) }\\
				& \rightarrow selected(path, D))\\
2)   \lefteqn{\forall     Cla, Ins, P1, P2 ( }\\ 
				\lefteqn{command(Cla, Ins, P1, P2, 00, 00, null) }\\
				& \rightarrow command_{nn}(Cla, Ins, P1, P2, 00, 00, null))\\
3)  \lefteqn{\forall     Cla, Ins, P1, P2, Le ( }\\ 
				\lefteqn{(command(Cla, Ins, P1, P2, 00, 00, Le) }\\
				\lefteqn{\land Le \neq null)}\\
				& \rightarrow command_{ny}(Cla, Ins, P1, P2, 00, 00, Le))\\
4)   \lefteqn{\forall     Cla, Ins, P1, P2, Lc, D ( }\\ 
				\lefteqn{(command(Cla, Ins, P1, P2, Lc, D, null) }\\
				\lefteqn{\land Lc \neq 00 \land D \neq 00)}\\
				& \rightarrow command_{yn}(Cla, Ins, P1, P2, Lc, D, null))	
\end{eqnarray*}
the output of this step is the following model:\\
\small $command_{nn}(cla_{1}, ins_{1}, p1_{1}, p2_{1},00, 00, null)$ $\land$ $selected(path, d_{2})$ \\ $\land$ $command_{ny}(cla_{3}, ins_{3}, p1_{3}, p2_{3}, 00, 00, le_{3})$ $\land$ \\
$command_{yn}(cla_{4}, ins_{4}, p1_{4}, p2_{4}, lc_{4}, d_{4}, null)$ $\land$ $dummy$\footnote{$command_{nn}$ stands for \textit{no data is sent - no data is expected}, $command_{ny}$ stands for \textit{no data is sent - data is expected}, $command_{yn}$ stands for \textit{data is sent - no data is expected} and $dummy$ for all commands that do nothing.}\\
\normalsize The last inter-industry command is a $select$ command that does nothing. Thus, we map  it to $dummy$.
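The Step 1 categorisation can be sketched as follows. The tuple encoding of the trace and the returned labels are our own illustration; note that the fifth command falls into $command_{ny}$ by rule 3, and is only then mapped to $dummy$ by hand, as explained above.

```python
# Sketch (assumption): Step 1 categorisation of APDU commands by their
# data-exchange fields, mirroring rules 1-4 above. Field values are the
# symbolic constants of the trace, encoded as strings.
from typing import NamedTuple, Optional

class Command(NamedTuple):
    cla: str
    ins: str
    p1: str
    p2: str
    lc: str
    data: str
    le: Optional[str]  # None models the 'null' Le field

def categorise(c: Command) -> str:
    # Rule 1: inter-industry SELECT by path (CLA=00, INS=a4, P2=09)
    if c.cla == "00" and c.ins == "a4" and c.p2 == "09":
        return "selected(path, D)"
    # Rule 2: no data sent, no data expected
    if c.lc == "00" and c.data == "00" and c.le is None:
        return "command_nn"
    # Rule 3: no data sent, data expected (Le != null)
    if c.lc == "00" and c.data == "00" and c.le is not None:
        return "command_ny"
    # Rule 4: data sent, no data expected
    if c.lc != "00" and c.data != "00" and c.le is None:
        return "command_yn"
    return "dummy"

trace = [
    Command("cla1", "ins1", "p11", "p21", "00", "00", None),
    Command("00", "a4", "p12", "09", "lc2", "d2", None),
    Command("cla3", "ins3", "p13", "p23", "00", "00", "le3"),
    Command("cla4", "ins4", "p14", "p24", "lc4", "d4", None),
    Command("00", "a4", "p15", "p25", "00", "00", "00"),
]
print([categorise(c) for c in trace])
# → ['command_nn', 'selected(path, D)', 'command_ny', 'command_yn', 'command_ny']
```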

The analysis of the trace is serial: we begin with the first unknown command and continue until the last one. We refer to each unknown command by its $Ins$ field. 

Analysis of $ins_{1}$:
\begin{enumerate}
\item[Step 2.] The set of command mappings, based on the category $ins_{1}$ belongs to (\small $command_{nn}$\normalsize) is:\\ \small $\{select, verify, external\_authenticate\}$.

 \normalsize \item[Step 3.]  We narrow down the  command mappings based on precondition satisfiability: \begin{itemize} 
\item \small $select$: \normalsize we discard this option as we already know how the card implements a select command.
\item \small $verify$: \normalsize as the command does not require any data back from the card, it is used only to check whether verification is required, so it does not affect the commands that follow. 
\item \small $external\_authenticate$: \normalsize the empty body of this command has the same meaning as in the verify command.
\end{itemize} The set of applicable mappings is:\\ \small $\{verify, external\_authenticate\}$. \normalsize Neither command affects the analysis of the later commands, so we map both to \small $dummy$\normalsize.


\end{enumerate}
Analysis of $ins_{3}$:
\begin{enumerate}

\item[Step 2.]  The set of command mappings for $ins_{3}$, based on the category it belongs to (\small $command_{ny}$\normalsize), is:\\ \small $\{select, read\_binary, get\_challenge, $ \\ $read\_record, get\_response, get\_data\}$\normalsize.


\normalsize \item[Step 3.]  We narrow down the command mappings based on precondition satisfiability: 
 \begin{itemize} 
\item \small $select$\normalsize: we discard this option as we already know how the card implements a select command.
\item \small $read\_binary$\normalsize: we discard this option as \small $selected(file, EF)$ \normalsize is not satisfied.
\item \small $get\_challenge$\normalsize: is valid as it does not have any special preconditions.
\item \small $read\_record$\normalsize: we discard this option as \small $selected(file, EF)$ \normalsize is not satisfied.
\item \small $get\_response$\normalsize: is valid as it does not have any special preconditions.
\item \small $get\_data$\normalsize: we discard this option as \small $selected(file, EF)$ \normalsize is not satisfied.
\end{itemize} 
The set of applicable mappings is:\\ \small $\{get\_challenge, get\_response\}$\normalsize.
\item[Step 4.] We search for partially or fully satisfiable (sub-)functionalities. The outcome is:  
\begin{itemize}

\item \small $challenge\_sent(le_{3}, Challenge)$ \normalsize which is satisfied by the model:\\ \small $selected(path, d_{2})$ $\land$ $get\_challenge(le_{3}, Challenge)$
\item \small $get\_response(le_{3}, RD)$ \normalsize which is satisfied by the model:\\ \small $selected(path, d_{2})$ $\land$ $get\_response(le_{3}, RD)$\footnote{\small $selected(path, DF)$ \normalsize is an additional sub-functionality here. The model is satisfiable with or without it.}
\end{itemize}
\end{enumerate}
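The narrowing of Steps 2 and 3 can be sketched as follows. The precondition table is our illustrative reading of the command preconditions discussed above, not the paper's formal rule base.

```python
# Sketch (assumption): Step 3 keeps only the candidate commands whose
# preconditions are satisfied by the facts inferred so far from the trace.
preconditions = {
    "select": None,                       # discarded below: already known
    "read_binary": "selected(file, EF)",
    "get_challenge": None,                # no special precondition
    "read_record": "selected(file, EF)",
    "get_response": None,                 # no special precondition
    "get_data": "selected(file, EF)",
}

def narrow(candidates, facts):
    """Keep commands whose precondition is satisfied by the known facts."""
    kept = []
    for cmd in candidates:
        if cmd == "select":
            continue  # the card's select implementation is already known
        pre = preconditions[cmd]
        if pre is None or pre in facts:
            kept.append(cmd)
    return kept

# Facts known after analysing commands 1-2 of the trace
facts = {"selected(path, d2)"}
print(narrow(list(preconditions), facts))  # → ['get_challenge', 'get_response']
```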

\normalsize
\textit{Case 1. $selected(path, d_{2})$ $\land$ $challenge\_sent(le_{3}, Challenge)$}\\ Analysis of $ins_{4}$:
\begin{enumerate}
\item[Step 2.] The set of command mappings for $ins_{4}$ based on the category it belongs to (\small $command_{yn}$\normalsize) is:\\ \small $\{select, create\_file, update\_binary,$\\ $external\_authenticate, write\_record, append\_record,$\\ $update\_record, put\_data\}$\normalsize.


\item[Step 3.]  We narrow down the  command mappings based on precondition satisfiability: 
\begin{itemize} 
\item \small $select$: \normalsize we discard this option as we already know how the card implements a select command.
\item \small $create\_file$\normalsize : is  valid as \small $selected(path, F)$ \normalsize is satisfied.
\item \small $update\_binary$\normalsize :  we discard this option as \\ \small $selected(file, EF)$ \normalsize is not satisfied.
\item \small $external\_authenticate$\normalsize : is  valid as \\ \small $challenge\_sent(Le, Challenge)$ \normalsize is satisfied.
\item \small $write\_record$\normalsize :  we discard this option as \small $selected(file, EF)$  \normalsize is not satisfied.
\item \small $append\_record$\normalsize:  we discard this option as \small $selected(file, EF)$ \normalsize is not satisfied.
\item \small $update\_record$\normalsize:  we discard this option as \small $selected(file, EF)$ \normalsize is not satisfied.
\item \small $put\_data$\normalsize:  we discard this option as \small $selected(file, EF)$ \normalsize is not satisfied.
\end{itemize} 
The set of applicable mappings is:\\ \small $\{create\_file, external\_authenticate\}$\normalsize .
\item[Step 4.] We search for partially or fully satisfiable (sub-)functionalities. The outcome is:
\begin{itemize}
\item \small $file\_created(d_{2}, d_{4})$, \normalsize which is satisfiable by the model:\\ \small $selected(path, d_{2})$ $\land$ $create\_file(d_{4})$,
\normalsize and is a sub-functionality of \small $store\_data(d_{2}, d_{4})$ \normalsize with the same mappings as before.		

\item \small $external\_authenticated(Challenge, d_{4})$ \normalsize which is satisfiable by the models: \\\small (i) $selected(path, d_{2})$ $\land$ $get\_challenge(le_{3}, Challenge)$ \\$\land$ $external\_authenticate(d_{4})$,\\
(ii) $selected(path, d_{2})$ $\land$ $get\_response(le_{3}, Challenge)$\\ $\land$ $external\_authenticate(d_{4})$\footnote{\small $selected(path, d_{2})$ \normalsize is an additional sub-functionality here. The model is satisfiable with or without it.}\\
\normalsize This is a sub-functionality of \\ \small $ authenticated(Challenge, d_{4})$.
\item \small $read\_data\_sub(card, le_{4}, RD)$ \normalsize which is satisfiable by the model:\\ \small $get\_response(le_{4}, RD)$.
\end{itemize}
\normalsize \item[Step 5.] We match the models for the Log$\_$In function with the inferred ones. We discard \small $store\_data(d_{2}, d_{4})$ \normalsize as it does not match any of the Log$\_$In models (the communication has ended and the authentication process is not satisfied). We discard \small $read\_data\_sub(card, Le, RD)$ \normalsize for the same reason. The only model that matches is:\\
\small $dummy$ $\land$ $selected(path, d_{2})$ $\land$ $authenticated(Challenge, d_{4})$ $\land$ $dummy$\\

\end{enumerate}
\textit{Case 2. $selected(path, d_{2})$ $\land$ $get\_response(le_{3}, RD)$}\\ Analysis of $ins_{4}$:
\begin{enumerate}
\item[Step 2.]  The set of command mappings for $ins_{4}$ based on the category it belongs to (\small $command_{yn}$\normalsize) is:\\ \small $\{select, create\_file, update\_binary, external\_authenticate,$\\ $write\_record, append\_record, update\_record, put\_data\}$\normalsize.

\item[Step 3.]  We narrow down the  command mappings based on precondition satisfiability:
\begin{itemize} 
\item \small $select$\normalsize : we discard this option as we already know how the card implements a select command.
\item \small $create\_file$\normalsize : is valid as before.
\item \small $update\_binary$\normalsize :  we discard this option as \\\small$selected(file, EF)$ \normalsize is not satisfied.
\item \small $external\_authenticate$\normalsize :  is valid as before.
\item \small $write\_record$\normalsize :  we discard this option as \\\small $selected(file, EF)$ \normalsize is not satisfied.
\item \small $append\_record$\normalsize :  we discard this option as \\\small $selected(file, EF)$  \normalsize is not satisfied.
\item \small $update\_record$\normalsize :  we discard this option as \\\small $selected(file, EF)$ \normalsize is not satisfied.
\item \small $put\_data$\normalsize :  we discard this option as \small $selected(file, EF)$ \normalsize is not satisfied.
\end{itemize} 
The set of applicable mappings is: \\\small $\{create\_file, external\_authenticate\}$\normalsize.

\item[Step 4.]  We search for partially or fully satisfiable (sub-)functionalities. The outcome is: 
\begin{itemize}
\item \small $file\_created(d_{2},d_{4})$ \normalsize is satisfiable by the model:\\ \small $selected(path, d_{2}) \land create\_file(d_{4})$,\\
\normalsize which is a sub-functionality of \small $store\_data(d_{2}, d_{4})$ \normalsize with the same mappings as before.

\item \small $external\_authenticated(Challenge, d_{4})$ \\ \normalsize is partially satisfiable by the model \\ \small $external\_authenticate(d_{4})$, \normalsize but \small $challenge\_sent(Le, Challenge)$ \normalsize is not satisfied, so we discard it. 
\end{itemize}
  


\item[Step 5.] We match the models for the Log$\_$In function with the inferred ones. We discard \small $store\_data(d_{2}, d_{4})$ \normalsize as it does not match any of the Log$\_$In models (the communication has ended and the authentication process is not satisfied).
\end{enumerate} 
To conclude, the analysis showed that Log$\_$In is implemented as (\textit{Case 1} outcome):\\
\small $dummy$ $\land$ $selected(path, d_{2})$ $\land$ $authenticated(Challenge, d_{4})$ $\land$ $dummy$\\
\normalsize which matches exactly with the  actual implementation.
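The Step 5 comparison against the abstract Log$\_$In models can be sketched as below. Encoding the models as sequences of predicate names, with argument unification omitted, is our simplification of the actual matching.

```python
# Sketch of Step 5: compare an inferred model against an abstract one.
# Models are simplified to sequences of predicate names; the argument
# unification performed in the actual analysis is omitted here.
def matches(inferred, abstract):
    """A model matches if its non-dummy steps equal the abstract steps."""
    return [p for p in inferred if p != "dummy"] == abstract

log_in_abstract = ["selected", "authenticated"]

case1 = ["dummy", "selected", "authenticated", "dummy"]  # Case 1 outcome
case2 = ["dummy", "selected", "store_data", "dummy"]     # discarded candidate

print(matches(case1, log_in_abstract))  # True
print(matches(case2, log_in_abstract))  # False
```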

















\section{Evaluation}
\label{sec:evaluation}

We have manually applied our proposed methodology to five commercially available smart-cards.  We tested the RSA PKCS\#$11$ Log$\_$In and Generate$\_$Key functions. We were aware of the implementation of each smart-card from the beginning but we treated them as unknown during the analysis. We compared the results we obtained from our analysis with the actual implementation. 

For the Log$\_$In function the results are presented in Table \ref{log-in}. In four out of five cases we inferred a unique model which matched exactly the actual implementation. In one out of five cases we inferred two models for the same abstraction: both models captured the types and the sequence of the operations the card performed, \textit{e.g.}, log-in data retrieval, followed by session authentication, followed by PIN authentication, and so on, but not the implementation details of these operations.
For the Generate$\_$Key function the results are presented in Table \ref{generate-key}. We tested only three cards, as the rest generated the key on the library side. In two out of three cases we inferred a unique model which matched exactly the original implementation. In one out of three cases we inferred two models: both captured the abstraction of the original implementation, but neither matched exactly.



\begin{table}[h]
  \centering 
  \begin{tabular}{  l  c  l }
\small Smart-Card Name & \small Success of Analysis  & \small Inferred Model\\ \hline
 \small Aladdin eToken Pro & \small yes  &\small  exact \\
 \small Athena ASE Key USB  & \small yes &\small abstraction \\
 \small Siemens CardOS V3.4b  &\small  yes & \small exact  \\
 \small RSA SecureID 800  & \small yes &\small  exact \\
 \small \specialcell{Safesite Classic\\ TPC IS V1 } & \small yes &\small  exact \\
 
\end{tabular}
  \caption{RSA PKCS\#$11$: Log$\_$In function: analysis results.}
  \label{log-in}
\end{table}




\begin{table}[h]
  \centering 
  \begin{tabular}{  l  c  l}
\small Smart-Card Name & \small  Success of Analysis & \small Inferred Model\\ \hline

\small Aladdin eToken Pro & \small yes &\small exact\\
\small Athena ASE Key USB  & \small yes & \small abstraction \\
 \small SafeNet iKey 4000& \small yes &\small  exact\\
 
\end{tabular}
  \caption{RSA PKCS\#$11$: Generate$\_$Key function: analysis results.}
  \label{generate-key}
\end{table}


In all tested cases we inferred at least a high-level model of the actual implementation. In some cases, the analysis outcome was more than one model, each capturing an abstraction of the implementation. We do not consider this a failure, since the analysis still provided a high-level view of the implementation; however, it shows the necessity of incorporating repair techniques to refine the analysis outcome.
Such repairs could include permuting the commands, or adding, deleting, or substituting commands.
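Such repair operators could be enumerated mechanically. A minimal sketch follows; the model and the vocabulary of replacement commands are hypothetical examples, and in practice the candidate set would be pruned by the precondition rules rather than enumerated exhaustively.

```python
# Sketch (assumption): generate candidate repairs of a near-miss model by
# permuting, deleting, adding, or substituting commands.
from itertools import permutations

def repairs(model, vocabulary):
    out = set()
    for p in permutations(model):           # permuting the commands
        out.add(p)
    for i in range(len(model)):             # deleting a command
        out.add(tuple(model[:i] + model[i+1:]))
    for i in range(len(model) + 1):         # adding a command
        for cmd in vocabulary:
            out.add(tuple(model[:i] + [cmd] + model[i:]))
    for i in range(len(model)):             # substituting a command
        for cmd in vocabulary:
            out.add(tuple(model[:i] + [cmd] + model[i+1:]))
    return out

model = ["selected", "get_challenge", "external_authenticate"]
vocab = ["get_response", "verify"]
candidates = repairs(model, vocab)
print(("selected", "verify", "external_authenticate") in candidates)  # True
```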

\section{Conclusions and Future Work}
\label{sec:conclusions}

We have presented our methodology towards the automated analysis of low-level cryptographic protocols. Our hypothesis is that, by providing a formal model of ISO $7816$ and incorporating knowledge-repair techniques, it is possible to infer an unknown APDU implementation.
We have presented manual experiments towards the analysis of the APDU implementation of the PKCS\#$11$ Generate$\_$Key and Log$\_$In functions. The results we obtained are promising. We have shown that it is possible to narrow down the search space for the unknown implementations, using as a guide the abstract models and rules we have created. The work presented in this paper is preliminary and is based on one year's worth of research in the context of a larger project exploring the use of inference for analysing low-level cryptographic protocols. The implementation of the system is currently ongoing. One of the challenges is that even if the analysis succeeds, it is possible to infer a non-exact model. However, our experiments have shown that the correct model exists in the search space. One of our future directions is to specify tests for the accuracy of the inferred models and to identify appropriate repairs to refine the inferred model.


\bibliographystyle{abbrv}
\bibliography{references}



\end{document}
