\documentclass[12pt, a4paper, oneside]{report}

\usepackage{graphicx}

\begin{document}

\begin{titlepage}

\setlength{\parindent}{0mm}

{ \vspace*{1.5cm} }
{\LARGE\bfseries \hspace{2cm} Flower --- Workflow Framework}

{\small \hspace{2.1cm} Design Specification}

\vspace{-1.9cm}
\includegraphics[width=2cm,keepaspectratio=true]{logo.pdf}

\vfill

\begin{flushright}
\begin{minipage}{7cm}
{\em Author:} \hfill Dmitry Bratus\\
{\em Date:} \hfill \today \\
{\em Document version:} \hfill 0.1
\end{minipage}
\end{flushright}

\setlength{\parindent}{7mm}

\begin{abstract}

The traditional approach to business process automation is to model business processes as state machines called workflows. Each state is associated with an activity, i.e.\ a sequence of actions to perform in that state. Executing an activity has useful side effects, and its result is a transition to another state. With this approach, workflows that are granular enough to provide good traceability are too complex to maintain. As a result, developers have to write bulky activities padded with error handling and logging code to achieve reliability and traceability.

Another approach is to use special purpose languages like BPEL. In theory, they allow workflows to be coded as procedures in a general purpose language. In practice, such languages are poorly readable (being XML-based), insufficiently flexible and proprietary, which ties an implementation to a vendor. Testability is usually not a strong point of such languages either.

Flower combines the advantages of both approaches. It allows workflows to be coded straightforwardly, as in a structured programming language, and it transparently decorates every statement with logging, error handling and transaction support logic. At the same time, a workflow is normal C\# code, so it is statically compiled and any CLR code can be invoked from a workflow naturally.
\end{abstract}

\end{titlepage}

\tableofcontents

\chapter*{Introduction}
\addtocounter{chapter}{1}
\addcontentsline{toc}{chapter}{Introduction}

\section{Components Overview}

\begin{description}
\item[Directory] \hfill \\ The directory is a hierarchical persistent store of entries, each of which is a Flower resource: a workflow, an assembly, a set, a process etc.

\item[Processing nodes] \hfill \\ The processing nodes are the services executing the running workflows (processes).

\item[Clients] \hfill \\ The clients are the applications and the services initiating the workflows, communicating with the processes and managing the directory.
\end{description}

\begin{figure}
\includegraphics[width=13cm,keepaspectratio=true]{architecture.pdf}
\caption{Flower components.}
\end{figure}

\section{Design Goals}

\begin{description}
\item[Reliability] \hfill \\ Any activity in Flower is a transaction. Flower uses the .NET transaction infrastructure (System.Transactions) and WCF transaction propagation to achieve consistency between the process state and the data manipulated by the process via the services.

\item[Traceability] \hfill \\ Flower persists the process states serialized to XML. Any state can be extracted and viewed in a human-readable form.

\item[Control] \hfill \\ Flower allows a process to be stopped at any activity, its state to be changed, and the process to be resumed.

\item[Testability] \hfill \\ Components of Flower depend on each other only through interfaces and can therefore be mocked and tested in isolation.

\item[Security] \hfill \\ The directory authenticates requests and restricts the access to the entries based on the internal roles associated with the domain user names.

\item[Openness] \hfill \\ The Flower API is distributed under Apache license. Vendors can implement free or proprietary components and extensions of Flower.

\item[Ease of use] \hfill \\ Flower is:
    \begin{itemize}
    \item Easy to learn, because it is based on clear and well known concepts and technologies.
    \item Easy to maintain, because all the configuration and the data is stored in a single place---the directory.
    \end{itemize}
\end{description}

\chapter*{Concepts and Terms}
\addtocounter{chapter}{1}
\setcounter{section}{0}
\addcontentsline{toc}{chapter}{Concepts and Terms}

\section{Workflows and Processes}
\label{sec:wf-and-proc}

\subsection{Process Definition}

A {\em workflow} is a definition of a process in Flower, just as an executable binary is a definition of a process in an OS. It consists of:

\begin{description}
\item[Global variables declaration] \hfill \\ The code defining the global variables.

\item[Initializer] \hfill \\ The code making the initial state of a process.

\item[Process] \hfill \\ The code making the logic of a process.
\end{description}

A workflow is defined by a class implementing the {\ttfamily IWorkflow} interface; this class is called the {\em process definition class}. The interface provides three methods: the global variables declaration method, the initializer and the method defining the logic.

The global variables declaration method accepts the variables stack builder and uses it to define the types and names of the global variables (see~\ref{sec:proc-def}). This method is always called before the initializer and the logic definition method, so it is guaranteed that the global variables are initialized before use.

{\em The initializer is executed synchronously by the host starting the process.} The initializer accepts an input message of the process and returns an output message (both are data contract serializable CLR types). It also accepts the Flower services provider ({\ttfamily IServicesProvider}), so that the initializer can obtain service instances from the directory, and {\ttfamily ILog} to write to the log. It is important to note that, since the initializer runs inside the Flower client session, the directory is used as the source of unresolved assemblies. In other words, {\em assemblies dynamically loaded within the initializer will be searched for in the directory.}

The method defining the process logic, unlike the initializer, is not a real execution of the process---it just builds the activities sequence using the process builder. A processing node calls this method every time it needs to build the activities sequence and expects the sequence to be the same on every call. In other words, {\em the process logic definition method must be deterministic}.
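For illustration, a minimal process definition class might look as follows. All identifiers below---the builder interfaces, member names and activity API---are assumptions made for this sketch, not the actual Flower API:

\begin{verbatim}
// Hypothetical API; the real interface members may differ.
public class OrderWorkflow : IWorkflow
{
    // Declares the types and names of the global variables.
    public void DeclareVariables(IVariablesStackBuilder vars)
    {
        vars.Declare<int>("orderId");
    }

    // Runs synchronously in the starting client; builds the
    // initial state from the input message.
    public OrderAccepted Initialize(OrderRequest input,
        IServicesProvider services, ILog log)
    {
        log.Write("Accepting order.");
        return new OrderAccepted();
    }

    // Must be deterministic: only builds the activities
    // sequence, it is not a real execution of the process.
    public void DefineLogic(IProcessBuilder process)
    {
        process.Do("LogOrder", ctx =>
            ctx.Log.Write("Order " + ctx.Get<int>("orderId")));
    }
}
\end{verbatim}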

\subsection{Process Scheduling}

Process scheduling is performed by the directory client. When the client starts a process, it does the following:

\begin{enumerate}
\item Gets the workflow entry from the directory to obtain the name of the process definition class (see~\ref{sec:dir-entries}).

\item Creates an instance of the process definition class loading the assembly containing it if necessary.

\item Invokes the initializer to make the initial state of the process.

\item Invokes a {\em scheduling function} to select a processor.

\item Does the following changes in the directory transactionally:
    \begin{enumerate}
    \item Creates a process entry populating the name of the process definition class with that taken from the workflow entry.
    \item Creates the initial state entry.
    \item Creates a link to the process entry in the corresponding folder under {\ttfamily `/Processes'}.
    \end{enumerate}
\end{enumerate}
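The client-side scheduling sequence above can be sketched as follows. Every type and member name here is an assumption made for illustration; only the ordering of the steps comes from this specification:

\begin{verbatim}
// Hypothetical client code; names are illustrative.
public string StartProcess(string workflowPath, object input)
{
    WorkflowEntry wf = directory.Get<WorkflowEntry>(workflowPath);
    IWorkflow definition = (IWorkflow)Activator.CreateInstance(
        ResolveType(wf.DefinitionType)); // may load an assembly
    object output = RunInitializer(definition, input);
    ProcessorEntry processor =
        SelectProcessor(wf.PreferredProcessors);

    ProcessEntry process;
    using (var tx = new TransactionScope())
    {
        process = directory.CreateProcess(
            processor, wf.DefinitionType, input, output);
        directory.CreateInitialState(process);
        directory.CreateLink("/Processes" + wf.Path, process.Id);
        tx.Complete();
    }
    return process.Id;
}
\end{verbatim}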

A process can have the following statuses:

\begin{enumerate}
\item Pending.
\item Waiting.
\item Broken.
\item Running.
\item Finished.
\end{enumerate}

The current status of a process depends on where the process is placed under its processor entry (see~\ref{sec:processor-structure}). The subfolders of a processor represent lists of processes in different states. If a process is in the pending list of a processor, it is waiting to be picked up by a processing node; if it is in the running list, it is being executed; if it is in the waiting list, it is waiting for a message to arrive or for free space in a subset. Processes that generated too many errors on the same activity and exceeded the maximum number of failures go to the broken list; the finished list contains the finished processes.

When a process finishes, the following actions are performed:

\begin{enumerate}
\item All processes in the waiters list of the finished process are placed to the pending lists of their processors.
\item The process is moved to the finished list of its processor.
\end{enumerate}

Processors are responsible for removing finished processes. The retention strategy depends on the processor implementation and settings.

\subsection{Process Execution}

To execute a process, its processor needs the activities sequence and the state of the process. To obtain the activities sequence, the processor creates an instance of the process definition class and invokes the process logic definition method. The state is loaded from the directory and deserialized.

A process state contains:

\begin{itemize}
\item The variables stack.
\item The current activity pointer.
\item Failures count.
\item The list of subsets and processes the process is waiting for.
\item Processing-node-specific information.
\end{itemize}

In the initial state, the variables stack is populated with the global variables and the current activity pointer is set to zero (i.e.\ pointing to the first activity). The processor executes the activities pointed to by the activity pointer, passing the state to them; the activities change the variables and the pointer. Some activities can persist the state to the directory.
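The execution model can be summarized by the following loop. This is only a sketch under assumed type names; an actual processor also persists states, handles failures and hibernates waiting processes:

\begin{verbatim}
// Hypothetical processor loop; names are illustrative.
IWorkflow definition = CreateDefinitionInstance(processEntry);
IActivity[] activities = BuildSequence(definition); // deterministic
ProcessState state = LoadState(processEntry);   // from directory

while (state.ActivityPointer < activities.Length)
{
    IActivity activity = activities[state.ActivityPointer];
    using (var tx = new TransactionScope())
    {
        // The activity changes the variables stack and
        // advances (or redirects) the activity pointer.
        activity.Execute(state);
        tx.Complete();
    }
}
\end{verbatim}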

\subsection{Logging}

Every Flower process can emit a log. A log is a special set of string messages automatically created under every process entry. The lifetime of a log is the lifetime of its process entry.

\subsection{Waiting}
\label{sec:waiting}

A process can be hibernated by a processor until one of the following events occurs:

\begin{itemize}
\item A process has finished.
\item A message has been put to a subset.
\item A message has been removed from a subset.
\end{itemize}

To hibernate a process the following is done in a transaction:

\begin{enumerate}
\item The current state of the process is saved.
\item The process is moved from the running folder to the waiting folder of its processor.
\item A link to the process is created in the waiters list of the awaited resource (see~\ref{sec:dir-structure}).
\end{enumerate}

\subsection{Suspension}

A process can suspend itself or be suspended until some moment in time. Suspended processes are placed in the waiting list of their processor. In addition, links to the suspended processes are created in a special folder of the processor entry. The links are named so that:

\begin{enumerate}
\item The resume time can be extracted from the name.
\item Sorting by name is equivalent to sorting by resume time (the exact format depends on the processor implementation).
\end{enumerate}

A processor sorts the links by name, takes the first and resumes the corresponding process when the time comes, then takes the next link and so forth. Naming rule (2) implies that each subsequent link has a later resume time than the previous one.

Infinite suspension is implemented as suspension for the maximum feasible period of time.
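One naming scheme satisfying rules (1) and (2) is to prefix the link name with a zero-padded UTC tick count, so that lexicographic order matches chronological order. This particular format is an illustrative assumption, since the exact format is processor-specific:

\begin{verbatim}
// "0634032000000000000-42" sorts chronologically as text.
static string SuspensionLinkName(DateTime resumeAtUtc,
                                 string processId)
{
    return resumeAtUtc.Ticks.ToString("D19") + "-" + processId;
}
\end{verbatim}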

\section{Sets and Messages}
\label{sec:sets-and-messages}

Sets are collections of messages that can be enumerated in different orders. All messages within a set have the same type. Sets can be divided into named subsets, each of which can in turn be divided into subsets etc. A set may limit the maximum number of messages within any of its subsets.

A subset supports the following operations:

\begin{description}
\item[Put] \hfill \\ Puts a message into a subset. Messages can optionally be named during the put and ordered by those names during enumeration. If the maximum number of messages is reached, the operation fails. If the operation is invoked by a process, the process can wait until one of the messages is removed.
\item[Get range] \hfill \\ Reads a range of messages in a specific order. The items can be requested in the following orders (ascending or descending):
    \begin{itemize}
    \item Default order i.e. the order in which the messages were put to the subset.
    \item By put timestamp.
    \item By last update timestamp.
    \item By message name.
    \end{itemize}
\item[Update] \hfill \\ Updates an individual message in a set. This operation changes only the message itself, its name and the last update timestamp.
\item[Remove] \hfill \\ Removes an individual message from a set.
\item[Count] \hfill \\ Counts the number of messages in a subset.
\end{description}
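The operations above could be used as follows. The subset API shown here ({\ttfamily ISubset} and its members) is an assumption made for illustration:

\begin{verbatim}
// Hypothetical subset API; names are illustrative.
ISubset orders = sets.Open("/Sets/Shared/Orders/" + processId);

orders.Put(new OrderLine { Sku = "A-100" }, name: "line-001");
long count = orders.Count();
IList<OrderLine> page = orders.GetRange<OrderLine>(
    SubsetOrder.ByPutTimestamp, descending: false,
    skip: 0, take: 10);
orders.Update("line-001", new OrderLine { Sku = "A-101" });
orders.Remove("line-001");
\end{verbatim}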

A subset is created when the first message is put into it or when there is at least one waiter; a subset is deleted as soon as the last message is removed and there are no waiters. Subsets without messages or waiters cannot exist.

The next sections describe various use cases of the sets.

\subsection{Cross-process Communication}

Processes can send messages to each other by using workflow local (see~\ref{sec:proc-def}) or shared sets. In the most general case, the first level subsets have names matching the process ids. Thus, knowing the process id of a recipient, another process can send it a message.

\subsection{Passing Large Datasets to Processes}

It may be necessary to provide a process with a dataset exceeding any reasonable message size limit. In such cases, a client application can build a subset and pass its path to the process.

\subsection{Communication With a User}

If a process needs to get information or a confirmation from a user, messages and sets can be used. The process puts a message into a well known set. An application picks up the message and displays a form to the user. When the user submits the form, the application transforms it into a message and passes it to the process via a set well known to the process (its local set, for example).

\subsection{Resource Locking}

One special use case for sets is resource locking. A subset in a locking set denotes a resource; a message is a lock. The name of a lock is the name of its owner. Locks have the {\ttfamily System.Int32} message type, and their value is the number of times the owner has applied the lock. Locks may be shared so that more than one owner can hold a lock simultaneously.

To apply a lock, Flower does the following in a transaction:

\begin{enumerate}
\item Requests all messages that are already in the locking set.
\item If any of the messages has a name that doesn't match the owner's name and the intended lock cannot be shared with the current owner, the locking attempt fails.
\item Otherwise, either a new message with locks count 1 is created or the locks count of the existing lock is incremented.
\end{enumerate}

To release a lock, Flower does the following in a transaction:

\begin{enumerate}
\item Requests the lock message by its path.
\item Decrements the locks count.
\item If the locks count is zero, removes the message.
\end{enumerate}

In both cases the transactions have repeatable-read isolation level.
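Both transactions can be sketched over a hypothetical subset API. All identifiers below are assumptions; only the protocol itself comes from the steps above:

\begin{verbatim}
// Sketch of the locking protocol; names are illustrative.
bool TryLock(ISubset resource, string owner, bool shared)
{
    using (var tx = NewRepeatableReadScope())
    {
        foreach (Message<int> m in resource.GetAll<int>())
            if (m.Name != owner && !shared)
                return false;             // held by another owner
        Message<int> own = resource.Find<int>(owner);
        if (own == null)
            resource.Put(1, name: owner); // first application
        else
            resource.Update(owner, own.Value + 1);
        tx.Complete();
        return true;
    }
}

void Unlock(ISubset resource, string owner)
{
    using (var tx = NewRepeatableReadScope())
    {
        Message<int> own = resource.Find<int>(owner);
        if (own.Value == 1) resource.Remove(owner);
        else resource.Update(owner, own.Value - 1);
        tx.Complete();
    }
}
\end{verbatim}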

\section{Services}
\label{sec:svc-config}

The notion of a service in Flower is wider than usual. A service in Flower is not only a WCF or web service; it may be literally any object. To instantiate services, Flower uses Spring.NET IoC containers configured via XML stored in the service container directory entries.

Each container is joined with all the common containers from a special folder in the directory. The common containers may keep shared components like WCF behaviors and bindings while the individual containers keep the services, their endpoints and specific helpers.

A service is identified by a path that consists of the service container directory path and the id of the object within the container. To create an instance of a service the Flower client obtains the container XML configuration from the directory, builds the container and requests the service instance from it by id.

Visit {\ttfamily http://www.springframework.net} for more information on the Spring.NET IoC container.
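A service container entry might hold a configuration along these lines; the object names and types are invented for the example:

\begin{verbatim}
<objects xmlns="http://www.springframework.net">
  <!-- CrmBinding may come from a common container
       under /Services/Flower. -->
  <object id="CrmService"
          type="MyCompany.Crm.CrmClient, MyCompany.CRM">
    <property name="Binding" ref="CrmBinding"/>
  </object>
</objects>
\end{verbatim}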

\chapter*{Components}
\addtocounter{chapter}{1}
\setcounter{section}{0}
\addcontentsline{toc}{chapter}{Components}

\section{Directory}

The directory is the centralized data storage containing the assemblies, configuration and data of a Flower instance. {\em One Flower instance has one logical directory.}

The directory is a hierarchy of entries of the following types:

\begin{itemize}
\item Folder.
\item Assembly.
\item Service container.
\item Workflow.
\item Process.
\item State.
\item Set.
\item Message.
\item Processor.
\item Role.
\item Link.
\end{itemize}

\subsection{Directory Entries}
\label{sec:dir-entries}

Entries of all types share the following common properties:

\begin{description}
\item[ID] \hfill \\ An identifier unique within the directory.

\item[Name] \hfill \\ A name identifying entries within a path.

\item[Creation timestamp] \hfill \\ UTC date and time when the entry was created.

\item[Last update timestamp] \hfill \\ UTC date and time when the entry itself was last modified or a direct child was added or removed.

\item[Access control list] \hfill \\ A list of roles with flags indicating the access type---read or write (see~\ref{sec:dir-security}).
\end{description}

{\em The entry ids are assigned by the directory and their semantics depend on the directory implementation, but all ids must start with the `\#' symbol to be distinguishable from paths because, in the directory service contract, an id can be used wherever a path can.}

\subsubsection{Folders}

Folders are general containers of entries. They have only common properties.

\subsubsection{Assemblies}

The directory stores all the assemblies required to execute workflows: the assemblies containing the service classes, helpers and all their dependencies. In addition to the common properties, an assembly entry's payload is the binary image of the assembly.

\subsubsection{Service containers}

Service container entries store Spring.NET IoC container XML configurations.

\subsubsection{Workflows}

A workflow is a definition of a process. It has the following properties:

\begin{description}
\item[Definition type] \hfill \\ Full type name of the process definition class (see~\ref{sec:wf-and-proc}).

\item[Preferred processors] \hfill \\ A list of paths to the preferred processors or to the folders under which the preferred processors should be searched for. The scheduler takes all the specified processors together with all the processors under the specified paths and selects one of them; if no path is specified, the scheduler selects among all registered processors.
\end{description}

\subsubsection{Processes}

A process entry represents a running workflow (for more details see~\ref{sec:wf-and-proc}). It has the following properties:

\begin{description}
\item[Workflow] \hfill \\ Full type name of the process definition class (see~\ref{sec:wf-and-proc}).

\item[Input message] \hfill \\ The message passed to the initializer of the workflow.

\item[Output message] \hfill \\ The message returned by the initializer of the workflow.
\end{description}

\subsubsection{States}

States contain XML serialized data that is enough for a processing node to resume execution of a process from the position where it was suspended.

\subsubsection{Sets}

Sets are containers of messages divided into subsets (see~\ref{sec:sets-and-messages}). A set entry is a definition of a set. It has the following properties:

\begin{description}
\item[Message type] \hfill \\ Full type name of the message type of the set.

\item[Capacity] \hfill \\ Maximum number of messages in any single subset of the set. Zero denotes unlimited.
\end{description}

\subsubsection{Messages}

Message entries contain XML serialized messages of the message type of the containing set.

\subsubsection{Processors}

Each processing node in Flower has a corresponding processor entry in the directory. Processor entries contain links to the pending, waiting, broken, suspended and running processes. The processing nodes monitor their entries to pick up the assigned processes.

A processor entry may contain a notification endpoint configuration. If it is specified, the directory service can notify the processing node about changes in its entry (some directory implementations may not support this behavior).

\subsubsection{Roles}

Role entries contain a list of domain user names included into the role (see~\ref{sec:dir-security}).

\subsubsection{Links}

Links are aliases of other entries in the directory. A link entry contains the id of another entry.

\subsection{Directory Structure and Naming}
\label{sec:dir-structure}

The directory has a predefined structure. It has a single root with the following folders:

\begin{itemize}
\item Assemblies.
\item Services.
\item Workflows.
\item Processes.
\item Processors.
\item Sets.
\item Roles.
\end{itemize}

As the names imply, each of these folders contains entries of a specific type. The entries are organized into sub-folders with the structure described in the next sections.

\subsubsection{Assemblies}

The {\ttfamily Assemblies} folder in Flower is an analog of the CLR GAC and therefore has a similar structure. The first level consists of folders whose names match the assembly names. The second level consists of assembly entries whose names match the assembly versions. Among them there is a link named `Latest' pointing to the entry with the latest version of the corresponding assembly.

Thus, the {\ttfamily Assemblies} folder looks like this:

\begin{verbatim}
/Assemblies
  /MyCompany.CRM
    /Latest
    /1.0.0.0
    /1.1.0.0
  /MyCompany.OrderManagement
    /Latest
    /1.0.0.0
    /1.1.0.0
    ...
  ...
\end{verbatim}

\subsubsection{Services}

The {\ttfamily Services} folder has a free structure. Its special {\ttfamily Flower} sub-folder contains the common service containers. The Flower client joins all the service containers in this folder with each of the other containers when a service is instantiated. Thus, you can reference objects from the containers under {\ttfamily `/Services/Flower'} in any other container.

\subsubsection{Workflows}

The structure of the {\ttfamily Workflows} folder is defined by the {\ttfamily Path} attributes of the process definition classes. When a workflow is imported, its path is created if it doesn't exist, and the name of the class becomes the name of the workflow entry.

For example, for a workflow class {\ttfamily MyWorkflow}, the following {\ttfamily Path} values will give the following global paths:

\begin{verbatim}
'' :
    '/Workflows/MyWorkflow'
'MySystem' :
    '/Workflows/MySystem/MyWorkflow'
'MySystem/MySubsystem' :
    '/Workflows/MySystem/MySubsystem/MyWorkflow'
\end{verbatim}
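Under this scheme, the second mapping in the example above would come from a declaration like the following; the exact attribute signature is an assumption based on the text:

\begin{verbatim}
[Path("MySystem")]
public class MyWorkflow : IWorkflow
{
    // Imported as /Workflows/MySystem/MyWorkflow.
    // IWorkflow members omitted.
}
\end{verbatim}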

\subsubsection{Processes}

The {\ttfamily Processes} folder repeats the structure of the {\ttfamily Workflows} folder. For each workflow entry, a folder with the same path is created in the {\ttfamily Processes} folder. This folder contains links to the processes of the corresponding workflow. The name of a process link entry matches the process entry id.

\subsubsection{Processors}
\label{sec:processor-structure}

The {\ttfamily Processors} folder contains the processor entries. The subfolders of a processor entry (except {\ttfamily Suspended}) contain process entries. The folder a process is in determines the state of the process.

\begin{description}
\item[Pending] \hfill \\ The processes scheduled for execution.
\item[Running] \hfill \\ The processes executed by the corresponding processing node.
\item[Waiting] \hfill \\ The processes waiting for some event (see~\ref{sec:waiting}).
\item[Suspended] \hfill \\ Links to the suspended processes (special folder).
\item[Broken] \hfill \\ The processes stopped due to errors.
\item[Finished] \hfill \\ The finished processes.
\end{description}

A processor entry may have any name; its name is given by an administrator.

A process entry works as a folder for the state entries. The names of the state entries match their ids.

The {\ttfamily Waiters} folder under a process entry contains links to the processes to be resumed as the containing process is finished (see \ref{sec:waiting}).

\subsubsection{Sets}

The {\ttfamily Sets} folder contains two sub-folders: {\ttfamily Local} and {\ttfamily Shared}.

The {\ttfamily Local} folder contains the local sets of the processes. For each process, there is a folder under {\ttfamily /Sets/Local} having the process id as its name. Each of these folders contains the sets defined as local in the process's workflow definition class. A special set of string messages named {\ttfamily Log} is created for each process: this is the log of the process.

The {\ttfamily Shared} folder contains sets which are not associated with any particular process. The folder has free structure.

Subsets are represented by the folders under the set entry. The structure of the folders repeats the structure of the subsets. The subset folders contain message entries.

Any subset folder may have two special sub-folders. They contain back-links to the waiting processes.

\begin{description}
\item[PutWaiters] \hfill \\ The processes waiting to put a message.
\item[GetWaiters] \hfill \\ The processes waiting to get a message.
\end{description}

For more information see~\ref{sec:waiting}.

\subsubsection{Roles}

The {\ttfamily Roles} folder contains the hierarchy of roles. Each role contains zero or more nested roles (see~\ref{sec:dir-security}).

\subsection{Directory Operations}

The directory provides the following operations:

\begin{itemize}
\item Map an id to a path.
\item Get a single entry by exact path or id.
\item Get multiple entries of a specified type by a path wildcard.
\item Get the direct children of an entry, with the option to specify the sort order and the first and last entry indices.
\item Count the direct children of an entry.
\item Get an entry's n-th ancestor of a specific type.
\item Check whether an entry with a specified path or id exists and is newer than a specified timestamp.
\item Save a single entry as a child of a specified entry (create it if new, update it if existing).
\item Move an entry from one parent to another.
\item Delete a single entry.
\item Run a script, with the option to specify input parameters and obtain output parameters (see~\ref{sec:dir-scripting}).
\end{itemize}
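These operations could map onto a service contract along the following lines; every member name and signature here is illustrative, not the actual Flower contract:

\begin{verbatim}
// Hypothetical directory contract; names are illustrative.
[ServiceContract]
public interface IDirectory
{
    [OperationContract] string GetPath(string id);
    [OperationContract] Entry Get(string pathOrId);
    [OperationContract] Entry[] Find(string wildcard,
        EntryType type);
    [OperationContract] Entry[] GetChildren(string pathOrId,
        SortOrder order, int first, int last);
    [OperationContract] int CountChildren(string pathOrId);
    [OperationContract] Entry GetAncestor(string pathOrId,
        EntryType type, int n);
    [OperationContract] bool ExistsNewerThan(string pathOrId,
        DateTime timestamp);
    [OperationContract] string Save(string parentPathOrId,
        Entry entry);
    [OperationContract] void Move(string pathOrId,
        string newParentPathOrId);
    [OperationContract] void Delete(string pathOrId);
    [OperationContract] ScriptResult RunScript(string script,
        ScriptParameters input);
}
\end{verbatim}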

\subsubsection{Path Wildcard Search}

The directory supports efficient entry search by wildcard. The wildcard symbol `{\ttfamily *}' may appear either at the beginning or at the end of a path component. A path component consisting of two wildcard symbols matches any number of path components.

For example:

\begin{verbatim}
/Processes/OrderManagement/123 : Exact path to an entry with
                                 the id 123.
/Processes/OrderManagement/*   : All entries in a folder.
/Processes/**/123              : An entry with the id 123
                                 wherever it is under the
                                 `Processes' folder.
/**/Internal*                  : All entries with a name
                                 starting with `Internal'.
\end{verbatim}

\subsection{Directory Security}
\label{sec:dir-security}

The directory authenticates requests and restricts the access to the entries based on the internal hierarchy of roles and the access control lists (ACL) of the entries.

A role contains a list of domain users granted the permissions of the role. When a user is authenticated, the directory collects the roles the user belongs to and applies the permissions based on this role set.

A role may have nested roles. If a permission is given to a role, it is given to all its nested roles. Thus, roles at higher levels represent more general security groups while nested roles represent more specific ones. For example, the standard `Application' role may be granted permissions to start common workflows while its nested roles like `Application/CRM', `Application/OrderManagement' etc.\ may be given permissions to start the workflows specific to the corresponding system.

There are the following standard roles:

\begin{description}
\item[All] \hfill \\ This is the guest role which implicitly includes all users.
\item[Administrator] \hfill \\ This is the super-user role. It always has all permissions for all entries in the directory even if they have empty ACL. Thus, it makes no sense to create nested roles of this role.
\item[Application] \hfill \\ This is the default role for all client applications. It is highly recommended to create roles for particular applications as nested to this role.
\item[Processor] \hfill \\ This is the default role of all processing nodes.
\end{description}

Any directory entry has an access control list that consists of key-value pairs where the key is a role name and the value is a bitmask of the flags---one for each permission. There are the following permissions:

\begin{description}
\item[Read] \hfill \\ Allows seeing the entry itself and all its descendants. For an identity having no read permission for an entry, the directory behaves as if the entry doesn't exist, except when the client attempts to create a descendant entry. If the parent entry really doesn't exist, the operation may succeed in some cases (see~\ref{sec:dir-service-int}), but if the client has no read permission for the parent, the operation always fails.
\item[Write] \hfill \\ Allows updating and deleting the entry. When an identity attempts to delete an entry, the directory checks permissions only for the entry being deleted and not for its descendants. Therefore, restricting writes on the descendant entries does not prevent their recursive deletion if the identity has write permission for an ancestor.
\item[Create children] \hfill \\ Allows creating children of the entry.
\end{description}

The Flower client grants the following permissions by default:

\begin{itemize}
\item The root folder and its direct children allow read to `All'.
\item Assemblies, Services and Workflows folders are read-only for `Application' and `Processor'.
\item The processes folders containing the process entries are readable-writable for `Application'. Process entries and states are read-only for `Application' and readable-writable for `Processor'. The namespace folders are read-only for `Application' and `Processor'.
\item The roles `Application' and `Processor' are granted read and create-children permissions for the sets. Subsets, messages and waiters folders are readable for those who may read the set and writable for those who may create children of the set.
\item Processor entries are read-only for `Application' and `Processor'. Both roles are also allowed to read and create children of the child folders of a processor entry.
\item Roles have empty ACL.
\end{itemize}

\section{Processing Nodes}

A processing node is any application executing processes. It may be a service or any other application (for example, a debugging tool or a Visual Studio plug-in). However, all processing nodes implement the same behavior and share the same responsibilities.

To be connected to a Flower instance, a processing node needs the directory client endpoint configuration and the path to its processor entry.

A processing node monitors the `Pending' folder under its processor entry to pick up pending processes. If the number of running processes has reached the processing node's capacity, the node may stop accepting processes so that they accumulate in the `Pending' folder. If the load is too high, the administrator may consider moving some of the pending processes to another node.

If a processing node is restarted, it resumes the execution of the running processes by picking them from the `Running' folder.

If an activity fails, a processing node may roll back its transaction and retry the activity after some time by suspending the process. The failure count is stored in the process state. Depending on the type of the failure and the settings, a processing node may perform a different number of retries and, if the activity still fails, stop the process by placing its link into the `Broken' folder. A broken process can be resumed only by the administrator.

A processing node keeps links to the finished processes in the `Finished' folder, retaining the links and the processes according to its capabilities and settings.

\subsection{Processing Node Service Interface}

A processing node can optionally expose a WCF endpoint with the standard interface provided by the Flower API. The interface includes the following operations:

\begin{description}
\item[Pending processes updated] \hfill \\ One-way parameterless notification method called by the directory to notify the processing node of changes in its `Pending' folder.
\item[Suspend a process] \hfill \\ The method to suspend a running process. It accepts an id of a running process and the date-time when the process should be resumed.
\item[Terminate a process] \hfill \\ The method to force a process to finish. It accepts an id of a running process.
\end{description}
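
As an illustration, such a contract might be sketched as follows (the interface and operation names here are hypothetical; the actual contract is provided by the Flower API):

\begin{verbatim}
[ServiceContract]
public interface IProcessingNode
{
    // One-way notification from the directory:
    // the `Pending' folder has changed.
    [OperationContract(IsOneWay = true)]
    void PendingProcessesUpdated();

    // Suspend a running process until
    // the given date-time.
    [OperationContract]
    void SuspendProcess(Guid processId, DateTime resumeAt);

    // Force a running process to finish.
    [OperationContract]
    void TerminateProcess(Guid processId);
}
\end{verbatim}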

\chapter*{API}
\addtocounter{chapter}{1}
\setcounter{section}{0}
\addcontentsline{toc}{chapter}{API}

These sections give an overview of the Flower API. For details on a particular class or function, see the reference documentation and the tutorials.

\section{Assemblies and Dependencies}

The Flower API consists of the following assemblies:

\begin{description}
\item[Flower.Directory.Data] \hfill \\ The directory data contracts.
\item[Flower.Directory.Service] \hfill \\ The directory service contract and the processing node callback interface.
\item[Flower.Directory.Client] \hfill \\ The directory client.
\item[Flower.Directory.Scripting] \hfill \\ The directory scripting API implementation.
\item[Flower.Directory.Util] \hfill \\ Various utilities shared between the directory client and the directory implementations.
\item[Flower.Workflow] \hfill \\ The workflow API.
\item[Flower.Processing] \hfill \\ The processing API.
\end{description}

The following third-party assemblies are referenced:

\begin{description}
\item[Jurassic] \hfill \\ JavaScript runtime.
\item[Spring.Core] \hfill \\ Spring.NET core components.
\end{description}

Figure \ref{fig:dependencies} shows the dependencies between the Flower assemblies and the third-party libraries.

\begin{figure}
\includegraphics[width=13cm,keepaspectratio=true]{dependencies.pdf}
\caption{Flower assembly dependencies.}
\label{fig:dependencies}
\end{figure}

\section{Directory Client}

The directory client (or Flower client) is a set of wrappers around the directory service contract. The client is responsible for:

\begin{itemize}
\item Handling the assembly resolution requests from the application domain and redirecting them to the directory.
\item Dynamic loading of all data types required by the workflows and the services.
\item Instantiation of the services from the containers.
\item Serialization and deserialization of the messages.
\item Encapsulation of some directory scripts. 
\end{itemize}

The heart of the directory client is the {\ttfamily DirectoryClient} class. It is designed to be used within a C\# `using' scope. It registers the assembly resolution handler of the current application domain in its constructor and unregisters it in the {\ttfamily Dispose} method. Thus, the key feature of the directory client is that, during its lifetime, assemblies can be dynamically loaded from the directory simply by calling the {\ttfamily Assembly.Load} method.

\begin{verbatim}
using (var client = new DirectoryClient(directory))
{
    // If the assembly is neither in the GAC
    // nor on the default search paths, it will
    // be loaded from the directory.
    Assembly.Load("MyAssembly, Version=1.0");
}
\end{verbatim}

With this feature, the assemblies containing the workflows and all their dependencies don't need to be installed on the host that uses the directory. They only need to be uploaded to the directory, and the client dynamically loads all the assemblies required by a particular application.

See the {\ttfamily DirectoryClient} class reference and the tutorials for more information.

\section{Directory Scripting}
\label{sec:dir-scripting}

The directory scripting is the basic way to administer the directory and to batch multiple operations for execution on a directory host, thus avoiding multiple remote calls.

The directory scripting API is implemented on top of the Jurassic JavaScript runtime and is available in two forms: as a library used by the directory service and by client applications, and as a command line tool. This section gives an overview of the scripting API as it appears from JavaScript. For more information on the directory API CLR types, see the reference documentation.

The following global objects are available in a directory script:

\begin{description}
\item[\$dir] \hfill \\ The current directory object. It has methods mirroring the directory service interface and some methods that simplify working with assemblies.
\item[\$fs] \hfill \\ This object provides a simple interface to the file system. It allows enumerating files in a folder and loading and saving XML data.
\item[\$in] \hfill \\ Input parameters of the script. All the parameters passed to a script appear as attributes of this object.
\item[\$out] \hfill \\ Output parameters of the script. Initially, it is empty and all the attributes assigned by the script are returned as the output parameters.
\item[\$xmlns] \hfill \\ The global map of XML namespaces.
\end{description}

{\em By convention, all global objects in the directory API have the names starting with `\$'.}
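
For example, a script receiving a path parameter and returning a result might look like this (the {\ttfamily getEntry} method name is illustrative; the actual method names repeat the directory service interface):

\begin{verbatim}
// Read an input parameter.
var path = $in.path;

// Call the directory and return the result
// through the output parameters.
var entry = $dir.getEntry(path);
$out.found = entry != null;
\end{verbatim}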

The API also provides a number of objects that can be instantiated:

\begin{description}
\item[Entry objects] \hfill \\ The entry objects represent the directory entries of various types. These objects are wrappers around the corresponding CLR types (the directory service data contracts). The constructor functions of the entry objects have the same names as the corresponding CLR types: Folder, Message, Process, etc. The constructors accept JavaScript objects whose attributes are assigned to the corresponding attributes of the entry object. The attributes of the script objects correspond to the data members of the wrapped data contracts (they have the same names as the properties, but in JavaScript notation).
\item[XML object] \hfill \\ The XML objects, as the name implies, represent XML documents. The XML documents can be obtained in three ways: loaded from a file by a {\ttfamily \$fs.loadXml('file.xml')} call, created from an arbitrary XML string by the {\ttfamily xml('<doc>...</doc>')} function, or taken from the attributes of the entries loaded from the directory. An XML object represents its root element. It provides methods to retrieve child elements by XPath, to add and remove child elements, to change the inner text of the element and to access its attributes.
\end{description}
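
For example (the {\ttfamily select} method name is illustrative):

\begin{verbatim}
// Create a folder entry; the attributes of the
// plain object are copied to the entry object.
var folder = new Folder({ name: 'MyFolder' });

// Build an XML document from a string and
// query its children with XPath.
var doc = xml('<doc><item id="1"/></doc>');
var item = doc.select('item');
\end{verbatim}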

The {\ttfamily \$dir} object provides a number of methods to manipulate the assemblies in the directory not by their paths, but by their CLR names.

\begin{description}
\item[uploadAssemblies] \hfill \\ Uploads assemblies into the directory from a given array of file paths.
\item[deleteAssemblies] \hfill \\ Deletes assemblies from the directory by the given names.
\item[assemblyExists] \hfill \\ Checks whether an assembly with a given name exists in the directory.
\item[getAssembly] \hfill \\ Returns the entry of an assembly by a given assembly name.
\end{description}

The {\ttfamily transaction()} function can be used to execute another function in a transaction.
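
For example, a deployment script might combine the assembly methods with {\ttfamily transaction()} so that the replacement is atomic (the names are illustrative):

\begin{verbatim}
transaction(function() {
    // Replace the assembly only if it is
    // already present in the directory.
    if ($dir.assemblyExists('MyAssembly, Version=1.0')) {
        $dir.deleteAssemblies(['MyAssembly, Version=1.0']);
    }
    $dir.uploadAssemblies(['bin/MyAssembly.dll']);
});
\end{verbatim}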

The API provides a number of output functions:

\begin{verbatim}
$fs.write(fileName, obj);
$fs.append(fileName, obj);
$fs.appendln(fileName, obj);
print(obj);
println(obj);
\end{verbatim}

The functions provide smart formatting of the script objects. They recognize the type of an object and format it accordingly so that the output is human-readable.

The scripting API doesn't provide the full functionality in all usages. The {\ttfamily \$in} and {\ttfamily \$out} objects are not available in scripts run by the command line tool. Scripts run on a directory host cannot access the file system (the {\ttfamily \$fs} object is not available at all) and have no {\ttfamily transaction()} function (a script run on a directory host is always run in a transaction).

\section{Directory service interface}
\label{sec:dir-service-int}

%TODO: Describe directory service methods behavior.

\section{Workflow API}
\label{sec:proc-def}

The workflow API is a set of interfaces, attributes and helper classes required to implement Flower workflows. The {\ttfamily Flower.Workflow} assembly has no dependencies on the other Flower assemblies. This provides a clean separation of concerns: you can substitute any component used by a workflow with your own implementation. The workflow API provides interfaces for these components.

The key component of the workflow API is the {\ttfamily IWorkflow} interface. This is the interface any workflow must implement. The interface is generic, so an implementation must provide two type parameters: the input and the output message types.

The {\ttfamily InitGlobalVars} method of the {\ttfamily IWorkflow} interface provides the global variable declarations by using the {\ttfamily IVarStackBuilder} interface. Each declaration is a call to the builder's {\ttfamily Declare} method. The method supports chaining, so you can join all declarations into a single expression.

\begin{verbatim}
class MyWorkflow : IWorkflow<InMsg, OutMsg>
{
    private Var<int> i;
    private Var<string> str;
    ...
    
    public void InitGlobalVars(IVarStackBuilder bld)
    {
        bld
            .Declare("i", out i)
            .Declare("str", out str)
            ...;
    }
}
\end{verbatim}

Each call returns a variable container of the generic struct {\ttfamily Var} (it is convenient to keep these containers as private fields). A {\ttfamily Var} is implicitly convertible to its type parameter {\ttfamily T}. It also provides a read-write property {\ttfamily V} of type {\ttfamily T} for access to the contained value.
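
For example (a sketch using the fields from the previous listing):

\begin{verbatim}
// Write through the V property.
i.V = 10;

// Read through the implicit conversion
// to the contained type.
int doubled = i * 2;
str.V = "total: " + doubled;
\end{verbatim}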

The {\ttfamily Init} method initializes the global variables and returns the output message. It accepts the input message, the logger and an instance of the {\ttfamily IServiceProvider} interface.
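
A sketch of an {\ttfamily Init} implementation (the logger type name and the input message members are illustrative; see the reference documentation for the exact signature):

\begin{verbatim}
public OutMsg Init(InMsg input, ILog log,
    IServiceProvider services)
{
    // Assign initial values to the global
    // variables declared in InitGlobalVars.
    i.V = 0;
    str.V = input.Text;

    // Return the output message that the
    // workflow will fill during execution.
    return new OutMsg();
}
\end{verbatim}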

The {\ttfamily Process} method builds the process logic by using {\ttfamily IProcessBuilder}. The builder provides a chainable method for each statement that can be used in a workflow definition, so the definition looks like this:

\begin{verbatim}
public void Process(IProcessBuilder bld)
{
    bld
        .Declare("a", out a)
        .Declare("i", out i)
        
        .Statement()
        .Statement()
        
        .If(ctx => a > 0)
            .Statement()
            
            .While(ctx => i < a)
                .Statement()
                .Statement()
            .End()
            
            .Statement()
        .End()
        
        .Statement()
        .Statement();
}
\end{verbatim}

{\ttfamily IProcessBuilder} provides the same {\ttfamily Declare} method as {\ttfamily IVarStackBuilder}, but the variables declared by the process builder are local. These variables are visible only in the current scope, i.e. between the nearest scope-opening statement and its {\ttfamily End} statement.

Some statements require lambda expressions that are executed at run time. The declared variables can be accessed from these expressions.

For more information on the workflow API interfaces see the reference documentation and the tutorials.

\section{Processing API}

The processing API provides the infrastructure for running processes. The API includes implementations of the activities supported by Flower and allows extending them by using aspect-oriented programming. The API is host independent and therefore can be used wherever process execution is needed: in real processing, in debug tools and in unit tests.

The entry point of the API is the {\ttfamily Processor} class. The processor is a disposable wrapper around the directory client. Using the internal {\ttfamily IProcessBuilder} implementation, it compiles a process into a sequence of activities sharing the common {\ttfamily IActivity} interface. A user of the API can attach custom behaviors to the activities by providing Spring.NET advices to the processor. While compiling a process, the processor proxies the activities to attach the advices. The processing API uses internal advices for logging, transaction management, retries after exceptions and state saving (the processor applies them by default according to its internal rules).
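
For example, a custom tracing behavior can be written as a standard Spring.NET around advice; the processor then weaves it into the activity proxies (how the advice is passed to the processor is described in the reference documentation):

\begin{verbatim}
using AopAlliance.Intercept;

// An around advice tracing the execution of
// the proxied activity methods.
public class TracingAdvice : IMethodInterceptor
{
    public object Invoke(IMethodInvocation invocation)
    {
        Console.WriteLine("Entering " +
            invocation.Method.Name);
        try
        {
            return invocation.Proceed();
        }
        finally
        {
            Console.WriteLine("Leaving " +
                invocation.Method.Name);
        }
    }
}
\end{verbatim}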

The implementations of the activities are visible to the API user but cannot be instantiated outside the API. They are visible only to allow a host to distinguish them and apply different advices. The set of activities doesn't match the set of {\ttfamily IProcessBuilder} statements; it is narrower because some statements compile into the same activities.

%mapping

The {\ttfamily Run} method of the {\ttfamily Processor} class executes the process loaded into the processor until an interruption, i.e. an event requiring an action from the host. There are the following interruptions:

\begin{itemize}
\item An unhandled exception.
\item A breakpoint.
\item A wait.
\item The finish of the process.
\end{itemize}

It is important to note that, while running processes, the processor dynamically loads assemblies from the directory; therefore, the application domain hosting the processor may grow in size.
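
A host using the processor might therefore run a loop similar to the following sketch (the interruption type and member names are hypothetical):

\begin{verbatim}
using (var processor = new Processor(client))
{
    var interruption = processor.Run();
    switch (interruption.Kind)
    {
        case InterruptionKind.Exception:
            // Log the error, decide on a retry.
            break;
        case InterruptionKind.Breakpoint:
            // Hand control over to the debugger.
            break;
        case InterruptionKind.Waiting:
            // Schedule the resumption.
            break;
        case InterruptionKind.Finished:
            // Collect the output message.
            break;
    }
}
\end{verbatim}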

See the unit-testing tutorial for more details on processor usage.

\end{document}