\chapter{Advanced Simulation Code Interfaces}\label{advint}

\section{Building an Interface to an Engineering Simulation Code}\label{advint:building}

To interface an engineering simulation package to Dakota using one of
the black-box interfaces (system call or fork), pre- and
post-processing functionality typically needs to be supplied (or
developed) in order to transfer the parameters from Dakota to the
simulator input file and to extract the response values of interest
from the simulator's output file for return to Dakota (see
Figures~\ref{intro:bbinterface}
and~\ref{interfaces:bbinterfacecomp}). This is often managed through
the use of scripting languages, such as C-shell~\cite{And86}, Bourne
shell~\cite{Bli96}, Perl~\cite{Wal96}, or Python~\cite{Mar03}. While
these are common and convenient choices for simulation
drivers/filters, it is important to recognize that any executable file
can be used. If the user prefers, the desired pre- and post-processing
functionality may also be compiled or interpreted from any number of
programming languages (C, C++, F77, F95, JAVA, Basic, etc.).

In the \texttt{Dakota/examples/script\_interfaces/generic} directory, a
simple example uses the Rosenbrock test function as a mock engineering
simulation code. Several scripts have been included to demonstrate
ways to accomplish the pre- and post-processing needs. Actual
simulation codes will, of course, have different pre- and
post-processing requirements, and as such, this example serves only to
demonstrate the issues associated with interfacing a
simulator. Modifications will almost surely be required for new
applications.

\subsection{Generic Script Interface Files} 

The {\tt Dakota/examples/script\_interfaces/generic} directory contains four important files:
\texttt{dakota\_rosenbrock.in} (the Dakota input file),
\texttt{simulator\_script} (the simulation driver script),
\texttt{dprepro} (a pre-processing utility), and \\
\texttt{templatedir/ros.template} (a template simulation input file).

The file \texttt{dakota\_rosenbrock.in} specifies the study that
Dakota will perform and, in the interface section, describes the
components to be used in performing function evaluations. In
particular, it identifies \\ \texttt{simulator\_script} as its
\texttt{analysis\_driver}, as shown in Figure~\ref{advint:figure01}.
\begin{figure}
  \centering
  \begin{bigbox}
    \begin{small}
      \verbatimtabinput[8]{dakota_rosenbrock.in}
    \end{small}
  \end{bigbox}
  \caption{The \texttt{dakota\_rosenbrock.in} input file.}
  \label{advint:figure01}
\end{figure}

The \texttt{simulator\_script} listed in Figure~\ref{advint:figure02}
is a short driver shell script that Dakota executes to perform each
function evaluation. The names of the parameters and results files are
passed to the script on its command line; they are
referenced in the script by \texttt{\$argv[1]}
and \texttt{\$argv[2]}, respectively. The \texttt{simulator\_script}
is divided into three parts: pre-processing, analysis, and post-processing.
% is divided into five parts: set up, pre-processing, analysis,
% post-processing, and clean up.

\begin{figure}
  \centering
  \begin{bigbox}
    \begin{small}
      \verbatimtabinput[8]{simulator_script}
    \end{small}
  \end{bigbox}
  \caption{The \texttt{simulator\_script} sample driver script.}
  \label{advint:figure02}
\end{figure}

% The set up portion strips the function evaluation number from
% \texttt{\$argv[1]}and assigns it to the shell variable \texttt{\$num},
% which is then used to create a tagged working directory for a
% particular evaluation. For example, on the first evaluation,
% ``\texttt{1}'' is stripped from ``\texttt{params.in.1}'' in order to
% create ``\texttt{workdir.1}''. The primary reason for creating
% separate working directories is so that the files associated with one
% simulation do not conflict with those for another simulation. This is
% particularly important when executing concurrent simulations in
% parallel (to actually execute function evaluations concurrently,
% uncomment the \texttt{asynchronous} line in \texttt{dakota\_rosenbrock.in}).
% %Once executing within the confines of the working directory, tags on the 
% %files are no longer necessary, and for this reason, the tagged parameters
% %file is moved to a more convenient name of ``\texttt{dakota\_vars}''.

In the pre-processing portion, the \texttt{simulator\_script} uses
\texttt{dprepro}, a parsing utility, to extract the
current variable values from a parameters file (\texttt{\$argv[1]})
and combine them with the simulator template input file
(\texttt{ros.template}) to create a new input file (\texttt{ros.in})
for the simulator. Internal to Sandia, the APREPRO utility is often
used for this purpose. For external sites where APREPRO is not
available, the DPrePro utility mentioned above offers many of
APREPRO's capabilities, is specifically tailored for use with Dakota,
and is distributed with it (in
\\ \texttt{Dakota/examples/script\_interfaces/generic/dprepro}, or {\tt
Dakota/bin} in a binary distribution). Additionally, the BPREPRO
utility is another alternative to APREPRO (see~\cite{WalXX}), and at
Lockheed Martin sites, the JPrePost utility is available as a JAVA
pre- and post-processor~\cite{Fla}. The \texttt{dprepro} script
(usage shown in Figure~\ref{advint:figure03}) will be used here for
simplicity of discussion. It can use either Dakota's \texttt{aprepro}
parameters file format (see
Section~\ref{variables:parameters:aprepro}) or Dakota's standard
format (see Section~\ref{variables:parameters:standard}), so either
option may be selected in the interface section of the Dakota input
file. The \texttt{ros.template} file listed in
Figure~\ref{advint:figure04} is a template simulation input file which
contains targets for the incoming variable values, identified by the
strings ``\texttt{\{x1\}}'' and ``\texttt{\{x2\}}''. These
identifiers match the variable descriptors specified in
\texttt{dakota\_rosenbrock.in}. The template input file is contrived,
since Rosenbrock has nothing to do with finite element analysis; it
merely mimics a finite element code in order to demonstrate the
simulator template process. The \texttt{dprepro} script will search the
simulator template input file for fields marked with curly
brackets and then create a new file (\texttt{ros.in}) by replacing
these targets with the corresponding numerical values for the
variables. As noted in the usage information for \texttt{dprepro} and
shown in \texttt{simulator\_script}, the names for the Dakota
parameters file (\texttt{\$argv[1]}), template file
(\texttt{ros.template}), and generated input file (\texttt{ros.in})
must be specified in the \texttt{dprepro} command line arguments.
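As an illustration of the substitution \texttt{dprepro} performs, the
following Python sketch (a simplified stand-in for the distributed
utility, handling only the aprepro-style ``\texttt{\{ name = value \}}''
parameters format and simple ``\texttt{\{name\}}'' targets) reproduces
the basic mechanics:

```python
import re

# Simplified sketch of the substitution dprepro performs; this is NOT
# the distributed dprepro utility. It handles only aprepro-format
# parameter lines of the form "{ name = value }".

def read_params(params_text):
    """Parse aprepro-format parameter lines into a name -> value dict."""
    return {m.group(1): m.group(2)
            for m in re.finditer(r'\{\s*(\S+)\s*=\s*(\S+)\s*\}', params_text)}

def fill_template(template_text, params):
    """Replace each {name} target with its value; unknown targets are
    left untouched."""
    return re.sub(r'\{(\w+)\}',
                  lambda m: params.get(m.group(1), m.group(0)),
                  template_text)
```

Applied to a parameters file containing, for example, a line
``\texttt{\{ x1 = 1.638247697999295e-01 \}}'', these functions would turn
a template line ``\texttt{variable 1 \{x1\}}'' into
``\texttt{variable 1 1.638247697999295e-01}''.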

\begin{figure}
  \centering
  \begin{bigbox}
    \begin{small}
      \verbatimtabinput[8]{dprepro_usage}
    \end{small}
  \end{bigbox}
  \caption{Partial listing of the \texttt{dprepro} script.}
  \label{advint:figure03}
\end{figure}

\begin{figure}
  \centering
  \begin{bigbox}
    \begin{small}
      \verbatimtabinput[8]{ros.template}
    \end{small}
  \end{bigbox}
  \caption{Listing of the \texttt{ros.template} file}
  \label{advint:figure04}
\end{figure}

The second part of the script executes the \texttt{rosenbrock\_bb}
simulator. The input and output file names, \texttt{ros.in} and
\texttt{ros.out}, respectively, are hard-coded into the FORTRAN 77
program \texttt{rosenbrock\_bb.f}. When the \texttt{rosenbrock\_bb}
simulator is executed, the values for \texttt{x1} and \texttt{x2} are
read in from \texttt{ros.in}, the Rosenbrock function is evaluated,
and the function value is written out to \texttt{ros.out}.

The third part performs the post-processing and writes the response
results to a file for Dakota to read. Using the UNIX ``\texttt{grep}'' utility, the
particular response values of interest are extracted from the raw
simulator output and saved to a temporary file (\texttt{results.tmp}).
When complete, this file is renamed \texttt{\$argv[2]}, which in this
example is always ``\texttt{results.out}''.
Note that moving or renaming the completed results file
avoids any problems with read race
conditions (see Section~\ref{parallel:SLP:local:system}).
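The same post-processing pattern could be written in Python instead of
\texttt{grep}; the sketch below assumes the ``\texttt{Function value}''
line format of this example's output and illustrates the
write-then-rename idiom:

```python
import os

# Sketch of the post-processing pattern in Python; assumes the
# "Function value = ..." line format of this example's ros.out output.

def extract_function_value(output_text):
    """Return the value following 'Function value =' in the raw output."""
    for line in output_text.splitlines():
        if line.strip().startswith('Function value'):
            return line.split('=', 1)[1].strip()
    raise ValueError("no 'Function value' line in simulator output")

def write_results(value, results_file):
    """Write to a temporary file first, then rename it into place, so
    Dakota never reads a partially written results file."""
    tmp = results_file + '.tmp'
    with open(tmp, 'w') as f:
        f.write(value + '\n')
    os.rename(tmp, results_file)
```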

% Finally, in the clean up phase, the working directory is removed to
% reduce the amount of disk storage required to execute the study. If
% data from each simulation needs to be saved, this step can be
% commented out by inserting a ``\texttt{\#}'' character before
% ``\texttt{$\backslash$rm -rf}''.

Because the Dakota input file \texttt{dakota\_rosenbrock.in}
(Figure~\ref{advint:figure01}) specifies
\texttt{work\_directory} and \texttt{directory\_tag} in its interface
section, each invocation of \texttt{simulator\_script} wakes up in
its own temporary directory, which Dakota has populated with the
contents of directory \texttt{templatedir}. Having a separate directory
for each invocation of \texttt{simulator\_script} simplifies the script
when the Dakota input file specifies \texttt{asynchronous} (so
several instances of \texttt{simulator\_script} might run simultaneously),
as fixed names such as \texttt{ros.in}, \texttt{ros.out}, and \texttt{results.tmp}
can be used for intermediate files. If neither \texttt{asynchronous} nor
\texttt{file\_tag} is specified, and if there is no need (e.g., for debugging)
to retain intermediate files having fixed names, then \texttt{directory\_tag}
offers no benefit and can be omitted. An alternative to \texttt{directory\_tag}
is to proceed as earlier versions of this chapter --- prior to Dakota 5.0's
introduction of \texttt{work\_directory} --- recommended:  add two more
steps to the \texttt{simulator\_script},
an initial one to create a temporary directory explicitly and
copy \texttt{templatedir} to it if needed, and a final step to remove the temporary
directory and any files in it.

When \texttt{work\_directory} is specified, Dakota adjusts the \texttt{\$PATH} seen
by \texttt{simulator\_script} so that simple program names
%in the \texttt{simulator\_script}
(i.e., names not containing a slash) that
are visible in Dakota's directory will also be visible in the work directory.
Relative path names ---
involving an intermediate slash but not an initial one,
such as \texttt{./rosenbrock\_bb} or \texttt{a/bc/rosenbrock\_bb} ---
will only be visible in the work directory if a \texttt{template\_directory}
or \texttt{template\_files} specification (see \S\ref{interfaces:workdir})
has made them visible there.

As an example of the data flow on a particular function evaluation,
consider evaluation 60. The parameters file for this evaluation consists of:
\begin{small}
\begin{verbatim}
    { DAKOTA_VARS     =                      2 }
    { x1              =  1.638247697999295e-01 }
    { x2              =  2.197298209103481e-02 }
    { DAKOTA_FNS      =                      1 }
    { ASV_1           =                      1 }
    { DAKOTA_DER_VARS =                      2 }
    { DVV_1           =                      1 }
    { DVV_2           =                      2 }
    { DAKOTA_AN_COMPS =                      0 }
\end{verbatim}
\end{small}

This file is called \texttt{workdir/workdir.60/params.in} if the line
\begin{small}
\begin{verbatim}
 	  named 'workdir' file_save  directory_save
\end{verbatim}
\end{small}
in Figure~\ref{advint:figure01} is uncommented.
The first portion of the file indicates that there are two variables,
followed by new values for variables \texttt{x1} and \texttt{x2}, and
one response function (an objective function), followed by an active
set vector (ASV) value of \texttt{1}. The ASV indicates the need to
return the value of the objective function for these parameters (see
Section~\ref{variables:asv}). The \texttt{dprepro} script reads the
variable values from this file, namely \texttt{1.638247697999295e-01}
and \texttt{2.197298209103481e-02} for \texttt{x1} and \texttt{x2}
respectively, and substitutes them in the \texttt{\{x1\}} and
\texttt{\{x2\}} fields of the \texttt{ros.template} file. The final
three lines of the resulting input file (\texttt{ros.in}) then appear
as follows:
\begin{small}
\begin{verbatim}
    variable 1 1.638247697999295e-01
    variable 2 2.197298209103481e-02
    end
\end{verbatim}
\end{small}

where all other lines are identical to the template file. The
\texttt{rosenbrock\_bb} simulator accepts \texttt{ros.in} as its input
file and generates the following output to the file \texttt{ros.out}:
\begin{small}
\begin{verbatim}
    Beginning execution of model: Rosenbrock black box
    Set up complete.
    Reading nodes.
    Reading elements.
    Reading materials.
    Checking connectivity...OK
    *****************************************************

    Input value for x1 =  0.1638247697999295E+00
    Input value for x2 =  0.2197298209103481E-01

    Computing solution...Done
    *****************************************************
    Function value =  0.7015563957680092E+00
    Function gradient = [ -0.1353509902591768E+01 -0.9731146217930163E+00 ]
\end{verbatim}
\end{small}

Next, the appropriate values are extracted from the raw simulator output
and returned in the results file. This post-processing is relatively
trivial in this case, and the \texttt{simulator\_script} uses the
\texttt{grep} and \texttt{cut} utilities to extract the value from the
``\texttt{Function value}'' line of the \texttt{ros.out} output file and save it to
\texttt{\$argv[2]}, which is the \texttt{results.out} file for this
evaluation. This single value provides the objective function value
requested by the ASV.

After 132 of these function evaluations, the following Dakota output
shows the final solution using the \\ \texttt{rosenbrock\_bb} simulator:
\begin{small}
\begin{verbatim}
    Exit NPSOL - Optimal solution found.

    Final nonlinear objective value =   0.1165704E-06

   NPSOL exits with INFORM code = 0 (see "Interpretation of output" section in NPSOL manual)

   NOTE: see Fortran device 9 file (fort.9 or ftn09)
         for complete NPSOL iteration history.

   <<<<< Iterator npsol_sqp completed.
   <<<<< Function evaluation summary: 132 total (132 new, 0 duplicate)
   <<<<< Best parameters          =
                         9.9965861667e-01 x1
                         9.9931682203e-01 x2
   <<<<< Best objective function  =
                      1.1657044253e-07
   <<<<< Best data captured at function evaluation 130

   <<<<< Iterator npsol_sqp completed.
   <<<<< Single Method Strategy completed.
   Dakota execution time in seconds:
     Total CPU        =       0.12 [parent =   0.116982, child =   0.003018]
     Total wall clock =    1.47497
\end{verbatim}
\end{small}

\subsection{Adapting These Scripts to Another Simulation}

To adapt this approach for use with another simulator, several steps
need to be performed:

\begin{enumerate}
\item Create a template simulation input file by identifying the fields
  in an existing input file that correspond to the variables of
  interest and then replacing them with \texttt{\{\}} identifiers
  (e.g., \texttt{\{cdv\_1\}}, \texttt{\{cdv\_2\}}, etc.) that match
  the Dakota variable descriptors. Copy this template input file to a
  \texttt{templatedir} that will be used to create working directories
  for the simulation.

\item Modify the \texttt{dprepro} arguments in
  \texttt{simulator\_script} to reflect the names of the Dakota parameters
  file (previously ``\texttt{\$argv[1]}''), the template file
  (previously ``\texttt{ros.template}''), and the generated input file
  (previously ``\texttt{ros.in}''). Alternatively, use APREPRO,
  BPREPRO, or JPrePost to perform this step (and adapt the syntax
  accordingly).

\item Modify the analysis section of \texttt{simulator\_script} to
  replace the \texttt{rosenbrock\_bb} function call with the new
  simulator name and command line syntax (typically including the
  input and output file names).

\item Change the post-processing section in \texttt{simulator\_script}
  to reflect the revised extraction process. At a minimum, this would
  involve changing the \texttt{grep} command to reflect the name of
  the output file, the string to search for, and the characters to cut
  out of the captured output line. For more involved post-processing
  tasks, invocation of additional tools may have to be added to the
  script.

\item Modify the \texttt{dakota\_rosenbrock.in} input file to reflect,
  at a minimum, updated variables and responses specifications.
\end{enumerate}
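The overall shape of such an adapted driver can be sketched in Python
(one of the scripting languages mentioned at the beginning of this
chapter). All names below other than the Dakota-supplied parameters and
results files (the hypothetical simulator \texttt{mysim}, its
input/output file names, and the response extraction logic) are
placeholders for an actual application:

```python
import os
import subprocess
import sys

# Skeleton of an adapted driver following the steps above. The names
# 'mysim', 'mysim.template', 'mysim.in', and 'mysim.out', and the
# response extraction logic, are placeholders for a real application.

def driver(params_file, results_file, run=subprocess.check_call):
    # Pre-processing: combine the parameters file with the template.
    run(['dprepro', params_file, 'mysim.template', 'mysim.in'])
    # Analysis: execute the simulator.
    run(['mysim', 'mysim.in', 'mysim.out'])
    # Post-processing: extract the response (application-specific).
    with open('mysim.out') as f:
        value = f.read().split('Function value =')[1].split()[0]
    # Rename the completed file into place to avoid a read race.
    tmp = results_file + '.tmp'
    with open(tmp, 'w') as f:
        f.write(value + '\n')
    os.rename(tmp, results_file)

if __name__ == '__main__' and len(sys.argv) == 3:
    # Dakota passes the parameters and results file names on the
    # command line, just as with simulator_script.
    driver(sys.argv[1], sys.argv[2])
```

The \texttt{run} argument exists only so the orchestration can be
exercised without a real simulator present; a production driver would
simply invoke the commands directly.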

These nonintrusive interfacing approaches make it possible to connect
Dakota to a new simulation code rapidly. While generally custom for each new
application, typical interface development time is on the order of an
hour or two. Thus, this approach is scalable when dealing with many
different application codes. Weaknesses of this approach include the
potential for loss of data precision (if care is not taken to preserve
precision in pre- and post-processing file I/O), a lack of robustness
in post-processing (if the data capture is too simplistic), and
scripting overhead (only noticeable if the simulation time is on the
order of a second or less).

If the application scope at a particular site is more focused and only
a small number of simulation codes are of interest, then more
sophisticated interfaces may be warranted. For example, the economy of
scale afforded by a common simulation framework justifies additional
effort in the development of a high quality Dakota interface. In these
cases, more sophisticated interfacing approaches could involve a more
thoroughly developed black box interface with robust support of a
variety of inputs and outputs, or it might involve intrusive
interfaces such as the direct simulation interface discussed below in
Section~\ref{advint:direct} or the SAND interface described in
Section~\ref{intro:coupling}.

\subsection{Additional Examples}

A variety of additional examples of black-box interfaces to simulation
codes are maintained in the\\
\texttt{Dakota/examples/script\_interfaces} directory in the source
code distribution.


\section{Developing a Direct Simulation Interface}\label{advint:direct}

If a more efficient interface to a simulation is desired (e.g., to
eliminate process creation and file I/O overhead) or if a targeted
computer architecture cannot accommodate separate optimization and
simulation processes (e.g., due to lightweight operating systems on
compute nodes of large parallel computers), then linking a simulation
code directly with Dakota may be desirable. This is an advanced
capability of Dakota, and it requires a user to have access to (and
knowledge of) the Dakota source code, as well as the source code of
the simulation code.

Three approaches are outlined below for developing direct linking
between Dakota and a simulation: extension, derivation, and
sandwich. For additional information, refer to ``Interfacing with
Dakota as a Library'' in the Dakota Developers Manual~\cite{DevMan}.

Once this linking has been performed, Dakota can bind with the new
direct simulation interface using the \texttt{direct} interface
specification in combination with an \texttt{analysis\_driver},
\texttt{input\_filter}, or \texttt{output\_filter} specification that
corresponds to the name
of the new subroutine.

\subsection{Extension}\label{advint:direct:extension}

The first approach to using the direct function capability with a new
simulation (or new internal test function) involves \emph{extension}
of the existing \textbf{DirectFnApplicInterface} class to include new
simulation member functions. In this case, the following steps are
performed:
\begin{enumerate}
\item The functions to be invoked (analysis programs, input and
  output filters, internal testers) must have their main programs
  changed into callable functions/subroutines.

\item The resulting callable function can then be added directly
  to the private member functions in \textbf{DirectFnApplicInterface}
  if this function will directly access the Dakota data structures
  (variables, active set, and response attributes of the class). It is
  more common to add a wrapper function to
  \textbf{DirectFnApplicInterface} which manages the Dakota data
  structures, but allows the simulator subroutine to retain a level of
  independence from Dakota (see Salinas, ModelCenter, and Matlab
  wrappers as examples).

\item The if-else blocks in the \textbf{derived\_map\_if()},
  \textbf{derived\_map\_ac()}, and \textbf{derived\_map\_of()} member
  functions of the \textbf{DirectFnApplicInterface} class must be
  extended to include the new function names as options. If the new
  functions are class member functions, then Dakota data access may be
  performed through the existing class member attributes and data
  objects do not need to be passed through the function parameter
  list. In this case, the following function prototype is appropriate:
\begin{small}
\begin{verbatim}
    int function_name();
\end{verbatim}
\end{small}
  If, however, the new function names are not members of the
  \textbf{DirectFnApplicInterface} class, then an \texttt{extern}
  declaration may additionally be needed and the function prototype
  should include passing of the Variables, ActiveSet, and Response
  data members:
\begin{small}
\begin{verbatim}
    int function_name(const Dakota::Variables& vars,
                      const Dakota::ActiveSet& set, Dakota::Response& response);
\end{verbatim}
\end{small}

\item The Dakota system must be recompiled and linked with the new
  function object files or libraries.
\end{enumerate}

Various header files may have to be included, particularly within the
\textbf{DirectFnApplicInterface} class, in order to recognize new
external functions and compile successfully. Refer to the Dakota
Developers Manual~\cite{DevMan} for additional information on the
\textbf{DirectFnApplicInterface} class and the Dakota data types.

\subsection{Derivation}\label{advint:direct:derivation}

As described in ``Interfacing with Dakota as a Library'' in the Dakota
Developers Manual~\cite{DevMan}, a derivation approach can be employed
to further increase the level of independence between Dakota and the
host application. In this case, rather than \emph{adding} a new
function to the existing \textbf{DirectFnApplicInterface} class, a new
interface class is derived from \textbf{DirectFnApplicInterface} which
\emph{redefines} the \textbf{derived\_map\_if()},
\textbf{derived\_map\_ac()}, and \textbf{derived\_map\_of()} virtual 
functions.

% Note: this approach has benefits primarily in library mode
When used with the sandwich approach of Section~\ref{advint:direct:sandwich}
below, class derivation avoids the need to recompile the Dakota
library when the simulation or its direct interface class is modified.

\subsection{Sandwich}\label{advint:direct:sandwich}

In a ``sandwich'' implementation, a simulator provides both the
``front end'' and ``back end'' with Dakota sandwiched in the middle.
To accomplish this approach, the simulation code is responsible for
interacting with the user (the front end), links Dakota in as a
library (refer to ``Interfacing with Dakota as a Library'' in the
Dakota Developers Manual~\cite{DevMan}), and plugs in a derived direct
interface class to provide a closely-coupled mechanism for performing
function evaluations (the back end). This approach makes Dakota
services available to other codes and frameworks and is currently used
by Sandia codes such as Xyce (electrical simulation), Sage (CFD), and
SIERRA (multiphysics).


\section{Existing Direct Interfaces to External Simulators}\label{advint:existingdirect}

In addition to built-in polynomial test functions described in
Section~\ref{interfaces:direct}, Dakota includes direct interfaces to
Sandia's Salinas code for structural dynamics, Phoenix Integration's
ModelCenter framework, The MathWorks' Matlab scientific computing
environment, Scilab (as described in Section~\ref{scilab}), and
Python. While these codes can also be driven through a script-based
approach, some usability and efficiency gains may be realized by
re-compiling Dakota with these direct interfaces enabled. Some
details on Matlab and Python interfaces are provided here. Note that
these capabilities permit using Matlab or Python to evaluate a
parameter to response mapping; they do not make Dakota algorithms
available as a service, i.e., as a Matlab toolbox or Python module.

\subsection{Matlab}\label{advint:existingdirect:matlab} 

Dakota's direct function interface includes the capability to invoke
Matlab for function evaluations, using the Matlab engine API. When
using this close coupling, the Matlab engine is started once when
Dakota initializes, and function evaluations are then performed by
exchanging parameters and results through the Matlab C API.
This eliminates the need to use the file system and the expense of
initializing the Matlab engine for each function evaluation.

The Dakota/Matlab interface has been built and tested on 32-bit Linux
with Matlab 7.0 (R14) and on 64-bit Linux with Matlab 7.1 (R14SP3).
Configuration support for other platforms is included, but is
untested. Builds on other platforms or with other versions of Matlab
may require modifications to Dakota, including its build system.

To use the Dakota/Matlab interface, Dakota must be configured and
compiled with the Matlab feature enabled. The Mathworks only provides
shared object libraries for its engine API, so Dakota must be
dynamically linked to at least the Matlab libraries. To compile
Dakota with the Matlab interface enabled, set the CMake variable {\tt
  DAKOTA\_MATLAB:BOOL=ON}, possibly with {\tt
  MATLAB\_DIR:FILEPATH=/path/to/matlab}, where \\ {\tt MATLAB\_DIR} is the
root of your Matlab installation (it should be a directory containing
directories bin/YOURPLATFORM and extern/include).

Since the Matlab libraries are linked dynamically, they must be
accessible at compile time and at run time. Make sure the path to the
appropriate Matlab shared object libraries is on your 
{\tt LD\_LIBRARY\_PATH}. For example, to accomplish this in Bash on
32-bit Linux, one might type
\begin{verbatim}
export LD_LIBRARY_PATH=/usr/local/matlab/bin/glnx86:$LD_LIBRARY_PATH
\end{verbatim}
or add such a command to the .bashrc file. Then proceed with
compiling as usual.

Example files corresponding to the following tutorial are available in \\ 
{\tt Dakota/examples/linked\_interfaces/Matlab}.

\subsubsection{Dakota/Matlab input file specification}

The Matlab direct interface is specified with {\tt direct, matlab}
keywords in an interface specification. The Matlab m-file which
performs the analysis is specified through the {\tt analysis\_drivers}
keyword. Here is a sample Dakota {\tt interface} specification:
\begin{small}
\begin{verbatim}
  interface,
    matlab
      analysis_drivers = 'myanalysis.m'
\end{verbatim} 
\end{small}

Multiple Matlab analysis drivers are supported. Multiple analysis
components are supported as with other interfaces, as described in
Section~\ref{interfaces:components}. The {\tt .m} extension in the
{\tt analysis\_drivers} specification is optional and will be stripped
by the interface before invoking the function. So {\tt myanalysis}
and {\tt myanalysis.m} will both cause the interface to attempt to
execute a Matlab function {\tt myanalysis} for the evaluation.

\subsubsection{Matlab .m file specification}

The Matlab analysis file {\tt myanalysis.m} must define a Matlab
function that accepts a Matlab structure as its sole argument and
returns the same structure in a variable called {\tt Dakota}. A
manual execution of the call to the analysis in Matlab should
therefore look like:
\begin{small}
\begin{verbatim}
  >> Dakota = myanalysis(Dakota)
\end{verbatim} 
\end{small}
Note that the structure named {\tt Dakota} will be pushed into the Matlab
workspace before the analysis function is called. The structure
passed from Dakota to the analysis m-function contains essentially the
same information that would be passed to a Dakota direct function
included in {\tt DirectApplicInterface.C}, with fields shown in
Figure~\ref{advint:figure:matlabparams}.

\begin{figure}
\centering
\begin{bigbox}
\begin{small}
\begin{verbatim}
Dakota.
  numFns              number of functions (responses, constraints)
  numVars             total number of variables
  numACV              number active continuous variables
  numADIV             number active discrete integer variables
  numADRV             number active discrete real variables
  numDerivVars        number of derivative variables specified in directFnDVV
  xC                  continuous variable values ([1 x numACV]) 
  xDI                 discrete integer variable values ([1 x numADIV])
  xDR                 discrete real variable values ([1 x numADRV])
  xCLabels            continuous var labels (cell array of numACV strings)
  xDILabels           discrete integer var labels (cell array of numADIV strings)
  xDRLabels           discrete real var labels (cell array of numADRV strings)
  directFnASV         active set vector ([1 x numFns])
  directFnDVV         derivative variables vector ([1 x numDerivVars])
  fnFlag              nonzero if function values requested
  gradFlag            nonzero if gradients requested
  hessFlag            nonzero if hessians requested
  currEvalId          current evaluation ID
\end{verbatim}
\end{small}
\end{bigbox}
\caption{Dakota/Matlab parameter data
structure.\label{advint:figure:matlabparams}}
\end{figure}

The structure {\tt Dakota} returned from the analysis must contain a
subset of the fields shown in
Figure~\ref{advint:figure:matlabresponse}. It may contain additional
fields and in fact is permitted to be the structure passed in,
augmented with any required outputs.
\begin{figure} \centering
\begin{bigbox}
\begin{small}
\begin{verbatim}
Dakota.
  fnVals      ([1 x numFns], required if function values requested)
  fnGrads     ([numFns x numDerivVars], required if gradients  requested)
  fnHessians  ([numFns x numDerivVars x numDerivVars], 
               required if hessians requested)
  fnLabels    (cell array of numFns strings, optional)
  failure     (optional: zero indicates success, nonzero failure)
\end{verbatim}
\end{small}
\end{bigbox}
\caption{Dakota/Matlab response data
structure.\label{advint:figure:matlabresponse}}
\end{figure}

An example Matlab analysis driver {\tt rosenbrock.m} for the
Rosenbrock function is shown in
Figure~\ref{advint:figure:matlabrosen}.
\begin{figure} \centering
\begin{bigbox}
\begin{tiny}
\begin{verbatim}
function Dakota = rosenbrock(Dakota)

  Dakota.failure = 0;

  if ( Dakota.numVars ~= 2 | Dakota.numADV | ...
      ( ~isempty( find(Dakota.directFnASM(2,:)) | ...
      find(Dakota.directFnASM(3,:)) ) & Dakota.numDerivVars ~= 2 ) )
    
    sprintf('Error: Bad number of variables in rosenbrock.m fn.\n');
    Dakota.failure = 1;

  elseif (Dakota.numFns > 2) 
  
    % 1 fn -> opt, 2 fns -> least sq
    sprintf('Error: Bad number of functions in rosenbrock.m fn.\n');
    Dakota.failure = 1;

  else
 
    if Dakota.numFns > 1 
      least_sq_flag = true;
    else
      least_sq_flag = false;
    end

    f0 = Dakota.xC(2)-Dakota.xC(1)*Dakota.xC(1);
    f1 = 1.-Dakota.xC(1);
  
    % **** f:
    if (least_sq_flag) 
      if Dakota.directFnASM(1,1)
        Dakota.fnVals(1) = 10*f0;
      end
      if Dakota.directFnASM(1,2)
        Dakota.fnVals(2) = f1;
      end
    else
      if Dakota.directFnASM(1,1)
        Dakota.fnVals(1) = 100.*f0*f0+f1*f1;
      end
    end
  
    % **** df/dx:
    if (least_sq_flag)
      if Dakota.directFnASM(2,1)
        Dakota.fnGrads(1,1) = -20.*Dakota.xC(1);
        Dakota.fnGrads(1,2) =  10.;
      end
      if Dakota.directFnASM(2,2)
        Dakota.fnGrads(2,1) = -1.;
        Dakota.fnGrads(2,2) =  0.;
      end
  
    else 
  
      if Dakota.directFnASM(2,1)
        Dakota.fnGrads(1,1) = -400.*f0*Dakota.xC(1) - 2.*f1;
        Dakota.fnGrads(1,2) =  200.*f0;
      end
      
    end

    % **** d^2f/dx^2:
    if (least_sq_flag)
     
      if Dakota.directFnASM(3,1)
        Dakota.fnHessians(1,1,1) = -20.;
        Dakota.fnHessians(1,1,2) = 0.;
        Dakota.fnHessians(1,2,1) = 0.;
        Dakota.fnHessians(1,2,2) = 0.;
      end
      if Dakota.directFnASM(3,2)
        Dakota.fnHessians(2,1:2,1:2) = 0.;
      end
      
    else
    
      if Dakota.directFnASM(3,1) 
        fx = Dakota.xC(2) - 3.*Dakota.xC(1)*Dakota.xC(1);
        Dakota.fnHessians(1,1,1) = -400.*fx + 2.0;
        Dakota.fnHessians(1,1,2) = -400.*Dakota.xC(1); 
        Dakota.fnHessians(1,2,1) = -400.*Dakota.xC(1);
        Dakota.fnHessians(1,2,2) =  200.;
      end
    
    end
  
    if least_sq_flag
      Dakota.fnLabels = {'f1','f2'};
    else
      Dakota.fnLabels = {'f1'};
    end
   
  end
\end{verbatim}
\end{tiny}
\end{bigbox}
\caption{Sample Matlab implementation of the Rosenbrock test function
for the Dakota/Matlab interface.\label{advint:figure:matlabrosen}}
\end{figure}

\subsection{Python}\label{advint:existingdirect:python} 

Dakota's Python direct interface has been tested on Linux with Python
2.x. When enabled, it allows Dakota to make function evaluation calls
directly to an analysis function in a user-provided Python module.
Data may flow between Dakota and Python either in multiply-subscripted
lists or NumPy arrays.

The Python direct interface must be enabled when compiling Dakota.
Set the CMake variable \\ {\tt DAKOTA\_PYTHON:BOOL=ON}, and optionally
{\tt DAKOTA\_PYTHON\_NUMPY:BOOL=ON} (default is ON) to use Dakota's
NumPy array interface (requires NumPy installation providing
arrayobject.h). If NumPy is not enabled, Dakota will use
multiply-subscripted lists for data flow.

An example of using the Python direct interface with both lists and
arrays is included in \\
{\tt Dakota/examples/linked\_interfaces/Python}. The Python direct driver is
selected with, for example,
\begin{verbatim}
  interface,
    python
      # numpy
      analysis_drivers = 'python_module:analysis_function'
\end{verbatim}
where {\tt python\_module} denotes the module (file {\tt
  python\_module.py}) Dakota will attempt to import into the Python
environment and {\tt analysis\_function} denotes the function to call
when evaluating a parameter set. If the Python module is not in the
directory from which Dakota is started, setting the {\tt PYTHONPATH}
environment variable to include its location can help the Python
engine find it.  The optional {\tt numpy} keyword indicates Dakota
will communicate with the Python analysis function using NumPy array
data structures instead of the default lists.

Whether using the list or array interface, data from Dakota is passed
(via kwargs) into the user function in a dictionary containing the
entries shown in Table~\ref{advint:table:pythonparams}. The {\tt
analysis\_function} must return a dictionary containing the data
specified by the active set vector with fields ``fns'', ``fnGrads'',
and ``fnHessians'', corresponding to function values, gradients, and
Hessians, respectively. The function may optionally include a failure
code in ``failure'' (zero indicates success, nonzero failure) and
function labels in ``fnLabels''. See the linked interfaces example
referenced above for more details.

\begin{table}
\centering
\caption{Data dictionary passed to Python direct interface.}
\label{advint:table:pythonparams}\vspace{2mm}
\begin{tabular}{|l|l|}
\hline
\textbf{Entry Name} & \textbf{Description}  \\
\hline
functions  & number of functions (responses, constraints) \\
variables  & total number of variables \\
cv         & list/array of continuous variable values \\
div        & list/array of discrete integer variable values \\
drv        & list/array of discrete real variable values \\
av         & single list/array of all variable values \\
cv\_labels  & continuous variable labels \\
div\_labels & discrete integer variable labels \\
drv\_labels & discrete real variable labels \\
av\_labels  & all variable labels \\
asv        & active set vector \\
dvv        & derivative variables vector \\
analysis\_components & list of analysis components as strings \\
currEvalId & current evaluation ID number \\
\hline
\end{tabular}
\end{table}
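To make the calling convention concrete, the following is a minimal
sketch of a list-based {\tt analysis\_function} for the Rosenbrock
problem, mirroring the Matlab example above. It reads the {\tt cv} and
{\tt asv} entries of Table~\ref{advint:table:pythonparams} from the
keyword arguments and returns the ``fns'', ``fnGrads'', and
``fnHessians'' fields requested by the active set vector (bit 1
requests values, bit 2 gradients, bit 4 Hessians); it is an
illustrative sketch, not the shipped example.

\begin{small}
\begin{verbatim}
def analysis_function(**kwargs):
    """Rosenbrock driver for Dakota's list-based Python direct interface."""
    x = kwargs['cv']      # continuous variable values
    asv = kwargs['asv']   # active set vector, one entry per response

    f0 = x[1] - x[0] * x[0]
    f1 = 1.0 - x[0]

    retval = {}
    if asv[0] & 1:        # function value requested
        retval['fns'] = [100.0 * f0 * f0 + f1 * f1]
    if asv[0] & 2:        # gradient requested
        retval['fnGrads'] = [[-400.0 * f0 * x[0] - 2.0 * f1,
                              200.0 * f0]]
    if asv[0] & 4:        # Hessian requested
        fx = x[1] - 3.0 * x[0] * x[0]
        retval['fnHessians'] = [[[-400.0 * fx + 2.0, -400.0 * x[0]],
                                 [-400.0 * x[0],     200.0]]]
    retval['failure'] = 0  # optional: zero indicates success
    return retval
\end{verbatim}
\end{small}

With the NumPy variant enabled, the same structure applies, but the
``fnGrads'' and ``fnHessians'' entries would be returned as NumPy
arrays rather than nested lists.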

\section{Scilab Script and Direct Interfaces}\label{scilab}

Scilab is open-source computational software that can be used to
perform function evaluations during Dakota studies, for example, to
calculate the objective function in optimization. Dakota includes
three Scilab interface variants: scripted, linked, and compiled. In
each mode, Dakota calls Scilab to perform a function evaluation and
then retrieves the Scilab results. Dakota's Scilab interface was
contributed in 2011 by Yann Collette and Yann Chapalain. The
Dakota/Scilab interface variants are described next.

\subsection{Scilab Script Interface} 

Dakota distributions include a directory
\texttt{Dakota/examples/script\_interfaces/Scilab} which demonstrates
script-based interfacing to Scilab. The {\tt Rosenbrock} subdirectory
contains four notable files:
\begin{itemize}
  \item \texttt{dakota\_scilab\_rosenbrock.in} (the Dakota input file),
  \item \texttt{rosenbrock.sci} (the Scilab computation code),
  \item \texttt{scilab\_rosen\_bb\_simulator.sh} (the analysis driver), and
  \item \texttt{scilab\_rosen\_wrapper.sci} (Scilab script).
\end{itemize}

The \texttt{dakota\_scilab\_rosenbrock.in} file specifies the Dakota
study to perform. The interface type is external ({\tt fork}) and the
shell script \texttt{scilab\_rosen\_bb\_simulator.sh} is the analysis
driver used to perform function evaluations.

The Scilab file \texttt{rosenbrock.sci} accepts variable values and
computes the objective, gradient, and Hessian values of the Rosenbrock
function as requested by Dakota.

The \texttt{scilab\_rosen\_bb\_simulator.sh} is a short shell driver
script, like that described in Section~\ref{advint:building}, that
Dakota executes to perform each function evaluation. Dakota passes
the names of the parameters and results files to this script as
\texttt{\$argv[1]} and \texttt{\$argv[2]}, respectively. The
\texttt{scilab\_rosen\_bb\_simulator.sh} is divided into three parts:
pre-processing, analysis, and post-processing.

In the analysis portion, the \texttt{scilab\_rosen\_bb\_simulator.sh}
uses \texttt{scilab\_rosen\_wrapper.sci} to extract the current
variable values from the input parameters file (\texttt{\$argv[1]})
and communicate them to the computation code in
\texttt{rosenbrock.sci}. The resulting objective function value is
transmitted to Dakota via the output results file (\texttt{\$argv[2]}),
and the driver script cleans up any temporary files.

The directory also includes PID and FemTRUSS examples, which are run
in a similar way.

\subsection{Scilab Linked Interface} 

The Dakota/Scilab linked interface allows Dakota to communicate
directly with Scilab through in-memory data structures, typically
resulting in faster communication, as it does not rely on files or
pipes. In this mode, Dakota publishes a data structure into the
Scilab workspace, and then invokes the specified Scilab
analysis\_driver directly. In Scilab, this structure is an mlist
(\url{http://help.scilab.org/docs/5.3.2/en\_US/mlist.html}) with the
same fields as the Matlab structure shown in
Figure~\ref{advint:figure:matlabparams}, plus an additional field
{\tt dakota\_type}, which is used to validate the names of the fields
in the data structure.

The linked interface is implemented in the source files {\tt
  src/ScilabInterface.[CH]} and must be enabled at compile
time when building Dakota from source by setting {\tt
  DAKOTA\_SCILAB:BOOL=ON} and setting appropriate environment
variables at compile and run time, as described in {\tt README.Scilab}
in \\ {\tt Dakota/examples/linked\_interfaces/Scilab/}. This directory also
contains examples for the Rosenbrock and PID problems.

These examples are similar to those in {\tt Dakota/examples/script\_interfaces}, with
a few notable exceptions:
\begin{enumerate}
\item There is no shell driver script.
\item The Dakota input file specifies the interface as {\tt scilab},
  indicating a direct, internal interface to Scilab using the Dakota
  data structure described above:
\begin{small}
\begin{verbatim}
interface,
  scilab
    analysis_driver = 'rosenbrock.sci'
\end{verbatim} 
\end{small}
\end{enumerate}


\subsection{Scilab Compiled Interface} 

In ``compiled interface'' mode, the Dakota analysis driver is a
lightweight shim that communicates with the running application
(here Scilab) via named pipes. It is similar to the Matlab compiled
interface in \texttt{Dakota/examples/compiled\_interfaces/Matlab},
whose README is likely
instructive. An example of a Scilab compiled interface is included in \\
\texttt{Dakota/examples/compiled\_interfaces/Scilab/Rosenbrock}.

As with the other Scilab examples, there are computation code and
Dakota input files. Note the difference in the Dakota input file
\texttt{rosenbrock.in}: the analysis driver starts the dakscilab
shim program, and the active set vector is deactivated, so functions,
gradients, and Hessians are always evaluated.

\begin{small}
\begin{verbatim}
interface,
  fork
    analysis_driver = '../dakscilab -d -fp "exec fp.sci" -fpp "exec fpp.sci"'
    parameters_file = 'r.in'
    results_file = 'r.out'
    deactivate active_set_vector
\end{verbatim} 
\end{small}

The dakscilab executable, built from \texttt{dakscilab.c}, has the
following behavior and options. The dakscilab driver launches a
server, which then mediates communication between Dakota and Scilab
via named pipes. The user can also use the first named pipe
(\texttt{\$\{DAKSCILAB\_PIPE\}1}) to communicate
with the server:
\begin{small}
\begin{verbatim}
    echo dbg scilab_script.sce > ${DAKSCILAB_PIPE}1
    echo quit > ${DAKSCILAB_PIPE}1
\end{verbatim} 
\end{small}
The first command, with the keyword 'dbg', submits the script
scilab\_script.sce for evaluation in Scilab, allowing the user to
issue instructions to Scilab directly. The second command, 'quit',
stops the server.
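These control commands can also be issued programmatically rather
than with {\tt echo}. The following Python sketch writes them to the
first dakscilab named pipe; the helper name is hypothetical, and it
assumes the \texttt{DAKSCILAB\_PIPE} environment variable is set as
above and that the server is running.

\begin{small}
\begin{verbatim}
import os

def send_dakscilab_command(command, pipe=None):
    """Write one control command (e.g. 'dbg <script>.sce' or 'quit')
    to the first dakscilab named pipe.  A file-like object may be
    passed instead of a path, which makes the helper easy to test."""
    if pipe is None:
        # First named pipe created by the dakscilab server
        pipe = os.environ['DAKSCILAB_PIPE'] + '1'
    if hasattr(pipe, 'write'):
        pipe.write(command + '\n')
    else:
        with open(pipe, 'w') as fp:
            fp.write(command + '\n')
\end{verbatim}
\end{small}

For example, \texttt{send\_dakscilab\_command('dbg scilab\_script.sce')}
followed by \texttt{send\_dakscilab\_command('quit')} reproduces the
two shell commands above.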

The dakscilab shim supports the following options for the driver call:
\begin{enumerate}
  \item {\tt -s} to start the server
  \item {\tt -si} to run an initialization script
  \item {\tt -sf} to run a finalization script
  \item {\tt -f}, {\tt -fp}, and {\tt -fpp} to specify the names of
    the objective function, gradient, and Hessian, respectively, and
    load them
\end{enumerate}

For the included PID example, the driver call is 
\begin{small}
\begin{verbatim}
    analysis_driver = '../dakscilab -d -si "exec init_test_automatic.sce;"
                     -sf "exec visualize_solution.sce;" -f "exec f_pid.sci"'
\end{verbatim}
\end{small}

Here an initialization script (\texttt{init\_test\_automatic.sce}) is
launched before the main computation; it initializes a specific
Scilab module called xcos. A finalization script to visualize the
xcos solution is also specified (\texttt{visualize\_solution.sce}).
Finally, the objective function is given by the computation code
\texttt{f\_pid.sci}.



