%\documentclass[aps,prb,preprint,showkeys]{revtex4}
\documentclass[aip,preprint]{revtex4-1}
%\documentclass[aip,preprint]{revtex4}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{epsfig}
\usepackage{subfigure}
\makeatletter
\newif\if@restonecol
\makeatother
\let\algorithm\relax
\let\endalgorithm\relax
\usepackage[boxed,longend]{algorithm2e}
\usepackage{textcomp}
%\usepackage{algorithmic}
%\usepackage{newalg}
\usepackage{color}
\usepackage{listings}
\usepackage{algpseudocode}
\usepackage{setspace}
\usepackage{palatino}
\usepackage{trackchanges}
\usepackage{natbib}

\setcounter{MaxMatrixCols}{30}
\makeatletter
\def\@dotsep{4.5pt}
\makeatother
\newtheorem{theorem}{Theorem}
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
%\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}[theorem]{Axiom}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{criterion}[theorem]{Criterion}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}

%\doublespacing
\begin{document}

\newcommand{\argmax}{\operatornamewithlimits{argmax}}

\title{String Method with Collective Variables from Instantaneous Normal Modes}
\author{Santanu Chatterjee}
\author{Christopher R. Sweet}
\author{Jes\'{u}s A. Izaguirre}
\affiliation{Department of Computer Science and Engineering, 
University of Notre Dame, Notre Dame, Indiana, USA}
\keywords{String Method, Instantaneous Normal Mode Analysis, Alanine Dipeptide}
\pacs{ number}

%--------------------------------------------------------------------------  %
%Abstract
%--------------------------------------------------------------------------  %

\begin{abstract}
The String Method on collective variables finds minimum free energy paths (MFEP)
between metastable states of a molecule. This requires choosing a set
of reaction coordinates, or collective variables, along which the
transition takes place, a choice that is non-trivial. In this paper we present
a method for automatically extracting collective variables from low
frequency instantaneous normal modes projected onto dihedral angles.
We have applied this method to characterize isomerization
transitions of alanine dipeptide. The algorithms we have developed are
general and can be readily applied to larger systems. We also show
that these collective variables simplify the String method and make it
numerically more robust.

\end{abstract}

\maketitle

%\tableofcontents
\pagebreak

%--------------------------------------------------------------------------  %
%Describe paper's contents
%--------------------------------------------------------------------------  %

\section{Introduction}
\label{sec:intro}

Understanding the mechanism of slow conformational transitions in
biomolecules is key for identifying reaction intermediates, improving
estimates of free energy of binding, and designing drugs. The slow
timescales of conformational changes of biological interest are beyond
the reach of conventional molecular dynamics simulations.  One
approach to solving this problem is to compute ensembles of
trajectories like in transition path
sampling~\cite{BCDG02,Crooks:2011p1563,Rogal:2008p1564} and Markov
state
models~\cite{Singhal415,NoeMSMReview2008,ChoderaMSM2007,HummerMSM2008,Bowman:2010p1565,Voelz:2010p1566,MorcosMSM2010},
which require sampling of large portions of conformational
space. Here we pursue an alternative approach: when at least two
meaningful conformations are known, one can find representative
reaction paths between these conformations using transition path
theory~\cite{E:2006hg,VandenEijnden:2006jw,Metzner:2006du,MaFV06}. These
two approaches can be combined in practice, for instance by finding
representative reaction paths using methods like
Milestoning~\cite{VandenEijnden:2008bn} and using these to generate
Markov State Models~\cite{Schutte:2011gs}.
 
There are many choices for paths to compute. A reasonable choice is to
approximate the most probable path as a minimum free energy path
(MFEP), which is the most probable path in the space of collective
variables in the zero temperature limit of dynamics on a finite
temperature free energy surface~\cite{MaFV06}.  At a finite
temperature, the actual path taken could differ from the MFEP due to
thermal fluctuations. The MFEP can be computed using either the
String method \cite{MarV07,MaFV06} or the
swarm-of-trajectories method \cite{PaSR08}. Other paths that can be
computed, such as the maximum flux transition path (MFTP)~\cite{ZhSS10} are
discussed below.

A major challenge in computing reaction paths, including the MFEP,
is the definition of collective variables which capture conformational
transitions.  These collective variables are generally not known in
the absence of extensive experimental and computational studies. In
this paper, we present a method for finding collective variables
automatically using Instantaneous Normal Mode Analysis (INMA)~\cite{Cho:1994cv,Stratt:1995jh,Keyes:1997fy,Li:1998js}.  It has
been shown earlier that large conformational transitions in
biomolecules can be projected to a set of low frequency normal modes
\cite{LeSS85,TamS01,PePa06}, which motivates our choice of collective
variables.  We incorporated these
collective variables in the String method, which we call
the Normal Mode String (NMS) method.  Our algorithm for finding
collective variables can be incorporated into other methods for
finding reaction paths or for enhanced sampling in generalized
ensembles \cite{ZhCY09}.

Conformational dynamics of proteins occur roughly in three
subspaces. The first is low dimensional (less than 1\% of the modes)
and consists of large conformational changes that affect the packing
topology of the protein. The modes that define a basis set for this
space are anharmonic and contain multiple minima~\cite{HaKG95}. These
modes can be found through principal component analysis (PCA) of MD
trajectories or analyses such as Jumping Among
Minima~\cite{KiSG98}. These modes persist in individual minima for hundreds to
thousands of picoseconds. Protein dynamics on this subspace account
for 80\% to 90\% of the fluctuations in MD simulations.  The second
subspace is an order of magnitude larger and mostly describes changes
on charged and surface side chains. Its modes are mildly anharmonic,
Gaussian-like, and persist for tens to hundreds of picoseconds. The
third subspace consists of the majority of the modes, which are
harmonic and highly self-similar.

Our approach to defining the collective variables for the String
method is to approximate the anharmonic modes of different protein
conformations in the reaction path using low frequency instantaneous
normal mode analysis (INMA), which is an approximation of NMA when the
protein is away from equilibrium.  Each subset of low frequency
instantaneous normal modes at a given protein conformation can
adequately describe local motion, but the collection of these
subsets does not define a global space of protein motion. Our solution
is to map from the instantaneous normal modes to dihedral space, which
defines a global space that can be used to describe conformational
change.

An implementation of the NMS method is available in the open source
framework Protomol~\cite{Matt04}. Example files and analysis scripts to reproduce
the results presented in the paper are also available~\cite{Chat11}. 

We first review the MFEP (\S \ref{ss:mfep}), then present
our method for collective variables (\S \ref{ss:collective}) and
the derivation of the Normal Mode String method (\S \ref{ss:nmsm}),
followed by results from numerical simulations on the isomerization of
alanine dipeptide (\S \ref{se:results}) and a discussion of
related approaches (\S \ref{se:discussion}).

\section{Minimum Free Energy Path (MFEP)}
\label{ss:mfep}
A biomolecule with $N$ atoms has $3N$ degrees of freedom. 
We assume that rare transitions can be characterized by a set of 
$m$ degrees of freedom, which are known as collective variables,
with $m \ll 3N$.
Let us denote the space spanned by the collective variables as $\Omega$, with $\mathrm{dim}(\Omega)=m$.
Let $\mathbf{x}$ be the vector of coordinates in Cartesian space and 
$\mathcal{C}$ be the coordinates in reduced space 
($\mathbf{x}{\in}\mathfrak{R}^{3N}$, $\mathcal{C}{\in}{\Omega}$).
The probability
of the conformation $\mathcal{C}{\in}{\Omega}$ in the canonical ensemble
is given by
\begin{equation}
\label{eq:canonical_prob_distribution}
P(\mathcal{C}) = Z^{-1}{\int}_{{\Re}^{3N}}\exp\left(-{\beta}U(\mathbf{x})\right){\delta}(\mathbf{c}(\mathbf{x}) - \mathcal{C})\,\mathrm{d}\mathbf{x},
\end{equation}
where $\mathbf{c}(\mathbf{x})$ is the mapping from Cartesian coordinates to
collective variable space, $Z$ is the partition function,
$\beta = \frac{1}{k_{B}T}$ with $T$ the system temperature and $k_B$ the Boltzmann constant,
and $U(\mathbf{x})$ is the potential energy of the system.
The Dirac Delta function 
$\mathbf{\delta}(.)$ in Eqn. (\ref{eq:canonical_prob_distribution})
accumulates the number of points from Cartesian space that are mapped to $\mathcal{C}$.
The free energy of the system at $\mathcal{C}$ is then given by
\begin{equation}
\label{eq:free_energy}
F(\mathcal{C}) = - {\beta}^{-1}\ln{(P(\mathcal{C}))}.
\end{equation}
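As a concrete numerical illustration of Eqns. (\ref{eq:canonical_prob_distribution}) and (\ref{eq:free_energy}), the sketch below (Python, with hypothetical histogram values and the conventional $F = -k_B T \ln P$ prefactor) converts a probability estimate over a one-dimensional collective variable into a free energy profile; the most probable bin becomes the free energy minimum.

```python
import numpy as np

kB = 0.0019872041      # Boltzmann constant in kcal/(mol K) (assumed units)
T = 300.0              # temperature in K
kT = kB * T

# Hypothetical histogram estimate of P(C) over one collective variable
P = np.array([0.05, 0.20, 0.50, 0.20, 0.05])

# F(C) = -kT ln P(C), defined up to an additive constant
F = -kT * np.log(P)
F -= F.min()           # shift so the most probable bin has F = 0
```
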
The minimum free energy path (MFEP) is the transition path between
endpoints of metastable states $A$ and $B$ with the highest
probability in the absence of thermal fluctuations. We can describe a
general path in collective variable space by a parameterized curve
$S({a})\in \Omega$ where
\begin{equation}
Y_{1}{\le}{a}{\le}Y_{2},{\,}S(Y_{1}){\in}A,{\,}S(Y_{2}){\in}B.
\end{equation}
The string is then
\begin{equation}
\label{eq:stringdef1}
\mathcal{S}(\alpha) = \{ S(a) | a{\in}{\alpha}\}.
\end{equation}
At a point on a string
$\mathcal{C}_{i} = S({a}_{i})$ the mean force is given by
\begin{equation}
\label{eq:meanforce1}
G(\mathcal{C}_{i}) = -\nabla_{\mathcal{C}_{i}}F(\mathcal{C}_{i}).
\end{equation}
The MFEP is a special case of a string where  the mean force is tangential to the string such that
\begin{equation}
\label{eq:meanforce_on_MFEP}
\mathcal{P}_S(\mathcal{C}_{i})^{T}~G(\mathcal{C}_{i}) = ||G(\mathcal{C}_{i})||,
\end{equation}
with $\mathcal{P}_S(\mathcal{C}_{i})$ a unit vector tangent to the string at a point $\mathcal{C}_{i}$
\begin{equation}
\label{eq:meanforce_on_MFEP2}
\mathcal{P}_S(\mathcal{C}_{i}) = \left(\biggl[\left({\frac{\mathrm{d}{S}({a})}{\mathrm{d}{a}}}\right)^{T}{\frac{\mathrm{d}{a}({\mathcal{C}})}{\mathrm{d}{\mathcal{C}}}}\biggr]\bigg/
\biggl|\biggl|\left({\frac{\mathrm{d}{S}({a})}{\mathrm{d}{a}}}\right)^{T}{\frac{\mathrm{d}{a}({\mathcal{C}})}{\mathrm{d}{\mathcal{C}}}}\biggr|\biggr|\right)\bigg|_{\mathcal{C}_i}.
\end{equation}
%In Eqn. (\ref{eq:meanforce_on_MFEP}), 
%${\frac{\mathrm{d}S({\alpha})}{\mathrm{d}{\alpha}}}\bigg|_{a_i}$ is
%a unit tangent to the string at a point $a_{i}$.
We denote the mean force perpendicular to the MFEP at a point $\mathcal{C}_{i}$ as $G^{\perp}(\mathcal{C}_{i})$.
%\begin{equation}
%\label{eq:gperp1}
%G^{\perp}(\mathcal{C}_{i}) = - {\nabla}_{\mathcal{C}_{i}}F^{\perp}(\mathcal{C}_{i}).
%\end{equation}
From Eqn. (\ref{eq:meanforce_on_MFEP}), it is evident that the mean
force perpendicular to the MFEP at any point is the zero vector; thus
the objective is to find a string such that $G^{\perp}(S({a})) =
\mathbf{0},{\,}{\forall}{a}{\in}{\alpha}$.
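The stationarity condition $G^{\perp} = \mathbf{0}$ can be monitored numerically by subtracting the tangential component of the mean force at each image; a minimal sketch in Python (the function name is ours):

```python
import numpy as np

def perpendicular_force(G, tangent):
    """Component of the mean force G orthogonal to the string tangent.
    It vanishes when G is parallel to the string, the MFEP condition."""
    t = np.asarray(tangent, dtype=float)
    t = t / np.linalg.norm(t)
    return np.asarray(G, dtype=float) - np.dot(t, G) * t
```
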

\section{Collective variables}
\label{ss:collective}

Our choice of collective variables is central to the broad
applicability of the method to proteins.  We start with a
configuration $\mathbf{x}_0$ at a local minimum of $U$. This allows
the expansion of the potential energy about the equilibrium point, and
we can separate the dynamics into the slow and fast spaces by normal
mode analysis (NMA). The separation is accomplished by identifying the
set of eigenvectors, ordered by eigenvalue, whose eigenvalues
are below a given cutoff. Since each eigenvalue is the square
of its associated eigenvector's frequency, this cutoff defines the frequency
partition. Given this set of $m$ vectors, defined as matrix
$\mathbf{Q}$ with the eigenvectors as columns, we can project the low
frequency dynamics as
\begin{equation}
\label{eq:lowfrequency1}
\mathbf{x}_{\mathrm{low}} = M^{-\frac{1}{2}}\mathbf{Q}\mathbf{Q}^{\mathrm{T}}M^{\frac{1}{2}}(\mathbf{x}-\mathbf{x}_0) + \mathbf{x}_0,
\end{equation}
and high frequency dynamics
\begin{equation}
\label{eq:highfreq}
\mathbf{x}_{\mathrm{high}} = M^{-\frac{1}{2}}\left(I-\mathbf{Q}\mathbf{Q}^{\mathrm{T}}\right)M^{\frac{1}{2}}(\mathbf{x}-\mathbf{x}_0) + \mathbf{x}_0.
\end{equation}
We denote this
low frequency space as $\Omega$ and the remaining high frequency space as $\bar{\Omega}$. 
Using $\mathbf{Q}$, we can project from Cartesian
space to the mode space as
\begin{equation}
\label{eq:modespaceproj}
\mathbf{c}(\mathbf{x}) = \mathbf{Q}^{T}M^{1/2}(\mathbf{x}-\mathbf{x}_0).
\end{equation}
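Eqns. (\ref{eq:lowfrequency1})--(\ref{eq:modespaceproj}) can be sketched directly with dense linear algebra (Python/NumPy; a diagonal mass matrix is assumed, and a toy Hessian stands in for the molecular one):

```python
import numpy as np

def mode_projections(H, M, x, x0, cutoff):
    """Split a displacement into low/high frequency parts via mass-weighted NMA.
    H: Hessian, M: diagonal mass matrix, cutoff: eigenvalue (squared frequency)
    threshold defining the frequency partition."""
    Minv_h = np.diag(1.0 / np.sqrt(np.diag(M)))   # M^{-1/2}
    M_h = np.diag(np.sqrt(np.diag(M)))            # M^{1/2}
    Hw = Minv_h @ H @ Minv_h                      # mass-weighted Hessian
    evals, evecs = np.linalg.eigh(Hw)
    Q = evecs[:, evals < cutoff]                  # low frequency eigenvectors as columns
    c = Q.T @ M_h @ (x - x0)                      # collective variables, Eqn. (modespaceproj)
    x_low = Minv_h @ Q @ c + x0                   # Eqn. (lowfrequency1)
    x_high = Minv_h @ (M_h @ (x - x0) - Q @ c) + x0   # Eqn. (highfreq)
    return c, x_low, x_high
```

Note that $\mathbf{x}_{\mathrm{low}} + \mathbf{x}_{\mathrm{high}} - \mathbf{x}_0 = \mathbf{x}$, so the two projections partition the displacement.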

Since the projection matrices are dependent on the position, we need
to repeat this process as we explore conformational space.  However,
since generally we will no longer be at an equilibrium point, we use
the instantaneous normal mode analysis (INMA) described below.

Our application of INMA is based on the following observation: the
distribution of frequencies of proteins is such that the dimension of
the low frequency space is essentially fixed. This is illustrated in
Figure \ref{fig:PNMLfigCalmod} for two very distinct conformations of
the protein calmodulin. A consequence of this invariance is that the
dimension of $\mathbf{Q}$ is constant. Hence, to find the low
frequency space at a point $\mathbf{x}^{\prime}$ away from equilibrium
we use the eigenvectors of a ``nearby'' NMA or INMA analysis, for
example at $\mathbf{x}_0$. We then minimize the potential energy in the space spanned by the
previous high frequency eigenvectors, re-diagonalize, and repeat the process until the frequency division is
invariant (up to a given $\epsilon$).  At this point the system is
minimized in the high frequency space
$\bar{\Omega}$. The final rediagonalization gives a new basis set spanning
$\Omega$ at $\mathbf{x}^{\prime}$.
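The iteration just described can be sketched as follows (Python; plain steepest descent stands in for the actual minimizer, callables replace the molecular gradient and Hessian, and unit masses are assumed):

```python
import numpy as np

def refine_low_modes(x, grad, hess, m, n_iter=20, step=0.01):
    """INMA iteration sketch: relax in the current high frequency space,
    re-diagonalize, and repeat until the gradient in the complement space
    vanishes.  m is the (fixed) dimension of the low frequency space."""
    for _ in range(n_iter):
        evals, evecs = np.linalg.eigh(hess(x))
        Q = evecs[:, :m]                        # current low frequency basis
        P_high = np.eye(len(x)) - Q @ Q.T       # projector onto the high frequency space
        g_high = P_high @ grad(x)
        if np.linalg.norm(g_high) < 1e-8:       # minimized in the complement space
            break
        x = x - step * g_high                   # steepest-descent relaxation step
    return Q, x
```
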

Computing normal modes involves diagonalization of the Hessian matrix
of second derivatives of the potential energy function. The
eigenvalues correspond to the square of the frequency of motions in
the direction of their corresponding eigenvectors. Brute force
diagonalization is $O(N^3)$ and clearly impractical. Even though
several approximate methods exist to compute low frequency
eigenvectors, these methods were not optimized to be used many times
as we need to do; they take either too long or are too inaccurate for
our application~\cite{SeBB10,Ghys10,MouP93}.  We have improved the
accuracy of the Rotation Translation Block (RTB) method~\cite{DuTS94},
which approximates the low frequency subspace of the protein by
constructing a model subspace ${E}$ from the first 6 eigenvectors of
block Hessians that represent contiguous residue blocks. $E$ is a
subspace that spans the translations and rotations among blocks. Then
the whole Hessian ${H}$ is projected onto the model space by $S={E}^T
{H} {E}$. Diagonalization of $S,$ which is a small matrix, gives
approximate low frequency eigenvectors, see
Fig.~\ref{fbmstrategy}. Here we show that this approximation does not
accurately capture the low frequency space. However, a simple
modification does: instead of only choosing 6 eigenvectors, we choose
a larger number of eigenvectors to account for internal flexibility of
the residues. We call this approach flexible block method
(FBM). Fig.~\ref{fbmperformance} illustrates the superior accuracy of
FBM over RTB for the protein BPTI with 58 residues. FBM and RTB can be
seen as an application of an inexact Trace Minimization
algorithm~\cite{Wisniewski:1982uy,Sameh:2000ba,Naumov:2008dz} where
the orthogonal trial matrices are formed from the residue-block model
but where there is no conjugate gradient solution at the end.
For our INMA we use
implicit solvent models, such as Generalized-Born~\cite{OnCB02}, which
replace the effect of solvent degrees of freedom on the potential
energy function with mean force potentials. Both RTB and FBM have
$O(N^{9/5})$ time complexity, which compares favorably with the most commonly
used $O(N^2)$ implementations of Generalized-Born implicit solvent
forces.
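The RTB/FBM construction ($S = E^T H E$ followed by diagonalization of the small matrix $S$) can be sketched in a few lines (Python; the residue blocks and per-block vector count are caller-supplied, and a small dense matrix stands in for the molecular Hessian):

```python
import numpy as np

def block_subspace_modes(H, blocks, k_per_block):
    """FBM/RTB-style sketch: build a model subspace E from the lowest
    k_per_block eigenvectors of each diagonal block of H, project
    S = E^T H E, and diagonalize the small matrix S."""
    n = H.shape[0]
    cols = []
    for idx in blocks:                      # idx: coordinate indices of one block
        w, v = np.linalg.eigh(H[np.ix_(idx, idx)])
        for j in range(k_per_block):        # RTB uses 6 vectors; FBM takes more
            e = np.zeros(n)
            e[idx] = v[:, j]
            cols.append(e)
    E, _ = np.linalg.qr(np.array(cols).T)   # orthonormalize the model vectors
    S = E.T @ H @ E
    w, u = np.linalg.eigh(S)
    return w, E @ u                         # approximate low frequency modes of H
```
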


Instantaneous normal modes (INM) provide a way to automatically
find collective variables.  However, since each eigenvector set is
only defined locally (at the point of diagonalization), there is no
common global set. Our solution is to project the INM onto the space
of dihedral angles involving heavy atoms only, which define global
conformations of the protein. Usually a reduced set of $p$ dihedrals
${\Theta}{\,}={\,}\{{\theta}_{1},{\dots},{\theta}_{p}\}$ can be
selected by merging the dihedrals that participate significantly in
the motions along the instantaneous normal modes.
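One simple way to rank candidate dihedrals, sketched below under the assumption that the dihedral Jacobian $\partial\boldsymbol{\theta}/\partial\mathbf{x}$ is available row-wise, is to score each dihedral by the norm of its Jacobian row projected onto the low frequency modes (Python; the function and scoring rule are illustrative, not the exact selection criterion used here):

```python
import numpy as np

def select_dihedrals(J, Q, top=2):
    """Rank dihedrals by participation in the low frequency modes:
    score_k = ||J_k Q||, where J_k is row k of the dihedral Jacobian."""
    scores = np.linalg.norm(J @ Q, axis=1)
    return np.argsort(scores)[::-1][:top]
```
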

\section{Normal Mode String Method}
\label{ss:nmsm}
We present the adaptation of the String method from
Ref.~\cite{MaFV06} to use our collective variables, here called
Normal Mode String (NMS) method, which is summarized in Algorithm
\ref{algo:stringalgo}. The discretization of the path follows
that work. The solution of the nonlinear discrete equations uses the
semi-implicit simplified string method with a dragging algorithm that
we introduce here. Conditional averages are done using constrained
dynamics on INM space rather than restrained dynamics on a nonlinear
space, giving a simpler and numerically better conditioned algorithm. Sampling to
estimate the free energy is done using the Langevin Leapfrog
algorithm rather than potentially non-ergodic Nos\'{e}-Hoover
dynamics~\cite{Schmidler:2008vf,Fukuda:2010il}
 or slower
Brownian dynamics. 

Given $m$ collective variables defined from
two metastable states $A$ and $B$ we generate an initial string with
$R$ images (two endpoint images and $R-2$ intermediate images) by
calculating $\mathcal{C}_1$ at $A$, $\mathcal{C}_R$ at $B$ and form
the intermediate images by linear interpolation for $\mathcal{C}_2$ to
$\mathcal{C}_{R-1}$, spaced equidistantly in the collective variable
space.
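The initialization amounts to linear interpolation in collective variable space; a minimal sketch (Python, with toy endpoint values in the test):

```python
import numpy as np

def initial_string(cA, cB, R):
    """Place R images on the line joining the endpoint images cA and cB,
    spaced equidistantly in collective variable space."""
    alphas = np.linspace(0.0, 1.0, R)
    return np.array([(1.0 - a) * cA + a * cB for a in alphas])
```
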

The method proceeds by using molecular dynamics (MD) where the
collective variables are constrained at each image in the string to
calculate the mean force $G_i$. Once we estimate $G_i$ for all $i$,
we move the images on the string according to the equation
\begin{equation}
\label{eq:prop1}\mathbf{x}_i^{n+1} = \mathbf{x}_i^n+\Delta\tau_s M^{-1}G^{n+1}_i,
\end{equation}
where $\Delta \tau_s$ is a user defined displacement parameter,
$M$ is the mass matrix and $n$ denotes the step of iteration.

To maintain equal separation among the images we impose the following
constraint after string update
\begin{equation}
\label{eq:reparamconstrain}
||\mathcal{C}_{i} - \mathcal{C}_{i+1}||{\approx}||\mathcal{C}_{i+1} - \mathcal{C}_{i+2}||.
\end{equation}
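The equal-separation constraint of Eqn. (\ref{eq:reparamconstrain}) is commonly enforced by piecewise-linear interpolation at equal arc length; a sketch (Python, assuming images stored as rows of an array):

```python
import numpy as np

def reparameterize(images):
    """Redistribute images at equal arc length along the string."""
    seg = np.linalg.norm(np.diff(images, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    s_new = np.linspace(0.0, s[-1], len(images))     # equally spaced targets
    return np.column_stack([np.interp(s_new, s, images[:, d])
                            for d in range(images.shape[1])])
```
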

By repeating this algorithm, depicted in Figure \ref{fig:stringperp1},
the string converges to the point where the mean force is almost
parallel to the string at all images, i.e. $G^{\perp}\approx
\mathbf{0}$. From Eqn. (\ref{eq:meanforce_on_MFEP}), this means that
the string is then stationary (up to some perturbation
$\epsilon$). This string also approximates the MFEP by the
requirements of Eqn. (\ref{eq:meanforce_on_MFEP}).


\begin{algorithm}[!hbt]
\caption{Algorithm for String evolution}\label{algo:stringalgo}
\begin{flushleft}
{\bf Input:} $m$, $\mathbf{x}_{i}^{0}$, $\mathbf{Q}_{i}$, ${\theta}_{i}^{0}$ ${\forall}i$, $R$, numsteps. \\
\end{flushleft}
\begin{algorithmic}
\For{$j=1$ to numsteps}
  \For{$i=1$ to $R$}
     \State $\mathbf{Q}_{i}$ = Diagonalize($\mathbf{x}_{i}^{j}$).
     \State $G_{i}$ = EstimateMeanForce($\mathbf{x}_{i}^{j}$,$n_{B}$,$\mathbf{Q}_{i}$).
     \State $\mathbf{x}_{i}^{j+{\epsilon}}$ = StringUpdate($\mathbf{x}_{i}^{j}$,${\Delta}{\tau_s}$,$G_{i}$).
     \State ${\theta_{i}^{j+\epsilon}}$ = $\theta$($\mathbf{x}_{i}^{j+{\epsilon}}$).
  \EndFor
  \State $\theta_{i}^{j,r}(i=1,{\dots},R)$ = Reparameterize($\theta_{i}^{j+\epsilon},(i=1,{\dots},R)$).
  \For{$i=1$ to $R$}
    \State ${\theta}_{i}^{j+1},\mathbf{x}_{i}^{j+1}$ = Dragging($\mathbf{Q}_{i}$,${\theta}_{i}^{j,r}$,$\theta_{i}^{j}$,$\mathbf{x}_{i}^{j+\epsilon}$).
  \EndFor
\EndFor
\end{algorithmic}\vspace{0.8cm}
\end{algorithm}


\subsection{Initial String}
\label{ss:initialstring}
Algorithm \ref{algo:initialstring} shows the steps required to find
the initial string given the set of dihedrals of interest $\Theta$.
We start with the coordinates at the endpoints $A$ and $B$.  Let
$\boldsymbol{\theta}_{1} = \Theta(A)$ and $\boldsymbol{\theta}_{R} =
\Theta(B)$.  We define $R$ images
$\boldsymbol{\theta}_i,{\,}i=1,{\dots},R$ and place the images on a
line joining the endpoints at an interval $(\boldsymbol{\theta}_{R} -
\boldsymbol{\theta}_{1})/(R-1)$.  From a general point $i$ we move
from $\boldsymbol{\theta}_{i}$ to
$\boldsymbol{\theta}_{i+1}=\boldsymbol{\theta}_{i}+(\boldsymbol{\theta}_{R}
- \boldsymbol{\theta}_{1})/(R-1)$ by applying the dragging algorithm
from Section \ref{ss:dragging}. Once we reach the target
$\boldsymbol{\theta}_{i+1}$, we rediagonalize to obtain new
eigenvectors before moving to the next target,
$\boldsymbol{\theta}_{i+2}$.

The string update requires several operations after we obtain the mean
force at arbitrary step $n$.  First we update the collective variables
using Eqn. (\ref{eq:prop1}) to find intermediate
$\mathbf{\hat{x}}^{n+\epsilon}$. This is followed by minimization in
$\bar\Omega$, the space orthogonal to the collective variables, to
find $\mathbf{\bar{x}}^{n+\epsilon}$.  This step yields intermediate
conformation $\mathbf{x}^{n+\epsilon}$ which is projected onto the
dihedral set $\Theta$ to get the intermediate string
${\Theta}^{n+\epsilon} = {\Theta} (\mathbf{x}^{n+\epsilon})$.
Finally, we apply re-parameterization on the intermediate string
${\Theta}^{n+\epsilon}$ to get the new string ${\Theta}^{n+1}$.

\begin{algorithm}[!hbt]
\caption{Algorithm for generating initial String}\label{algo:initialstring}
\begin{flushleft}
{\bf Input:} $m$, $\Theta$, $\mathbf{x}_{A}$, $\mathbf{x}_{B}$, $R$. \\
\end{flushleft}
\begin{algorithmic}
\State Set $\theta_{1} = \theta(\mathbf{x}_{A})$.
\State Set $\theta_{R} = \theta(\mathbf{x}_{B})$.
\State Set ${\theta}_{inc} = \frac{{\theta}_{R} - {\theta}_{1}}{R-1}$.
\For{$i=2$ to $R-1$}
  \State $\mathbf{x}_{i} = \mathbf{x}_{i-1}$.
  \State $\mathbf{Q}_{i}$ = Diagonalize($\mathbf{x}_{i}$).
  \State ${\theta}_{i,t} = {\theta}_{i} + {\theta}_{inc}$.
  \State ${\theta}_{i},\mathbf{x}_{i}$ = Dragging($\mathbf{Q}_{i}$,${\theta}_{i,t}$,$\theta_{i-1}$,$\mathbf{x}_{i}$).
\EndFor
\end{algorithmic}\vspace{0.8cm}
\end{algorithm}


\subsection{Dragging Algorithm}
\label{ss:dragging}

The update to the new positions may produce non-physical conformations
(i.e., structures with very low probability).  We therefore adopt a dragging
algorithm that moves in small steps from the
current string position $\mathbf{x}^{n}$ to the new one,
$\mathbf{x}^{n+1}$.

Moving from $\boldsymbol{\theta}_{a}$ to $\boldsymbol{\theta}_{b}$ in
$\Theta$ requires a corresponding move from $\mathcal{C}_{a}$ to
$\mathcal{C}_{b}$ in low frequency mode space. Since large movements
can be non-physical and an exact mapping from dihedral space $\Theta$
to instantaneous normal modes is not available, we define a small
step-size in dihedral space and move the system over many steps,
denoted by $n_d$.

Algorithm \ref{algo:draggingalgo} describes the steps in the dragging
algorithm. At each dragging step, we pick one dihedral from the set
$\Theta$ and find the corresponding displacement ${\delta}\mathbf{c}$
in mode space using Eqn.~(\ref{eq:drageuler}). We can select a
dihedral at random, or choose the one farthest from its
target. After each update in mode space, we perform an energy
minimization in $\bar{\Omega}$ to find the most probable conformation
in that space. Next we project back to the Cartesian space to get the
new position $\mathbf{x}_{i}^{\epsilon}$.  We continue this loop until
we are sufficiently close to the target $\theta_{i}+{\delta}\theta$.

\begin{algorithm}[!hbt]
\caption{Dragging Algorithm}\label{algo:draggingalgo}
\begin{flushleft}
{\bf Input:} Position of point in Cartesian space $\mathbf{x}_{i}^{0}$, source $\theta_i$ and target $\theta_{i}+{\delta}{\theta}$. \\
\end{flushleft}
\begin{algorithmic}
\While {true}
  \State j = FindADihedralToMove().
  \State ${\delta}{\theta}_{j}$ = FindDistanceToMove().
  \State ${\delta}\mathbf{c}$ = DisplacementInModeSpace(). Eqn.~(\ref{eq:drageuler})
  \State Find ${\delta}\mathbf{\hat{x}}_{i}^{\epsilon}$.
  \State Update position in $\Omega$ with ${\delta}\mathbf{\hat{x}}_{i}^{\epsilon}$.
  \State Minimize potential energy in $\bar{\Omega}$ to get new position $\mathbf{x}_{i}^{\epsilon}$.
  \State $\theta_{i,d}$ = GetDihedrals($\mathbf{x}_{i}^{\epsilon}$).
  \State ${\delta}{\theta}_{d}$ = $\theta_{i}+{\delta}{\theta}$ - $\theta_{i,d}$.
  \If {$||{\delta}{\theta}_{d}|| {\leq} {\epsilon}_{\theta}$}
      \State break
  \EndIf
\EndWhile
\end{algorithmic} \vspace{0.8cm}
\end{algorithm}


The displacement in mode space is defined by
\begin{equation}
\label{eq:drageuler}
\mathcal{C}^{l+1} = \mathcal{C}^{l} + ({\delta}\boldsymbol{\theta}^{T})^{l}{\nabla}_{\boldsymbol{\theta}}\mathbf{c}^{l}(\boldsymbol{\theta}),
\end{equation}
such that
\begin{equation}
{\delta}{\boldsymbol{\theta}}^{l} = \frac{{\boldsymbol{\theta}}_{b}-{\boldsymbol{\theta}}_{l}}{n_{d}}.
\end{equation}
Derivation of this equation can be found in the Appendix.
Note that on step $l=0$, $\boldsymbol{\theta}_{l} =
\boldsymbol{\theta}_{a}$.  Iterations stop when we have
$||\boldsymbol{\theta}_{b} - \boldsymbol{\theta}_{l}||<{\epsilon}$.
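This iteration can be sketched directly (Python; the Jacobian $\nabla_{\boldsymbol{\theta}}\mathbf{c}$ is passed as a callable, and the identity Jacobian used in the test is purely illustrative):

```python
import numpy as np

def drag(theta, theta_b, jac, n_d=10, tol=1e-6, max_iter=300):
    """Dragging sketch: approach the dihedral target theta_b in fractional
    steps, mapping each increment to mode space through jac(theta), the
    Jacobian d c / d theta of Eqn. (drageuler)."""
    mode_steps = []
    for _ in range(max_iter):
        if np.linalg.norm(theta_b - theta) < tol:
            break
        d_theta = (theta_b - theta) / n_d        # fractional step in dihedral space
        mode_steps.append(jac(theta) @ d_theta)  # displacement in mode space
        theta = theta + d_theta
    return theta, mode_steps
```
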




\subsection{Derivation of mean force}
\label{ss:meanforce}
Here we derive the expression for mean force for the NMS method.
To find the mean forces 
$G(S({a}_{i}))$
on the string we use MD simulation in the high frequency space $\bar{\Omega}$.
To achieve this we re-write Eqn. (\ref{eq:canonical_prob_distribution}) in terms
of $\mathbf{x}$ only.
A point $\mathbf{x}_{i}$ in Cartesian space can be written as
\begin{equation}
\label{eq:nmldivision}
\mathbf{x}_{i} = \mathbf{\hat{x}}_{i} + \mathbf{\bar{x}}_{i} + \mathbf{x}_{0},
\end{equation}
where $\mathbf{\hat{x}}_{i}\in\Omega$, 
$\mathbf{\bar{x}}_{i}\in\bar{\Omega}$
 and $\mathbf{x}_{0}$ is the point where the Hessian was
diagonalized.
Here $\mathbf{\hat{x}}_{i}$ is the projection into Cartesian space of a point in $\Omega$,
and $\mathbf{\bar{x}}_{i}$ is the projection into Cartesian space of a point in $\bar{\Omega}$
($\Omega$ and $\bar{\Omega}$ are identified with the sets of all such projections).
Note that Eqn. (\ref{eq:modespaceproj}) defines the mapping from Cartesian
space $\mathbf{x}$ to $\Omega$.
We can also define the inverse mapping according to Eqn. (\ref{eq:lowfrequency1}).
Mapping the remaining degrees of freedom from complement space $\bar{\Omega}$
to the Cartesian space is given by Eqn. (\ref{eq:highfreq}).
Detailed derivation of Eqn. (\ref{eq:nmldivision})
can be found in Ref.~\citenum{SPPI08}.
From Eqn. (\ref{eq:free_energy}) the mean force
at $\mathbf{c}_{i}{\in}{\Omega}$ is given
by
\begin{equation}
\label{eq:meanforce_expression}
G(\mathbf{c}_{i}) = \dfrac{1}{{\beta}P({\mathbf{c}_{i}})}{\nabla}_{\mathbf{c}}P(\mathbf{c}_{i}).
\end{equation}
From the one-to-one mapping between a point $\mathbf{c}_{i}$ and $\mathbf{\hat{x}_{i}}$, we have
\begin{equation}
\label{eq:fe_derivation1}
P(\mathbf{c}_{i}) = P(\mathbf{\hat{x}_{i}}).
\end{equation}
Now we can write
\begin{equation}
\label{eq:prob_in_cartesian_space}
P(\mathbf{\hat{x}_{i}}) = Z^{-1}\displaystyle\int_{{\Re}^{3N}}\exp\left(-{\beta}U(\mathbf{x})\right)\displaystyle\prod_{k=1}^{m}{\delta}((M^{-\frac{1}{2}}\mathbf{Q}\mathbf{Q}^{T}M^{\frac{1}{2}}(\mathbf{x}-\mathbf{x_{0}}) - \mathbf{\hat{x}_{i}})_{k})\mathrm{d}{\mathbf{x}},
\end{equation}
where the subscript $k$ indicates $k$th element of
the vector.
The $\mathbf{\delta}(.)$ term in Eqn. (\ref{eq:prob_in_cartesian_space}) can be removed if we integrate only over
points satisfying $M^{-\frac{1}{2}}\mathbf{Q}\mathbf{Q}^{T}M^{\frac{1}{2}}(\mathbf{x}-\mathbf{x_{0}})=\mathbf{\hat{x}_{i}}$, i.e. over $\bar{\Omega}$.
We can then write
\begin{equation}
\label{eq:prob_in_cartesian_space1}
P(\mathbf{\hat{x}_{i}}) = Z^{-1}\displaystyle\int_{{\bar{\Omega}}}\exp\left(-{\beta}U(\mathbf{\hat{x}_{i}} + \mathbf{\bar{x}} + \mathbf{x_{0}})\right)\mathrm{d}\mathbf{\bar{x}}.
\end{equation}
The derivative of $P(\mathbf{\hat{x}_{i}})$ w.~r.~t.~ $\mathbf{\hat{x}_{i}}$ is given by
\begin{equation}
\label{eq:prob_in_cartesian_space2}
{\nabla}_{\mathbf{\hat{x}}} P(\mathbf{\hat{x}_{i}}) = -{\beta}Z^{-1}\displaystyle\int_{{\bar{\Omega}}}\exp\left(-{\beta}U(\mathbf{\hat{x}_{i}} + \mathbf{\bar{x}} + \mathbf{x_{0}})\right){\nabla}_{\mathbf{\hat{x}}}U(\mathbf{\hat{x}_{i}} + \mathbf{\bar{x}} + \mathbf{x_{0}})\mathrm{d}\mathbf{\bar{x}},
\end{equation}
which, in turn, gives us
\begin{equation}
\label{eq:mean_force_exp1}
G(\mathbf{c}_{i}) = \dfrac{1}{{\beta}P({\mathbf{c}_{i}})} \left(\dfrac{\mathrm{d}\mathbf{\hat{x}_{i}}}{\mathrm{d}\mathbf{c}_{i}} \right)^{T} {\nabla}_{\mathbf{\hat{x}}_{i}}P(\mathbf{\hat{x}_{i}}).
\end{equation}
Finally
\begin{equation}
\label{eq:mean_force_exp2}
G(\mathbf{c}_{i}) = -\dfrac{Z^{-1}}{P({\mathbf{c}_{i}})}\mathbf{Q}^{T}M^{\frac{1}{2}}\displaystyle\int_{{\bar{\Omega}}}\exp\left(-{\beta}U(\mathbf{\hat{x}_{i}} + \mathbf{\bar{x}} + \mathbf{x_{0}})\right){\nabla}_{\mathbf{\hat{x}}}U(\mathbf{\hat{x}_{i}} + \mathbf{\bar{x}} + \mathbf{x_{0}})\mathrm{d}\mathbf{\bar{x}}.
\end{equation}

\subsection{Numerical estimation of mean force}
\label{ss:meanforcecalc}
We use constrained Langevin dynamics to limit
sampling to $\bar{\Omega}$ to compute the mean force,
\begin{equation}
\label{eq:langevinimpulse1}
\mathrm{d}\mathbf{\bar{x}}=\mathbf{\bar{v}}\mathrm{d}t
\end{equation}
and
\begin{equation}
\label{eq:langevinimpulse2}
M\mathrm{d}\mathbf{\bar{v}}=\mathbf{\bar{f}}\mathrm{d}t - {\gamma}M\mathbf{\bar{v}}\mathrm{d}t + \sqrt{2k_{B}T{\gamma}}M^{-1}P_{f}^{\perp}M^{\frac{1}{2}}\mathrm{d}\mathbf{W}(t).
\end{equation}
In Eqn. (\ref{eq:langevinimpulse2}) $t$ is time, $\mathbf{W}(t)$ is a collection of Wiener processes,
$\mathbf{\bar{v}}$ is the velocity in $\bar{\Omega}$, $\mathbf{\bar{f}}=P_{f}^{\perp}\mathbf{f}$ and
$P_{f}^{\perp}$ is the force projection matrix given by
\begin{equation}
\label{eq:forceprojection1}
P_{f}^{\perp} = M^{\frac{1}{2}}(I - \mathbf{{Q}}\mathbf{{Q}}^{T})M^{-\frac{1}{2}},
\end{equation}
where $\gamma$ is the friction coefficient in ${\bar{\Omega}}$.
We discretize this equation using the Langevin Leapfrog integrator
\cite{IzSP10}. Although in the literature Brownian dynamics in
collective variable space is often proposed for its amenability to analysis, our own numerical tests
show that using Langevin dynamics gives superior sampling. 
Figure~\ref{fig:bdvsld} shows that Langevin dynamics samples a
substantially larger space than Brownian dynamics for alanine
dipeptide with 8 modes constrained. 
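A single projected Langevin step can be sketched as follows (Python, unit masses assumed; this is a simplified exponential-Euler-style update for illustration, not the Langevin Leapfrog discretization of Ref.~\cite{IzSP10}):

```python
import numpy as np

def constrained_langevin_step(x, v, force, Q, dt=1e-3, gamma=1.0, kT=1.0, rng=None):
    """One projected Langevin step confined to the high frequency space
    (unit masses): deterministic force and random noise are both projected
    by I - Q Q^T, so the collective variables stay fixed."""
    rng = np.random.default_rng(0) if rng is None else rng
    P = np.eye(len(x)) - Q @ Q.T            # projector onto the complement space
    a = np.exp(-gamma * dt)                 # velocity damping over one step
    noise = np.sqrt(kT * (1.0 - a * a)) * (P @ rng.standard_normal(len(x)))
    v = a * v + ((1.0 - a) / gamma) * (P @ force(x)) + noise
    x = x + dt * v
    return x, v
```

Because every velocity increment lies in the range of $I - \mathbf{Q}\mathbf{Q}^T$, the trajectory never acquires a component along the constrained modes.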



\label{sss:avgforce}
We perform a user defined number of steps, $n_B$, of the Langevin propagator.
At step $j$, for point $i$, if the system force is $\mathbf{f}_{i}^{j}$, the force in $\bar{\Omega}$ is given
by $\bar{\mathbf{f}}_{i}^{j} = P_{f}^{\perp}\mathbf{f}_{i}^{j}$,
and the force in $\Omega$ is given by
$\mathbf{\hat{f}}_{i}^{j} = \mathbf{f}_{i}^{j} - \bar{\mathbf{f}}_{i}^{j}$.
The mean force is estimated as the average of
$n_{B}$ steps of dynamics
\begin{equation}
\label{eq:subspacemeanforce}
\mathbf{\hat{f}}_{i,mf}^{j} = \frac{1}{n_{B}}\displaystyle\sum_{j=1}^{n_{B}}\mathbf{\hat{f}}_{i}^{j}.
\end{equation}
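A minimal sketch of this averaging, assuming a \texttt{proj} callable that applies $P_{f}^{\perp}$ and a \texttt{step} callable that advances the constrained dynamics by one integration step; both names are hypothetical.

```python
import numpy as np

def estimate_mean_force(x0, grad_U, proj, step, n_B):
    """Averages the low-frequency force component f_hat = f - P_perp f
    over n_B steps of the constrained dynamics, as in the mean force
    estimate above. `step` advances (x, v) by one integration step."""
    x = x0.copy()
    v = np.zeros_like(x0)
    acc = np.zeros_like(x0)
    for _ in range(n_B):
        x, v = step(x, v)
        f = -grad_U(x)          # full system force
        acc += f - proj(f)      # keep only the component in Omega
    return acc / n_B
```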



\section{Numerical tests}
\label{se:results}

\subsection{Test system: Alanine dipeptide in vacuum}
We applied the 
Normal Mode String Method
to study the transition from $C7_{eq}$ to $C7_{ax}$
for alanine dipeptide in vacuum. Due to its simplicity, the conformational transition of alanine
dipeptide (Figure \ref{fig:alanine}) has been widely studied \cite{WeES07,PaSR08}.
We implemented our algorithm in ProtoMol \cite{Matt04}.
The backbone dihedrals ($\phi$,$\psi$) are known to describe the 
transition from $C7_{eq}$ to $C7_{ax}$;
therefore, ($\phi$,$\psi$) has been used successfully as collective
variables for this system with the String method.
We use the CHARMM 22 \cite{MACK92,MACK98} force 
field without CMAP corrections for our simulations. 
We found the endpoints of the transition by energy minimization using
the conjugate gradient
method \cite{Schl02}. 


\subsection{Choice of parameters}
The performance and correctness of the NMS method depends upon the
following parameters: $m,$ the number of low frequency modes in
$\mathbf{Q},$ $p,$ the number of dihedrals to define the global
collective variable space, $R,$ the number of images in the String,
and $n_{B},$ the number of sampling steps in the high frequency
space to estimate the mean force in the low frequency space.

We choose $m$ as the number of slow modes with anharmonic factors of 2
or greater, which is less than 1\% of the modes for large
proteins~\cite{KiSG98}. The choice of $p$ does not affect the
performance of the algorithm, so it is safe to pick a larger set of
dihedrals than needed. $R$ determines the resolution of the MFEP after
convergence and directly affects cost.  $n_{B}$ determines the
accuracy of the mean force calculation and numerical error associated
with the string update step.

\paragraph{Choice of Dihedrals}
For alanine dipeptide, we only considered the backbone dihedrals with
heavy atoms (no hydrogens).  There are a total of ten dihedrals for
alanine with heavy atoms. The backbone dihedrals, with
their values at the minimum energy conformations $C7_{eq}$ and
$C7_{ax}$, are listed in Table \ref{table:dihvalues}.
We chose the last four to define our global collective variable set
since these exhibit the largest changes between conformations. These
dihedrals include backbone $\phi$ and $\psi$, which are generally used
as collective variables for alanine dipeptide.

\paragraph{Choice of $n_{B}$}
A smaller value of $n_B$ implies less computation, but it introduces statistical
error in the estimation of the mean force due to insufficient sampling.
Here, we studied the accuracy and performance of our
algorithm for $n_{B}=1,10,100$. In general, we used less sampling than
other versions of the String method because the orthogonal space of
our collective variables is highly harmonic by construction and takes
less time to sample.

\paragraph{Choice of $m$}
For a biomolecule with $N$ atoms, NMA at a point yields $3N$
eigenvectors. Assuming we diagonalize at a local energy minimum, the
first six degrees of freedom correspond to translation and
rotation. However, at a point away from the energy minimum this is
no longer the case: the first six modes mix with the low
frequency modes.  Therefore, we need to include the first six modes in
our collective variable set.  Since the backbone dihedrals are enough
to study the conformation of interest in alanine dipeptide,
intuitively we need the first eight modes to capture the conformational
transition in alanine (first six + two slowest degrees of freedom).
Nevertheless, it has been found that 10-12 low frequency modes give
accurate kinetics for alanine dipeptide between $C7_{eq}$ and
$C7_{ax}$ \cite{IzSP10}. So, we chose $m=10$ and $m=12$ for our
simulations.


\paragraph{Choice of $R$}
Local details of the MFEP can be captured with more intermediate
images.  We found that with 20 intermediate images the String method
converges to MFEP \cite{Cick09}. In our simulation, we used
$R=12,20,30$. These numbers are comparable to the number of images
used for this example in References~\cite{MaFV06,ZhSS10}.

Next, we present further simulation details and the results of
simulations with different sets of parameters.
We also developed a metric to measure
convergence of the string to the MFEP, which we discuss next.


\subsection{Convergence criterion}
If the free energy at image $i$ is given by $F(S(\alpha_i))$, the sum of free energies
\begin{equation}
\label{eq:stringfesum}
F(\mathcal{S}) = \displaystyle\sum_{i=1}^{R}F(S(\alpha_i))
\end{equation}
is used as the metric for convergence. From the definition of free energy (Eqn. (\ref{eq:free_energy})),
the sum in Eqn. (\ref{eq:stringfesum})
equals, up to an additive constant, $-k_{B}T$ times the logarithm of the product of probabilities
\begin{equation}
\label{eq:prob_product}
\displaystyle\prod_{i=1}^{R}P(S(\alpha_i)).
\end{equation}
This sum
estimates the area under the free energy curve.
The string should converge to the same MFEP with different parameters, except $R$. After convergence,
we expect to get the same free energy profile along the MFEP.  
Therefore, we can capture the convergence to MFEP by monitoring $F(\mathcal{S})$.

However, our convergence criterion does not work if we use different $R$: in that case, $F(\mathcal{S})$
cannot be compared directly. Given two simulations with $R_{1}$ and $R_2$ images, with $R_{2} > R_{1}$, we
multiply the free energy at each point of the $R_{1}$ simulation by a factor $R_{2}/R_{1}$ for 
comparison. 
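The rescaling amounts to the following illustrative helper (not part of our implementation), which makes both sums approximate the same area under the free energy curve:

```python
def rescaled_FS(F_coarse, R_fine):
    """Scales the free-energy sum of a coarser string (R_coarse images)
    by R_fine / R_coarse so it can be compared with F(S) from a finer
    string, as described above."""
    R_coarse = len(F_coarse)
    return sum(F_coarse) * R_fine / R_coarse
```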

Estimation of free energy at each point can be done using a simple
algorithm. During the sampling step in the high frequency space, we
accumulate the potential energy of the system. At the end of the sampling, we
divide the accumulated sum by $n_B$ to estimate the average potential
energy. Since sampling is performed in the canonical ensemble, we can
estimate the probability of this point in the reduced space from the
average potential energy.
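A sketch of this estimate, assuming \texttt{potential\_energies} holds the $n_B$ recorded values; the Boltzmann weight shown is unnormalized:

```python
import numpy as np

def point_free_energy(potential_energies, kT):
    """Estimates the free energy of an image as the average potential
    energy recorded over the n_B sampling steps, and the corresponding
    unnormalized Boltzmann weight exp(-F/kT) for the probability of the
    point in the reduced space."""
    F = float(np.mean(potential_energies))
    return F, np.exp(-F / kT)
```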

\subsection{Convergence Results}



Most of our simulations start with endpoints at the energy minima of $C7_{eq}$ and $C7_{ax}$.
We also tested our algorithm with endpoints away from the metastable regions.
First we generate the initial string using the straight line approximation, as described earlier. 
We used ${\epsilon}_{\theta}=0.025{\,}rad$ in our dragging algorithm.
In all our simulations we use
${\Delta}{\tau}_{S} = 0.01$ for string update.

First, we present the simulation details with $m=12$,$p=4$,$R=30$ and $n_{B}=100$.
We distribute the 28 intermediate images on a straight line
joining the endpoints using the algorithm described above. 
We reparameterized the string
once every 3 steps. At each step, we diagonalized the system at its current position. Next, we performed $n_{B}=100$ steps
of dynamics to estimate the mean force in $\Omega$. After estimating the mean force, we reset the positions of the
system to those at which we performed the diagonalization. Next, we applied the string update algorithm.
After reparameterization, we moved from the current position $\theta_{i}^{n+{\epsilon}}$ to ${\theta}_{i}^{n+1}$ using
the dragging algorithm.
The endpoints
were allowed to move during the simulation.
In each of these simulations, we used ${\gamma}=1.0{\,}$ps$^{-1}$ in
the Langevin Leapfrog method 
for sampling in $\bar{\Omega}$. 
Figure \ref{fig:string_plots_m12_R30_p4_nB100} shows the result of the simulation. We found convergence after
800 steps. The red curve at the right shows the converged string at the end of the simulation.
We also plot the string at (a) the start, (b) after 50 steps and (c) after 300 steps in order to show
the evolution of the string towards MFEP.


Figure \ref{fig:feprofile1} shows the free energy profile along the string during the different steps of the simulation.
The plot shows convergence of the free energy profile along the string as the string converges to MFEP.
Starting with a large energy barrier, the free energy profiles at steps 800,
1000 and 1200 coincide.
 We found that the height of the barrier is around 8 kcal/mol which
is consistent with previous findings \cite{MarV07}. 

We estimate the mean change in $F(\mathcal{S})$ after convergence (between steps 800-1000) as a metric
to determine convergence of simulations with different parameters. We found that the average ${\Delta}
F(\mathcal{S})=1$ kcal/mol over steps 800-1000. 


We found that our algorithm can also converge to the correct MFEP for $R=20$. 
We generated the initial string using the same technique as with $R=30$. NMS
converged after 749 steps based on the convergence
criterion described above. Since we have only 20 intermediate images, we used 
a threshold of $(20/30)\times 1$~kcal/mol to determine convergence. With $R=12$, NMS
converged after 1299 steps.

Next, we studied how our algorithm behaves when we change $m$. We used $m=10$ and $R=20$. In this
case, convergence was obtained after 552 steps. 
The results are summarized in Table \ref{table:stringcomp_R}. 
The convergence step does not change 
significantly when we move from $R=30$ to $R=20$. However, it requires more steps for $R=12$. It is also evident that the convergence is somewhat
faster for $m=10$. If we include more high-frequency components which do not contribute to the slow collective
variables, the simulation would still converge to MFEP. However, more steps will be required to filter out the
statistical error. 

We compared the efficiency of NMS with the  String method using
$\Phi$ and $\Psi$ dihedrals as collective variables as
implemented in \cite{Cick09}. We found that the average runtime of
the String method in dihedral space is half that of NMS when we use $n_{B}=10$.
For larger systems, the benefit of automatically determining the
collective variables will come at a smaller overhead than that found
for alanine dipeptide. Besides, the main point of our work is to have
an automatic way of determining the collective variables, which is
highly non-intuitive for larger systems. Note also that the larger number of
iterations required for our method to converge compared to the method
of Reference~\cite{ZhSS10} is offset by the much smaller number of sampling
steps required to compute the mean force. 

We also ran tests in which the starting string does not have the endpoints at
minimum energy conformation. Figure \ref{fig:string_comp_nomin_start1} shows
the results. Using the criterion described above, we found that convergence was achieved after
700 steps. In the figure, the red curve shows the average string
between steps 800-1000. We ran with parameters $m=12$, $p=4$, $R=20$ and $n_{B}=100$.

\section{Discussion}
\label{se:discussion}
Our main contribution in this work is implementing a method for
picking collective variables \textit{a priori} for the String method.
We have used instantaneous normal mode analysis (INMA) to determine
collective variables in the directions of slow motions. These
directions change when we move from one configuration to another. To
account for this, we rediagonalize as the String moves and recompute
the collective variables. We project these collective variables to
dihedral space to be able to define the String and for the dragging
algorithm, which is a kind of targeted molecular dynamics. We directly
use the INM to compute the mean force using Langevin dynamics in low
frequency mode space, which has much better numerical stability than
using restraints with dihedrals directly, as done in earlier versions
of the  String method.  The size of the set $\Theta$ does
not affect the performance of our algorithm since the heavy lifting is
performed in normal mode space. We have demonstrated the method in the
isomerization of alanine dipeptide in vacuum, and developed the
Flexible Block Method, a scalable normal mode analysis that should
allow application of the method to larger systems.

In the past, identification of slow variables has been attempted using
normal modes from elastic network models
(ENM)~\cite{Atilgan:2001be,Eyal:2007ki,CRYB05,Schuyler:2009p562,Peng:2010p1442,VandenEijnden:2009jh}, which have attracted
wide attention from the scientific community due to their low
computational cost. We have instead used INMA because we need higher
precision in identifying collective variables so that the numerical
methods associated with the String method converge easily.  Principal
Component Analysis (PCA)~\cite{Skjaerven:2010p558}, quasi-harmonic
analysis (QHA)~\cite{RaAr09} and related approaches require analyses
of relatively long MD simulations; however, the String method is
efficient because it parameterizes the string on curvature instead of
in time. Using INMA on different conformations, each of which can be
mapped to a set of dihedrals that are defined globally, results in a
better match of the String method and avoids expensive long timescale
simulations.

Another approach finds order-parameters by using Legendre polynomials
as basis functions that span the simulation
volume~\cite{Singharoy:2011p1441}. Implementation of these
order-parameters is fairly involved compared to the simplicity of INMA
and projection to dihedral space.  Other examples of collective
variables include dihedral
angles~\cite{Altis:2007p1500,Nguyen:2007p1501,Nguyen:2008p1502} and
curvilinear coordinates to characterize macromolecular folding and
coiling~\cite{Ortoleva:2008p1554}, which unfortunately require that
the forces be recalibrated for most new applications. By contrast,
INMs mapped to dihedral space can work with any standard force field.

For systems where the effect of solvent on the transition path needs
to be understood, dihedral space may not be sufficient. It is believed
that for roughly homogeneous spherical solutes the effect of solvent
friction is to slow down the same transition paths that are present in
vacuum, whereas for nonspherical solutes deviation from the vacuum
transition paths is large and increases with solute
size~\cite{Northrup:1983ex}.  A genetic neural network has been used
to find physical variables that explain the committor probability of
alanine dipeptide in explicit solvent~\cite{Ma:2005p1444}. Combining a
solute dihedral angle, a solute-solute pairwise distance, and a
solvent-derived electrostatic torque around one of the rotating bonds
was sufficient to obtain a good correlation with the committor
probability. The method has been applied to a model three-stranded
beta-sheet protein in implicit solvent~\cite{Qi:2010p1445} and a
DNA-protein complex with mobile solvent regions~\cite{Hu:2008p1447}. A
difficulty with this approach is that computation of the committor
probabilities requires using either transition path sampling or a
Markov State Model, both of which are more computationally expensive
than computing the MFEP. The extension of collective variables to
systems with strong solvent effects remains an important research problem.

Pan \textit{et al.} \cite{PaSR08} proposed a different version of the
String method which evolves the string using a swarm of
trajectories. The method starts with an initial string with a discrete
set of images. Next, an equilibrium trajectory is generated around
each point using restrained simulation. A large swarm of trajectories
are initiated from the configurations of equilibrium trajectories.
These are used to estimate the average drift of the intermediate
images in the collective variable space.  Nonetheless, \textit{a
  priori} knowledge of the collective variables is still
required in their method.

Better than the MFEP is the maximum flux transition path (MFTP)
\cite{ZhSS10}, which is defined as a path that crosses at points which
(locally) have the highest crossing rate of distinct reactive
trajectories. Unlike MFEP, the MFTP includes finite-temperature
effects and does not suffer from cusps in the string that can slow
down convergence. However, for the simple case of alanine dipeptide no
difference has been found between the MFEP and MFTP
solutions~\cite{ZhSS10}. Related to the MFTP are the minimum resistance
path~\cite{Berkowitz:1983dd}, the MaxFlux
algorithm~\cite{Straub:1997vg}, a temperature-dependent nudged elastic
band algorithm~\cite{Crehuet:2003gi,Galvan:2007gd}, a path based on
mean first-passage times~\cite{Park:2003kk}, and a most probable path
for Brownian dynamics using a path integral
formulation~\cite{Elber:2000ct}. Our collective
variables from INMA projected onto dihedrals can be applied for the
computation of MFTP by simple modifications of the String
algorithm~\cite{ZhSS10} and this is a fruitful avenue for further
work. An advantage of using low frequency modes as collective
variables is that by construction they provide a smooth energy
landscape which allows the use of the path methods with temperature
correction just mentioned.  Alternatively, our collective variables
could be used in the Finite Temperature String
Method~\cite{VandenEijnden:2009jh}.  A method developed by Brokaw
\textit{et al.}  \cite{BrHC09} applies holonomic constraints between
images to maintain equal separation and computes minimum Hamiltonian
paths, which also prevent the formation of kinks. Similarly, our
collective variables could be applied in that algorithm.

\section*{Acknowledgment}
This work was partially supported by grants CCF-1018570 and
CCF-0622940 from the National Science Foundation. We would like to
thank Prof. Jeffrey Peng from the University of Notre Dame for a
collaboration that started this work. Thanks to Prof. Robert Skeel and
Prof. Ron Elber for helpful discussions about transition paths. Haoyun
Feng generated Figure~\ref{fig:bdvsld}.


\appendix
    \begin{center}
      {\bf APPENDIX}
    \end{center}

Let us assume that we are at a point in $\Theta$ given by
$\boldsymbol{\theta}_{i} = {\Theta}(\mathbf{x}_{i}^{0})$.  We want to
move to a point
$\boldsymbol{\theta}_{i}+{\delta}{\boldsymbol{\theta}}$, where
${\delta}{\boldsymbol{\theta}}$ is sufficiently small, along a set of
low frequency modes. Let $\mathcal{C}_{i} =
\mathbf{c}(\boldsymbol{\theta}_{i})$ and we want to move to a point
$\mathcal{C}_{i}^{1}$ s.~t.~we can project $\mathcal{C}_{i}^{1}$ back
to the Cartesian space to get $\mathbf{x}_{i}^{1}$
s.~t.~$\boldsymbol{\theta}_{i}+{\delta}{\boldsymbol{\theta}}
{\approx}\Theta(\mathbf{x}_{i}^{1})$.  So, we have
$\mathcal{C}_{i}^{1} {\approx}
\mathbf{c}(\boldsymbol{\theta}_{i}+{\delta}{\boldsymbol{\theta}})$ and
\begin{equation}
\label{eq:dragalgo1}
\mathbf{c}({\boldsymbol{\theta}}_{i}+{\delta}{\boldsymbol{\theta}}) = \mathbf{c}(\boldsymbol{\theta}_{i}) + ({\delta}{\boldsymbol{\theta}})^{T}{\nabla}_{\boldsymbol{\theta}_{i}}\mathbf{c} + {\dots}
\end{equation}
We truncate the Taylor series after the second term on the right-hand side to obtain Eqn. (\ref{eq:drageuler}).
We can estimate the second term on the right-hand side of Eqn. (\ref{eq:dragalgo1}) as
\begin{equation}
\label{eq:dervcdphi}
{\nabla}_{\boldsymbol{\theta}_{i}}\mathbf{c} = \frac{\mathrm{d}\mathbf{c}}{\mathrm{d}\mathbf{x}}\left(\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}{\boldsymbol{\theta}}}\bigg|_{\boldsymbol{\theta}_i}\right) .
\end{equation}


Now we have
\begin{equation}
\frac{\mathrm{d}\mathbf{c}}{\mathrm{d}\mathbf{x}} = \mathbf{Q}^{T}M^{\frac{1}{2}},
\end{equation}
where $\mathbf{Q}$ is a column matrix with $m$ low frequency eigenvectors
and 
\begin{equation}
\label{eq:dervxphi}
\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}{\theta}} =
\begin{pmatrix}
{\mathrm{d}x_{1}}/{\mathrm{d}{\theta}} \\
{\mathrm{d}x_{2}}/{\mathrm{d}{\theta}} \\
{\vdots} \\
{\mathrm{d}x_{3N}}/{\mathrm{d}{\theta}}
\end{pmatrix}.
\end{equation}
We can use 
\begin{equation}
{\mathrm{d}x_{k}}/{\mathrm{d}{\theta}} = 1/\left({\mathrm{d}{\theta}}/{\mathrm{d}x_{k}}\right).
\end{equation}
Note that if dihedral ${\theta}$ is not a function of $x_{k}$ then 
$\left({\mathrm{d}{\theta}}/{\mathrm{d}x_{k}}\right)=0$. In that case, we
assume ${\mathrm{d}x_{k}}/{\mathrm{d}{\theta}} = 0$.
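Assembling Eqn. (\ref{eq:dervcdphi}) with Eqn. (\ref{eq:dervxphi}) and the reciprocal rule above, for a single dihedral $\theta$ (an illustrative sketch; a diagonal mass matrix is assumed and all names are hypothetical):

```python
import numpy as np

def grad_theta_c(Q, m_sqrt, dtheta_dx):
    """Builds grad_theta c = (dc/dx)(dx/dtheta) for one dihedral theta:
    dc/dx = Q^T M^{1/2}, and dx_k/dtheta = 1/(dtheta/dx_k), taken as 0
    where the dihedral does not depend on x_k (dtheta/dx_k = 0)."""
    dx_dtheta = np.zeros_like(dtheta_dx)
    nz = dtheta_dx != 0.0
    dx_dtheta[nz] = 1.0 / dtheta_dx[nz]
    # (Q.T * m_sqrt) applies the diagonal M^{1/2} to each eigenvector
    return (Q.T * m_sqrt) @ dx_dtheta        # shape (m,)
```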

\bibliography{achemso}

\newpage

\begin{figure}[!t]
\centering
\subfigure[Calmodulin structures]{
\includegraphics[scale=0.6]{images/calmodulinCartoon.eps}
\label{fig:calmodconfs}
}
\subfigure[Frequency distributions]{
\includegraphics[scale=0.4]{images/calmSpeedMinimized.eps}
\label{fig:calmodfreq_dist}
}
\caption{Frequency distributions at two conformations of calmodulin
  are similar. Vacuum normal mode analysis using the CHARMM 22 force
  field was
  performed to obtain these spectra for PDBs 1CLL (open and active) and 1LIN (closed and
  inactive).
\label{fig:PNMLfigCalmod}}
\end{figure}

\begin{figure}[htb]
\subfigure[RTB and FBM strategy]{
\includegraphics[
width=4.0in
]
{images/InnerDiagFigure.eps}
\label{fbmstrategy}
}
\subfigure[FBM vs RTB]{
\includegraphics[
width=4in
]{images/coarseSigmaLambda1.eps}%
\label{fbmperformance}
}
\caption{(a) RTB and FBM strategy. If the vectors in model subspace
${E}$ span the low
frequency space of interest in ${H}$, then the
diagonalization of ${S}$
produces a low frequency basis set.
(b) Comparison of true eigenvalues and Rayleigh quotients for
RTB (Rotation-translation) and FBM (Rotation-translation plus
$\Phi-\Psi$ angles) for BPTI, PDB 6PTI in vacuum using CHARMM 22. }
\end{figure}

\begin{figure}[!h]
\includegraphics[scale=0.6]{images/string3.eps}
\caption{String update is performed at a point with the
mean force $G(\mathbf{c})$.\label{fig:stringperp1}}
\end{figure} 

\begin{figure}[!h]
\includegraphics[width=3in]{images/brownVSLang_ala.eps}
\caption{Conformational space sampling for Brownian vs. Langevin
  dynamics for alanine dipeptide with 8 modes constrained. Both runs
  use the Langevin Leapfrog integrator at $T=300\,$K and
  timestep=$1\,$fs with CHARMM 22 force field in vacuum without CMAP
  correction for a total length of 1$\,$ps. Cutoffs for nonbonded
  interactions are applied at 20$\,$\AA. The starting position is
  the so-called C5 conformation. The damping coefficients are
  91$\,$ps$^{-1}$ and 9000$\,$ps$^{-1}$ for Langevin and Brownian
  simulations, respectively. \label{fig:bdvsld}}
\end{figure} 


\begin{figure}[!ht]
\begin{center}
\subfigure[ Topology]{
\includegraphics[scale=0.3]{images/alanine2d.eps}
\label{fig:alanine}
}
\subfigure[ Contour plot on $\phi$-$\psi$ plane]{
\includegraphics[scale=0.4]{images/alanine_contour.eps}
\label{fig:alanine_contour}
}
\caption{Alanine Dipeptide in Vacuum according to the CHARMM 22 force field.}
\end{center}
\end{figure}

\begin{figure}[!ht]
\centering
\subfigure[ Plot of $F(\mathcal{S})$]{
\includegraphics[scale=0.45]{images/feplot_R30.eps}
\label{fig:fesumplot1}
}
\subfigure[ Plot showing the evolution of string towards MFEP]{
\includegraphics[scale=0.45]{images/converged_string_visual_R30.eps}
\label{fig:str_m12_R30_p4_nB100}
}
\caption{The plot to the left shows $F(\mathcal{S})$ over the steps of the simulation;
$F(\mathcal{S})$ clearly converges after about 800 steps, so we assume convergence 
at that point. The plot to the right shows the evolution of the string towards the MFEP. After convergence,
we plot the average string over 200 steps to filter out statistical error. The strings are superimposed
over the free energy contours in $\phi$-$\psi$ space of alanine. \label{fig:string_plots_m12_R30_p4_nB100}}
\end{figure}

\begin{figure}[!ht]
\centering
\includegraphics[scale=0.5]{images/feplot_R30_m12_p4_nb100}
\caption{Plot showing the convergence of free energy profile along the string at different steps of the simulation.
\label{fig:feprofile1}}
\end{figure}

\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.6]{images/notmin_R20_strings.eps}
\caption{Plot showing the string simulation results in which the endpoints were outside of the 
energy minima at $C7_{eq}$ and $C7_{ax}$. \label{fig:string_comp_nomin_start1}}
\end{center}
\end{figure}

\newpage

\IncMargin{1em}
\begin{algorithm}[H]
\SetKwInOut{Input}{input}

\Input{$m$,$\mathbf{x}_{i}^{0},\mathbf{Q}_{i},{\theta}_{i}^{0},{\forall}i$,$R$,numsteps.}
\BlankLine
\For{$j=1$ to numsteps}{
  
	\For{$i=1$ to $R$}{
    		$\mathbf{Q}_{i} = Diagonalize(\mathbf{x}_{i}^{j})$.\\
    		$G_{i} = EstimateMeanForce(\mathbf{x}_{i}^{j},n_{B},\mathbf{Q}_{i})$.\\
    		$\mathbf{x}_{i}^{j+{\epsilon}} = StringUpdate(\mathbf{x}_{i}^{j},{\Delta}{\tau_s},G_{i})$.\\
    		${\theta_{i}^{j+\epsilon}} = \theta(\mathbf{x}_{i}^{j+{\epsilon}})$.\\
	} 

	$\theta_{i}^{j,r}(i=1,{\dots},R) = Reparameterize(\theta_{i}^{j+\epsilon},(i=1,{\dots},R))$. \\

	\For{$i=1$ to $R$} { 
  		${\theta}_{i}^{j+1},\mathbf{x}_{i}^{j+1} = Dragging(\mathbf{Q}_{i},{\theta}_{i}^{j,r},\theta_{i}^{j},\mathbf{x}_{i}^{j+\epsilon})$.\\
	}
}
\caption{Algorithm for String evolution}
\label{algo:stringalgo}
\end{algorithm}
\DecMargin{1em}

\IncMargin{1em}
\begin{algorithm}[H]
\SetKwInOut{Input}{input}

\Input{$m$, $\Theta$,$\mathbf{x}_{A}$,$\mathbf{x}_{B}$,$R$}
\BlankLine
$\theta_{1} = \theta(\mathbf{x}_{A})$.\\
$\theta_{R} = \theta(\mathbf{x}_{B})$.\\
${\theta}_{inc} = \frac{{\theta}_{R} - {\theta}_{1}}{R-1}$.\\
\For{$i=2$ to $R-1$}{
	$\mathbf{x}_{i} = \mathbf{x}_{i-1}$. \\
	$\mathbf{Q}_{i} = Diagonalize(x_{i})$. \\
	${\theta}_{i,t} = {\theta}_{i} + {\theta}_{inc}$.\\
	${\theta}_{i},\mathbf{x}_{i} = Dragging(\mathbf{Q}_{i},{\theta}_{i,t},\theta_{i-1},\mathbf{x}_{i})$. \\
}
\caption{Algorithm for generating initial String}
\label{algo:initialstring}
\end{algorithm}
\DecMargin{1em}

\IncMargin{1em}
\begin{algorithm}[H]
\SetKwInOut{Input}{input}

\Input{Position of point in Cartesian space $\mathbf{x}_{i}^{0}$, source $\theta_i$ and target $\theta_{i}+{\delta}{\theta}$}
\BlankLine
\While {true} {
	$j = FindADihedralToMove()$.\\
	${\delta}{\theta}_{j} = FindDistanceToMove()$. \\
	Find ${\delta}\mathbf{\hat{x}}_{i}^{\epsilon}$. \\
	Update position in $\Omega$ with ${\delta}\mathbf{\hat{x}}_{i}^{\epsilon}$. \\
	Minimize potential energy in $\bar{\Omega}$ to get new position $\mathbf{x}_{i}^{\epsilon}$. \\
	$\theta_{i,d} = GetDihedrals(\mathbf{x}_{i}^{\epsilon})$. \\
	${\delta}{\theta}_{d} = \theta_{i}+{\delta}{\theta} - \theta_{i,d}$.\\
	\If {${\delta}{\theta}_{d} {\leq} {\epsilon}_{\theta}$} {
		break\;
	}
}
\caption{Dragging Algorithm}
\label{algo:draggingalgo}
\end{algorithm}
\DecMargin{1em}

\newpage

\begin{table}
\begin{center}
\begin{tabular}{|l | c | r|}
\hline
dihedral & $C7_{eq}$ & $C7_{ax}$ \\
\hline
CAY-CY-N-CA  & -3.09 & 3.04 \\
\hline
OY-CY-N-CA & 0.064 & -0.086 \\
\hline
CY-N-CA-CB  &  2.75 & -1.02 \\
\hline
CY-N-CA-C ($\phi$) & -1.39   & 1.27 \\
\hline
N-CA-C-NT ($\psi$) & 1.15  & -1.22 \\
\hline
N-CA-C-O  & -1.99 & 1.79 \\
\hline
\end{tabular}
\caption{Values of the backbone dihedrals (in radians) at the endpoints of the
conformational transition of alanine dipeptide. \label{table:dihvalues}}
\end{center}
\end{table}

\begin{table}[!h]
\begin{center}
 \begin{tabular}{||l|l|l|l|l||}
 \hline
 \hline
 $m$ & $p$ & $R$ & $n_{B}$ & Convergence Step \\
 \hline
 12 & 4 & 30 & 100 & 800 \\
 \hline
 12 & 4 & 20 & 100 & 749 \\
 \hline
 12 & 4 & 12 & 100 & 1299 \\
 \hline
 10 & 4 & 20 & 100 & 552 \\
 \hline
 \hline
  \end{tabular}
\end{center}
\caption{Convergence of strings with different parameters.
\label{table:stringcomp_R}}
\end{table}

\end{document} 

