\chapter{The molecular dynamics methodology}
\label{ch:meth}
Molecular dynamics (MD) is a computer simulation technique employed to compute the equilibrium and transport properties of a classical many-body system. The word {\em classical} is intended to mean that the motion of the constituent particles obeys the laws of classical mechanics. 
Molecular dynamics is typically employed to simulate matter at the molecular scale. The fundamental components of a molecular dynamics model are:
\begin{itemize}
\item a position-dependent potential, to describe the interaction (forces) between particles;  
\item an integrator, to evolve the system in time according to the forces experienced by each particle.
\end{itemize} 
The main data generated by a simulation comprise the trajectory of every particle, which can be used to calculate dynamic and thermodynamic properties of the system.  

In this chapter, the MD method is summarised, with special focus on the model potentials, integration schemes, and computational techniques relevant for the applications presented in the following chapters. Thorough descriptions of particle-based simulation methods can be found in several excellent books,~\cite{allen,leach,frenkel,schlick,rapa} which are also the main sources of this chapter.
The most technical aspects, such as details of specific algorithms adopted and  explicit derivations of forces and torques, are reported in the appendices. %~\ref{app:brahms} and~~\ref{app:force}. 

%\section{Relation to statistical mechanics} A central concept in statistical mechanics is the {\em ensemble average}, corresponding to series of measurements over an ensemble of independent systems. The {\em ergodic hypothesis} states that . % rapa p.5
  
\section{Foundations}
In statistical mechanics, for the canonical ensemble where the number of particles $N$, the volume $V$ and the temperature $T$ are fixed, the equilibrium average of some quantity $G$ is expressed in terms of phase-space integrals involving the potential function $U(\mathbf{r}_1,\dots,\mathbf{r}_N)$:
\begin{equation}\label{eq:ensAvg}
\langle G \rangle= \frac{\int G (\mathbf{r}_1,\dots,\mathbf{r}_N)\,e^{-\beta U(\mathbf{r}_1,\dots,\mathbf{r}_N)}\,\de \mathbf{r}_1\cdots\de \mathbf{r}_N}{\int e^{-\beta U(\mathbf{r}_1,\dots,\mathbf{r}_N)}\,\de \mathbf{r}_1\cdots\de \mathbf{r}_N}
\end{equation}
with $\mathbf{r}_i$ the coordinates, $\beta=1/k_BT$ and $k_B$ the Boltzmann constant. This average corresponds to a series of measurements over an ensemble of independent systems. 

In MD simulation, the microscopic state of a system is defined by the positions and momenta of the particles of the system under investigation. In particular, the total energy, or Hamiltonian, $H$ can be written as the sum of kinetic energy~$K$ and potential energy~$U$:
\begin{equation}
H(\mathbf{q, p})=K(\mathbf{p})+U(\mathbf{q})
\end{equation}
with $\mathbf{q}$ and $\mathbf{p}$ the full sets of coordinates and momenta of the system, respectively. From the potential energy it is possible to obtain the forces acting on each molecule, and from these to compute the entire time evolution of the system. It is therefore possible to calculate the average of a quantity $G$ as:
\begin{equation}\label{eq:mdAvg}
\langle G \rangle=\frac{1}{M}\sum_{\mu=1}^MG_\mu (\mathbf{r}_1,\dots,\mathbf{r}_N)
\end{equation}
over a set of $M$ measurements taken as the (single) system evolves in time. 

The {\em ergodic hypothesis}, which is the fundamental assumption of molecular dynamics, states that the ensemble average of Equation~\ref{eq:ensAvg} is equal to the time average of Equation~\ref{eq:mdAvg}. %, i.e., measurements carried out for a single equilibrium system during the course of its natural evolution should produce the same results as the ensemble average.
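The equivalence of the two averages can be illustrated on a toy system. The following Python sketch (purely illustrative; all names are ours, and it is not part of any simulation code discussed here) integrates a one-dimensional harmonic oscillator with the velocity-Verlet scheme and accumulates the time average of the kinetic energy as in Equation~\ref{eq:mdAvg}; for this system the virial theorem predicts $\langle K\rangle = E/2$.

```python
def time_average_kinetic(steps=200_000, dt=0.01):
    """Velocity-Verlet trajectory of a 1D harmonic oscillator (m = k = 1);
    returns the time average of the kinetic energy over the run."""
    x, v = 1.0, 0.0          # initial condition: total energy E = 1/2
    a = -x                   # force/mass for U(x) = x^2/2
    acc = 0.0
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = -x
        v += 0.5 * (a + a_new) * dt
        a = a_new
        acc += 0.5 * v * v   # accumulate G_mu = K at this snapshot
    return acc / steps       # the time average <G>

# virial theorem: <K> should approach E/2 = 0.25 for this oscillator
```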

%\paragraph{Molecular dynamics: the general scheme}
A global diagram for MD is given in Figure~\ref{fig:global}. 
\begin{figure}
\begin{center}
\addtolength{\fboxsep}{.5cm}
\begin{shadowenv}[14cm]
{\large \bf THE GLOBAL MD ALGORITHM}
\rule{\textwidth}{2.5pt} \\
\medskip
{\bf 1. Input initial conditions:}\\[2ex]
\begin{itemize}
\item Interaction potential $U$ and system topology (the {\em force field})\\
\item Positions and velocities of all sites in the system \\
\item Simulation parameters (time-step, temperature, pressure, etc.) \\
\end{itemize}
%$\Downarrow$\\
\rule{\textwidth}{1pt}\\
{\bf 2. Compute:} \\[1ex]
\begin{itemize}
\item Forces and torques \\
\item Thermodynamic quantities \\  
\item Properties of interest \\   
\end{itemize}
{\bf 3. Integrate  equations of motion } \\[1ex]
\rule{\textwidth}{1pt}\\
repeat steps {\bf 2,3} for the required number of cycles\\
\rule{\textwidth}{1pt}\\
%$\Downarrow$\\
{\bf 4.} {\bf Output:} \\
\begin{itemize}
\item Trajectory of every particle\\
\item Averages of properties of interest \\
\end{itemize}
\end{shadowenv}
\caption[The molecular dynamics algorithm]{The main steps of a typical molecular dynamics simulation.}
\label{fig:global}
\end{center}
\end{figure}
The various components are described in the following sections.

\section{Interaction potentials}

The potential energy $U$ describes the interactions between the particles of a system. $U$ typically comprises several terms, accounting for different types of intermolecular interactions (such as van der Waals and electrostatics) and intramolecular interactions (such as covalent bonding).
Considering that forces and torques are defined completely by the total potential $U$, the set of constituent potential functions and corresponding parameters of a given system is often called the {\em force field}. 


\section[Rigid bodies]{Rigid bodies}
\label{sec:rigidBodies}
%Classical mechanics is the branch of physics that studies the motion of material bodies.~\cite{goldstein} 
The molecular models employed in the work presented in this thesis include both standard isotropic potentials (such as Lennard-Jones and Coulomb) and more complex anisotropic, orientation-dependent potentials (such as Gay-Berne and dipolar). 
In molecular dynamics, particles represented by isotropic potentials are simulated as simple point-masses, their motion being completely described by translational degrees of freedom (typically the mass centre coordinates). However, sites modelled by anisotropic potentials also possess orientational degrees of freedom, so that a point-mass representation becomes insufficient. Moreover, it is sometimes practical to model entire molecules as rigid, neglecting intramolecular flexibility; in this case it is convenient to treat such particles as single entities and characterise their motion also in terms of linear and orientational degrees of freedom.  In such cases particles can be efficiently represented as {\em rigid bodies}.  %he motion of a rigid body is characterized by a mass centre and an orientation. In this section we summarise some notions of classical mechanics relevant to the simulation of rigid bodies by molecular dynamics. 
%\subsection{Position}
The linear motion of a rigid body is described by the motion of its mass centre, which can simply be treated as a point-mass carrying the mass of the entire body. The rotational motion is, however, more complex and requires a representation of the orientational degrees of freedom.
%Rigid-body mechanics generally relies on two coordinate frames: one fixed in space, the other attached to the principal axes of the rotating body. 
In particular, the orientation of a rigid body specifies the relation between an axis system $S$ fixed in space and one (in general) translating and rotating attached to the body, usually the ``principal'' body-fixed system $b$ in which the inertia tensor is diagonal.~\cite{allen} %[p.85]  %The position can be defined with the mass centre coordinates and a {\em rotation}: rotations can in turn be described via {\em rotation matrices}.
The orientation of a rigid body can be expressed through the full rotation matrix $\mathbf{R}$. The nine components of the rotation matrix are the direction cosines of the body-fixed axis vectors in the space-fixed frame.
%\begin{equation} \mathbf{R}=\mathbf{R}(\psi)\mathbf{R}(\theta)\mathbf{R}(\phi) \end{equation} where $\phi$, $\theta$ and $\psi$ are the Eulerian angles.
There are two ways of interpreting the rotation described by $\mathbf{R}$:

\begin{itemize}
\item consider a vector $\mathbf{r}^S$ and use $\mathbf{R}$ to obtain its components in the rotated coordinate system, namely $\mathbf{r}^b=\mathbf{R}\mathbf{r}^S$;
\item rotate a vector, beginning with $\mathbf{r}^b$ and applying the opposite rotations in reverse order by means of the transpose of $\mathbf{R}$, in which case the result is the rotated vector $\mathbf{r}^S=\mathbf{R}^T\mathbf{r}^b$.
\end{itemize}
Clearly, if $\mathbf{r}$ is a vector fixed in the molecular frame (for instance a bond vector of a rigid molecule) then $\mathbf{r}^b$ will not change with time; in space-fixed coordinates, though, the components of $\mathbf{r}^S$ will vary. 
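The two interpretations above can be checked numerically. The fragment below is an illustrative Python sketch (function names are ours): it builds a rotation matrix about the $z$-axis, whose rows are the direction cosines of the body-fixed axes, and verifies that $\mathbf{R}^T$ undoes the transformation performed by $\mathbf{R}$.

```python
import math

def rot_z(phi):
    """Rotation matrix about z; its rows are the direction cosines of the
    body-fixed axes expressed in the space-fixed frame."""
    c, s = math.cos(phi), math.sin(phi)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(R, r):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(R[i][j] * r[j] for j in range(3)) for i in range(3)]

def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

# r^b = R r^S gives body-frame components; r^S = R^T r^b inverts the rotation.
```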


\section{Forces and torques}

The force $\mathbf{f}$ on the mass centre of a particle %$i$ 
can be obtained from the gradient of the potential $U$:
\begin{equation}\label{eq:forces}
\mathbf{f}=-\nabla_{\mathbf{r}} U
\end{equation}
with $\mathbf{r}$ the vector defining the particle's position.
For ``molecular'' rigid bodies comprising $n$ atomic sites $a=1,\dots,n$, the total force $\mathbf{f}$ acting on the molecule is:
\begin{equation}
\mathbf{f} = \sum_{a=1}^n \mathbf{f}_a
\end{equation}
where $\mathbf{f}_a$ is the force acting on the atomic site $a$.
The torque $\mathbf{T}$ about the centre of mass of the particle is computed as:
\begin{equation}
\mathbf{T} = \sum_{a=1}^n (\mathbf{r}_a - \mathbf{r}_\textrm{COM})\wedge \mathbf{f}_a = \sum_{a=1}^n \mathbf{d}_a \wedge \mathbf{f}_a
\end{equation}
with $\mathbf{r}_a$ the atom position in the system's frame of reference, $\mathbf{r}_\textrm{COM}$ the molecule's centre of mass position and $\mathbf{d}_a$ the atomic position relative to the molecule's centre of mass.~\cite{allen}
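As a minimal illustration of these sums (a hypothetical Python fragment with our own naming, not code from any package mentioned here), the total force and torque on a rigid molecule can be accumulated from its atomic forces as follows:

```python
def force_and_torque(positions, forces, com):
    """Sum atomic forces and accumulate the torques d_a x f_a about the
    centre of mass (all vectors as 3-lists, space frame)."""
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]
    f_tot = [0.0, 0.0, 0.0]
    T = [0.0, 0.0, 0.0]
    for r_a, f_a in zip(positions, forces):
        d_a = [r_a[k] - com[k] for k in range(3)]  # position relative to COM
        t_a = cross(d_a, f_a)
        for k in range(3):
            f_tot[k] += f_a[k]
            T[k] += t_a[k]
    return f_tot, T
```

For instance, two sites on opposite sides of the mass centre pushed in opposite directions yield zero net force but a finite torque (a force couple).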

Single-site particles must also be treated as rigid bodies when the potential is orientation-dependent. For symmetric particles (such as Gay-Berne sites), the torque is equivalent to a force applied at a point at unit distance from the mass centre, directed orthogonally to the molecular symmetry axis.~\cite{luck90} This equivalent force can be defined in terms of the derivative of the potential with respect to the coordinates of this point, with the mass centre taken as the origin. These coordinates are just the components of the unit vector $\hat{\mathbf{e}}$ describing the molecular orientation, so that the torque can be calculated as: 
\begin{equation}\label{eq:torques}
\mathbf{T}=-\hat{\mathbf{e}} \wedge  \nabla_{\hat{\mathbf{e}}} U.
\end{equation}
The torque acting on a symmetric rigid body is thus perpendicular to the symmetry axis; this relies on the assumption that the moment of inertia along the symmetry axis is infinite. 

% It is useful to notice that the {\it local} angular momentum is conserved: \begin{equation} \mathbf{T}_{ij} + \mathbf{T}_{ji} + \mathbf{r}_{ij} \times \mathbf{f}_{ij}=0 \end{equation} In principle, also the {\em total} angular momentum should be conserved.

\section{Equations of motion}\label{sec:eom} % moldy user's manual p.2
For simple point-mass particles, the motion is completely described by Newton's equation:
\begin{equation}
\label{eq:newton}
m\ddot{\mathbf{r}}
%=-\nabla U(\mathbf{r})
=\mathbf{f}
%=\sum_{j=1}^{N}\mathbf{f}_{ij}
\end{equation}
with $m$ the mass of the particle, $\ddot{\mathbf{r}}$ its acceleration and $\mathbf{f}$ the force acting on it.
For rigid bodies, the motion also contains a rotational contribution, which can be described by Euler's equation:% goldstein p.199
\begin{equation}
\label{eq:euler}
\bomega\wedge\mathbf{I}\bomega+\mathbf{I}\dot{\bomega}=\mathbf{T}
\end{equation}
with  $\mathbf{I}$  the moment of inertia tensor, $\bomega$  the angular velocity, $\dot{\bomega}$  the angular acceleration and $\mathbf{T}$ the torque about the body mass centre, all these quantities being expressed in the body-fixed (principal) reference frame.
In MD simulations, the equations of motion must be solved via numerical integration. The {\it integrator} is the beating heart of any dynamics simulation; it is the scheme which replaces a differential equation in continuous time by a difference equation defining approximate snapshots of the solution at discrete time-steps.~\cite{leimkuhler} 
%As far as integrators are concerned, the only required information about the studied physical system are its interacting potential and the timescale of the fastest motion in the system, which determines the integration step size. 
The crucial properties that a good integrator should possess, and that are possessed by the equations of motion in the first place, are:
\begin{itemize}
\item symplecticity, which implies exact preservation of phase-space volume;
\item time-reversibility, that is, the capability of the system to retrace its trajectory when the velocities are reversed.%\footnote{``Any lack of time-reversibility should be due to rounding errors only, not the program'' -~\cite{bune67}.}. 
\end{itemize}
It has indeed been shown that symplecticity and reversibility are closely related to the stability of an integrator over extremely long simulations, even with large step sizes;~\cite{tuckerman00} there are now numerous examples illustrating the superior preservation of phase-space structures and qualitative dynamics by symplectic integrators.~\cite{mclac93a,grays94a,dullw97a,mille02a,omely02b,prapr05a} %It should be noticed that MD systems are {\em chaotic}, meaning that very small perturbations to the initial conditions grow exponentially in time. Hence it is inappropriate to expect that accurate trajectories for molecular systems be computed for more than a short time interval; rather, it is expected that the trajectories have the correct {\em statistical} properties. This is believed to be accomplished if the numerical integrator is symplectic.~\cite{izagu99a} 
In practice, the total energy is not preserved exactly, but the energy error remains constant over long times; this is different from non-symplectic methods, which typically display a systematic energy drift in time.~\cite{schlick} Rigorously, it may be shown that symplectic integrators exactly conserve a ``pseudo-Hamiltonian'' or ``shadow-Hamiltonian'' $\overline{\mathcal{H}}$ which differs from the true one by a small amount (vanishing as $\Delta t\rightarrow 0$, with $\Delta t$ the integration time-step). This means that no drift in energy will occur: the system will remain on a ``hypersurface'' in phase space which is ``close'' (in the above sense) to the true constant-energy hypersurface.~\cite{allen04} % Such stability property is extremely useful in molecular dynamics, since we wish to sample constant-energy states.~\cite{allen04}.

In this work, we employ the symplectic and time-reversible rigid body integrator developed by Dullweber, Leimkuhler and McLachlan,~\cite{dullw97a} DLM for short. The DLM method, based on a representation of the orientation of rigid bodies with rotation matrices,\footnote{The rotation matrix representation and the related rotational part of the integration scheme are really the novel features of the algorithm,~\cite{dullw97a} in that the linear integration is performed as in the popular velocity-Verlet scheme.~\cite{allen}} comprises two parts, as described in the following paragraphs.

\paragraph{Part A}
Given the forces $\mathbf{f}(t)$ and the space-frame torques $\mathbf{T}^S(t)$ at the current time $t$, the momenta of all molecules are advanced from $t$ to $t+\Delta t/2$, whereas  mass centre positions $\mathbf{r}$ are moved a full time step:
\begin{eqnarray} 
\mathbf{v}(t+\Delta t/2)&=&\mathbf{v}(t)+\Delta t\, \mathbf{f}(t)/2m \\ 
\mathbf{r}(t+\Delta t)&=&\mathbf{r}(t)+\Delta t\,\mathbf{v}(t+\Delta t/2)\\ 
\mathbf{h}^b(t+\Delta t/2)&=&\mathbf{h}^b(t)+\Delta t\, \mathbf{T}^b(t)/2 \end{eqnarray} 
where $\mathbf{h}^b = \mathbf{I}\bomega^b$ is the body-frame angular momentum, with $\mathbf{I}$ the principal moments of inertia tensor and $\bomega^b$ the body-frame angular velocity, and $\mathbf{T}^b$ is the body-frame torque, which is obtained from $\mathbf{T}^b=\mathbf{Q}(t)\mathbf{T}^S(t)$, $\mathbf{Q}(t)$ being the rotation matrix. 
Now five consecutive body-frame rotations $\mathbf{R}_1, \dots, \mathbf{R}_5$ are applied to all angular momenta and all orientation matrices are propagated for a full time step, from $\mathbf{Q}(t)$ to $\mathbf{Q}(t+\Delta t)$:  
\[
\mathbf{Q}(t+\Delta t)=\mathbf{Q}(t)\mathbf{R}_1^T\mathbf{R}_2^T\mathbf{R}_3^T\mathbf{R}_4^T\mathbf{R}_5^T
\]
with the explicit computation being:
\begin{eqnarray*}
\mathbf{R}_1:=\mathbf{R}_x\left(\frac{1}{2}\Delta t\frac{h_1}{I_1}\right)\!, &\mathbf{h}^b=\mathbf{R}_1\mathbf{h}^b,&\mathbf{Q}=\mathbf{Q}\mathbf{R}_1^T;\\
\mathbf{R}_2:=\mathbf{R}_y\left(\frac{1}{2}\Delta t\frac{h_2}{I_2}\right)\!, &\mathbf{h}^b=\mathbf{R}_2\mathbf{h}^b,&\mathbf{Q}=\mathbf{Q}\mathbf{R}_2^T;\\
\mathbf{R}_3:=\mathbf{R}_z\left(\;\Delta t\,\frac{h_3}{I_3}\;\right)\!, &\mathbf{h}^b=\mathbf{R}_3\mathbf{h}^b,&\mathbf{Q}=\mathbf{Q}\mathbf{R}_3^T;\\
\mathbf{R}_4:=\mathbf{R}_y\left(\frac{1}{2}\Delta t\frac{h_2}{I_2}\right)\!, &\mathbf{h}^b=\mathbf{R}_4\mathbf{h}^b,&\mathbf{Q}=\mathbf{Q}\mathbf{R}_4^T;\\
\mathbf{R}_5:=\mathbf{R}_x\left(\frac{1}{2}\Delta t\frac{h_1}{I_1}\right)\!, &\mathbf{h}^b=\mathbf{R}_5\mathbf{h}^b,&\mathbf{Q}=\mathbf{Q}\mathbf{R}_5^T;
\end{eqnarray*} 
where $I_1$, $I_2$, $I_3$ are the elements of the diagonal inertia tensor of a molecule and $h_1$, $h_2$, $h_3$ are the corresponding components of $\mathbf{h}^b$ in the body-fixed principal axes system. 
$\mathbf{R}_x(\phi)$ denotes a rotation\footnote{A computationally efficient representation of $\mathbf{R}(\phi)$ is achievable by setting $\cos\phi\approx(1-\phi^2/4)/(1+\phi^2/4)$ and $\sin\phi\approx\phi/(1+\phi^2/4)$, and using:
\begin{displaymath}
\mathbf{R}_x(\phi)\approx \left(
\begin{array}{ccc}
 1 & 0 & 0 \\
 0 &\cos\phi & \sin\phi \\
 0 & -\sin\phi & \cos\phi
\end{array}
\right)
\end{displaymath}
\begin{displaymath}
\mathbf{R}_y(\phi)\approx \left(
\begin{array}{ccc}
 \cos\phi & 0 & -\sin\phi \\
0 &1 &0 \\
\sin\phi  & 0 & \cos\phi 
\end{array}
\right)
\end{displaymath}
\begin{displaymath}
\mathbf{R}_z(\phi)\approx \left(
\begin{array}{ccc}
\cos\phi  & \sin\phi  & 0\\
-\sin\phi  & \cos\phi &0 \\
0 &0 &1
\end{array}
\right)
\end{displaymath}
The above rational orthogonal approximation formulae are reliable only for small angles (which is the case in MD simulation).
}
 around the (body-frame) $x$-axis by an angle $\phi$, and $\mathbf{R}_i^T$ is the transpose of $\mathbf{R}_i$. 
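The rational approximation in the footnote has the useful property of being exactly orthogonal: with $c=(1-\phi^2/4)/(1+\phi^2/4)$ and $s=\phi/(1+\phi^2/4)$ one has $c^2+s^2=1$ identically, so the approximate matrices never need renormalising. A small illustrative Python check (our own naming):

```python
import math

def approx_cos_sin(phi):
    """Rational approximation of (cos, sin) from the footnote; the pair
    satisfies c**2 + s**2 == 1 exactly, keeping the matrix orthogonal."""
    q = phi * phi / 4.0
    return (1.0 - q) / (1.0 + q), phi / (1.0 + q)
```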

\paragraph{Part B}
After having obtained $\mathbf{r}(t+\Delta t)$ and $ \mathbf{Q}(t+\Delta t)$ from the previous part, the corresponding new forces $\mathbf{f}(t+\Delta t)$ and torques $\mathbf{T}^S(t+ \Delta t)$ are calculated.
Subsequently, the momenta are propagated another half time step through the following formulae:
\begin{eqnarray} \mathbf{v}(t+\Delta t)&=&\mathbf{v}(t+\Delta t/2)+\Delta t\, \mathbf{f}(t+\Delta t)/2m\\ 
\mathbf{h}^b(t+\Delta t)&=&\mathbf{h}^b(t+\Delta t/2)+\Delta t\, \mathbf{T}^b(t+\Delta t)/2 \end{eqnarray}
where again $\mathbf{T}^b(t+\Delta t)=\mathbf{Q}(t+\Delta t)\mathbf{T}^S(t+\Delta t)$.  The integration step is now complete.
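For a torque-free body, Part A reduces to the five body-frame rotations. The sketch below is an illustrative Python rendition of that rotation cycle (helper names are ours; this is not the production implementation); it propagates $\mathbf{h}^b$ and $\mathbf{Q}$ and can be used to verify that $|\mathbf{h}^b|$ is conserved and $\mathbf{Q}$ stays orthogonal.

```python
import math

def rx(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]]

def ry(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]]

def rz(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def mv(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def tr(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def dlm_rotation(Q, h, I, dt):
    """The five body-frame rotations of Part A for a torque-free body:
    advances h (body-frame angular momentum) and Q by one step dt."""
    seq = [(rx, 0, 0.5), (ry, 1, 0.5), (rz, 2, 1.0), (ry, 1, 0.5), (rx, 0, 0.5)]
    for rot, k, w in seq:
        R = rot(w * dt * h[k] / I[k])
        h = mv(R, h)        # h^b = R_i h^b
        Q = mm(Q, tr(R))    # Q   = Q R_i^T
    return Q, h
```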

% The advantage of the DLM method over traditional integrators~\cite{finch81,finch84,allen} is that the rotation matrix representation allows normalisation calculations to be avoided. This is necessary to preserve the time-reversibility property. Also, the DLM integrator is symplectic, whereas the traditional alternatives are not. The practical advantage of time-reversible symplectic integrators is that they prove extremely stable. In Appendix~\ref{app:integrationTests} we show examples where the DLM scheme permits integration steps to be used  that are ten times larger than those possible with traditional methods.


\subsection{Integration timestep: how long?}
Choosing the optimal timestep for integrating the equations of motion is not trivial, especially for coarse-grain simulations. Longer timesteps are desirable because they extend the sampled timescale (hence reducing the computation time); however, shorter timesteps may be required for accuracy. It has been proposed to check that, during NVE simulations, the fluctuations of the total energy remain significantly smaller than the fluctuations of the potential (or kinetic) energy.~\cite{winger09}
Energy fluctuations $\Delta E$ can be calculated as:
\begin{equation}
\Delta E = \sqrt{\langle[E-\langle E\rangle]^2\rangle} = \sqrt{\langle[E^2+\langle E\rangle^2-2E\langle E\rangle]\rangle}=
 \sqrt{\langle E^2\rangle+\langle E\rangle^2-2\langle E\rangle^2}= \sqrt{\langle E^2\rangle-\langle E\rangle^2}
\end{equation}
where the angular brackets indicate time-averaging. An empirical criterion for accurate integration is that the fluctuations of the total energy be less than one fifth of the fluctuations of the potential (or kinetic) energy.~\cite{winger09}
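Both fluctuations are easily computed from stored energy time series; a minimal illustrative Python sketch (function names and the form of the check are ours):

```python
def energy_fluctuation(series):
    """Root-mean-square fluctuation sqrt(<E^2> - <E>^2) of a time series."""
    n = len(series)
    mean = sum(series) / n
    mean_sq = sum(e * e for e in series) / n
    var = max(mean_sq - mean * mean, 0.0)   # guard against round-off
    return var ** 0.5

def timestep_ok(total_e, potential_e, factor=5.0):
    """Empirical criterion: Delta E_total < Delta E_potential / factor."""
    return energy_fluctuation(total_e) < energy_fluctuation(potential_e) / factor
```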

\section{Periodic boundary conditions}\label{sec:pbc}
Owing to limitations of computer resources, simulations are typically performed on systems containing relatively small numbers of particles.
In a typical MD simulation comprising 1000 particles, roughly half of them are in contact with the outer boundaries. Even for $10^6$ atoms, the surface atoms amount to 6\% of the total. Assuming, as is often the case, that we are interested in the bulk properties of the system (and not in boundary effects), the presence of a boundary surface introduces severe simulation artefacts. This problem can be solved by surrounding the cell with replicas of itself, thus effectively eliminating any physical boundary. This is shown in Figure~\ref{fig:pbc}. 
\begin{figure}
\centering
\includegraphics[scale=.58]{md/pbc.eps}
\caption[Periodic boundary conditions]{Periodic boundary conditions in two dimensions. From Allen.~\cite{allen04}}
\label{fig:pbc}
\end{figure}
Whenever a molecule leaves the central cell passing through a particular face of the central simulation region, a ``replacement'' particle will enter the central cell through the opposite face.
Only the coordinates in the central box need to be recorded; in the course of a simulation, when a particle leaves the central simulation box, its coordinates are updated with the values of the corresponding incoming image. 
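This wrapping of coordinates into the central cell can be sketched as follows (an illustrative Python fragment assuming an orthorhombic box with origin at zero; the function name is ours):

```python
import math

def wrap(r, box):
    """Fold a position vector back into the central box [0, L_k) per axis."""
    return [x - box[k] * math.floor(x / box[k]) for k, x in enumerate(r)]
```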

\subsection{Minimum image convention}\label{sec:mic}

Periodic boundaries solve the problem of surface effects but introduce a ``computational paradox'': all the infinite images of any given particle should now be considered in the interaction calculations. To avoid this (impossible) task, the {\em minimum image convention} is normally  adopted: each atom of the main simulation cell interacts only with the nearest image of any other particle. %Such an image can be located in the same (central) cell or in any of the adjacent replicas.
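The minimum image displacement can be computed by shifting each component by the nearest multiple of the box length, as in this illustrative Python sketch (assuming an orthorhombic box; the function name is ours):

```python
def minimum_image(ri, rj, box):
    """Displacement r_ij from site j to site i using the nearest
    periodic image of j."""
    d = []
    for k in range(3):
        x = ri[k] - rj[k]
        x -= box[k] * round(x / box[k])   # shift to the nearest image
        d.append(x)
    return d
```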
 
%Of course, it is important to bear in mind the imposed artificial periodicity when considering properties which are influenced by long-range correlations. Special attention must be paid to the case where the potential range is not short: for example for charged and dipolar systems.

\section{Truncation of nonbonded interactions}
 Even with the implementation of the minimum image convention, the evaluation of all nonbonded interactions is still computationally expensive; moreover, such an ``all-pairs'' approach is often unnecessary to achieve the typical degree of accuracy required. A further approximation is therefore made to increase the computational efficiency of simulation programs. Nonbonded interaction models are normally short-ranged: the potential energy between a pair of particles rapidly decays with increasing interparticle distance, becoming almost negligible beyond some {\em cutoff distance}~$r_c$. To maximise computational efficiency, the potential is thus normally ignored (truncated) beyond~$r_c$. For consistency with the minimum image convention, the cutoff radius $r_c$ must be smaller than half the length of the shortest edge of the simulation region.

A problem arising from truncating the interactions is the introduction of a discontinuity in the potential and its derivative (force), affecting both the energy of the system and the motion of the particles.  This problem can be tackled by changing the form of the potential function slightly, adding a constant and a linear term so that both the potential and its derivative go smoothly to zero at the cutoff distance~$r_c$:~\cite{allen,rapa}%\cite[Section~3.3.2] 
\begin{equation}\label{eq:SF} U^{\mathrm{SF}}(r)=U(r)-U(r_\mathrm{c})-(r-r_\mathrm{c})\left.\frac{\de U(r)}{\de r}\right\rvert_{r=r_\mathrm{c}}  \end{equation} 
where $U^{\mathrm{SF}}(r)$ is the ``new'' model, called {\em shifted-force} potential, and $U(r)$ is the original potential.
This removes problems in energy conservation and any numerical instability in the equations of motion.~\cite{allen} %\cite[Section~5.2.4] 
 A possible issue with this treatment is that the potential is modified across the entire interaction range (even if only slightly); properties sensitive to the specific form of the potential might be affected.  Alternative methods involve using so-called {\em switching} functions, applied in the proximity of $r_c$ to remove the discontinuity without changing the overall potential form.~\cite{rapa} In this case the potential is ``switched off'' smoothly across a (small) distance between a switching distance $r_s$ and the cutoff distance $r_c$ (where for instance $r_s=0.9\,r_c$).
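Equation~\ref{eq:SF} is simple to apply in practice. The sketch below (an illustrative Python fragment; the Lennard-Jones pair potential and the cutoff $r_c=2.5\,\sigma$ are arbitrary example choices, and all names are ours) shows the shifted-force construction and that the potential vanishes smoothly at the cutoff:

```python
def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential and its radial derivative dU/dr."""
    sr6 = (sigma / r) ** 6
    u = 4.0 * eps * (sr6 * sr6 - sr6)
    du = -24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r
    return u, du

def lj_shifted_force(r, rc=2.5, eps=1.0, sigma=1.0):
    """Shifted-force potential U_SF(r) = U(r) - U(rc) - (r - rc) U'(rc):
    both U_SF and its derivative go to zero at the cutoff rc."""
    if r >= rc:
        return 0.0
    u, _ = lj(r, eps, sigma)
    uc, duc = lj(rc, eps, sigma)
    return u - uc - (r - rc) * duc
```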

\section{Derivation of forces and torques}\label{sec:derFrcTrq}

In this section, explicit formulae for the forces and torques derived from the potentials used in \brahms are given. Please note the following conventions:

\begin{itemize}
\item The inter-site distance vector $\mathbf{r}_{ij}$ between the pair of sites ({\em i,j}) is defined as: $\mathbf{r}_{ij}=\mathbf{r}_{i} - \mathbf{r}_{j}$
\item The inter-site distance magnitude is defined as $r$, that is: $r=|\mathbf{r}_{ij}|$ 
\item The interaction cutoff radius is defined as $r_c$ 
\item The force vector representing the force on site $i$ due to site $j$ is defined as $\mathbf{f}_{ij}$
\end{itemize}
These conventions are consistent with those in Rapaport~\cite{rapa} and Allen \& Tildesley.~\cite{allen}

\subsection{Dipole-dipole interactions: shifted-force variant}

Consider a pair of dipoles $i$ and $j$. The orientations are defined by the unit vectors $\hat{\mathbf{e}}_i$ and $\hat{\mathbf{e}}_j$; the angles between these orientation vectors and the interparticle separation vector $\mathbf{r}_{ij}$ are defined respectively as $\theta_i$ and $\theta_j$. Also, we define $\gamma_{ij}$ as the angle between %the plane containing 
$\hat{\mathbf{e}}_i$ and 
%$\mathbf{r}_{ij}$  and the plane containing 
$\hat{\mathbf{e}}_j$, %and  $\mathbf{r}_{ij}$, 
so that $\cos{\gamma}_{ij}=\hat{\mathbf{e}}_i\cdot\hat{\mathbf{e}}_j$. 
%as in Figure~\ref{fig:orient}.
%\begin{figure} \centering \includegraphics[scale=1]{appendices/orient} \caption[Relative orientation of two dipoles]{The relative orientation of two vectors associated with dipolar sites. From Allen and Tildesley.~\cite{allen}} \label{fig:orient} \end{figure}

\subsubsection{Dipolar potential}
The electrostatic interaction potential energy is:
\begin{equation}
u_{ij}^{SF}=\frac{\mu^2}{r^3}(\cos\gamma_{ij}-3\cos\theta_i\cos\theta_j)\left[1-4\left(\frac{r}{r_c}\right)^3+3\left(\frac{r}{r_c}\right)^4\right]
\end{equation}
where $r = |\mathbf{r}_{ij}|$. Cosines are computed through:
\begin{equation}
\cos\theta_i = \frac{\hat{\mathbf{e}}_{i}\cdot\mathbf{r}_{ij}}{r|\hat{\mathbf{e}}_{i}|}\qquad\cos\theta_j = \frac{\hat{\mathbf{e}}_{j}\cdot\mathbf{r}_{ij}}{r|\hat{\mathbf{e}}_{j}|}
\end{equation}
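A direct transcription of the shifted-force dipolar pair energy above can be sketched in Python (illustrative only; the values of $\mu$ and $r_c$ are arbitrary and the helper names are ours):

```python
def dipole_sf_energy(rij, ei, ej, mu=1.0, rc=2.5):
    """Shifted-force dipole-dipole pair energy; ei, ej are unit vectors."""
    r = sum(x * x for x in rij) ** 0.5
    if r >= rc:
        return 0.0
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cos_g = dot(ei, ej)            # cos(gamma_ij)
    cos_i = dot(ei, rij) / r       # cos(theta_i)
    cos_j = dot(ej, rij) / r       # cos(theta_j)
    s = r / rc
    taper = 1.0 - 4.0 * s**3 + 3.0 * s**4   # shifted-force bracket
    return (mu**2 / r**3) * (cos_g - 3.0 * cos_i * cos_j) * taper
```

Side-by-side parallel dipoles repel (positive energy), while a head-to-tail arrangement attracts, as expected for point dipoles.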


\subsubsection{Dipolar forces}

Pair force:
\begin{equation}
\mathbf{f}_{ij}^{SF}=\frac{3\mu^2}{r^4}\left\{(\cos\gamma_{ij}-3\cos\theta_i\cos\theta_j)\left[1-\left(\frac{r}{r_c}\right)^4\right]\frac{\mathbf{r}_{ij}}{r}
+\left(\cos\theta_j\hat{\mathbf{e}}_i+\cos\theta_i\hat{\mathbf{e}}_j-2\cos\theta_i\cos\theta_j\frac{\mathbf{r}_{ij}}{r}\right)
\left[1-4\left(\frac{r}{r_c}\right)^3+3\left(\frac{r}{r_c}\right)^4\right]\right\}
\end{equation}

\subsubsection{Dipolar torques}

Pair torques:
\begin{equation}
\mathbf{T}_{ij}^{SF}=-\frac{\mu^2}{r^3}\left(\hat{\mathbf{e}}_i\times\hat{\mathbf{e}}_j-3\cos\theta_j\frac{\hat{\mathbf{e}}_i\times\mathbf{r}_{ij}}{r}\right)\left[1-4\left(\frac{r}{r_c}\right)^3+3\left(\frac{r}{r_c}\right)^4\right]
\end{equation}
\begin{equation}
\mathbf{T}_{ji}^{SF}=-\frac{\mu^2}{r^3}\left(\hat{\mathbf{e}}_j\times\hat{\mathbf{e}}_i-3\cos\theta_i\frac{\hat{\mathbf{e}}_j\times\mathbf{r}_{ij}}{r}\right)\left[1-4\left(\frac{r}{r_c}\right)^3+3\left(\frac{r}{r_c}\right)^4\right]
\end{equation}
A complete treatment of the dipolar potential, along with the explicit derivation to obtain forces and torques, can be found elsewhere.~\cite{allen} %[p~332-334] 

\section{Improving the interaction computations} The site-site interactions can be computed simply by examining all possible (distinct) pairs of sites: for a system of $N$ particles, $N(N-1)/2$ pair distances are evaluated, and forces and torques are then computed for those particles separated by a distance shorter than the cutoff radius $r_c$. This method is however extremely inefficient when the interaction range $r_c$ is small compared with the linear size of the simulation region: the fact that the amount of computation grows as $O(N^2)$ rules out this method for all but the smallest values of $N$. Two techniques for reducing this growth rate to $O(N)$ are presented in the following subsections.

\subsection{Cell subdivision} The simulation region is divided into a lattice of small cells, and the cell edges all exceed $r_c$ in length. Then if atoms are assigned to cells on the basis of their current positions it is clear that interactions are only possible between atoms that are either in the same cell or in immediately adjacent cells (Figure~\ref{fig:cell}).
\begin{figure}
   \begin{center}
      \includegraphics*[width=1.5in]{md/cellSubdiv.eps}
      \caption[Cell subdivision]{Cell subdivision. The cutoff range for the particle in white is represented by the white circle; in searching for neighbours of that particle, it is only necessary to examine the particle's own cell and its adjacent cells (shaded). Figure from Allen.~\cite{allen04}}
      \label{fig:cell}
   \end{center}
\end{figure}
Obviously the region size must be at least $4\, r_c$ for the method to be useful. The cell subdivision method involves a general organisation of data known as a {\em linked list}:~\cite{knuth68} rather than accessing data sequentially, the linked list associates a pointer $p_n$ with each data item $x_n$, the purpose of which is to provide a non-sequential path through the data. Each linked list requires a separate pointer $f$ to access the first data item, and the item terminating the list must have a special pointer value, such as $-1$, that cannot be mistaken for anything else. Thus $f=a$ points to $x_a$ as the first item in the list, $p_a=b$ points to $x_b$ as the second item, and so on until a pointer value $p_z=-1$ terminates the list.~\cite{rapa} In the cell subdivision algorithm, linked lists are used to associate atoms with the cells in which they reside at any given instant; a separate list is required for each cell. All data are eventually stored in a one-dimensional array of integers. The cell-subdivision method has also been successfully used in the simulation of plasmas, galaxies and ionic crystals.~\cite{allen}
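The linked-list bookkeeping described above can be illustrated with a simplified Python sketch (our own naming; for clarity a dictionary of cell heads replaces the flat integer array used in practice):

```python
def build_cell_list(positions, box, rc):
    """Linked-list cell structure: head[c] is the first particle in cell c,
    link[i] the next particle in i's cell, and -1 terminates each list.
    Assumes coordinates already wrapped into [0, L) on each axis."""
    ncell = [max(1, int(box[k] // rc)) for k in range(3)]  # cell edges >= rc
    head = {}
    link = [-1] * len(positions)
    for i, r in enumerate(positions):
        c = tuple(int(r[k] / box[k] * ncell[k]) % ncell[k] for k in range(3))
        link[i] = head.get(c, -1)   # prepend i to the list of cell c
        head[c] = i
    return head, link, ncell
```

Traversing a cell is then a matter of following `link` from `head[c]` until the terminator $-1$ is reached.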

\subsection{Neighbour list} Verlet~\cite{verlet67} suggested a technique for improving the speed of a molecular dynamics program by maintaining a list of the neighbours of each particle, which is updated at intervals: between updates the program does not check through all the possible pairs, but only through the listed neighbours (Figure~\ref{fig:NL}).
\begin{figure}
   \begin{center}
      \includegraphics*[width=3in]{md/nebrList.eps}
      \caption[Verlet neighbour list]{Verlet neighbour list. The left panel shows the initial list construction: the ``neighbours'' of the central particle, enclosed by the dashed circle, are depicted as white and light-grey particles. The white particles are inside the cutoff radius (solid circle), and hence they represent the only  particles interacting with the central particle at this initial stage. The central panel shows a possible later configuration of the system: now some of the grey particles have entered the cutoff. Since they were recorded on the neighbour list, they are properly taken into account for the interaction computation with the central particle. The right panel shows a potentially problematic situation: some of the black particles, not listed on the neighbour list of the central particle and hence not considered in the interaction calculation, have penetrated into the cutoff zone. The list must be reconstructed before the system reaches such a configuration. Figure from Allen.~\cite{allen04}}
      \label{fig:NL}
   \end{center}
\end{figure}
 The cell subdivision method can be used to speed up the list construction; in the end all neighbour pairs are stored consecutively in a (rather long)~$2\times k_{NL}\times N_{sites}$ one-dimensional array of integers, where $k_{NL}$ is a parameter controlling the (predicted) maximum number of neighbours per particle.
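
The list construction can be sketched as follows (an illustrative Python sketch, not the production code; the list radius is written as $r_c$ plus a ``skin'', and the list remains valid until some particle has moved by more than half the skin):

```python
import itertools
import math

def build_neighbour_list(positions, r_c, skin):
    """Record all pairs within r_list = r_c + skin.

    Between rebuilds, only the listed pairs need to be examined; particles
    outside r_list cannot enter the cutoff before a displacement of skin/2.
    """
    r_list = r_c + skin
    pairs = []
    for i, j in itertools.combinations(range(len(positions)), 2):
        if math.dist(positions[i], positions[j]) < r_list:
            pairs.append((i, j))
    return pairs
```

In practice the double loop shown here would itself be replaced by the cell subdivision scan, so that the list construction also scales as $O(N)$.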

 %Different list radii should be tried in order to identify the one yielding the best performances. For Gay-Berne systems, Berardi et al.~\cite{berar93a} found out that with a list radius of 4.8\,$\sigma_0$ their program was 1.5 faster with respect to a calculation with the same cutoff - in this case being $r_c=4.0\,\sigma_0$ - but without NL.

\section{Thermodynamic measurements} % p. 19 rapa -  p. 46 AT
Basic thermodynamic properties can be easily calculated from an MD simulation; measurements are averaged over time, typically after an initial equilibration stage. 

\subsection{Potential energy}
 The total potential energy of a system is measured by evaluating a double loop over all pair interactions:
\begin{equation}
  \mathcal{U}=\sum_{ i}\sum_{j>i} u(r_{ij})
\end{equation}
where $i$ and $j$ identify the interacting particles, $u(r_{ij})$ their pair interaction energy and $r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|$ their separation distance.
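
As a minimal illustration of this double loop (a Python sketch, using the standard 12-6 Lennard-Jones potential as the example pair interaction; not the actual measurement code):

```python
import itertools
import math

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones pair energy u(r) = 4*eps*[(s/r)^12 - (s/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def total_potential_energy(positions, pair_potential):
    """U = sum over distinct pairs (i, j > i) of u(r_ij)."""
    return sum(pair_potential(math.dist(positions[i], positions[j]))
               for i, j in itertools.combinations(range(len(positions)), 2))
```

For two particles at the potential minimum $r=2^{1/6}\sigma$, the sketch returns $-\epsilon$ as expected.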


\subsection{Kinetic energy and temperature}
The instantaneous kinetic energy $\mathcal{K}$ for a system of $N$ point-mass particles is:
\begin{equation}
  \mathcal{K}=\frac{1}{2}\sum_{i=1}^{N}\left( m_i|\mathbf{v}_i|^2\right)
\end{equation}
with $m_i$ and $\mathbf{v}_i$ the mass and velocity of the $i$-th particle.
%The instantaneous temperature $\mathcal{T}$ can be defined as: \begin{equation}   \mathcal{T}=\frac{2}{3}\frac{\mathcal{K}}{Nk_B}=\frac{1}{3N k_B}\sum_{i=1}^{N}\left( m_i|\mathbf{v}_i|^2\right) \end{equation} whose average gives the temperature $T$, i.e.: $T=\langle\mathcal{T}\rangle$.
%\paragraph{Symmetric rigid bodies} Symmetric (e.g. Gay-Berne) sites possess $3N$ degrees of freedom for the translational mass-centres motion  and $2N$ for the rotational motion: therefore $5N$ for both motions.  Kinetic energy and temperature can be measured as:  \begin{equation}\label{kinEn}   \mathcal{K}=\frac{1}{2}\sum_{i=1}^{N}\left( m_i|\mathbf{v}_i|^2 + I_i|\bomega_i|^2\right)  \Rightarrow   T = \langle\mathcal{T}\rangle=\frac{2}{5N k_B}\left\langle\mathcal{K}\right\rangle \end{equation} being $I_i$ the moment of inertia and $\bomega_i$ the angular velocity of molecule $i$ in the body-frame.
The instantaneous kinetic energy $\mathcal{K}$ for a system of $N$ symmetric rigid bodies (such as Gay-Berne sites) is:
\begin{equation}\label{kinEn}   \mathcal{K}=\frac{1}{2}\sum_{i=1}^{N}\left( m_i|\mathbf{v}_i|^2 + I_i|\bomega^b_i|^2\right)
\end{equation}
with $I_i$ the (transverse) principal moment of inertia and $\bomega^b_i$ the body-frame angular velocity of the $i$-th particle.
For general (non-symmetric) rigid bodies, the kinetic energy can be measured as: 
\begin{equation}
  \mathcal{K}=\frac{1}{2}\sum_{i=1}^{N}\left( m_i|\mathbf{v}_i|^2 + I_{x_i}(\omega_{x_i}^{b})^2 + I_{y_i}(\omega_{y_i}^{b})^2+ I_{z_i}(\omega_{z_i}^{b})^2  \right) =\frac{1}{2}\sum_{i=1}^{N}\left( m_i|\mathbf{v}_i|^2 + \sum_{\alpha=x,y,z}\frac{(h^{b}_{\alpha_i})^2}{I_{\alpha_i}} \right) 
\end{equation}
%\begin{equation}   T = \langle\mathcal{T}\rangle=\frac{2}{6N k_B}\left\langle\mathcal{K}\right\rangle \end{equation}
with $I_{\alpha_i}$, $\omega^{b}_{\alpha_i}$ and $h^{b}_{\alpha_i}$ the principal moments of inertia and the body-frame components of the angular velocity and angular momentum of the $i$-th particle, respectively.

The equipartition theorem states that each degree of freedom (DOF) contributes an average of $k_BT/2$ to the kinetic energy, where $k_B$ is the Boltzmann constant and $T$ the temperature. In general, the instantaneous temperature $\mathcal{T}$ is therefore given by:
\begin{equation}
  \mathcal{T}=\frac{2\mathcal{K}}{k_B\, N_\mathrm{DOF}}
\end{equation}
with $N_\mathrm{DOF}$ the total number of degrees of freedom in the system:
\begin{equation}
 N_\mathrm{DOF}=3N+2N_\mathrm{SRB}+3N_\mathrm{NSRB}-N_\mathrm{C}
\end{equation}
$N$ being the total number of sites, $N_\mathrm{SRB}$ the number of symmetric rigid bodies (two non-zero moments of inertia), $N_\mathrm{NSRB}$ the number of general, non-symmetric rigid bodies (three non-zero moments of inertia) and $N_\mathrm{C}$ the total number of constraints on the system (the centre of mass is typically constrained, so $N_\mathrm{C}$ is normally at least 3).  
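
The temperature measurement then amounts to a DOF count and a division (a minimal Python sketch with reduced units, $k_B=1$, assumed by default; function and argument names are hypothetical):

```python
def instantaneous_temperature(kinetic_energy, n_sites, n_srb, n_nsrb,
                              n_constraints=3, k_B=1.0):
    """T = 2K / (k_B * N_DOF), with N_DOF = 3N + 2N_SRB + 3N_NSRB - N_C.

    n_constraints defaults to 3 for the usual fixed-centre-of-mass constraint.
    """
    n_dof = 3 * n_sites + 2 * n_srb + 3 * n_nsrb - n_constraints
    return 2.0 * kinetic_energy / (k_B * n_dof)
```

For example, 100 point masses with the centre of mass constrained give $N_\mathrm{DOF}=297$, so a kinetic energy of $148.5$ (reduced units) corresponds to $\mathcal{T}=1$.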
%\begin{equation} \boxed{  \mathcal{T}=\frac{2\mathcal{K}}{k_B (\mathrm{DOFs}-\mathrm{Cs})}} \end{equation} with $DOFs=3\cdot N_\mathrm{PM}+5\cdot N_\mathrm{SRB}+6\cdot N_\mathrm{NSRB}$ the number of degrees of freedom, and $Cs$ the number of constraints. The centre of mass is typically constrained, so $Cs$ is at least 3. 


%The translational and rotational temperatures can be therefore linked to the associated kinetic energies:  % AllenTildesley p.170 \begin{equation} \mathcal{K}_{TRAN}=\frac{1}{2}\sum_{i=1}^{N}m_{i}v_{i}^{2}=\frac{3N}{2}kT_{TRAN}, \label{ktran2} \end{equation} \begin{equation} \mathcal{K}_{ROT}=\frac{1}{2}\sum_{i=1}^{N}I_{i}w_{i}^{2}=\frac{2N}{2}kT_{ROT}. \label{krot2} \end{equation} %where $m_{i}$ is the mass of particle $i$, $v_{i}$ the velocity of particle $i$, $I_{i}$ the moment of inertia of particle $i$ and $w_{i}$ the angular momentum of particle $i$ among $N$ particles.  The definition of the total kinetic energy $\mathcal{K}_{TOT}$ enables the overall temperature $T$ to be computed:  \begin{equation} {K}_{TOT}=\frac{5N}{2}kT=\frac{3N}{2}kT_{TRAN}+\frac{2N}{2}kT_{ROT} \label{ktot1} \end{equation} \begin{equation} T=\frac{1}{5}\left(3T_{TRAN}+2T_{ROT}\right) \label{ktot2} {equation}  $N$ rigid linear sites (e.g. Gay-Berne sites) there are $5N$ degrees of freedom: 3 for the translational motion and 2 for the rotational motion, for each molecule. The instantaneous temperature $\mathcal{T}$ can be defined as: %moldy UM p.11 {equation} {T}=\frac{1}{5N k_B}\sum_{i=1}^{N}( m_i|\mathbf{v}_i|^2 + I_i|\mathbf{\omega}_i|^2) {equation} whose average gives the temperature $T$, i.e.: $T=\langle\mathcal{T}\rangle$.

% \begin{equation}   T = \langle\mathcal{T}\rangle=\frac{1}{3N k_B}\left\langle\sum_{i=1}^{N} m_i|\mathbf{v}_i|^2 + \mathbf{\omega}_iI_i\mathbf{\omega}_i\right\rangle \end{equation} The kinetic energy is: \begin{equation}\label{kinEn}   \mathcal{K}=\frac{1}{2}\sum_{i=1}^{N}\left( m_i|\mathbf{v}_i|^2 + \sum_{j=1}^{3} I^j_i|\mathbf{\omega}^j_i|^2\right) \end{equation}
 

\paragraph{Mixtures}
Considering $n_S$ species $(1,2,\dots,i,\dots,n_S)$, each characterised by $N_i$ sites carrying $f_i$ degrees of freedom each, the average kinetic energy is:
\begin{equation}
  \langle\mathcal{K}\rangle=\frac{k_BT}{2}\left(\sum_{i=1}^{n_S}f_i N_i-N_\mathrm{C}\right)
\end{equation}
hence the instantaneous temperature is:
\begin{equation}
  \mathcal{T}=\frac{2\mathcal{K}}{k_B\left(\sum_{i=1}^{n_S}f_i N_i-N_\mathrm{C}\right)}
\end{equation}
In particular, for a mixture of $N_\mathrm{PM}$ point-masses (3 degrees of freedom each) and $N_\mathrm{SRB}$ symmetric rigid bodies (5 degrees of freedom each), the average kinetic energy is:
\begin{equation}
\langle\mathcal{K}\rangle=\frac{k_BT}{2}(3\cdot N_\mathrm{PM}+5\cdot N_\mathrm{SRB} -N_\mathrm{C})
\end{equation}
 Hence the instantaneous temperature is:
\begin{equation}
  \mathcal{T}=\frac{2\mathcal{K}}{k_B (3\cdot N_\mathrm{PM}+5\cdot N_\mathrm{SRB}-N_\mathrm{C})}
\end{equation}
In the general case where the system  also comprises $N_\mathrm{NSRB}$ non-symmetric rigid bodies, the temperature becomes:
\begin{equation}
  \mathcal{T}=\frac{2\mathcal{K}}{k_B (3\cdot N_\mathrm{PM}+5\cdot N_\mathrm{SRB}+6\cdot N_\mathrm{NSRB} -N_\mathrm{C})}
\end{equation}

\subsection{Pressure tensor}

The macroscopic pressure tensor of a system of $N$ particles can be written as:~\cite{thompson09,alejandre95}
\begin{equation} \label{eq:pressTens}
\mathbf{P}=\frac{1}{V}\left(\sum_{i=1}^Nm_i\mathbf{v}_i\otimes\mathbf{v}_i+ \mathbf{W} \right)
\end{equation}
with $V$ the total volume of the system, $m_i$ and $\mathbf{v}_i$ the mass and velocity of site $i$, and $\mathbf{W}$ the global virial tensor. 
$\mathbf{W}$ can be decomposed into various contributing terms:
\begin{equation}\label{eq:gvt}
\mathbf{W}=\mathbf{W}_{nb}+\mathbf{W}_{b}+\mathbf{W}_{a}
\end{equation}
where {\em nb}, {\em b} and {\em a} refer to {\em nonbonded}, {\em bonded} and {\em angle}, respectively. These separate contributions to $\mathbf{W}$ are detailed in the following paragraphs.\footnote{Note that in general, {\it for nonperiodic systems}, $\mathbf{W}$ can be simply obtained as:~\cite{cheng96}
\begin{equation}\label{eq:gvtNP}
\mathbf{W}=\sum_{i=1}^N\mathbf{r}_{i}\otimes\mathbf{f}_{i} 
\end{equation}
with $\mathbf{r}_i$ the position of site $i$ and $\mathbf{f}_i$ the total force on it. However, Eq.~\ref{eq:gvtNP} does not hold for periodic systems (the typical case in molecular simulations, see \S\ref{sec:pbc}) because of the forces arising from ``minimum image'' interactions, which must be taken into account specifically.~\cite{brown95,carpenter02}}

\subsubsection{Virial contribution from nonbonded pair interactions}

For nonbonded pair interactions (e.g., Lennard-Jones), $\mathbf{W}_{nb}$ can be calculated as:
\begin{equation} \label{eq:pressTensNb}
\mathbf{W}_{nb}=\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\mathbf{r}_{ij}\otimes\mathbf{f}_{ij}
\end{equation}
with $\mathbf{f}_{ij}$ the force on site $i$ due to the pair interaction with site $j$ and $\mathbf{r}_{ij}$ the ``minimum image'' distance between sites $i$ and $j$ (the minimum image convention is described in ~\S\ref{sec:mic}). % [see also p.~123, Rapaport~\cite{rapa}].
Note that here (and in {\sc brahms}) we use the convention $\mathbf{r}_{ij}=\mathbf{r}_{i}-\mathbf{r}_{j}$. Others use the opposite convention ($\mathbf{r}_{ij}=\mathbf{r}_{j}-\mathbf{r}_{i}$), in which case all the expressions reported here containing $\mathbf{r}_{ij}$ might require modifications, such as sign changes. %~\cite{lindahl00b} between sites $i$ and $j$ (assuming the minimum image convention, see~\S\ref{sec:mic}).
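
The accumulation of $\mathbf{W}_{nb}$ alongside the force loop can be sketched as follows (an illustrative Python sketch, not the {\sc brahms} code; it assumes the minimum-image separations $\mathbf{r}_{ij}=\mathbf{r}_i-\mathbf{r}_j$ and the corresponding pair forces $\mathbf{f}_{ij}$ on site $i$ have already been computed, and are supplied as 3-component tuples):

```python
def virial_tensor_nonbonded(r_ij_list, f_ij_list):
    """Accumulate W_nb as a 3x3 nested list: W[a][b] = sum over pairs of r_a * f_b.

    Each (r_ij, f_ij) pair contributes the outer product r_ij (x) f_ij.
    """
    W = [[0.0] * 3 for _ in range(3)]
    for r_ij, f_ij in zip(r_ij_list, f_ij_list):
        for a in range(3):
            for b in range(3):
                W[a][b] += r_ij[a] * f_ij[b]
    return W
```

In a real code this accumulation is done inside the pair loop itself, so that each minimum-image separation and force is used exactly once.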

\subsubsection{Virial contribution from bonded (Hooke) pair interactions}
For bonded pair interactions (e.g., Hooke harmonic interactions typically used to model covalent bonds), $\mathbf{W}_b$ can be calculated as:
\begin{equation} \label{eq:pressTensB}
\mathbf{W}_b=\sum_{bonds}\mathbf{r}_{ij}\otimes\mathbf{f}_{ij}
\end{equation}
with $\mathbf{f}_{ij}$ the force on site $i$ due to the pair interaction with site $j$ and $\mathbf{r}_{ij}$ the distance between sites $i$ and $j$ (again assuming the minimum image convention, see~\S\ref{sec:mic}). The summation is performed over all interacting pairs (bonds).

\subsubsection{Virial contribution from angle-bending interactions}
Angle-bending interactions within triplets of sites can be decomposed into pair interactions for the purpose of virial decomposition.~\cite{gulli04}
The contribution to the global virial tensor $\mathbf{W}_a$ from a 3-body angle potential is:~\cite{brown95}
\begin{equation}\label{eq:wab}
\mathbf{W}_a=\sum_{angles}\mathbf{r}_{i}'\otimes\mathbf{f}_{i} + \mathbf{r}_{j}'\otimes\mathbf{f}_{j} + \mathbf{r}_{k}'\otimes\mathbf{f}_{k}
\end{equation}
where sites $i$, $j$ and $k$ define the angle and $\mathbf{r}'$ are the positions used in the calculation of the corresponding forces $\mathbf{f}$ {\em according to the minimum image convention}; the $\mathbf{r}'$ positions can therefore correspond to sites either in the ``primary'' cell or in any of the surrounding periodic replicas.~\cite{brown95} Clearly, Eq.~\ref{eq:wab} should be applied in a molecular dynamics code together with the calculation of the corresponding forces, as in this case the minimum image positions can be correctly taken into account.
The specific contribution to the global virial tensor $\mathbf{W}_a$ from the potential defined by Eq.~17 of the ELBA paper Supporting Information~\cite{orsi11elba} is:
\begin{equation}
\mathbf{W}_a = \mathbf{r}_{i1}\otimes\mathbf{f}_{1} + \mathbf{r}_{i2}\otimes\mathbf{f}_{2} + \mathbf{r}_{iA}\otimes(-\mathbf{f}_{1}-\mathbf{f}_{2})= (\mathbf{r}_{i1}-\mathbf{r}_{iA})\otimes\mathbf{f}_{1} + (\mathbf{r}_{i2}-\mathbf{r}_{iA})\otimes\mathbf{f}_{2} = - \mathbf{r}_{A1}\otimes\mathbf{f}_1 + \mathbf{r}_{2A}\otimes\mathbf{f}_2
\end{equation}
with $\mathbf{r}_{A1}= \mathbf{r}_{iA} - \mathbf{r}_{i1}$ and $\mathbf{r}_{2A}= \mathbf{r}_{i2} - \mathbf{r}_{iA}$ (see Figure~1 of the ELBA paper Supporting Information~\cite{orsi11elba}).
In general, angle-bending interactions do not contribute to the ``scalar'' pressure, that is, $\mathbf{W}_{a_{xx}}+\mathbf{W}_{a_{yy}}+\mathbf{W}_{a_{zz}}=0$. The off-diagonal components of $\mathbf{W}_{a}$ are in general nonzero, and hence angle-bending forces can contribute to the pressure tensor,~\cite{cheng96,carpenter02} in particular when considering local subvolumes. However, the average of the off-diagonal stresses over the total volume of the system must be zero in the absence of externally applied shear stresses;~\cite{veld03} when considering such global averages, it can be claimed that ``the forces arising from bond-angle distortions make no contribution to the molecular pressure tensor''.~\cite{alejandre95}


\subsubsection{Scalar (hydrostatic) pressure}
The total scalar (hydrostatic) pressure $P$ of the system can be obtained from the trace of the pressure tensor $\mathbf{P}$:
\begin{equation}\label{eq:P} 
P = \mathrm{tr}(\mathbf{P}) / 3 =  (P_{xx}+P_{yy}+P_{zz})/3
\end{equation}

\subsubsection{Surface tension}
The off-diagonal elements of $\mathbf{P}$ vanish in equilibrium, and for an isotropic system the diagonal elements are expected to be equal. For an anisotropic system such as a lipid bilayer, the diagonal elements need not be equal, leading to a finite {\em surface tension}.
In particular, assuming the surface to be parallel to the $xy$ plane, and hence normal to the $z$ axis, the surface tension $\gamma$ %along a bilayer is usually calculated from the difference between the normal ($P_N=P_{zz}$) and lateral ($P_L=(P_{xx}+P_{yy})/2$) components of the pressure tensor: \begin{equation} \gamma=\int[P_N(z)-P_L(z)]\,\de z \end{equation} The surface tension per leaflet of a bilayer 
is related to the pressure tensor by:
\begin{equation}\label{eq:surfTens} 
\gamma=  L_z\times[P_{zz}-(P_{xx}+P_{yy})/2]
\end{equation}
where $L_z$ denotes the length of the simulation region normal to the surface, $P_{zz}$ is the component of the pressure tensor normal to the surface and $P_{xx}$, $P_{yy}$ are the tangential components. 
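The surface tension of Eq.~\ref{eq:surfTens} is a one-line computation once the diagonal of $\mathbf{P}$ is available (a trivial Python sketch; argument names are hypothetical):

```python
def surface_tension(P_xx, P_yy, P_zz, L_z):
    """gamma = L_z * [P_zz - (P_xx + P_yy)/2], the surface normal along z.

    A positive gamma indicates that the tangential pressure is lower than
    the normal component.
    """
    return L_z * (P_zz - 0.5 * (P_xx + P_yy))
```

For an isotropic system ($P_{xx}=P_{yy}=P_{zz}$) the expression vanishes, as it should.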


% \section{Shifted potentials} p. 145 A/T

\section{Temperature control} % p72 rapa
Conventionally, MD simulations are carried out in the constant-energy ensemble; however, it is often desirable to perform simulations in conditions closer to the real world, i.e., at constant temperature and pressure. In the following sections some methods to control the temperature in MD calculations are presented and discussed.%The actual temperature $T$ of a system with $N$ particle is:
%\begin{equation}\label{t1}
%T= \frac{1}{3Nk_B}\sum_{i=1}^{N}m_i|\mathbf{v}_i|^2
%\end{equation}
%A desired temperature $T_\lambda$ can be expressed as:
%\begin{equation}\label{t2}
%T_\lambda= \frac{1}{3Nk_B}\sum_{i=1}^{N}m_i|\lambda\mathbf{v}_i|^2
%\end{equation}
%being $\lambda$ a scaling factor.
%Combining  Equations~\ref{t1} and \ref{t2} yields: 
%\begin{equation*}
%T_\lambda=\lambda^2T,\;\;\lambda=\sqrt{T_\lambda/T}=velMag\sqrt{\frac{\sum_i^Nm_i}{\sum_i^Nm_i|\bf{v}_i|^2}}
%\end{equation*}
%therefore, adjusting the system temperature to a desired value $T_\lambda$ only requires the velocities to be scaled by the factor $\lambda$.  
% vvsSum = 0.;
% DO_MOL vvsSum += mInert * VLenSq (mol[n].sv);
% vFac = velMag / sqrt (1.5 * vvsSum / nMol);
% DO_MOL VScale (mol[n].sv, vFac);
%Angular velocities are scaled by $\gamma=velMag\sqrt{\frac{\sum_i^NI_i}{\sum_i^NI_i|\bf{\omega}_i|^2}}$.%CHECK!
% moldy UM p.12

%\subsection{Velocity scaling} A trivial method to control the temperature is via velocity scaling. At periodic intervals linear and angular velocities are multiplied by a factor: \begin{equation} \lambda=\sqrt{\frac{gk_BT}{2\langle\mathcal{K}\rangle}} \end{equation} where  $T$ is the desired temperature, $\mathcal{K}$ is the kinetic energy and $g$ is the number of degrees of freedom (e.g., 3 for Lennard-Jones sites, 5 for Gay-Berne sites and 6 for general, non-symmetric molecules). In case of mixtures, the kinetic energy needs to be split into different contributions. For example, in a system with $N_{LJ}$ Lennard-Jones point-mass sites and $N_{GB}$ Gay-Berne axially symmetric rigid bodies, the velocities of the Lennard-Jones particles must be rescaled with: \begin{equation} \lambda_{LJ}=\sqrt{\frac{3k_BT}{2\langle\mathcal{K}_{LJ}\rangle}} \end{equation} whereas both linear and angular velocities of the Gay-Berne particles with: \begin{equation} \lambda_{GB}=\sqrt{\frac{5k_BT}{2\langle\mathcal{K}_{GB}\rangle}}\end{equation} where $\mathcal{K}_{LJ}$ and $\mathcal{K}_{GB}$ are the kinetic energy of the Lennard-Jones and Gay-Berne particles, respectively. It must be noticed that the {\it velocity scaling} scheme is suitable for use during the equilibration period but does not generate meaningful particle trajectories. In other words, an MD with scaling does not generate a valid statistical ensemble, therefore this control method must be switched off before any calculation of thermodynamic averages is performed.  % moldy man p.12  

\subsection{Weak-coupling method - Berendsen thermostat}
Berendsen et al.~\cite{ber84} proposed to control the temperature by rescaling the velocities at each step by a factor $\chi$:
\begin{equation}\label{berenT}
\chi = \sqrt{1+\frac{\Delta t}{\tau_T}\left(\frac{T}{\mathcal{T}}-1\right)}
\end{equation}
with $\Delta t$ the integration timestep, $\tau_T$ a time constant, $T$ the desired temperature and  $\mathcal{T}$ the current temperature.
This algorithm forces the system towards the desired temperature $T$ at a rate determined by the time constant $\tau_T$, while only slightly perturbing the forces on each molecule. This  method does not generate states in the canonical ensemble.~\cite{allen} %[Sec.~7.4.4]
Instead, the weak-coupling scheme can be shown~\cite{morishita00} to produce an ensemble with properties intermediate between the canonical (NVT) and the microcanonical (NVE). 
The velocity-Verlet implementation of this algorithm is straightforward: at the end of the second part, velocities are rescaled according to Equation~\ref{berenT}. %The time constant $\tau_T$ normally takes value in the range $0.5\div2$\,ps~\cite{dl_poly}.
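
The rescaling step can be sketched as follows (an illustrative Python sketch of Eq.~\ref{berenT}, not the {\sc brahms} implementation; names are hypothetical):

```python
import math

def berendsen_scale_factor(dt, tau_T, T_target, T_current):
    """chi = sqrt(1 + (dt/tau_T) * (T_target/T_current - 1)).

    chi -> 1 as T_current -> T_target; tau_T sets the coupling strength.
    """
    return math.sqrt(1.0 + (dt / tau_T) * (T_target / T_current - 1.0))

def apply_thermostat(velocities, chi):
    """Rescale all velocities (tuples) by chi at the end of the integration step."""
    return [tuple(chi * c for c in v) for v in velocities]
```

Note that with $\Delta t=\tau_T$ the factor reduces to $\chi=\sqrt{T/\mathcal{T}}$, i.e., instantaneous velocity scaling; larger $\tau_T$ gives a gentler coupling.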



\section{Pressure (and temperature) control} % p72 rapa

The system's pressure can be controlled (along with the temperature) by a variety of methods. 
The region shape can be cubic, orthorhombic or general triclinic, as long as it is space-filling.
Some constant-pressure methods allow for size and shape changes of the simulation box; this possibility is particularly helpful in the study of solids, since it allows for phase changes in the simulation which may involve changes in the unit cell dimensions and angles.~\cite{allen} %[Sec.~7.5.4]\footnote{The distortion of the simulation box is really limited by the requirement that the interaction range does not exceed half the smallest region dimension.}
Here we will describe the weak-coupling method,~\cite{ber84} as this is the scheme implemented in {\sc brahms}. 
%%%%%The next section describes some general aspects of pressure-control algorithms, whereas the weak-coupling method~\cite{ber84} will be described in the following Section~\ref{sec:nptBeren}.

\subsection{Box transformation matrix and scaled variables}
When controlling the pressure, it is useful to define the transformation matrix $\mathbf{H}=(\mathbf{a}, \mathbf{b}, \mathbf{c})$, whose columns are the three vectors ($\mathbf{a}, \mathbf{b}, \mathbf{c}$) representing the edges of the simulation box. 
Periodic boundaries and minimum image convention are most readily handled when the problem is expressed in terms of scaled coordinates, because the simulation region is then a {\em fixed unit cube}; use of physical variables introduces unnecessary complications when handling boundary crossings, because velocities and accelerations must be adjusted as well as coordinates~\cite[p.~158]{rapa}.
 It is then most convenient to introduce {\em scaled} (or {\em lattice}) coordinates $\mathbf{s}$ %, which give the position of atoms relative to the simulation cell
through the linear transformation:
\begin{equation}
\mathbf{r}=\mathbf{H}\mathbf{s}
\end{equation}
where $\mathbf{H}$ is the box transformation matrix defined above. 
Conversely, a real-space vector $\mathbf{r}$ can be transformed into a box-space (scaled) vector $\mathbf{s}$ via:
\begin{equation}
\mathbf{s}=\mathbf{H}^{-1}\mathbf{r}
\end{equation}
Since the cell vectors are linearly independent, the matrix $\mathbf{H}$ can be inverted.
The box volume $V$ is given by the determinant\footnote{In general, the determinant of a $3\times3$ matrix with columns $(\mathbf{a},\mathbf{b},\mathbf{c})$ is: $\det\mathbf{H}= a_xb_yc_z-a_xc_yb_z-b_xa_yc_z+b_xc_ya_z+c_xa_yb_z-c_xb_ya_z$.} of  $\mathbf{H}$: $V=\det\mathbf{H}=\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})$.
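
These transformations can be sketched in a few lines (a pure-Python illustration using nested lists, with the cell vectors stored as the columns of $\mathbf{H}$; names are hypothetical):

```python
def mat_vec(H, s):
    """r = H s : map a scaled (box-space) vector s to real space."""
    return tuple(sum(H[i][j] * s[j] for j in range(3)) for i in range(3))

def det3(H):
    """Cofactor expansion of det H, equal to a . (b x c) for columns a, b, c."""
    return (H[0][0] * (H[1][1] * H[2][2] - H[1][2] * H[2][1])
          - H[0][1] * (H[1][0] * H[2][2] - H[1][2] * H[2][0])
          + H[0][2] * (H[1][0] * H[2][1] - H[1][1] * H[2][0]))
```

For an orthorhombic box with edges $2$, $3$ and $4$, the scaled point $\mathbf{s}=(1/2,1/2,1/2)$ maps to the box centre $(1,1.5,2)$ and $V=\det\mathbf{H}=24$.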

\subsection{Weak-coupling method - Berendsen barostat}\label{sec:nptBeren}
Berendsen et al.~\cite{ber84} proposed a simple technique to control the pressure by coupling to a ``pressure bath''. An extra term is added to the equation of motion to produce a pressure change. The system is made to obey the equation:
\begin{equation}
\de P(t) / \de t = [P_\mathrm{ext} - P(t)]/\tau_P
\end{equation}
\subsubsection{Isotropic}
At each step, the coordinates and box edges are transformed (rescaled) by a factor $\mu$:
\begin{equation}\label{muBeren}
\mu = 1-\frac{\beta\Delta t}{3\tau_P}\left[P_\mathrm{ext}-P(t)\right]
\end{equation}
with $\beta$ the isothermal compressibility; for water, $\beta\sim4.6\times10^{-5}$\,atm$^{-1}$. 
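
The isotropic scaling step can be sketched as follows (an illustrative Python sketch of Eq.~\ref{muBeren}, not the {\sc brahms} code; a cubic box of edge $L$ and hypothetical names are assumed):

```python
def berendsen_mu(dt, tau_P, beta, P_target, P_current):
    """mu = 1 - (beta * dt)/(3 * tau_P) * (P_target - P_current).

    mu < 1 (compression) when the target pressure exceeds the current one.
    """
    return 1.0 - (beta * dt) / (3.0 * tau_P) * (P_target - P_current)

def rescale_box(L, positions, mu):
    """Scale the box edge and all mass-centre coordinates by the same factor mu."""
    return mu * L, [tuple(mu * c for c in r) for r in positions]
```

At each step the same factor $\mu$ is applied to the box edges and to the mass-centre coordinates, so that the scaled coordinates $\mathbf{s}$ are unchanged by the barostat itself.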

\subsubsection{Anisotropic}
For the general case of anisotropic triclinic systems, Equation~\ref{muBeren} becomes a tensorial equation:
\begin{equation}
\bmu = \mathbf{1}-\frac{\beta\Delta t}{3\tau_P}\left[\mathbf{P}_\mathrm{ext}-\mathbf{P}(t)\right]
\end{equation}
Mass centres are scaled as:
\begin{equation}
\mathbf{r}' = \bmu\mathbf{r}
\end{equation}
And the simulation region is scaled as:
\begin{equation}
\mathbf{H}' = \bmu\mathbf{H}
\end{equation}
where $\mathbf{H}$ is the box transformation matrix defined in the previous subsection. 
Berendsen's algorithm only slightly alters the trajectories and is easy to program, but does not generate states in the NPT ensemble.~\cite{allen} %[Sec.~7.5.3]

%A metrix tensor $\mathbf{G}$ can be introduced: \[\mathbf{G}=\mathbf{H}^T\mathbf{H}\] Intersite distance can be computed: \[\mathbf{r}_{ij}^2=\mathbf{s}^T_{ij}\mathbf{G}\mathbf{s}_{ij}\Rightarrow\mathbf{r}_{ij}=\sqrt{\mathbf{s}^T_{ij}\mathbf{G}\mathbf{s}_{ij}}\]  where $\mathbf{s}_{ij}=\mathbf{s}_i-\mathbf{s}_j$  is the scaled vector specifying intersite distance in terms of lattice coordinates~\cite{hernandez01}. The scaled coordinates span the unit cube and periodic images have coordinates $\mathbf{H}(\mathbf{s}+(n_x, n_y, n_z))$~\cite[Sec.~6.2]{rapa}.

To avoid overall cell rotation, the three sub-diagonal elements of $\mathbf{H}$ can be constrained to zero, that is, the $\mathbf{a}$ cell vector is constrained to lie along the $x$-axis and $\mathbf{b}$ is constrained to lie in the $xy$-plane:\cite{procacci97, refson}
\begin{displaymath}
\mathbf{H}=\left(
\begin{array}{ccc}
 a & b\,\cos\gamma & c\,\cos\beta \\
 0 & b\,\sin\gamma & c\,(\cos\alpha-\cos\beta\cos\gamma)/\sin\gamma \\
 0 & 0& (c/\sin\gamma)\sqrt{\sin^2\!\beta\,\sin^2\!\gamma-(\cos\alpha - \cos\beta\cos\gamma)^2}
\end{array}
\right)
\end{displaymath}
 Practically, at each time step the acceleration of those components, $\ddot{\mathbf{H}}_{ij}$, is set to zero (which is equivalent to adding a fictitious opposing force). This technique may also be used to allow uniaxial expansion only.

\subsection{Stochastic velocity rescaling}

Bussi et al.~\cite{bussi09} recently proposed an NPT method which seems to be almost as simple and robust as the weak-coupling scheme while being able to rigorously sample the isothermal-isobaric ensemble. Future work on \brahms will be devoted to the implementation of this algorithm.

\section{Statistical analysis}
The measurement process in MD must undergo rigorous statistical analysis to quantify the errors due to random fluctuations of the properties investigated, and hence to establish the significance of the results.~\cite{allen,rapa} % p. 85 rapa, p.191 A/T
In addition, statistical parameters (such as the variance) are sometimes needed to calculate properties of interest.
%In general, results may be subject to {\em systematic} and {\em statistical} errors. Sources of systematic errors in MD include size-dependence, poor equilibration, etc.: these should be estimated and eliminated where possible. In this section we will focus on statistical error due to random fluctuations in the measurements: under normal circumstances this determines the degree of confidence that can be placed in the results.
From a series of $M$ measurements of a fluctuating property $\mathcal{A}$ in a system at equilibrium, the mean value is:
\begin{equation}
\left\langle \mathcal{A} \right\rangle = \frac{1}{M}\sum_{\mu=1}^M \mathcal{A}_\mu
\end{equation}
and if each measurement $\mathcal{A}_\mu$ is independent, with variance
\begin{equation}
\sigma^2(\mathcal{A})= \frac{1}{M}\sum_\mu (\mathcal{A}_\mu - \left\langle \mathcal{A} \right\rangle )^2 = \left\langle \mathcal{A}^2 \right\rangle - \left\langle \mathcal{A} \right\rangle^2
\end{equation}
then the variance of the mean $\langle \mathcal{A} \rangle$ is:
\begin{equation}
\sigma^2(\langle \mathcal{A} \rangle)= \frac{1}{M}\sigma^2(\mathcal{A})
\end{equation}
and the estimated error in the mean is simply $\sigma(\langle \mathcal{A} \rangle)$.
In MD simulations, the variance is underestimated because successive measurements are not independent but (highly) correlated. Fortunately, the simple method of {\em block averaging} can be used to tackle this issue. In particular, assuming the $\mathcal{A}_\mu$ to be correlated, if averages are evaluated over blocks of successive values, then as the block length increases the block averages become decreasingly correlated; eventually, once the block length exceeds the (unknown) longest correlation time present in the data, the block averages will be statistically independent.
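
The block-averaging procedure can be sketched as follows (an illustrative Python sketch using the $1/M$ variance of the definitions above; in practice one increases the block length until the error estimate plateaus):

```python
import math

def block_average_error(data, block_length):
    """Estimated error of the mean from non-overlapping blocks of a given length.

    sigma^2(<A>) is estimated as sigma^2(block means) / n_blocks; any trailing
    samples that do not fill a complete block are discarded.
    """
    n_blocks = len(data) // block_length
    blocks = [sum(data[k * block_length:(k + 1) * block_length]) / block_length
              for k in range(n_blocks)]
    mean = sum(blocks) / n_blocks
    var = sum((b - mean) ** 2 for b in blocks) / n_blocks
    return math.sqrt(var / n_blocks)
```

Plotting this estimate against the block length, the value at the plateau gives the statistical error of the mean corrected for correlations.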

\paragraph{Standard deviation} 
The standard deviation, or root-mean-square deviation (RMSD), $\sigma(\mathcal{A})$ is simply the square root of the variance: %~\cite[sec~2.3]{allen}:
%\begin{equation}
%\sigma(\mathcal{A})=\sqrt{\frac{1}{M}\left[
%  \sum_{\mu=1}^M \mathcal{A}_\mu^2 - \left(\sum_{\mu=1}^M \mathcal{A}_\mu\right)^2 \right]}
%\end{equation}
\begin{equation}
\sigma(\mathcal{A})=\sqrt{\frac{1}{M} \sum_{\mu=1}^M \mathcal{A}_\mu^2 - \left(\frac{1}{M}\sum_{\mu=1}^M \mathcal{A}_\mu\right)^2 }
\end{equation}
In general the standard deviation gives an indication of the spread of the data: in most distributions, the bulk of the data lies within two standard deviations of the mean, i.e., within the interval $[ \left\langle \mathcal{A} \right\rangle -2\sigma(\mathcal{A} ), \left\langle \mathcal{A} \right\rangle +2\sigma(\mathcal{A} )]$.
The RMSD can also be employed to calculate physical properties, such as the specific heat or the compressibility modulus.




