\chapter{\label{chap:intro}INTRODUCTION AND BACKGROUND}

\section{Background on the Problem\label{In:sec:pro}}
Phospholipid molecules are the primary topic of this dissertation
because of their critical role as the foundation of biological
membranes. Lipids, when dispersed in water, self-assemble into a
number of topologically distinct bilayer structures. The phase
behavior of lipid bilayers has been explored
experimentally~\cite{Cevc87}; however, a complete understanding of the
mechanism and driving forces behind the various phases has not been
achieved.

\subsection{Ripple Phase\label{In:ssec:ripple}}
The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named for the
periodic buckling of the membrane, is an intermediate phase which
develops either from heating the gel phase $L_{\beta'}$ or cooling
the fluid phase $L_\alpha$. A sketch is shown in
figure~\ref{Infig:phaseDiagram}.~\cite{Cevc87}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/inPhaseDiagram.pdf}
\caption{A phase diagram of a lipid bilayer. With increasing
temperature, the bilayer can go through a gel ($L_{\beta'}$), ripple
($P_{\beta'}$) to fluid ($L_\alpha$) phase transition.}
\label{Infig:phaseDiagram}
\end{figure}
Most structural information on the ripple phase has been obtained by
X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
electron microscopy (FFEM).~\cite{Copeland80,Meyer96} The X-ray
diffraction work by Katsaras {\it et al.} showed that a rich phase
diagram exhibiting both {\it asymmetric} and {\it symmetric} ripples
is possible for lecithin bilayers.~\cite{Katsaras00} Recently,
Kaasgaard {\it et al.} used atomic force microscopy (AFM) to observe
ripple phase morphology in bilayers supported on
mica.~\cite{Kaasgaard03}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/inRipple.pdf}
\caption{The experimentally observed ripple phase. The top image is
obtained by X-ray diffraction~\cite{Sun96}, and the bottom one is
observed by AFM.~\cite{Kaasgaard03}}
\label{Infig:ripple}
\end{figure}
Figure~\ref{Infig:ripple} shows the ripple phase observed by X-ray
diffraction and AFM. The experimental results provide strong support
for a 2-dimensional triangular packing lattice of the lipid molecules
within the ripple phase. This is a notable change from the observed
lipid packing within the gel phase,~\cite{Cevc87} although Tenchov
{\it et al.} have recently observed near-hexagonal packing in some
phosphatidylcholine (PC) gel phases.~\cite{Tenchov2001} However, the
physical mechanism for the formation of the ripple phase has never
been explained, and the microscopic structure of the ripple phase has
never been elucidated by experiments. Computational simulation is a
perfect tool to study the microscopic properties of a
system. However, the large length scale of the ripple structure and
the long time scale of ripple formation are crucial obstacles to
performing the actual work. The principal ideas explored in this
dissertation are attempts to break the computational task up by:
\begin{itemize}
\item Simplifying the lipid model.
\item Improving the algorithms for integrating the equations of motion.
\end{itemize}
In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
dipole spin model and a coarse-grained molecular scale model to
perform Monte Carlo and Molecular Dynamics simulations,
respectively, and in chapter~\ref{chap:ld}, we develop a Langevin
Dynamics algorithm which excludes the explicit solvent to improve the
efficiency of the simulations.

\subsection{Lattice Model\label{In:ssec:model}}
The gel-like characteristic (relatively immobile molecules) exhibited
by the ripple phase makes it feasible to apply a lattice model to
study the system. The popular $2$ dimensional lattice models, {\it
i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it frustration}
on triangular lattices. The Hamiltonians of these systems are given by
\begin{equation}
H =
\begin{cases}
-J \sum_n \sum_{n'} s_n s_{n'} & \text{Ising}, \\
-J \sum_n \sum_{n'} \vec s_n \cdot \vec s_{n'} & \text{$X-Y$ and
Heisenberg},
\end{cases}
\end{equation}
where $J$ has a nonzero value only when spins $s_n$ ($\vec s_n$) and
$s_{n'}$ ($\vec s_{n'}$) are nearest neighbors. When $J > 0$, spins
prefer an aligned structure, and if $J < 0$, spins prefer an
anti-aligned structure.

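To make the frustration argument concrete, the short Python sketch
below (an illustration added for clarity, not part of the simulation
code developed in later chapters; the coupling strength is arbitrary)
enumerates all $2^3$ spin states of a single antiferromagnetic
($J < 0$) Ising triangle and counts the degenerate lowest-energy
states:
\begin{verbatim}
# Minimal sketch: enumerate the spin states of one Ising triangle with
# antiferromagnetic coupling (J < 0) to expose the degenerate ground
# states responsible for frustration.
from itertools import product

J = -1.0                            # antiferromagnetic coupling (illustrative)
bonds = [(0, 1), (1, 2), (0, 2)]    # nearest-neighbor bonds of the triangle

energies = {}
for spins in product((-1, +1), repeat=3):
    E = -J * sum(spins[i] * spins[j] for i, j in bonds)
    energies[spins] = E

E_min = min(energies.values())
ground_states = [s for s, E in energies.items() if E == E_min]
print("lowest energy:", E_min)                           # -1.0 in units of |J|
print("degenerate ground states:", len(ground_states))   # 6 of the 8 states
\end{verbatim}
No assignment of the three spins can satisfy all three anti-aligned
bonds at once, which is precisely the frustration illustrated in
figure~\ref{Infig:frustration}.
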
\begin{figure}
\centering
\includegraphics[width=3in]{./figures/inFrustration.pdf}
\caption{Frustration on a triangular lattice; the spins and dipoles
are represented by arrows. The multiple local minima in the energy
frustrate the choice of direction for the spins and dipoles.}
\label{Infig:frustration}
\end{figure}
The spins in figure~\ref{Infig:frustration} illustrate the
frustration for $J < 0$ on a triangular lattice. There are multiple
local minimum energy states which are independent of the direction of
the spin at the top of the triangle; the infinite number of
possibilities for the packing of the spins induces what is known as
``complete regular frustration'', which leads to disordered low
temperature phases. A similar situation arises for the dipoles on a
hexagonal lattice, shown by the dipoles in
figure~\ref{Infig:frustration}. In this circumstance, the dipoles want
to be aligned; however, due to long-wavelength fluctuations, the
aligned state becomes unstable at low temperature, vortices form, and
multiple local minimum energy states result. The dipole at the center
of the hexagonal lattice is frustrated.

The lack of translational degrees of freedom in lattice models
prevents their utilization in models for surface buckling. In
chapter~\ref{chap:mc}, a modified lattice model is introduced to
tackle this specific situation.

\section{Overview of Classical Statistical Mechanics\label{In:sec:SM}}
Statistical mechanics provides a way to calculate the macroscopic
properties of a system from the microscopic states sampled in
computational simulations. This section serves as a brief introduction
to key concepts of classical statistical mechanics that we use in
this dissertation. Tolman gives an excellent introduction to the
principles of statistical mechanics.~\cite{Tolman1979} A large part of
section~\ref{In:sec:SM} will follow Tolman's notation.

\subsection{Ensembles\label{In:ssec:ensemble}}
In classical mechanics, the state of the system is completely
described by the positions and momenta of all particles. If we have an
$N$ particle system, there are $6N$ coordinates ($3N$ positions $(q_1,
q_2, \ldots, q_{3N})$ and $3N$ momenta $(p_1, p_2, \ldots, p_{3N})$)
to define the instantaneous state of the system. Each set of
the $6N$ coordinates can be considered as a unique point in a
$6N$-dimensional phase space.
If we normalize $\rho$ to unity,
\begin{equation}
1 = \int d \vec q~^N \int d \vec p~^N \rho,
\label{Ineq:normalized}
\end{equation}
then the value of $\rho$ gives the probability of finding the system
in a unit volume in phase space.

Liouville's theorem describes the change in the density $\rho$ with
time. The number of representative points in a given volume of phase
space at any instant can be written as:
\begin{equation}
\label{Ineq:deltaN}
\delta N = \rho~\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
To calculate the change in the number of representative points in this
volume, let us consider a simple case: the change in the number
of representative points along the $q_1$ axis. The rate at which
representative points enter the volume at $q_1$ per unit time is:
\begin{equation}
\label{Ineq:deltaNatq1}
\rho~\dot q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
\end{equation}
and the rate at which representative points leave the volume at
another position $q_1 + \delta q_1$ is:
\begin{equation}
\label{Ineq:deltaNatq1plusdeltaq1}
\left( \rho + \frac{\partial \rho}{\partial q_1} \delta q_1 \right)\left(\dot q_1 +
\frac{\partial \dot q_1}{\partial q_1} \delta q_1 \right)\delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
Here the higher order differentials have been neglected. So the change in
the number of representative points is the difference between
eq.~\ref{Ineq:deltaNatq1} and eq.~\ref{Ineq:deltaNatq1plusdeltaq1},
which gives us:
\begin{equation}
\label{Ineq:deltaNatq1axis}
-\left(\rho \frac{\partial {\dot q_1}}{\partial q_1} + \frac{\partial {\rho}}{\partial q_1} \dot q_1 \right)\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
\end{equation}
where, again, higher order differentials are neglected. If we sum over all the
axes in the phase space, we can get the change in the number of
representative points in a given volume with time:
\begin{equation}
\label{Ineq:deltaNatGivenVolume}
\frac{d(\delta N)}{dt} = -\sum_{i=1}^N \left[\rho \left(\frac{\partial
{\dot q_i}}{\partial q_i} + \frac{\partial
{\dot p_i}}{\partial p_i}\right) + \left( \frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i\right)\right]\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
From Hamilton's equations of motion,
\begin{equation}
\frac{\partial {\dot q_i}}{\partial q_i} = - \frac{\partial
{\dot p_i}}{\partial p_i},
\label{Ineq:canonicalFormOfEquationOfMotion}
\end{equation}
the first term on the right side of
eq.~\ref{Ineq:deltaNatGivenVolume} cancels out. If both sides of
eq.~\ref{Ineq:deltaNatGivenVolume} are divided by $\delta q_1 \delta q_2
\ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N$, then we
arrive at Liouville's theorem:
\begin{equation}
\left( \frac{\partial \rho}{\partial t} \right)_{q, p} = -\sum_{i} \left(
\frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right).
\label{Ineq:simpleFormofLiouville}
\end{equation}
This is the basis of statistical mechanics. If we move the right
side of equation~\ref{Ineq:simpleFormofLiouville} to the left, we
obtain
\begin{equation}
\left( \frac{\partial \rho}{\partial t} \right)_{q, p} + \sum_{i} \left(
\frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
= 0.
\label{Ineq:anotherFormofLiouville}
\end{equation}
It is easy to note that the left side of
equation~\ref{Ineq:anotherFormofLiouville} is the total derivative of
$\rho$ with respect to $t$, which means
\begin{equation}
\frac{d \rho}{dt} = 0,
\label{Ineq:conservationofRho}
\end{equation}
and the rate of density change is zero in the neighborhood of any
selected moving representative point in the phase space.

The condition of the ensemble is determined by the density
distribution. If we consider the density distribution to be a
function of $q$ and $p$ only, then the rate of change of the phase
space density in the neighborhood of all representative points in
phase space is zero,
\begin{equation}
\left( \frac{\partial \rho}{\partial t} \right)_{q, p} = 0.
\label{Ineq:statEquilibrium}
\end{equation}
We may conclude that the ensemble is in {\it statistical equilibrium}. An
ensemble in statistical equilibrium means the system is also in
macroscopic equilibrium. If $\left( \frac{\partial \rho}{\partial t}
\right)_{q, p} = 0$, then
\begin{equation}
\sum_{i} \left(
\frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
= 0.
\label{Ineq:constantofMotion}
\end{equation}
If $\rho$ is a function only of some constant of the motion, then $\rho$ is
independent of time. For a conservative system, the energy of the
system is one of the constants of the motion. There are many
thermodynamically relevant ensembles: when the density distribution is
constant everywhere in the phase space,
\begin{equation}
\rho = \mathrm{const.}
\label{Ineq:uniformEnsemble}
\end{equation}
the ensemble is called the {\it uniform ensemble}.

\subsubsection{The Microcanonical Ensemble\label{In:sssec:microcanonical}}
Another useful ensemble is the {\it microcanonical ensemble}, for
which:
\begin{equation}
\rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)}
\label{Ineq:microcanonicalEnsemble}
\end{equation}
where $\Sigma(N, V, E)$ is a normalization constant parameterized by
$N$, the total number of particles, $V$, the total physical volume, and
$E$, the total energy. $H(q^N, p^N)$ is the
Hamiltonian of the system. The Gibbs entropy is defined as
\begin{equation}
S = - k_B \int d \vec q~^N \int d \vec p~^N \rho \ln [C^N \rho],
\label{Ineq:gibbsEntropy}
\end{equation}
where $k_B$ is the Boltzmann constant and $C^N$ is a number which
makes the argument of $\ln$ dimensionless. In this case, $C^N$ is the
total phase space volume of one state. The entropy of a microcanonical
ensemble is given by
\begin{equation}
S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
\label{Ineq:entropy}
\end{equation}

\subsubsection{The Canonical Ensemble\label{In:sssec:canonical}}
If the density distribution $\rho$ is given by
\begin{equation}
\rho = \frac{1}{Z_N}e^{-H(q^N, p^N) / k_B T},
\label{Ineq:canonicalEnsemble}
\end{equation}
the ensemble is known as the {\it canonical ensemble}. Here,
\begin{equation}
Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N e^{-H(q^N, p^N) / k_B T},
\label{Ineq:partitionFunction}
\end{equation}
which is also known as the canonical {\it partition function}. $\Gamma$
indicates that the integral is over all phase space. In the canonical
ensemble, $N$, the total number of particles, $V$, the total volume, and
$T$, the temperature, are constants. The states with the lowest
energies hold the largest populations. According to the maximum entropy
principle, at equilibrium the entropy $S$ is maximized, implying that
\begin{equation}
\begin{array}{ccc}
\delta S = 0 & \mathrm{and} & \delta^2 S < 0.
\end{array}
\label{Ineq:maximumPrinciple}
\end{equation}
From Eq.~\ref{Ineq:maximumPrinciple} and two constraints on the
canonical ensemble, {\it i.e.}, that total probability and average energy
must be conserved, the partition function can be shown to be
equivalent to
\begin{equation}
Z_N = e^{-A/k_B T},
\label{Ineq:partitionFunctionWithFreeEnergy}
\end{equation}
where $A$ is the Helmholtz free energy. The significance of
Eq.~\ref{Ineq:entropy} and~\ref{Ineq:partitionFunctionWithFreeEnergy} is
that they serve as a connection between macroscopic properties of the
system and the distribution of microscopic states.

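As a concrete illustration of how the canonical distribution weights
microscopic states, the following Python sketch (purely illustrative;
the three energy levels and the temperature are invented for the
example) evaluates a discrete analogue of the partition function and
the corresponding average energy:
\begin{verbatim}
# Minimal sketch: Boltzmann-weighted average over a handful of discrete
# states, the discrete analogue of Z_N and <E> for the canonical ensemble.
import math

kB_T = 1.0                      # k_B T in reduced units (illustrative)
energies = [0.0, 1.0, 2.5]      # invented energy levels for the example

weights = [math.exp(-E / kB_T) for E in energies]
Z = sum(weights)                              # partition function
populations = [w / Z for w in weights]        # canonical populations
E_avg = sum(p * E for p, E in zip(populations, energies))

print("Z =", Z)
print("populations:", populations)            # lowest energy dominates
print("<E> =", E_avg)
\end{verbatim}
The lowest-energy state carries the largest population, consistent
with the discussion above.
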
There is an implicit assumption on which our arguments have been based so
far. A representative point in phase space is equally likely to be
found in any energetically allowed region. In other words, all
energetically accessible states are represented equally, and the
probability of finding the system in any of the accessible states is
equal. This is called the principle of equal {\it a priori}
probabilities.

\subsection{Statistical Averages\label{In:ssec:average}}
Given a density distribution $\rho$ in phase space, the average
of any quantity ($F(q^N, p^N)$) which depends on the coordinates
($q^N$) and the momenta ($p^N$) for all the systems in the ensemble
can be calculated based on the definition shown in
Eq.~\ref{Ineq:statAverage1},
\begin{equation}
\langle F(q^N, p^N) \rangle = \frac{\int d \vec q~^N \int d \vec p~^N
F(q^N, p^N) \rho}{\int d \vec q~^N \int d \vec p~^N \rho}.
\label{Ineq:statAverage1}
\end{equation}
Since the density distribution $\rho$ is normalized to unity, the mean
value of $F(q^N, p^N)$ simplifies to
\begin{equation}
\langle F(q^N, p^N) \rangle = \int d \vec q~^N \int d \vec p~^N F(q^N,
p^N) \rho,
\label{Ineq:statAverage2}
\end{equation}
called the {\it ensemble average}. However, the quantity is often
averaged over a finite time in real experiments,
\begin{equation}
\langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{t_0}^{t_0+T} F[q^N(t), p^N(t)] dt.
\label{Ineq:timeAverage1}
\end{equation}
Usually this time average is independent of $t_0$ in statistical
mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes
\begin{equation}
\langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{0}^{T} F[q^N(t), p^N(t)] dt
\label{Ineq:timeAverage2}
\end{equation}
for an infinite time interval.

\subsubsection{Ergodicity\label{In:sssec:ergodicity}}
The {\it ergodic hypothesis}, an important hypothesis governing modern
computer simulations, states that the system will eventually pass
arbitrarily close to any point that is energetically accessible in
phase space. Mathematically, this leads to
\begin{equation}
\langle F(q^N, p^N) \rangle = \langle F(q^N, p^N) \rangle_t.
\label{Ineq:ergodicity}
\end{equation}
Eq.~\ref{Ineq:ergodicity} validates Molecular Dynamics as a form of
averaging for sufficiently ergodic systems. Monte Carlo may also be
used to obtain ensemble averages of a quantity, which are related to
the time averages measured in experiments.

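In a simulation, the time average of Eq.~\ref{Ineq:timeAverage2} is
evaluated as a discrete sum over a stored trajectory. A minimal Python
sketch is given below (the trajectory is a synthetic stand-in for
simulation output, and the property $F$ is chosen arbitrarily):
\begin{verbatim}
# Minimal sketch: discrete time average of a property F over a stored
# trajectory, the practical analogue of the time-average definition above.
import random

random.seed(1)
# Synthetic "trajectory": one coordinate recorded at equally spaced times.
trajectory = [random.gauss(0.0, 1.0) for _ in range(10000)]

def F(q):
    """Property of interest evaluated from a configuration (here q**2)."""
    return q * q

time_average = sum(F(q) for q in trajectory) / len(trajectory)
print("<F>_t =", time_average)   # close to 1.0 for unit-variance samples
\end{verbatim}
For a sufficiently ergodic system, this time average converges to the
corresponding ensemble average.
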
\subsection{Correlation Functions\label{In:ssec:corr}}
Thermodynamic properties can be computed by equilibrium statistical
mechanics. On the other hand, {\it time correlation functions} are a
powerful tool for understanding the evolution of a dynamical
system. Imagine that a property $A(q^N, p^N, t)$, a function of the
coordinates $q^N$ and momenta $p^N$, has an initial value at $t_0$, and
at a later time $t_0 + \tau$ this value has changed. If $\tau$ is very
small, the change in the value is minor, and the later value of
$A(q^N, p^N, t_0 + \tau)$ is correlated to its initial value. However,
when $\tau$ is large, this correlation is lost. A time correlation
function measures this relationship and is defined
by~\cite{Berne90}
\begin{equation}
C_{AA}(\tau) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarrow
\infty}
\frac{1}{T} \int_{0}^{T} dt A(t) A(t + \tau).
\label{Ineq:autocorrelationFunction}
\end{equation}
Eq.~\ref{Ineq:autocorrelationFunction} is the correlation function of
a single variable, called an {\it autocorrelation function}. The
definition of the correlation function for two different variables is
similar to that of the autocorrelation function,
\begin{equation}
C_{AB}(\tau) = \langle A(0)B(\tau) \rangle = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
\label{Ineq:crosscorrelationFunction}
\end{equation}
and is called the {\it cross correlation function}.

We know from the ergodic hypothesis that there is a relationship
between the time average and the ensemble average. We can put the
correlation functions in a classical mechanics form,
\begin{equation}
C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N(t), p^N(t)]
A[q^N(t+\tau), p^N(t+\tau)] \rho(q, p)
\label{Ineq:autocorrelationFunctionCM}
\end{equation}
and
\begin{equation}
C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N(t), p^N(t)]
B[q^N(t+\tau), p^N(t+\tau)] \rho(q, p)
\label{Ineq:crosscorrelationFunctionCM}
\end{equation}
as the autocorrelation function and the cross correlation function,
respectively. $\rho(q, p)$ is the density distribution at equilibrium.
The correlation function typically decays as a
single exponential,
\begin{equation}
C(t) \sim e^{-t / \tau_r},
\label{Ineq:relaxation}
\end{equation}
where $\tau_r$ is known as the relaxation time, which describes the rate of
the decay.
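
In practice, the autocorrelation function is estimated from a
discretely sampled time series. The Python sketch below (illustrative
only; the series is a synthetic correlated signal, and the
normalization to $C_{AA}(0) = 1$ is one common convention) computes the
estimate directly from the definition:
\begin{verbatim}
# Minimal sketch: estimate a normalized autocorrelation function
# C(tau) = <A(0)A(tau)> from a discrete, mean-subtracted time series.
import random

random.seed(2)
# Synthetic correlated series: an AR(1) process as a stand-in for A(t).
a, series, x = 0.9, [], 0.0
for _ in range(20000):
    x = a * x + random.gauss(0.0, 1.0)
    series.append(x)

mean = sum(series) / len(series)
A = [s - mean for s in series]

def autocorrelation(A, tau):
    n = len(A) - tau
    num = sum(A[t] * A[t + tau] for t in range(n)) / n
    var = sum(v * v for v in A) / len(A)
    return num / var

for tau in (0, 1, 5, 10, 20):
    print(tau, autocorrelation(A, tau))   # decays roughly like a**tau
\end{verbatim}
Fitting the resulting decay to Eq.~\ref{Ineq:relaxation} yields the
relaxation time $\tau_r$.
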
applied to the canonical ensemble, a Boltzmann-weighted ensemble, in
which $N$, the total number of particles, $V$, the total volume, and $T$, the
temperature, are constants. The average energy is given by substituting
Eq.~\ref{Ineq:canonicalEnsemble} into Eq.~\ref{Ineq:statAverage2},
\begin{equation}
\langle E \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N E e^{-H(q^N, p^N) / k_B T}.
\label{Ineq:energyofCanonicalEnsemble}
\end{equation}
Other properties of the system follow in the same way. The Hamiltonian is the
sum of the kinetic energy $K(p^N)$ as a function of the momenta and the
potential energy $U(q^N)$ as a function of the positions,
\begin{equation}
H(q^N, p^N) = K(p^N) + U(q^N).
\label{Ineq:hamiltonian}
\end{equation}
If the property $A$ is only a function of position ($A = A(q^N)$),
the mean value of $A$ reduces to
\begin{equation}
\langle A \rangle = \frac{\int d \vec q~^N \int d \vec p~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N \int d \vec p~^N e^{-U(q^N) / k_B T}},
\label{Ineq:configurationIntegral}
\end{equation}
where the kinetic energy $K(p^N)$ has been factored out in
Eq.~\ref{Ineq:configurationIntegral}. $\langle A
\rangle$ is now a configuration integral, and
Eq.~\ref{Ineq:configurationIntegral} is equivalent to
\begin{equation}
\langle A \rangle = \int d \vec q~^N A \rho(q^N).
\label{Ineq:configurationAve}
\end{equation}

In a Monte Carlo simulation of the canonical ensemble, the probability of
finding the system in a state $s$ is $\rho_s$. The change of this
probability with time is given by
\begin{equation}
\frac{d \rho_s}{dt} = \sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ],
\label{Ineq:timeChangeofProb}
\end{equation}
where $w_{ss'}$ is the transition probability of going from state $s$
to state $s'$. Since $\rho_s$ is independent of time at equilibrium,
\begin{equation}
\frac{d \rho_{s}^{equilibrium}}{dt} = 0,
\label{Ineq:equiProb}
\end{equation}
which means $\sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ]$ is $0$
for all $s'$. So
\begin{equation}
\frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
\label{Ineq:relationshipofRhoandW}
\end{equation}
If
\begin{equation}
\frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T},
\label{Ineq:conditionforBoltzmannStatistics}
\end{equation}
then
\begin{equation}
\frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T}.
\label{Ineq:satisfyofBoltzmannStatistics}
\end{equation}
Eq.~\ref{Ineq:satisfyofBoltzmannStatistics} implies that
$\rho^{equilibrium}$ satisfies Boltzmann statistics. An algorithm
which shows how a Monte Carlo simulation generates transition probabilities
governed by Eq.~\ref{Ineq:conditionforBoltzmannStatistics} is outlined as follows:
\begin{enumerate}
\item\label{Initm:oldEnergy} Choose a particle randomly and calculate its energy.
\item\label{Initm:newEnergy} Make a random displacement of the particle and
calculate the new energy.
\begin{itemize}
\item Keep the new configuration and return to step~\ref{Initm:oldEnergy} if the energy
goes down.
\item Pick a random number in $[0,1]$ if the energy goes up.
\begin{itemize}
\item Keep the new configuration and return to
step~\ref{Initm:oldEnergy} if the random number is smaller than
$e^{-(U_{new} - U_{old}) / k_B T}$.
\item Keep the old configuration and return to
step~\ref{Initm:oldEnergy} if the random number is larger than
$e^{-(U_{new} - U_{old}) / k_B T}$.
\end{itemize}
\end{itemize}
\item\label{Initm:accumulateAvg} Accumulate the average after it converges.
\end{enumerate}
It is important to notice that the old configuration has to be sampled
again if it is kept.
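
A minimal Python sketch of this procedure is given below for a single
particle in a one-dimensional harmonic potential (the potential, step
size, and temperature are illustrative choices, not the lipid models
used in later chapters). The current configuration is accumulated at
every step, so a rejected move counts the old configuration again:
\begin{verbatim}
# Minimal sketch: Metropolis Monte Carlo for one particle in a 1-D
# harmonic potential U(q) = 0.5*q**2 (reduced units, k_B T = 1).
import math
import random

random.seed(3)
kB_T, step, n_steps = 1.0, 0.5, 100000

def U(q):
    return 0.5 * q * q

q, U_old = 0.0, U(0.0)
sum_U, samples = 0.0, 0
for _ in range(n_steps):
    q_new = q + random.uniform(-step, step)   # random displacement
    U_new = U(q_new)
    # Accept downhill moves; accept uphill moves with Boltzmann probability.
    if U_new <= U_old or random.random() < math.exp(-(U_new - U_old) / kB_T):
        q, U_old = q_new, U_new
    # The current (possibly unchanged) configuration is always sampled.
    sum_U += U_old
    samples += 1

print("<U> =", sum_U / samples)   # should approach 0.5*k_B*T = 0.5
\end{verbatim}
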
defined by
\begin{equation}
\frac{1}{2} k_B T = \langle \frac{1}{2} m v_\alpha^2 \rangle,
\label{Ineq:temperature}
\end{equation}
here $m$ is the mass of the particle and $v_\alpha$ is the $\alpha$
component of the velocity of the particle. The right side of
Eq.~\ref{Ineq:temperature} is the average kinetic energy per degree of
freedom of the system. A simple Molecular Dynamics simulation scheme
is:~\cite{Frenkel1996}
\begin{enumerate}
\item\label{Initm:initialize} Assign the initial positions and momenta
of the particles in the system.
\item\label{Initm:calcForce} Calculate the forces.
\item\label{Initm:equationofMotion} Integrate the equations of motion.
\begin{itemize}
\item Return to step~\ref{Initm:calcForce} if equilibrium has
not been achieved.
\item Go to step~\ref{Initm:calcAvg} if equilibrium has been
achieved.
\end{itemize}
\item\label{Initm:calcAvg} Compute the quantities we are interested in.
\end{enumerate}
The initial positions of the particles are chosen so that there is no
overlap between particles. The initial velocities are first
distributed randomly to the particles, then shifted so that the total
momentum of the system is $0$, and finally scaled to the desired temperature
of the simulation according to Eq.~\ref{Ineq:temperature}.

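A short Python sketch of this initialization step is given below (the
number of particles, masses, and target temperature are placeholders,
and reduced units with $k_B = 1$ are assumed):
\begin{verbatim}
# Minimal sketch: assign random velocities, remove the net momentum,
# and rescale to the target temperature (reduced units, k_B = 1).
import random

random.seed(4)
N, mass, T_target = 100, 1.0, 2.0
v = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(N)]

# Shift so that the total momentum is zero.
for k in range(3):
    v_cm = sum(vi[k] for vi in v) / N
    for vi in v:
        vi[k] -= v_cm

# Scale so that <(1/2) m v_alpha^2> = (1/2) k_B T for each component.
kinetic = sum(0.5 * mass * (vi[0]**2 + vi[1]**2 + vi[2]**2) for vi in v)
T_now = 2.0 * kinetic / (3.0 * N)      # instantaneous temperature
scale = (T_target / T_now) ** 0.5
v = [[scale * c for c in vi] for vi in v]
\end{verbatim}
The instantaneous temperature here is computed from the average
kinetic energy per degree of freedom, as in Eq.~\ref{Ineq:temperature}.
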
The core of a Molecular Dynamics simulation consists of steps~\ref{Initm:calcForce}
and~\ref{Initm:equationofMotion}. The calculation of the forces
often involves considerable effort; it is the most time consuming step
in the Molecular Dynamics scheme. The forces are
evaluated from the potential as
\begin{equation}
f(q) = - \frac{\partial U(q)}{\partial q},
\label{Ineq:force}
\end{equation}
where $U(q)$ is the potential of the system. Once the forces are computed,
the positions and velocities are updated by integrating Newton's equations
of motion,
\begin{equation}
f(q) = \frac{dp}{dt} = m \frac{dv}{dt}.
\label{Ineq:newton}
\end{equation}
An example of an integration algorithm is the Verlet algorithm, which
is one of the best algorithms for integrating Newton's equations of
motion. Expanding the position in a Taylor series gives
\begin{equation}
q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 +
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4)
\label{Ineq:verletFuture}
\end{equation}
for a later time $t+\Delta t$, and
\begin{equation}
q(t-\Delta t)= q(t) - v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 -
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4),
\label{Ineq:verletPrevious}
\end{equation}
for a previous time $t-\Delta t$. The sum of
Eq.~\ref{Ineq:verletFuture} and~\ref{Ineq:verletPrevious} gives
\begin{equation}
q(t+\Delta t)+q(t-\Delta t) =
2q(t) + \frac{f(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4),
\label{Ineq:verletSum}
\end{equation}
so the new position can be expressed as
\begin{equation}
q(t+\Delta t) \approx
2q(t) - q(t-\Delta t) + \frac{f(t)}{m}\Delta t^2.
\label{Ineq:newPosition}
\end{equation}
The higher order terms in $\Delta t$ have been omitted.

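The update of Eq.~\ref{Ineq:newPosition} can be turned into a working
integrator in only a few lines. The Python sketch below propagates a
one-dimensional harmonic oscillator (the force law, mass, and time
step are illustrative placeholders):
\begin{verbatim}
# Minimal sketch: position-Verlet propagation of a 1-D harmonic
# oscillator using q(t+dt) = 2 q(t) - q(t-dt) + (f(t)/m) dt^2.
m, k, dt, n_steps = 1.0, 1.0, 0.01, 1000

def force(q):
    return -k * q

q_old, q = 1.0, 1.0        # q(t-dt) = q(t): the particle starts (nearly) at rest
for _ in range(n_steps):
    q_new = 2.0 * q - q_old + (force(q) / m) * dt * dt
    q_old, q = q, q_new

print("q after", n_steps * dt, "time units:", q)  # approximately cos(10)
\end{verbatim}
Because the update uses $q(t)$ and $q(t-\Delta t)$, only the previous
position needs to be stored; velocities, if needed, can be estimated
by finite differences.
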
Hamiltonian of such a system is written as
\begin{equation}
H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
\label{Ineq:hamiltonianofCoupling}
\end{equation}
where $H_B$ is the Hamiltonian of the bath, which is given by
\begin{equation}
H_B = \sum_{\alpha = 1}^{N} \left\{ \frac{p_\alpha^2}{2m_\alpha} +
\frac{1}{2} m_\alpha \omega_\alpha^2 q_\alpha^2\right\},
\label{Ineq:hamiltonianofBath}
\end{equation}
where $\alpha$ runs over all degrees of freedom of the bath, $\omega_\alpha$ is the
bath frequency, and $\Delta U(q)$ is the bilinear coupling given by
\begin{equation}
\Delta U = -\sum_{\alpha = 1}^{N} g_\alpha q_\alpha q,
\label{Ineq:systemBathCoupling}
\end{equation}
where $g_\alpha$ is the coupling constant. By solving Hamilton's equations
of motion, the {\it Generalized Langevin Equation} for this system is
derived as
\begin{equation}
m \ddot q = -\frac{\partial W(q)}{\partial q} - \int_0^t \xi(t') \dot q(t-t')dt' + R(t),
\label{Ineq:gle}
\end{equation}
with the potential of mean force,
\begin{equation}
W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m_\alpha
\omega_\alpha^2}q^2,
\label{Ineq:meanForce}
\end{equation}
which depends only on the coordinates of the solute particles, the {\it
friction kernel},
\begin{equation}
\xi(t) = \sum_{\alpha = 1}^N \frac{g_\alpha^2}{m_\alpha
\omega_\alpha^2} \cos(\omega_\alpha t),
\label{Ineq:xiforGLE}
\end{equation}
and the random force,
\begin{equation}
R(t) = \sum_{\alpha = 1}^N g_\alpha \left[ \left( q_\alpha(0)-\frac{g_\alpha}{m_\alpha
\omega_\alpha^2}q(0)\right) \cos(\omega_\alpha t) + \frac{\dot
q_\alpha(0)}{\omega_\alpha} \sin(\omega_\alpha t) \right],
\label{Ineq:randomForceforGLE}
\end{equation}
which depends only on the initial conditions. The relationship between the
friction kernel $\xi(t)$ and the random force $R(t)$,
\begin{equation}
\xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle,
\label{Ineq:relationshipofXiandR}
\end{equation}
follows from their definitions. In the Langevin limit, the friction is treated
as static, which means
\begin{equation}
\xi(t) = 2 \xi_0 \delta(t).
\label{Ineq:xiofStaticFriction}
\end{equation}
Substituting Eq.~\ref{Ineq:xiofStaticFriction} into Eq.~\ref{Ineq:gle}
yields the {\it Langevin Equation},
\begin{equation}
m \ddot q = -\frac{\partial U(q)}{\partial q} - \xi_0 \dot q(t) + R(t).
\label{Ineq:langevinEquation}
\end{equation}
The application of the Langevin Equation to dynamics simulations is discussed
in chapter~\ref{chap:ld}.
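
As a preview of chapter~\ref{chap:ld}, the Langevin Equation can be
integrated with a simple finite-difference scheme in which the random
force is drawn at every step with a variance consistent with
Eq.~\ref{Ineq:relationshipofXiandR} in the static-friction limit. The
Python sketch below is an Euler-type discretization for an
illustrative one-dimensional harmonic potential, not the algorithm
developed in chapter~\ref{chap:ld}:
\begin{verbatim}
# Minimal sketch: Euler-type integration of the Langevin Equation
# m dv/dt = -dU/dq - xi0*v + R(t) for a 1-D harmonic oscillator.
# The discrete random force variance follows the fluctuation-dissipation
# relation <R^2> = 2 xi0 k_B T / dt (reduced units).
import math
import random

random.seed(5)
m, k, xi0, kB_T, dt, n_steps = 1.0, 1.0, 1.0, 1.0, 0.005, 200000

def force(q):
    return -k * q

q, v, sum_v2 = 1.0, 0.0, 0.0
for _ in range(n_steps):
    R = random.gauss(0.0, math.sqrt(2.0 * xi0 * kB_T / dt))
    a = (force(q) - xi0 * v + R) / m
    v += a * dt
    q += v * dt
    sum_v2 += v * v

print("<v^2> =", sum_v2 / n_steps)   # should approach k_B T / m = 1
\end{verbatim}
The friction continuously removes kinetic energy while the random
force restores it, so the long-time velocity distribution approaches
the correct canonical result.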