
Comparing trunk/xDissertation/Introduction.tex (file contents):
Revision 3347 by xsun, Fri Feb 29 15:20:55 2008 UTC vs.
Revision 3374 by xsun, Thu Mar 20 22:21:43 2008 UTC

# Line 1 | Line 1
1   \chapter{\label{chap:intro}INTRODUCTION AND BACKGROUND}
2  
3   \section{Background on the Problem\label{In:sec:pro}}
4 < Phospholipid molecules are chosen to be studied in this dissertation
5 < because of their critical role as a foundation of the bio-membrane
6 < construction. The self assembled bilayer of the lipids when dispersed
7 < in water is the micro structure of the membrane. The phase behavior of
8 < lipid bilayer is explored experimentally~\cite{Cevc87}, however, fully
9 < understanding on the mechanism is far beyond accomplished.
4 > Phospholipid molecules are the primary topic of this dissertation
5 > because of their critical role as the foundation of biological
6 > membranes. The chemical structure of a phospholipid includes a polar
7 > head group, which arises from the large charge separation between the
8 > phosphate and the amino alcohol, and nonpolar tails that contain fatty
9 > acid chains. Depending on the alcohol to which the phosphate and the
10 > fatty acid chains are esterified, phospholipids are divided into
11 > glycerophospholipids and sphingophospholipids.~\cite{Cevc80} The
12 > chemical structures are shown in figure~\ref{Infig:lipid}.
13 > \begin{figure}
14 > \centering
15 > \includegraphics[width=\linewidth]{./figures/inLipid.pdf}
16 > \caption{The chemical structure of glycerophospholipids (left) and
17 > sphingophospholipids (right).\cite{Cevc80}}
18 > \label{Infig:lipid}
19 > \end{figure}
20 > The glycerophospholipid is the dominant phospholipid in membranes. The
21 > types of glycerophospholipids depend on the X group and on the
22 > chains. For example, if X is choline, the lipids are known as
23 > phosphatidylcholines (PC), and if X is ethanolamine, the lipids are
24 > known as phosphatidylethanolamines (PE). Table~\ref{Intab:pc} lists a
25 > number of types of phosphatidylcholine with different fatty acids as
26 > the lipid chains.
27 > \begin{table*}
28 > \begin{minipage}{\linewidth}
29 > \begin{center}
30 > \caption{Several types of phosphatidylcholine.}
31 > \begin{tabular}{lll}
32 > \hline
33 > Abbreviation & Fatty acid & Generic Name \\
34 > \hline
35 > \textcolor{red}{DMPC} & Myristic: CH$_3$(CH$_2$)$_{12}$COOH &
36 > \textcolor{red}{D}i\textcolor{red}{M}yristoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
37 > \textcolor{red}{DPPC} & Palmitic: CH$_3$(CH$_2$)$_{14}$COOH & \textcolor{red}{D}i\textcolor{red}{P}almitoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine
38 > \\
39 > \textcolor{red}{DSPC} & Stearic: CH$_3$(CH$_2$)$_{16}$COOH & \textcolor{red}{D}i\textcolor{red}{S}tearoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
40 > \end{tabular}
41 > \label{Intab:pc}
42 > \end{center}
43 > \end{minipage}
44 > \end{table*}
45 > When dispersed in water, lipids self-assemble into a number of
46 > topologically distinct bilayer structures. The phase behavior of lipid
47 > bilayers has been explored experimentally~\cite{Cevc80}, however, a
48 > complete understanding of the mechanism and driving forces behind the
49 > various phases has not been achieved.
50  
51   \subsection{Ripple Phase\label{In:ssec:ripple}}
52 < The {\it ripple phase} $P_{\beta'}$ of lipid bilayers, named from the
52 > The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named for the
53   periodic buckling of the membrane, is an intermediate phase which is
54   developed either by heating the gel phase $L_{\beta'}$ or by cooling
55 < the fluid phase $L_\alpha$. Although the ripple phase is observed in
56 < different
57 < experiments~\cite{Sun96,Katsaras00,Copeland80,Meyer96,Kaasgaard03},
58 < the mechanism of the formation of the ripple phase has never been
59 < explained and the microscopic structure of the ripple phase has never
60 < been elucidated by experiments. Computational simulation is a perfect
61 < tool to study the microscopic properties for a system, however, the
62 < long range dimension of the ripple structure and the long time scale
63 < of the formation of the ripples are crucial obstacles to performing
64 < the actual work. The idea to break through this dilemma forks into:
55 > the fluid phase $L_\alpha$. A sketch is shown in
56 > figure~\ref{Infig:phaseDiagram}.~\cite{Cevc80}
57 > \begin{figure}
58 > \centering
59 > \includegraphics[width=\linewidth]{./figures/inPhaseDiagram.pdf}
60 > \caption{A phase diagram of a lipid bilayer. With increasing
61 > temperature, the bilayer can pass from the gel ($L_{\beta'}$) through
62 > the ripple ($P_{\beta'}$) to the fluid ($L_\alpha$) phase.~\cite{Cevc80}}
63 > \label{Infig:phaseDiagram}
64 > \end{figure}
65 > Most structural information on the ripple phase has been obtained by
66 > X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
67 > electron microscopy (FFEM).~\cite{Copeland80,Meyer96} The X-ray
68 > diffraction work by Katsaras {\it et al.} showed that a rich phase
69 > diagram exhibiting both {\it asymmetric} and {\it symmetric} ripples
70 > is possible for lecithin bilayers.\cite{Katsaras00} Recently,
71 > Kaasgaard {\it et al.} used atomic force microscopy (AFM) to observe
72 > ripple phase morphology in bilayers supported on
73 > mica.~\cite{Kaasgaard03}
74 > \begin{figure}
75 > \centering
76 > \includegraphics[width=\linewidth]{./figures/inRipple.pdf}
77 > \caption{The experimentally observed ripple phase. The top image was
78 > obtained by Sun {\it et al.} using X-ray diffraction~\cite{Sun96},
79 > and the bottom one was observed by Kaasgaard {\it et al.} using
80 > AFM.~\cite{Kaasgaard03}}
81 > \label{Infig:ripple}
82 > \end{figure}
83 > Figure~\ref{Infig:ripple} shows the ripple phase observed by X-ray
84 > diffraction and AFM. The experimental results provide strong support
85 > for a two-dimensional triangular lattice packing of the lipid molecules
86 > within the ripple phase.  This is a notable change from the observed
87 > lipid packing within the gel phase,~\cite{Cevc80} although Tenchov
88 > {\it et al.} have recently observed near-hexagonal packing in some
89 > phosphatidylcholine (PC) gel phases.~\cite{Tenchov2001} However, the
90 > physical mechanism for the formation of the ripple phase has never
91 > been explained and the microscopic structure of the ripple phase has
92 > never been elucidated by experiments. Computational simulation is an
93 > ideal tool for studying the microscopic properties of a
94 > system. However, the large length scale of the ripple structure and
95 > the long time scale of ripple formation are crucial obstacles to
96 > performing such simulations. The principal ideas explored in this
97 > dissertation are attempts to break up the computational task by
98   \begin{itemize}
99 < \item Simplify the lipid model.
100 < \item Improve the integrating algorithm.
99 > \item Simplifying the lipid model.
100 > \item Improving the algorithms for integrating the equations of motion.
101   \end{itemize}
102 < In Ch.~\ref{chap:mc} and~\ref{chap:md}, we use a simple point dipole
103 < model and a coarse-grained model to perform the Monte Carlo and
104 < Molecular Dynamics simulations respectively, and in Ch.~\ref{chap:ld},
105 < we implement a Langevin Dynamics algorithm to exclude the explicit
106 < solvent to improve the efficiency of the simulations.
102 > In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
103 > dipole spin model and a coarse-grained molecular-scale model to
104 > perform the Monte Carlo and Molecular Dynamics simulations
105 > respectively, and in chapter~\ref{chap:ld}, we develop a Langevin
106 > Dynamics algorithm which excludes the explicit solvent to improve the
107 > efficiency of the simulations.
108  
109   \subsection{Lattice Model\label{In:ssec:model}}
110 < The gel like characteristic of the ripple phase ensures the feasiblity
111 < of applying the lattice model to study the system. It is claimed that
112 < the packing of the lipid molecules in ripple phase is
113 < hexagonal~\cite{Cevc87}. The popular $2$ dimensional lattice models,
114 < {\it i.e.}, Ising model, Heisenberg model and $X-Y$ model, show
115 < {\it frustration} on triangular lattice.
110 > The gel-like characteristic (relatively immobile molecules) exhibited
111 > by the ripple phase makes it feasible to apply a lattice model to
112 > study the system. The popular two-dimensional lattice models, {\it
113 > i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it frustration}
114 > on triangular lattices. The Hamiltonians of these systems are given by
115 > \begin{equation}
116 > H =
117 >  \begin{cases}
118 >    -J \sum_n \sum_{n'} s_n s_{n'} & \text{Ising}, \\
119 >    -J \sum_n \sum_{n'} \vec s_n \cdot \vec s_{n'} & \text{$X-Y$ and
120 > Heisenberg},
121 >  \end{cases}
122 > \end{equation}
123 > where $J$ has a nonzero value only when spins $s_n$ ($\vec s_n$) and
124 > $s_{n'}$ ($\vec s_{n'}$) are nearest neighbors. When $J > 0$, spins
125 > prefer an aligned structure, and if $J < 0$, spins prefer an
126 > anti-aligned structure.
127 >
128   \begin{figure}
129   \centering
130 < \includegraphics[width=\linewidth]{./figures/frustration.pdf}
131 < \caption{Sketch to illustrate the frustration on triangular
132 < lattice. Spins are represented by arrows, no matter which direction
133 < the spin on the top of triangle points to, the Hamiltonian of the
134 < system is the same, hence there are infinite possibilities for the
49 < packing of the spins.}
50 < \label{fig:frustration}
130 > \includegraphics[width=3in]{./figures/inFrustration.pdf}
131 > \caption{Frustration on a triangular lattice; the spins and dipoles
132 > are represented by arrows. The multiple local energy minima frustrate
133 > the choice of direction for the spins and dipoles.}
134 > \label{Infig:frustration}
135   \end{figure}
136 < Figure~\ref{fig:frustration} shows an illustration of the frustration
137 < on a triangular lattice. The direction of the spin on top of the
138 < triangle has no effects on the Hamiltonian of the system, therefore
139 < infinite possibilities for the packing of spins induce the frustration
140 < of the lattice.
136 > The spins in figure~\ref{Infig:frustration} illustrate this
137 > frustration for $J < 0$ on a triangular lattice. There are multiple
138 > local minimum-energy states which are independent of the direction of
139 > the spin at the top of the triangle. The resulting infinite number of
140 > possible spin packings induces what is known as ``complete regular
141 > frustration,'' which leads to disordered low-temperature phases. A
142 > similar situation arises for the dipoles on a hexagonal lattice, also
143 > shown in figure~\ref{Infig:frustration}. In this circumstance the
144 > dipoles prefer to align; however, due to long-wavelength fluctuations,
145 > the aligned state becomes unstable at low temperature, vortices form,
146 > and multiple local energy minima result. The dipole at the center of
147 > the hexagonal lattice is frustrated.
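
To make the degeneracy concrete, a minimal sketch (written only for
this introduction; the coupling value is arbitrary) can enumerate all
$2^3$ states of a single antiferromagnetic ($J < 0$) Ising triangle
and confirm that six of the eight states are tied for the ground-state
energy:
\begin{verbatim}
import itertools

J = -1.0                          # J < 0: anti-aligned neighbors preferred
bonds = [(0, 1), (1, 2), (0, 2)]  # the three edges of one triangle

def energy(spins):
    # Ising Hamiltonian restricted to one triangle: H = -J sum s_n s_n'
    return -J * sum(spins[i] * spins[j] for i, j in bonds)

states = list(itertools.product([-1, 1], repeat=3))
energies = [energy(s) for s in states]
ground = min(energies)
degeneracy = sum(1 for e in energies if e == ground)

# Prints "-1.0 6": no state satisfies all three anti-aligned bonds at
# once, and six distinct states share the minimum energy.
print(ground, degeneracy)
\end{verbatim}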
149  
150 < The lack of translational degree of freedom in lattice models prevents
151 < their utilization on investigating the emergence of the surface
152 < buckling which is the imposition of the ripple formation. In this
153 < dissertation, a modified lattice model is introduced to this specific
62 < situation in Ch.~\ref{chap:mc}.
150 > The lack of translational degrees of freedom in lattice models
151 > prevents their utilization in models for surface buckling. In
152 > chapter~\ref{chap:mc}, a modified lattice model is introduced to
153 > tackle this specific situation.
154  
155   \section{Overview of Classical Statistical Mechanics\label{In:sec:SM}}
156   Statistical mechanics provides a way to calculate the macroscopic
# Line 67 | Line 158 | this dissertation. Tolman gives an excellent introduct
158   computational simulations. This section serves as a brief introduction
159   to key concepts of classical statistical mechanics that we used in
160   this dissertation. Tolman gives an excellent introduction to the
161 < principles of statistical mechanics~\cite{Tolman1979}. A large part of
161 > principles of statistical mechanics.~\cite{Tolman1979} A large part of
162   section~\ref{In:sec:SM} will follow Tolman's notation.
163  
164   \subsection{Ensembles\label{In:ssec:ensemble}}
165   In classical mechanics, the state of the system is completely
166   described by the positions and momenta of all particles. If we have an
167 < $N$ particle system, there are $6N$ coordinates ($3N$ position $(q_1,
167 > $N$ particle system, there are $6N$ coordinates ($3N$ positions $(q_1,
168   q_2, \ldots, q_{3N})$ and $3N$ momenta $(p_1, p_2, \ldots, p_{3N})$)
169   to define the instantaneous state of the system. Each single set of
170   the $6N$ coordinates can be considered as a unique point in a $6N$
# Line 95 | Line 186 | normalize $\rho$ to unity,
186   normalize $\rho$ to unity,
187   \begin{equation}
188   1 = \int d \vec q~^N \int d \vec p~^N \rho,
189 < \label{normalized}
189 > \label{Ineq:normalized}
190   \end{equation}
191   then the value of $\rho$ gives the probability of finding the system
192 < in a unit volume in the phase space.
192 > in a unit volume in phase space.
193  
194   Liouville's theorem describes the change in density $\rho$ with
195 < time. The number of representive points at a given volume in the phase
195 > time. The number of representative points at a given volume in phase
196   space at any instant can be written as:
197   \begin{equation}
198 < \label{eq:deltaN}
198 > \label{Ineq:deltaN}
199   \delta N = \rho~\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
200   \end{equation}
201 < To calculate the change in the number of representive points in this
201 > To calculate the change in the number of representative points in this
202   volume, let us consider a simple condition: the change in the number
203 < of representive points in $q_1$ axis. The rate of the number of the
204 < representive points entering the volume at $q_1$ per unit time is:
203 > of representative points along the $q_1$ axis. The rate of the number
204 > of the representative points entering the volume at $q_1$ per unit time
205 > is:
206   \begin{equation}
207 < \label{eq:deltaNatq1}
207 > \label{Ineq:deltaNatq1}
208   \rho~\dot q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
209   \end{equation}
210 < and the rate of the number of representive points leaving the volume
210 > and the rate of the number of representative points leaving the volume
211   at another position $q_1 + \delta q_1$ is:
212   \begin{equation}
213 < \label{eq:deltaNatq1plusdeltaq1}
213 > \label{Ineq:deltaNatq1plusdeltaq1}
214   \left( \rho + \frac{\partial \rho}{\partial q_1} \delta q_1 \right)\left(\dot q_1 +
215   \frac{\partial \dot q_1}{\partial q_1} \delta q_1 \right)\delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
216   \end{equation}
217 < Here the higher order differentials are neglected. So the change of
218 < the number of the representive points is the difference of
219 < eq.~\ref{eq:deltaNatq1} and eq.~\ref{eq:deltaNatq1plusdeltaq1}, which
220 < gives us:
217 > Here the higher order differentials are neglected. So the change in
218 > the number of representative points is the difference between
219 > eq.~\ref{Ineq:deltaNatq1} and eq.~\ref{Ineq:deltaNatq1plusdeltaq1},
220 > which gives us:
221   \begin{equation}
222 < \label{eq:deltaNatq1axis}
222 > \label{Ineq:deltaNatq1axis}
223   -\left(\rho \frac{\partial {\dot q_1}}{\partial q_1} + \frac{\partial {\rho}}{\partial q_1} \dot q_1 \right)\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
224   \end{equation}
225   where higher order differentials are neglected. If we sum over all the
226 < axes in the phase space, we can get the change of the number of
227 < representive points in a given volume with time:
226 > axes in the phase space, we can get the change in the number of
227 > representative points in a given volume with time:
228   \begin{equation}
229 < \label{eq:deltaNatGivenVolume}
229 > \label{Ineq:deltaNatGivenVolume}
230   \frac{d(\delta N)}{dt} = -\sum_{i=1}^N \left[\rho \left(\frac{\partial
231   {\dot q_i}}{\partial q_i} + \frac{\partial
232   {\dot p_i}}{\partial p_i}\right) + \left( \frac{\partial {\rho}}{\partial
233   q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i\right)\right]\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
234   \end{equation}
235 < From Hamilton's equation of motion,
235 > From Hamilton's equations of motion,
236   \begin{equation}
237   \frac{\partial {\dot q_i}}{\partial q_i} = - \frac{\partial
238   {\dot p_i}}{\partial p_i},
239 < \label{eq:canonicalFormOfEquationOfMotion}
239 > \label{Ineq:canonicalFormOfEquationOfMotion}
240   \end{equation}
241   this cancels out the first term on the right side of
242 < eq.~\ref{eq:deltaNatGivenVolume}. If both sides of
243 < eq.~\ref{eq:deltaNatGivenVolume} are divided by $\delta q_1 \delta q_2
242 > eq.~\ref{Ineq:deltaNatGivenVolume}. If both sides of
243 > eq.~\ref{Ineq:deltaNatGivenVolume} are divided by $\delta q_1 \delta q_2
244   \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N$, then we
245 < can derive Liouville's theorem:
245 > arrive at Liouville's theorem:
246   \begin{equation}
247   \left( \frac{\partial \rho}{\partial t} \right)_{q, p} = -\sum_{i} \left(
248   \frac{\partial {\rho}}{\partial
249   q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right).
250 < \label{eq:simpleFormofLiouville}
250 > \label{Ineq:simpleFormofLiouville}
251   \end{equation}
252   This is the basis of statistical mechanics. If we move the right
253 < side of equation~\ref{eq:simpleFormofLiouville} to the left, we
253 > side of equation~\ref{Ineq:simpleFormofLiouville} to the left, we
254   will obtain
255   \begin{equation}
256   \left( \frac{\partial \rho}{\partial t} \right)_{q, p} + \sum_{i} \left(
257   \frac{\partial {\rho}}{\partial
258   q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
259   = 0.
260 < \label{eq:anotherFormofLiouville}
260 > \label{Ineq:anotherFormofLiouville}
261   \end{equation}
262   It is easy to note that the left side of
263 < equation~\ref{eq:anotherFormofLiouville} is the total derivative of
264 < $\rho$ with respect of $t$, which means
263 > equation~\ref{Ineq:anotherFormofLiouville} is the total derivative of
264 > $\rho$ with respect to $t$, which means
265   \begin{equation}
266   \frac{d \rho}{dt} = 0,
267 < \label{eq:conservationofRho}
267 > \label{Ineq:conservationofRho}
268   \end{equation}
269   and the rate of density change is zero in the neighborhood of any
270 < selected moving representive points in the phase space.
270 > selected moving representative point in phase space.
271  
272   The condition of the ensemble is determined by the density
273   distribution. If we consider the density distribution as only a
274   function of $q$ and $p$, which means the rate of change of the phase
275 < space density in the neighborhood of all representive points in the
275 > space density in the neighborhood of all representative points in the
276   phase space is zero,
277   \begin{equation}
278   \left( \frac{\partial \rho}{\partial t} \right)_{q, p} = 0.
279 < \label{eq:statEquilibrium}
279 > \label{Ineq:statEquilibrium}
280   \end{equation}
281   We may conclude that the ensemble is in {\it statistical equilibrium}. An
282 < ensemble in statistical equilibrium often means the system is also in
282 > ensemble in statistical equilibrium means the system is also in
283   macroscopic equilibrium. If $\left( \frac{\partial \rho}{\partial t}
284   \right)_{q, p} = 0$, then
285   \begin{equation}
# Line 195 | Line 287 | q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \
287   \frac{\partial {\rho}}{\partial
288   q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
289   = 0.
290 < \label{constantofMotion}
290 > \label{Ineq:constantofMotion}
291   \end{equation}
292   If $\rho$ is a function only of some constant of the motion, $\rho$ is
293   independent of time. For a conservative system, the energy of the
294 < system is one of the constants of the motion. Here are several
295 < examples: when the density distribution is constant everywhere in the
296 < phase space,
294 > system is one of the constants of the motion. There are many
295 > thermodynamically relevant ensembles: when the density distribution is
296 > constant everywhere in the phase space,
297   \begin{equation}
298   \rho = \mathrm{const.}
299 < \label{eq:uniformEnsemble}
299 > \label{Ineq:uniformEnsemble}
300   \end{equation}
301 < the ensemble is called {\it uniform ensemble}.  Another useful
302 < ensemble is called {\it microcanonical ensemble}, for which:
301 > the ensemble is called the {\it uniform ensemble}.
302 >
303 > \subsubsection{The Microcanonical Ensemble\label{In:sssec:microcanonical}}
304 > Another useful ensemble is the {\it microcanonical ensemble}, for
305 > which:
306   \begin{equation}
307   \rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)}
308 < \label{eq:microcanonicalEnsemble}
308 > \label{Ineq:microcanonicalEnsemble}
309   \end{equation}
310   where $\Sigma(N, V, E)$ is a normalization constant parameterized by
311   $N$, the total number of particles, $V$, the total physical volume and
# Line 220 | Line 315 | S = - k_B \int d \vec q~^N \int d \vec p~^N \rho \ln [
315   Hamiltonian of the system. The Gibbs entropy is defined as
316   \begin{equation}
317   S = - k_B \int d \vec q~^N \int d \vec p~^N \rho \ln [C^N \rho],
318 < \label{eq:gibbsEntropy}
318 > \label{Ineq:gibbsEntropy}
319   \end{equation}
320   where $k_B$ is the Boltzmann constant and $C^N$ is a number which
321 < makes the argument of $\ln$ dimensionless, in this case, it is the
322 < total phase space volume of one state. The entropy in microcanonical
321 > makes the argument of $\ln$ dimensionless. In this case, $C^N$ is the
322 > total phase space volume of one state. The entropy of a microcanonical
323   ensemble is given by
324   \begin{equation}
325   S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
326 < \label{eq:entropy}
326 > \label{Ineq:entropy}
327   \end{equation}
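To see where this expression comes from, note that if the accessible
region of phase space is divided into $\Omega = \Sigma(N, V, E)/C^N$
equally probable states, the discrete analogue of
Eq.~\ref{Ineq:gibbsEntropy}, $S = -k_B \sum_i \rho_i \ln \rho_i$ with
$\rho_i = 1/\Omega$, gives
\begin{equation}
S = -k_B \sum_{i=1}^{\Omega} \frac{1}{\Omega} \ln \frac{1}{\Omega} =
k_B \ln \Omega = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
\end{equation}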
328 +
329 + \subsubsection{The Canonical Ensemble\label{In:sssec:canonical}}
330   If the density distribution $\rho$ is given by
331   \begin{equation}
332   \rho = \frac{1}{Z_N}e^{-H(q^N, p^N) / k_B T},
333 < \label{eq:canonicalEnsemble}
333 > \label{Ineq:canonicalEnsemble}
334   \end{equation}
335   the ensemble is known as the {\it canonical ensemble}. Here,
336   \begin{equation}
337   Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N  e^{-H(q^N, p^N) / k_B T},
338 < \label{eq:partitionFunction}
338 > \label{Ineq:partitionFunction}
339   \end{equation}
340 < which is also known as {\it partition function}. $\Gamma$ indicates
341 < that the integral is over all the phase space. In the canonical
340 > which is also known as the canonical {\it partition function}. $\Gamma$
341 > indicates that the integral is over all phase space. In the canonical
342   ensemble, $N$, the total number of particles, $V$, total volume, and
343 < $T$, the temperature are constants. The systems with the lowest
343 > $T$, the temperature, are constants. The systems with the lowest
344   energies hold the largest populations. According to the maximum principle,
345 < the thermodynamics maximizes the entropy $S$,
345 > thermodynamics maximizes the entropy $S$, implying that
346   \begin{equation}
347   \begin{array}{ccc}
348   \delta S = 0 & \mathrm{and} & \delta^2 S < 0.
349   \end{array}
350 < \label{eq:maximumPrinciple}
350 > \label{Ineq:maximumPrinciple}
351   \end{equation}
352 < From Eq.~\ref{eq:maximumPrinciple} and two constrains of the canonical
353 < ensemble, {\it i.e.}, total probability and average energy conserved,
354 < the partition function is calculated as
352 > From Eq.~\ref{Ineq:maximumPrinciple} and two constraints on the
353 > canonical ensemble, {\it i.e.}, that total probability and average
354 > energy must be conserved, the partition function can be shown to be
355 > equivalent to
356   \begin{equation}
357   Z_N = e^{-A/k_B T},
358 < \label{eq:partitionFunctionWithFreeEnergy}
358 > \label{Ineq:partitionFunctionWithFreeEnergy}
359   \end{equation}
360   where $A$ is the Helmholtz free energy. The significance of
361 < Eq.~\ref{eq:entropy} and~\ref{eq:partitionFunctionWithFreeEnergy} is
361 > Eq.~\ref{Ineq:entropy} and~\ref{Ineq:partitionFunctionWithFreeEnergy} is
362   that they serve as a connection between macroscopic properties of the
363 < system and the distribution of the microscopic states.
363 > system and the distribution of microscopic states.
364  
365   There is an implicit assumption on which our arguments so far are
366 < far. A representive point in the phase space is equally to be found in
367 < any same extent of the regions. In other words, all energetically
368 < accessible states are represented equally, the probabilities to find
369 < the system in any of the accessible states is equal. This is called
370 < equal a {\it priori} probabilities.
366 > based. A representative point in phase space is equally likely to be
367 > found in any energetically allowed region. In other words, all
368 > energetically accessible states are represented equally; the
369 > probability of finding the system in any of the accessible states is
370 > equal. This is called the principle of equal {\it a priori}
371 > probabilities.
372  
373 < \subsection{Statistical Average\label{In:ssec:average}}
374 < Given the density distribution $\rho$ in the phase space, the average
373 > \subsection{Statistical Averages\label{In:ssec:average}}
374 > Given a density distribution $\rho$ in phase space, the average
375   of any quantity ($F(q^N, p^N)$) which depends on the coordinates
376   ($q^N$) and the momenta ($p^N$) for all the systems in the ensemble
377   can be calculated based on the definition shown by
378 < Eq.~\ref{eq:statAverage1}
378 > Eq.~\ref{Ineq:statAverage1}
379   \begin{equation}
380 < \langle F(q^N, p^N, t) \rangle = \frac{\int d \vec q~^N \int d \vec p~^N
381 < F(q^N, p^N, t) \rho}{\int d \vec q~^N \int d \vec p~^N \rho}.
382 < \label{eq:statAverage1}
380 > \langle F(q^N, p^N) \rangle = \frac{\int d \vec q~^N \int d \vec p~^N
381 > F(q^N, p^N) \rho}{\int d \vec q~^N \int d \vec p~^N \rho}.
382 > \label{Ineq:statAverage1}
383   \end{equation}
384   Since the density distribution $\rho$ is normalized to unity, the mean
385   value of $F(q^N, p^N)$ is simplified to
386   \begin{equation}
387 < \langle F(q^N, p^N, t) \rangle = \int d \vec q~^N \int d \vec p~^N F(q^N,
388 < p^N, t) \rho,
389 < \label{eq:statAverage2}
387 > \langle F(q^N, p^N) \rangle = \int d \vec q~^N \int d \vec p~^N F(q^N,
388 > p^N) \rho,
389 > \label{Ineq:statAverage2}
390   \end{equation}
391 < called {\it ensemble average}. However, the quantity is often averaged
392 < for a finite time in real experiments,
391 > called the {\it ensemble average}. However, the quantity is often
392 > averaged for a finite time in real experiments,
393   \begin{equation}
394   \langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
395 < \frac{1}{T} \int_{t_0}^{t_0+T} F(q^N, p^N, t) dt.
396 < \label{eq:timeAverage1}
395 > \frac{1}{T} \int_{t_0}^{t_0+T} F[q^N(t), p^N(t)] dt.
396 > \label{Ineq:timeAverage1}
397   \end{equation}
398   Usually this time average is independent of $t_0$ in statistical
399 < mechanics, so Eq.~\ref{eq:timeAverage1} becomes
399 > mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes
400   \begin{equation}
401   \langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
402 < \frac{1}{T} \int_{0}^{T} F(q^N, p^N, t) dt
403 < \label{eq:timeAverage2}
402 > \frac{1}{T} \int_{0}^{T} F[q^N(t), p^N(t)] dt
403 > \label{Ineq:timeAverage2}
404   \end{equation}
405   for an infinite time interval.
406  
407 < {\it ergodic hypothesis}, an important hypothesis from the statistical
408 < mechanics point of view, states that the system will eventually pass
407 > \subsubsection{Ergodicity\label{In:sssec:ergodicity}}
408 > The {\it ergodic hypothesis}, an important hypothesis governing modern
409 > computer simulations, states that the system will eventually pass
410   arbitrarily close to any point that is energetically accessible in
411   phase space. Mathematically, this leads to
412   \begin{equation}
413 < \langle F(q^N, p^N, t) \rangle = \langle F(q^N, p^N) \rangle_t.
414 < \label{eq:ergodicity}
413 > \langle F(q^N, p^N) \rangle = \langle F(q^N, p^N) \rangle_t.
414 > \label{Ineq:ergodicity}
415   \end{equation}
416 < Eq.~\ref{eq:ergodicity} validates the Monte Carlo method which we will
417 < discuss in section~\ref{In:ssec:mc}. An ensemble average of a quantity
418 < can be related to the time average measured in the experiments.
416 > Eq.~\ref{Ineq:ergodicity} validates Molecular Dynamics as a form of
417 > averaging for sufficiently ergodic systems. Likewise, Monte Carlo may
418 > be used to obtain ensemble averages of a quantity, which can be
419 > related to time averages measured in experiments.
420  
421 < \subsection{Correlation Function\label{In:ssec:corr}}
422 < Thermodynamic properties can be computed by equillibrium statistical
423 < mechanics. On the other hand, {\it Time correlation function} is a
424 < powerful method to understand the evolution of a dynamic system in
425 < non-equillibrium statistical mechanics. Imagine a property $A(q^N,
426 < p^N, t)$ as a function of coordinates $q^N$ and momenta $p^N$ has an
427 < intial value at $t_0$, at a later time $t_0 + \tau$ this value is
428 < changed. If $\tau$ is very small, the change of the value is minor,
429 < and the later value of $A(q^N, p^N, t_0 +
430 < \tau)$ is correlated to its initial value. Howere, when $\tau$ is large,
431 < this correlation is lost. The correlation function is a measurement of
432 < this relationship and is defined by~\cite{Berne90}
421 > \subsection{Correlation Functions\label{In:ssec:corr}}
422 > Thermodynamic properties can be computed by equilibrium statistical
423 > mechanics. On the other hand, {\it time correlation functions} are a
424 > powerful tool for understanding the evolution of a dynamical
425 > system. Imagine that a property $A(q^N, p^N, t)$, a function of the
426 > coordinates $q^N$ and momenta $p^N$, has an initial value at $t_0$, and
427 > at a later time $t_0 + \tau$ this value has changed. If $\tau$ is very
428 > small, the change of the value is minor, and the later value of
429 > $A(q^N, p^N, t_0 + \tau)$ is correlated to its initial value. However,
430 > when $\tau$ is large, this correlation is lost. A time correlation
431 > function measures this relationship and is defined
432 > by~\cite{Berne90}
433   \begin{equation}
434 < C(t) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarrow \infty}
434 > C_{AA}(\tau) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarrow
435 > \infty}
436   \frac{1}{T} \int_{0}^{T} dt A(t) A(t + \tau).
437 < \label{eq:autocorrelationFunction}
437 > \label{Ineq:autocorrelationFunction}
438   \end{equation}
439 < Eq.~\ref{eq:autocorrelationFunction} is the correlation function of a
440 < single variable, called {\it autocorrelation function}. The defination
441 < of the correlation function for two different variables is similar to
442 < that of autocorrelation function, which is
439 > Eq.~\ref{Ineq:autocorrelationFunction} is the correlation function of
440 > a single variable, called an {\it autocorrelation function}. The
441 > definition of the correlation function for two different variables is
442 > similar to that of the autocorrelation function, which is
443   \begin{equation}
444 < C(t) = \langle A(0)B(\tau) \rangle = \lim_{T \rightarrow \infty}
444 > C_{AB}(\tau) = \langle A(0)B(\tau) \rangle = \lim_{T \rightarrow \infty}
445   \frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
446 < \label{eq:crosscorrelationFunction}
446 > \label{Ineq:crosscorrelationFunction}
447   \end{equation}
448   and called {\it cross correlation function}.
449  
450 < In section~\ref{In:ssec:average} we know from Eq.~\ref{eq:ergodicity}
451 < the relationship between time average and ensemble average. We can put
452 < the correlation function in a classical mechanics form,
450 > We know from the ergodic hypothesis that there is a relationship
451 > between time average and ensemble average. We can put the correlation
452 > function in a classical mechanics form,
453   \begin{equation}
454 < C(t) = \langle A(0)A(\tau) \rangle = \int d \vec q~^N \int d \vec p~^N A(t) A(t + \tau) \rho(q, p)
455 < \label{eq:autocorrelationFunctionCM}
454 > C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N(t), p^N(t)]
455 > A[q^N(t+\tau), p^N(t+\tau)] \rho(q, p)
456 > \label{Ineq:autocorrelationFunctionCM}
457   \end{equation}
458   and
459   \begin{equation}
460 < C(t) = \langle A(0)B(\tau) \rangle = \int d \vec q~^N \int d \vec p~^N A(t) B(t + \tau)
461 < \rho(q, p)
462 < \label{eq:crosscorrelationFunctionCM}
460 > C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N(t), p^N(t)]
461 > B[q^N(t+\tau), p^N(t+\tau)] \rho(q, p)
462 > \label{Ineq:crosscorrelationFunctionCM}
463   \end{equation}
464 < as autocorrelation function and cross correlation function
464 > as the autocorrelation and cross correlation functions,
465   respectively. $\rho(q, p)$ is the density distribution at equilibrium
466 < in phase space. In many cases, the correlation function decay is a
467 < single exponential
466 > in phase space. In many cases, correlation functions decay as a
467 > single exponential in time
468   \begin{equation}
469   C(t) \sim e^{-t / \tau_r},
470 < \label{eq:relaxation}
470 > \label{Ineq:relaxation}
471   \end{equation}
472 < where $\tau_r$ is known as relaxation time which discribes the rate of
472 > where $\tau_r$ is known as the relaxation time, which describes the rate of
473   the decay.
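
As a schematic illustration of how
Eqs.~\ref{Ineq:autocorrelationFunction} and~\ref{Ineq:relaxation} are
applied to a discretely sampled trajectory (a sketch using a synthetic
signal, not output from our simulations), an autocorrelation function
and a crude relaxation time can be estimated as follows:
\begin{verbatim}
import numpy as np

def autocorrelation(a, max_lag):
    # Time-averaged C(tau) = <A(t) A(t + tau)> for a sampled series.
    a = np.asarray(a, dtype=float) - np.mean(a)
    return np.array([np.mean(a[:len(a) - lag] * a[lag:])
                     for lag in range(max_lag)])

rng = np.random.default_rng(0)
x, series = 0.0, []
for _ in range(100000):
    x = 0.99 * x + rng.normal()    # AR(1) signal, tau_r near 100 steps
    series.append(x)

c = autocorrelation(series, max_lag=500)
c /= c[0]                          # normalize so that C(0) = 1
tau_r = np.argmax(c < np.exp(-1))  # first lag where C drops below 1/e
print(tau_r)                       # prints a value near 100
\end{verbatim}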
474  
475 < \section{Methodolody\label{In:sec:method}}
476 < The simulations performed in this dissertation are branched into two
477 < main catalog, Monte Carlo and Molecular Dynamics. There are two main
478 < difference between Monte Carlo and Molecular Dynamics simulations. One
479 < is that the Monte Carlo simulation is time independent, and Molecular
480 < Dynamics simulation is time involved. Another dissimilar is that the
481 < Monte Carlo is a stochastic process, the configuration of the system
482 < is not determinated by its past, however, using Moleuclar Dynamics,
483 < the system is propagated by Newton's equation of motion, the
484 < trajectory of the system evolved in the phase space is determined. A
485 < brief introduction of the two algorithms are given in
486 < section~\ref{In:ssec:mc} and~\ref{In:ssec:md}. An extension of the
487 < Molecular Dynamics, Langevin Dynamics, is introduced by
475 > \section{Methodology\label{In:sec:method}}
476 > The simulations performed in this dissertation branch into two main
477 > categories: Monte Carlo and Molecular Dynamics. There are two main
478 > differences between Monte Carlo and Molecular Dynamics
479 > simulations. One is that the Monte Carlo simulations are time
480 > independent methods of sampling structural features of an ensemble,
481 > while Molecular Dynamics simulations provide dynamic
482 > information. Additionally, Monte Carlo methods are stochastic
483 > processes; the future configurations of the system are not determined
484 > by its past. However, in Molecular Dynamics, the system is propagated
485 > by Newton's equations of motion, and the trajectory of the system
486 > evolving in phase space is deterministic. Brief introductions of the
487 > two algorithms are given in sections~\ref{In:ssec:mc}
488 > and~\ref{In:ssec:md}. Langevin Dynamics, an extension of Molecular
489 > Dynamics that includes implicit solvent effects, is introduced in
490   section~\ref{In:ssec:ld}.
491  
492   \subsection{Monte Carlo\label{In:ssec:mc}}
493 < Monte Carlo algorithm was first introduced by Metropolis {\it et
494 < al.}.~\cite{Metropolis53} Basic Monte Carlo algorithm is usually
495 < applied to the canonical ensemble, a Boltzmann-weighted ensemble, in
496 < which the $N$, the total number of particles, $V$, total volume, $T$,
497 < temperature are constants. The average energy is given by substituding
498 < Eq.~\ref{eq:canonicalEnsemble} into Eq.~\ref{eq:statAverage2},
493 > A Monte Carlo integration algorithm was first introduced by Metropolis
494 > {\it et al.}~\cite{Metropolis53} The basic Metropolis Monte Carlo
495 > algorithm is usually applied to the canonical ensemble, a
496 > Boltzmann-weighted ensemble, in which $N$, the total number of
497 > particles, $V$, the total volume, and $T$, the temperature, are
498 > constants. An average in this ensemble is given by
499   \begin{equation}
500 < \langle E \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N E e^{-H(q^N, p^N) / k_B T}.
501 < \label{eq:energyofCanonicalEnsemble}
500 > \langle A \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N
501 > A(q^N, p^N) e^{-H(q^N, p^N) / k_B T}.
502 > \label{Ineq:energyofCanonicalEnsemble}
503   \end{equation}
504 < So are the other properties of the system. The Hamiltonian is the
505 < summation of Kinetic energy $K(p^N)$ as a function of momenta and
506 < Potential energy $U(q^N)$ as a function of positions,
504 > The Hamiltonian is the sum of the kinetic energy, $K(p^N)$, a function
505 > of momenta and the potential energy, $U(q^N)$, a function of
506 > positions,
507   \begin{equation}
508   H(q^N, p^N) = K(p^N) + U(q^N).
509 < \label{eq:hamiltonian}
509 > \label{Ineq:hamiltonian}
510   \end{equation}
511 < If the property $A$ is only a function of position ($ A = A(q^N)$),
512 < the mean value of $A$ is reduced to
511 > If the property $A$ is a function only of position ($ A = A(q^N)$),
512 > the mean value of $A$ can be reduced to
513   \begin{equation}
514 < \langle A \rangle = \frac{\int d \vec q~^N \int d \vec p~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N \int d \vec p~^N e^{-U(q^N) / k_B T}},
515 < \label{eq:configurationIntegral}
514 > \langle A \rangle = \frac{\int d \vec q~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}}.
515 > \label{Ineq:configurationIntegral}
516   \end{equation}
517   The kinetic energy $K(p^N)$ is factored out in
518 < Eq.~\ref{eq:configurationIntegral}. $\langle A
519 < \rangle$ is a configuration integral now, and the
520 < Eq.~\ref{eq:configurationIntegral} is equivalent to
518 > Eq.~\ref{Ineq:configurationIntegral}. $\langle A
519 > \rangle$ is now a configuration integral, and
520 > Eq.~\ref{Ineq:configurationIntegral} is equivalent to
521   \begin{equation}
522   \langle A \rangle = \int d \vec q~^N A \rho(q^N).
523 < \label{eq:configurationAve}
523 > \label{Ineq:configurationAve}
524   \end{equation}
525  
526 < In a Monte Carlo simulation of canonical ensemble, the probability of
527 < the system being in a state $s$ is $\rho_s$, the change of this
528 < probability with time is given by
526 > In a Monte Carlo simulation of a system in the canonical ensemble, the
527 > probability of the system being in a state $s$ is $\rho_s$, the change
528 > of this probability with time is given by
529   \begin{equation}
530   \frac{d \rho_s}{dt} = \sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ],
531 < \label{eq:timeChangeofProb}
531 > \label{Ineq:timeChangeofProb}
532   \end{equation}
533   where $w_{ss'}$ is the transition probability of going from state $s$
534   to state $s'$. Since $\rho_s$ is independent of time at equilibrium,
535   \begin{equation}
536   \frac{d \rho_{s}^{equilibrium}}{dt} = 0,
537 < \label{eq:equiProb}
537 > \label{Ineq:equiProb}
538   \end{equation}
539   which is satisfied if each term vanishes separately, {\it i.e.}, if
540   $w_{ss'}\rho_s = w_{s's}\rho_{s'}$ for all $s'$ ({\it detailed balance}). So
541   \begin{equation}
542   \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
543 < \label{eq:timeChangeofProb}
543 > \label{Ineq:relationshipofRhoandW}
544   \end{equation}
545 < If
545 > If the equilibrium populations are to satisfy Boltzmann statistics,
546   \begin{equation}
547 < \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T},
548 < \label{eq:conditionforBoltzmannStatistics}
547 > \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T},
548 > \label{Ineq:satisfyofBoltzmannStatistics}
549   \end{equation}
550 < then
550 > then the transition probabilities must obey
551   \begin{equation}
552 < \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T}.
553 < \label{eq:satisfyofBoltzmannStatistics}
552 > \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T}.
553 > \label{Ineq:conditionforBoltzmannStatistics}
554   \end{equation}
555 < Eq.~\ref{eq:satisfyofBoltzmannStatistics} implies that
556 < $\rho^{equilibrium}$ satisfies Boltzmann statistics. An algorithm,
557 < shows how Monte Carlo simulation generates a transition probability
452 < governed by \ref{eq:conditionforBoltzmannStatistics}, is schemed as
555 > An algorithm showing how a Monte Carlo simulation generates
556 > transition probabilities governed by
557 > Eq.~\ref{Ineq:conditionforBoltzmannStatistics} is given schematically as:
558   \begin{enumerate}
559 < \item\label{itm:oldEnergy} Choose an particle randomly, calculate the energy.
560 < \item\label{itm:newEnergy} Make a random displacement for particle,
561 < calculate the new energy.
559 > \item\label{Initm:oldEnergy} Choose a particle randomly, and calculate
560 > the energy of the system due to the current location of the particle.
561 > \item\label{Initm:newEnergy} Make a random displacement of the particle,
562 > calculate the new energy taking the movement of the particle into account.
563    \begin{itemize}
564 <     \item Keep the new configuration and return to step~\ref{itm:oldEnergy} if energy
565 < goes down.
566 <     \item Pick a random number between $[0,1]$ if energy goes up.
564 >     \item If the energy goes down, keep the new configuration and return to
565 > step~\ref{Initm:oldEnergy}.
566 >     \item If the energy goes up, pick a random number between $[0,1]$.
567          \begin{itemize}
568 <           \item Keep the new configuration and return to
569 < step~\ref{itm:oldEnergy} if the random number smaller than
570 < $e^{-(U_{new} - U_{old})} / k_B T$.
571 <           \item Keep the old configuration and return to
572 < step~\ref{itm:oldEnergy} if the random number larger than
573 < $e^{-(U_{new} - U_{old})} / k_B T$.
568 >           \item If the random number is smaller than
569 > $e^{-(U_{new} - U_{old}) / k_B T}$, keep the new configuration and return to
570 > step~\ref{Initm:oldEnergy}.
571 >           \item If the random number is larger than
572 > $e^{-(U_{new} - U_{old}) / k_B T}$, keep the old configuration and return to
573 > step~\ref{Initm:oldEnergy}.
574          \end{itemize}
575    \end{itemize}
576 < \item\label{itm:accumulateAvg} Accumulate the average after it converges.
576 > \item\label{Initm:accumulateAvg} Accumulate the averages based on the
577 > current configuration.
578 > \item Go to step~\ref{Initm:oldEnergy}.
579   \end{enumerate}
580 < It is important to notice that the old configuration has to be sampled
581 < again if it is kept.
580 > It is important for sampling accurately that the old configuration is
581 > sampled again if it is kept.
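
The scheme above can be made concrete with a short sketch (shown here
for a single particle moving in one dimension in an arbitrary
potential; the production simulations of later chapters are, of
course, more elaborate):
\begin{verbatim}
import math, random

def metropolis(U, q0, delta, n_steps, kT=1.0):
    # Sample the Boltzmann distribution exp(-U(q)/kT) by random moves.
    q, U_old = q0, U(q0)
    samples = []
    for _ in range(n_steps):
        q_new = q + random.uniform(-delta, delta)   # random displacement
        U_new = U(q_new)
        # Accept downhill moves always and uphill moves with probability
        # exp(-(U_new - U_old)/kT); a rejected move re-counts the old state.
        if U_new <= U_old or random.random() < math.exp(-(U_new - U_old) / kT):
            q, U_old = q_new, U_new
        samples.append(q)
    return samples

# For U(q) = q^2/2 at kT = 1, <q^2> should converge to kT = 1.
samples = metropolis(lambda q: 0.5 * q * q, 0.0, 1.0, 200000)
print(sum(q * q for q in samples) / len(samples))
\end{verbatim}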
582  
583   \subsection{Molecular Dynamics\label{In:ssec:md}}
584   Although some properties of the system can be calculated from the
585 < ensemble average in Monte Carlo simulations, due to the nature of
586 < lacking in the time dependence, it is impossible to gain information
587 < of those dynamic properties from Monte Carlo simulations. Molecular
588 < Dynamics is a measurement of the evolution of the positions and
589 < momenta of the particles in the system. The evolution of the system
590 < obeys laws of classical mechanics, in most situations, there is no
591 < need for the count of the quantum effects. For a real experiment, the
592 < instantaneous positions and momenta of the particles in the system are
593 < neither important nor measurable, the observable quantities are
594 < usually a average value for a finite time interval. These quantities
595 < are expressed as a function of positions and momenta in Melecular
596 < Dynamics simulations. Like the thermal temperature of the system is
489 < defined by
585 > ensemble average in Monte Carlo simulations, due to the absence of the
586 > time dependence, it is impossible to gain information on dynamic
587 > properties from Monte Carlo simulations. Molecular Dynamics evolves
588 > the positions and momenta of the particles in the system. The
589 > evolution of the system obeys the laws of classical mechanics, and in
590 > most situations, there is no need to account for quantum effects. In a
591 > real experiment, the instantaneous positions and momenta of the
592 > particles in the system are neither important nor measurable; the
593 > observable quantities are usually an average value for a finite time
594 > interval. These quantities are expressed as a function of positions
595 > and momenta in Molecular Dynamics simulations. For example,
596 > the temperature of the system is defined by
597   \begin{equation}
598 < \frac{1}{2} k_B T = \langle \frac{1}{2} m v_\alpha \rangle,
599 < \label{eq:temperature}
598 > \frac{3}{2} N k_B T = \langle \sum_{i=1}^N \frac{1}{2} m_i v_i^2 \rangle,
599 > \label{Ineq:temperature}
600   \end{equation}
601 < here $m$ is the mass of the particle and $v_\alpha$ is the $\alpha$
602 < component of the velocity of the particle. The right side of
603 < Eq.~\ref{eq:temperature} is the average kinetic energy of the
497 < system. A simple Molecular Dynamics simulation scheme
498 < is:~\cite{Frenkel1996}
499 < \begin{enumerate}
500 < \item\label{itm:initialize} Assign the initial positions and momenta
501 < for the particles in the system.
502 < \item\label{itm:calcForce} Calculate the forces.
503 < \item\label{itm:equationofMotion} Integrate the equation of motion.
504 <  \begin{itemize}
505 <     \item Return to step~\ref{itm:calcForce} if the equillibrium is
506 < not achieved.
507 <     \item Go to step~\ref{itm:calcAvg} if the equillibrium is
508 < achieved.
509 <  \end{itemize}
510 < \item\label{itm:calcAvg} Compute the quantities we are interested in.
511 < \end{enumerate}
512 < The initial positions of the particles are chosen as that there is no
513 < overlap for the particles. The initial velocities at first are
514 < distributed randomly to the particles, and then shifted to make the
515 < momentum of the system $0$, at last scaled to the desired temperature
516 < of the simulation according Eq.~\ref{eq:temperature}.
601 > here $m_i$ is the mass of particle $i$ and $v_i$ is the velocity of
602 > particle $i$. The right side of Eq.~\ref{Ineq:temperature} is the
603 > average kinetic energy of the system.
604  
605 < The core of Molecular Dynamics simulations is step~\ref{itm:calcForce}
606 < and~\ref{itm:equationofMotion}. The calculation of the forces are
607 < often involved numerous effort, this is the most time consuming step
608 < in the Molecular Dynamics scheme. The evaluation of the forces is
609 < followed by
605 > The initial positions of the particles are chosen so that there is no
606 > overlap of the particles. The initial velocities are first
607 > distributed randomly to the particles using a Maxwell-Boltzmann
608 > distribution, and then shifted to make the total linear momentum of the
609 > system $0$.
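
A sketch of this initialization, together with the use of
Eq.~\ref{Ineq:temperature} to measure the instantaneous temperature
(the particle count, masses, and target temperature here are arbitrary
choices for illustration):
\begin{verbatim}
import numpy as np

kB = 1.0                 # Boltzmann constant in reduced units
N, T_target = 256, 1.2   # arbitrary particle count and temperature
m = np.ones(N)           # arbitrary equal masses

rng = np.random.default_rng(1)
# Maxwell-Boltzmann: each velocity component is Gaussian, var = kB*T/m.
v = rng.normal(0.0, np.sqrt(kB * T_target / m)[:, None], size=(N, 3))
v -= (m[:, None] * v).sum(axis=0) / m.sum()  # remove center-of-mass drift

# Instantaneous temperature from (3/2) N kB T = sum_i (1/2) m_i v_i^2.
kinetic = 0.5 * np.sum(m[:, None] * v**2)
print(2.0 * kinetic / (3.0 * N * kB))  # near T_target, up to fluctuations
\end{verbatim}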
610 >
611 > The core of Molecular Dynamics simulations is the calculation of
612 > forces and the integration algorithm. Calculation of the forces often
613 > involves enormous effort. This is the most time consuming step in the
614 > Molecular Dynamics scheme. Evaluation of the forces is mathematically
615 > simple,
616   \begin{equation}
617   f(q) = - \frac{\partial U(q)}{\partial q},
618 < \label{eq:force}
618 > \label{Ineq:force}
619   \end{equation}
620 < $U(q)$ is the potential of the system. Once the forces computed, are
621 < the positions and velocities updated by integrating Newton's equation
622 < of motion,
620 > where $U(q)$ is the potential of the system. However, the numerical
621 > details of this computation are often quite complex. Once the forces
622 > are computed, the positions and velocities are updated by integrating
623 > Hamilton's equations of motion,
624 > \begin{eqnarray}
625 > \dot p_i & = & -\frac{\partial H}{\partial q_i} = -\frac{\partial
626 > U(q_i)}{\partial q_i} = f(q_i) \\
627 > \dot q_i & = & \frac{p_i}{m_i}.
628 > \label{Ineq:newton}
629 > \end{eqnarray}
630 > The classic example of an integrating algorithm is the Verlet
631 > algorithm, which is one of the simplest algorithms for integrating the
632 > equations of motion. The Taylor expansion of the position at time $t$
633 > is
634   \begin{equation}
531 < f(q) = \frac{dp}{dt} = \frac{m dv}{dt}.
532 < \label{eq:newton}
533 < \end{equation}
534 < Here is an example of integrating algorithms, Verlet algorithm, which
535 < is one of the best algorithms to integrate Newton's equation of
536 < motion. The Taylor expension of the position at time $t$ is
537 < \begin{equation}
635   q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 +
636          \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
637          \mathcal{O}(\Delta t^4)
638 < \label{eq:verletFuture}
638 > \label{Ineq:verletFuture}
639   \end{equation}
640   for a later time $t+\Delta t$, and
641   \begin{equation}
642   q(t-\Delta t)= q(t) - v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 -
643          \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
644          \mathcal{O}(\Delta t^4) ,
645 < \label{eq:verletPrevious}
645 > \label{Ineq:verletPrevious}
646   \end{equation}
647 < for a previous time $t-\Delta t$. The summation of the
648 < Eq.~\ref{eq:verletFuture} and~\ref{eq:verletPrevious} gives
647 > for a previous time $t-\Delta t$. Adding Eq.~\ref{Ineq:verletFuture}
648 > and~\ref{Ineq:verletPrevious} gives
649   \begin{equation}
650   q(t+\Delta t)+q(t-\Delta t) =
651          2q(t) + \frac{f(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4),
652 < \label{eq:verletSum}
652 > \label{Ineq:verletSum}
653   \end{equation}
654   so, the new position can be expressed as
655   \begin{equation}
656   q(t+\Delta t) \approx
657          2q(t) - q(t-\Delta t) + \frac{f(t)}{m}\Delta t^2.
658 < \label{eq:newPosition}
658 > \label{Ineq:newPosition}
659   \end{equation}
660 < The higher order of the $\Delta t$ is omitted.
660 > The higher order terms in $\Delta t$ are omitted.
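
A sketch of Eq.~\ref{Ineq:newPosition} in action (a one-dimensional
harmonic oscillator is used as the potential purely for illustration):
\begin{verbatim}
# Position Verlet: q(t+dt) ~= 2 q(t) - q(t-dt) + f(t)/m * dt^2
m, k, dt = 1.0, 1.0, 0.01

def force(q):            # f(q) = -dU/dq for U(q) = k q^2 / 2
    return -k * q

q_prev, q = 1.0, 1.0     # starting at rest: q(-dt) = q(0)
trajectory = [q]
for _ in range(10000):
    q_next = 2.0 * q - q_prev + force(q) / m * dt * dt
    q_prev, q = q, q_next
    trajectory.append(q)

# The oscillation stays bounded near the initial amplitude: Verlet is
# time-reversible and shows no long-term energy drift.
print(max(trajectory), min(trajectory))
\end{verbatim}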
661  
662 < Numerous technics and tricks are applied to Molecular Dynamics
663 < simulation to gain more efficiency or more accuracy. The simulation
664 < engine used in this dissertation for the Molecular Dynamics
665 < simulations is {\sc oopse}, more details of the algorithms and
666 < technics can be found in~\cite{Meineke2005}.
662 > Numerous techniques and tricks have been applied to Molecular Dynamics
663 > simulations to gain greater efficiency or accuracy. The engine used in
664 > this dissertation for the Molecular Dynamics simulations is {\sc
665 > oopse}. More details of the algorithms and techniques used in this
666 > code can be found in Ref.~\cite{Meineke2005}.
667  
668   \subsection{Langevin Dynamics\label{In:ssec:ld}}
669   In many cases, the properties of the solvent in a system, like the
670 < lipid-water system studied in this dissertation, are less important to
671 < the researchers. However, the major computational expense is spent on
672 < the solvent in the Molecular Dynamics simulations because of the large
673 < number of the solvent molecules compared to that of solute molecules,
674 < the ratio of the number of lipid molecules to the number of water
670 > water in the lipid-water system studied in this dissertation, are less
671 > interesting to the researchers than the behavior of the
672 > solute. However, the major computational expense is often the
673 > solvent-solvent interaction; this is due to the large number of
674 > solvent molecules when compared to the number of solute molecules. The
675 > ratio of the number of lipid molecules to the number of water
676   molecules is $1:25$ in our lipid-water system. The efficiency of the
677 < Molecular Dynamics simulations is greatly reduced.
677 > Molecular Dynamics simulations is greatly reduced, with up to 85\% of
678 > CPU time spent calculating only water-water interactions.
679  
680 < As an extension of the Molecular Dynamics simulations, the Langevin
681 < Dynamics seeks a way to avoid integrating equation of motion for
682 < solvent particles without losing the Brownian properites of solute
683 < particles. A common approximation is that the coupling of the solute
684 < and solvent is expressed as a set of harmonic oscillators. So the
685 < Hamiltonian of such a system is written as
680 > As an extension of the Molecular Dynamics methodologies, Langevin
681 > Dynamics seeks a way to avoid integrating the equations of motion for
682 > solvent particles without losing the solvent effects on the solute
683 > particles. One common approximation is to express the coupling of the
684 > solute and solvent as a set of harmonic oscillators. The Hamiltonian
685 > of such a system is written as
686   \begin{equation}
687   H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
688 < \label{eq:hamiltonianofCoupling}
688 > \label{Ineq:hamiltonianofCoupling}
689   \end{equation}
690 < where $H_B$ is the Hamiltonian of the bath which equals to
690 > where $H_B$ is the Hamiltonian of the bath, which is a set of $N$
691 > harmonic oscillators,
692   \begin{equation}
693   H_B = \sum_{\alpha = 1}^{N} \left\{ \frac{p_\alpha^2}{2m_\alpha} +
694   \frac{1}{2} m_\alpha \omega_\alpha^2 q_\alpha^2\right\},
695 < \label{eq:hamiltonianofBath}
695 > \label{Ineq:hamiltonianofBath}
696   \end{equation}
697 < $\alpha$ is all the degree of freedoms of the bath, $\omega$ is the
698 < bath frequency, and $\Delta U(q)$ is the bilinear coupling given by
697 > $\alpha$ runs over all the degrees of freedom of the bath,
698 > $\omega_\alpha$ is the bath frequency of oscillator $\alpha$, and
699 > $\Delta U(q)$ is the bilinear coupling given by
700   \begin{equation}
701   \Delta U = -\sum_{\alpha = 1}^{N} g_\alpha q_\alpha q,
702 < \label{eq:systemBathCoupling}
702 > \label{Ineq:systemBathCoupling}
703   \end{equation}
704 < where $g$ is the coupling constant. By solving the Hamilton's equation
705 < of motion, the {\it Generalized Langevin Equation} for this system is
706 < derived to
704 > where $g_\alpha$ is the coupling constant for oscillator $\alpha$. By
705 > solving Hamilton's equations of motion, the {\it Generalized
706 > Langevin Equation} for this system is derived as
707   \begin{equation}
708   m \ddot q = -\frac{\partial W(q)}{\partial q} - \int_0^t \xi(t) \dot q(t-t')dt' + R(t),
709 < \label{eq:gle}
709 > \label{Ineq:gle}
710   \end{equation}
711   with mean force,
712   \begin{equation}
713   W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m_\alpha
714 < \omega_\alpha^2}q^2,
715 < \label{eq:meanForce}
714 > \omega_\alpha^2}q^2.
715 > \label{Ineq:meanForce}
716   \end{equation}
717 < being only a dependence of coordinates of the solute particles, {\it
718 < friction kernel},
717 > The {\it friction kernel}, $\xi(t)$, depends only on the coordinates
718 > of the solute particles,
719   \begin{equation}
720   \xi(t) = \sum_{\alpha = 1}^N \frac{g_\alpha^2}{m_\alpha
721   \omega_\alpha^2} \cos(\omega_\alpha t),
722 < \label{eq:xiforGLE}
722 > \label{Ineq:xiforGLE}
723   \end{equation}
724 < and the random force,
724 > and a ``random'' force,
725   \begin{equation}
726   R(t) = \sum_{\alpha = 1}^N g_\alpha \left[ \left( q_\alpha(0)-\frac{g_\alpha}{m_\alpha
727   \omega_\alpha^2}q(0)\right) \cos(\omega_\alpha t) + \frac{\dot
728   q_\alpha(0)}{\omega_\alpha} \sin(\omega_\alpha t) \right],
729 < \label{eq:randomForceforGLE}
729 > \label{Ineq:randomForceforGLE}
730   \end{equation}
731 < as only a dependence of the initial conditions. The relationship of
732 < friction kernel $\xi(t)$ and random force $R(t)$ is given by
731 > depends only on the initial conditions. The relationship of friction
732 > kernel $\xi(t)$ and random force $R(t)$ is given by the second
733 > fluctuation dissipation theorem,
734   \begin{equation}
735 < \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle
736 < \label{eq:relationshipofXiandR}
735 > \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle.
736 > \label{Ineq:relationshipofXiandR}
737   \end{equation}
738 < from their definitions. In Langevin limit, the friction is treated
739 < static, which means
738 > In the harmonic bath this relation is exact and provable from the
739 > definitions of these quantities. In the limit of static friction,
740   \begin{equation}
741   \xi(t) = 2 \xi_0 \delta(t).
742 < \label{eq:xiofStaticFriction}
742 > \label{Ineq:xiofStaticFriction}
743   \end{equation}
744 < After substitude $\xi(t)$ into Eq.~\ref{eq:gle} with
745 < Eq.~\ref{eq:xiofStaticFriction}, {\it Langevin Equation} is extracted
746 < to
744 > After substituting Eq.~\ref{Ineq:xiofStaticFriction} for $\xi(t)$ in
745 > Eq.~\ref{Ineq:gle}, the {\it Langevin Equation} is
746 > obtained,
747   \begin{equation}
748   m \ddot q = -\frac{\partial U(q)}{\partial q} - \xi \dot q(t) + R(t).
749 < \label{eq:langevinEquation}
749 > \label{Ineq:langevinEquation}
750   \end{equation}
751 < The applying of Langevin Equation to dynamic simulations is discussed
752 < in Ch.~\ref{chap:ld}.
751 > Application of the Langevin Equation to dynamic simulations is
752 > discussed in Ch.~\ref{chap:ld}.
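
One common way to integrate Eq.~\ref{Ineq:langevinEquation}
numerically is sketched below (a simple Euler-style update with a
Gaussian random force; the algorithm actually used in this work is
developed in Ch.~\ref{chap:ld}):
\begin{verbatim}
import math, random

m, xi, kT, dt = 1.0, 1.0, 1.0, 0.01  # mass, friction, temperature, step

def force(q):                        # systematic force for U(q) = q^2/2
    return -q

q, v, v2_sum, n = 1.0, 0.0, 0.0, 200000
for _ in range(n):
    # Discretized random force with <R(t)R(t')> = 2 xi kT delta(t - t').
    R = math.sqrt(2.0 * xi * kT / dt) * random.gauss(0.0, 1.0)
    v += (force(q) - xi * v + R) / m * dt  # m dv/dt = -dU/dq - xi v + R
    q += v * dt
    v2_sum += v * v

# Equipartition check: <m v^2 / 2> should be close to kT / 2.
print(0.5 * m * v2_sum / n)
\end{verbatim}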
