
Comparing trunk/xDissertation/Introduction.tex (file contents):
Revision 3365 by xsun, Mon Mar 10 21:52:50 2008 UTC vs.
Revision 3383 by xsun, Wed Apr 16 21:56:34 2008 UTC

# Line 3 | Line 3 | because of their critical role as the foundation of bi
3   \section{Background on the Problem\label{In:sec:pro}}
4   Phospholipid molecules are the primary topic of this dissertation
5   because of their critical role as the foundation of biological
6 < membranes. Lipids, when dispersed in water, self assemble into bilayer
7 < structures. The phase behavior of lipid bilayers have been explored
8 < experimentally~\cite{Cevc87}, however, a complete understanding of the
9 < mechanism for self-assembly has not been achieved.
6 > membranes. The chemical structure of phospholipids includes a head
7 > group with a large dipole moment, due to the large charge separation
8 > between the phosphate and the amino alcohol, and a nonpolar tail that
9 > contains fatty acid chains. Depending on the specific alcohol to
10 > which the phosphate and fatty acid chains are esterified,
11 > phospholipids are divided into glycerophospholipids and
12 > sphingophospholipids.~\cite{Cevc80} The chemical structures are shown
13 > in figure~\ref{Infig:lipid}.
14 > \begin{figure}
15 > \centering
16 > \includegraphics[width=\linewidth]{./figures/inLipid.pdf}
17 > \caption[The chemical structure of lipids]{The chemical structure of
18 > glycerophospholipids (left) and sphingophospholipids
19 > (right).\cite{Cevc80}}
20 > \label{Infig:lipid}
21 > \end{figure}
22 > Glycerophospholipids are the dominant phospholipids in biological
23 > membranes. The type of glycerophospholipid depends on the identity of
24 > the X group and the chains. For example, if X is choline
25 > [(CH$_3$)$_3$N$^+$CH$_2$CH$_2$OH], the lipids are known as
26 > phosphatidylcholine (PC), and if X is ethanolamine
27 > [H$_3$N$^+$CH$_2$CH$_2$OH], the lipids are known as
28 > phosphatidylethanolamine (PE). Table~\ref{Intab:pc} lists several
29 > types of phosphatidylcholine with different fatty acids as the lipid
30 > chains.
31 > \begin{table*}
32 > \begin{minipage}{\linewidth}
33 > \begin{center}
34 > \caption{SEVERAL TYPES OF PHOSPHATIDYLCHOLINE}
35 > \begin{tabular}{lll}
36 > \hline
37 >  & Fatty acid & Generic Name \\
38 > \hline
39 > \textcolor{red}{DMPC} & Myristic: CH$_3$(CH$_2$)$_{12}$COOH &
40 > \textcolor{red}{D}i\textcolor{red}{M}yristoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
41 > \textcolor{red}{DPPC} & Palmitic: CH$_3$(CH$_2$)$_{14}$COOH & \textcolor{red}{D}i\textcolor{red}{P}almitoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine
42 > \\
43 > \textcolor{red}{DSPC} & Stearic: CH$_3$(CH$_2$)$_{16}$COOH & \textcolor{red}{D}i\textcolor{red}{S}tearoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
44 > \end{tabular}
45 > \label{Intab:pc}
46 > \end{center}
47 > \end{minipage}
48 > \end{table*}
49 > When dispersed in water, lipids self assemble into a number of
50 > topologically distinct bilayer structures. The phase behavior of lipid
51 > bilayers has been explored experimentally~\cite{Cevc80}, however, a
52 > complete understanding of the mechanism and driving forces behind the
53 > various phases has not been achieved.
54  
55 < \subsection{Ripple Phase\label{In:ssec:ripple}}
55 > \subsection{The Ripple Phase\label{In:ssec:ripple}}
56   The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named from the
57   periodic buckling of the membrane, is an intermediate phase which is
58   developed either from heating the gel phase $L_{\beta'}$ or cooling
59 < the fluid phase $L_\alpha$. Although the ripple phase has been
60 < observed in several
61 < experiments~\cite{Sun96,Katsaras00,Copeland80,Meyer96,Kaasgaard03},
62 < the physical mechanism for the formation of the ripple phase has never been
63 < explained and the microscopic structure of the ripple phase has never
64 < been elucidated by experiments. Computational simulation is a perfect
65 < tool to study the microscopic properties for a system. However, the
66 < large length scale the ripple structure and the long time scale
67 < of the formation of the ripples are crucial obstacles to performing
68 < the actual work. The principal ideas explored in this dissertation are
69 < attempts to break the computational task up by
59 > the fluid phase $L_\alpha$. A sketch of the phases is shown in
60 > figure~\ref{Infig:phaseDiagram}.~\cite{Cevc80}
61 > \begin{figure}
62 > \centering
63 > \includegraphics[width=\linewidth]{./figures/inPhaseDiagram.pdf}
64 > \caption[Phases of PC lipid bilayers]{Phases of PC lipid
65 > bilayers. With increasing temperature, phosphatidylcholine (PC)
66 > bilayers can go through $L_{\beta'} \rightarrow P_{\beta'}$ (gel
67 > $\rightarrow$ ripple) and $P_{\beta'} \rightarrow L_\alpha$ (ripple
68 > $\rightarrow$ fluid) phase transitions.~\cite{Cevc80}}
69 > \label{Infig:phaseDiagram}
70 > \end{figure}
71 > Most structural information about the ripple phase has been obtained
72 > by X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
73 > electron microscopy (FFEM).~\cite{Copeland80,Meyer96} The X-ray
74 > diffraction work by Katsaras {\it et al.} showed that a rich phase
75 > diagram exhibiting both {\it asymmetric} and {\it symmetric} ripples
76 > is possible for lecithin bilayers.\cite{Katsaras00} Recently,
77 > Kaasgaard {\it et al.} used atomic force microscopy (AFM) to observe
78 > ripple phase morphology in bilayers supported on
79 > mica.~\cite{Kaasgaard03}
80 > \begin{figure}
81 > \centering
82 > \includegraphics[width=\linewidth]{./figures/inRipple.pdf}
83 > \caption[Experimental observations of the ripple phase]{Experimental
84 > observations of the ripple phase. The top image is an electron
85 > density map obtained by Sun {\it et al.} using X-ray
86 > diffraction~\cite{Sun96}.  The lower figures show the surface topology
87 > of various ripple domains in bilayers supported on mica. The AFM
88 > images were obtained by Kaasgaard {\it et al.}~\cite{Kaasgaard03}}
89 > \label{Infig:ripple}
90 > \end{figure}
91 > Figure~\ref{Infig:ripple} shows the ripple phase observed by both X-ray
92 > diffraction and AFM. The experimental results provide strong support
93 > for a 2-dimensional triangular packing lattice of the lipid molecules
94 > within the ripple phase.  This is a notable change from the observed
95 > lipid packing within the gel phase,~\cite{Cevc80} although Tenchov
96 > {\it et al.} have recently observed near-triangular packing in some
97 > phosphatidylcholine (PC) gel phases.~\cite{Tenchov2001} However, the
98 > physical mechanism for the formation of the ripple phase has never
99 > been explained and the microscopic structure of the ripple phase has
100 > never been elucidated by experiments. Computational simulation is a
101 > very good tool to study the microscopic properties of a
102 > system. However, the large length scale of the ripple structures and
103 > the long time required for the formation of the ripples are serious
104 > obstacles to performing such simulations. The principal ideas explored
105 > in this dissertation are attempts to break the computational task up
106 > by
107   \begin{itemize}
108   \item Simplifying the lipid model.
109 < \item Improving algorithm for integrating the equations of motion.
109 > \item Improving the algorithm for integrating the equations of motion.
110   \end{itemize}
111   In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
112 < dipole model and a coarse-grained model to perform the Monte Carlo and
113 < Molecular Dynamics simulations respectively, and in
114 < chapter~\ref{chap:ld}, we develop a Langevin Dynamics algorithm which
115 < excludes the explicit solvent to improve the efficiency of the
116 < simulations.
112 > dipole spin model and a coarse-grained molecular scale model to
113 > perform the Monte Carlo and Molecular Dynamics simulations
114 > respectively, and in chapter~\ref{chap:ld}, we develop a Langevin
115 > Dynamics algorithm which excludes the explicit solvent to improve the
116 > efficiency of the simulations.
117  
118 < \subsection{Lattice Model\label{In:ssec:model}}
119 < The gel-like characteristic of the ripple phase makes it feasible to
120 < apply a lattice model to study the system. It is claimed that the
121 < packing of the lipid molecules in ripple phase is
122 < hexagonal~\cite{Cevc87}. The popular $2$ dimensional lattice models,
123 < {\it i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it
43 < frustration} on triangular lattices. The Hamiltonian of the systems
44 < are given by
118 > \subsection{Lattice Models\label{In:ssec:model}}
119 > The gel-like characteristic (laterally immobile molecules) exhibited
120 > by the ripple phase makes it feasible to apply a lattice model to
121 > study the system. The popular two-dimensional lattice models, {\it
122 > i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it frustration}
123 > on triangular lattices. The Hamiltonians of these systems are given by
124   \begin{equation}
125   H =
126    \begin{cases}
# Line 51 | Line 130 | where $J$ has non zero value only when spins $s_n$ ($\
130    \end{cases}
131   \end{equation}
132   where $J$ has a non-zero value only when spins $s_n$ ($\vec s_n$) and
133 < $s_{n'}$ ($\vec s_{n'}$) are the nearest neighbours.
133 > $s_{n'}$ ($\vec s_{n'}$) are nearest neighbors. When $J > 0$, spins
134 > prefer an aligned structure, and if $J < 0$, spins prefer an
135 > anti-aligned structure.
136 >
137   \begin{figure}
138   \centering
139 < \includegraphics[width=\linewidth]{./figures/inFrustration.pdf}
140 < \caption{Frustration on a triangular lattice, the spins are
141 < represented by arrows. No matter which direction the spin on the top
142 < of triangle points to, the Hamiltonain of the system goes up.}
139 > \includegraphics[width=3in]{./figures/inFrustration.pdf}
140 > \caption[Frustration on a triangular lattice]{Frustration on a
141 > triangular lattice; the spins and dipoles are represented by
142 > arrows. The multiple local energy minima induce frustration for spins
143 > and dipoles, resulting in disordered low-temperature phases.}
144   \label{Infig:frustration}
145   \end{figure}
146 < Figure~\ref{Infig:frustration} shows an illustration of the
147 < frustration on a triangular lattice. When $J < 0$, the spins want to
148 < be anti-aligned, The direction of the spin on top of the triangle will
149 < make the energy go up no matter which direction it picks, therefore
150 < infinite possibilities for the packing of spins induce what is known
151 < as ``complete regular frustration'' which leads to disordered low
152 < temperature phases.
146 > The spins in figure~\ref{Infig:frustration} illustrate frustration for
147 > $J < 0$ on a triangular lattice. There are multiple local energy
148 > minima which are independent of the direction of the spin on top of
149 > the triangle, and therefore an infinite number of ways of orienting
150 > large numbers of spins. This induces what is known as ``complete
151 > regular frustration,'' which leads to disordered low-temperature
152 > phases. This behavior extends to dipoles on a triangular lattice,
153 > shown in the lower portion of figure~\ref{Infig:frustration}. In this
154 > case, dipole-aligned structures are energetically favorable; however,
155 > at low temperature, vortices easily form, resulting in multiple local
156 > energy minima for a central dipole. The dipole at the center of the
157 > hexagonal lattice is therefore frustrated.
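
The frustration can be verified numerically. The following short
Python sketch (an illustration only, not one of the simulation codes
used in this dissertation; the sign convention $H = -J \sum s_n s_{n'}$
is assumed) enumerates all eight states of a single antiferromagnetic
triangular plaquette:
\begin{verbatim}
from itertools import product

J = -1.0                          # antiferromagnetic coupling (J < 0)
bonds = [(0, 1), (1, 2), (2, 0)]  # nearest-neighbor pairs on one triangle

def energy(spins):
    # H = -J * sum over bonds of s_i * s_j (the Hamiltonian quoted above)
    return -J * sum(spins[i] * spins[j] for i, j in bonds)

states = list(product([-1, 1], repeat=3))
e_min = min(energy(s) for s in states)
ground = [s for s in states if energy(s) == e_min]

print("ground-state energy:", e_min)  # one bond is always left aligned
print("degeneracy:", len(ground))     # 6 of the 8 states are ground states
\end{verbatim}
Six of the eight states share the minimum energy, and every one of
them leaves one bond aligned; no arrangement of the three spins can
satisfy all three bonds simultaneously.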
158  
159 < The lack of translational degree of freedom in lattice models prevents
160 < their utilization in models for surface buckling which would
161 < correspond to ripple formation. In chapter~\ref{chap:mc}, a modified
162 < lattice model is introduced to tackle this specific situation.
159 > The lack of translational degrees of freedom in lattice models
160 > prevents their utilization in models for surface buckling. In
161 > chapter~\ref{chap:mc}, a modified lattice model is introduced to
162 > tackle this specific situation.
163  
164   \section{Overview of Classical Statistical Mechanics\label{In:sec:SM}}
165   Statistical mechanics provides a way to calculate the macroscopic
# Line 79 | Line 167 | this dissertation. Tolman gives an excellent introduct
167   computational simulations. This section serves as a brief introduction
168   to key concepts of classical statistical mechanics that we used in
169   this dissertation. Tolman gives an excellent introduction to the
170 < principles of statistical mechanics~\cite{Tolman1979}. A large part of
171 < section~\ref{In:sec:SM} will follow Tolman's notation.
170 > principles of statistical mechanics.~\cite{Tolman1979} A large part of
171 > section~\ref{In:sec:SM} follows Tolman's notation.
172  
173   \subsection{Ensembles\label{In:ssec:ensemble}}
174   In classical mechanics, the state of the system is completely
175   described by the positions and momenta of all particles. If we have an
176 < $N$ particle system, there are $6N$ coordinates ($3N$ position $(q_1,
176 > $N$ particle system, there are $6N$ coordinates ($3N$ positions $(q_1,
177   q_2, \ldots, q_{3N})$ and $3N$ momenta $(p_1, p_2, \ldots, p_{3N})$)
178   to define the instantaneous state of the system. Each single set of
179   the $6N$ coordinates can be considered as a unique point in a $6N$
# Line 110 | Line 198 | then the value of $\rho$ gives the probability of find
198   \label{Ineq:normalized}
199   \end{equation}
200   then the value of $\rho$ gives the probability of finding the system
201 < in a unit volume in the phase space.
201 > in a unit volume in phase space.
202  
203   Liouville's theorem describes the change in density $\rho$ with
204 < time. The number of representive points at a given volume in the phase
204 > time. The number of representative points at a given volume in phase
205   space at any instant can be written as:
206   \begin{equation}
207   \label{Ineq:deltaN}
208   \delta N = \rho~\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
209   \end{equation}
210 < To calculate the change in the number of representive points in this
210 > To calculate the change in the number of representative points in this
211   volume, let us consider a simple condition: the change in the number
212 < of representive points in $q_1$ axis. The rate of the number of the
213 < representive points entering the volume at $q_1$ per unit time is:
212 > of representative points along the $q_1$ axis. The rate of the number
213 > of the representative points entering the volume at $q_1$ per unit time
214 > is:
215   \begin{equation}
216   \label{Ineq:deltaNatq1}
217   \rho~\dot q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
218   \end{equation}
219 < and the rate of the number of representive points leaving the volume
219 > and the rate of the number of representative points leaving the volume
220   at another position $q_1 + \delta q_1$ is:
221   \begin{equation}
222   \label{Ineq:deltaNatq1plusdeltaq1}
223   \left( \rho + \frac{\partial \rho}{\partial q_1} \delta q_1 \right)\left(\dot q_1 +
224   \frac{\partial \dot q_1}{\partial q_1} \delta q_1 \right)\delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
225   \end{equation}
226 < Here the higher order differentials are neglected. So the change of
227 < the number of the representive points is the difference of
228 < eq.~\ref{Ineq:deltaNatq1} and eq.~\ref{Ineq:deltaNatq1plusdeltaq1}, which
229 < gives us:
226 > Here the higher order differentials are neglected. So the change in
227 > the number of representative points is the difference between
228 > eq.~\ref{Ineq:deltaNatq1} and eq.~\ref{Ineq:deltaNatq1plusdeltaq1},
229 > which gives us:
230   \begin{equation}
231   \label{Ineq:deltaNatq1axis}
232   -\left(\rho \frac{\partial {\dot q_1}}{\partial q_1} + \frac{\partial {\rho}}{\partial q_1} \dot q_1 \right)\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
233   \end{equation}
234   where higher order differentials are again neglected. If we sum over all the
235 < axes in the phase space, we can get the change of the number of
236 < representive points in a given volume with time:
235 > axes in the phase space, we can get the change in the number of
236 > representative points in a given volume with time:
237   \begin{equation}
238   \label{Ineq:deltaNatGivenVolume}
239   \frac{d(\delta N)}{dt} = -\sum_{i=1}^N \left[\rho \left(\frac{\partial
# Line 152 | Line 241 | q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \
241   {\dot p_i}}{\partial p_i}\right) + \left( \frac{\partial {\rho}}{\partial
242   q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i\right)\right]\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
243   \end{equation}
244 < From Hamilton's equation of motion,
244 > From Hamilton's equations of motion,
245   \begin{equation}
246   \frac{\partial {\dot q_i}}{\partial q_i} = - \frac{\partial
247   {\dot p_i}}{\partial p_i},
# Line 162 | Line 251 | eq.~\ref{Ineq:deltaNatGivenVolume} are divided by $\de
251   eq.~\ref{Ineq:deltaNatGivenVolume}. If both sides of
252   eq.~\ref{Ineq:deltaNatGivenVolume} are divided by $\delta q_1 \delta q_2
253   \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N$, then we
254 < can derive Liouville's theorem:
254 > arrive at Liouville's theorem:
255   \begin{equation}
256   \left( \frac{\partial \rho}{\partial t} \right)_{q, p} = -\sum_{i} \left(
257   \frac{\partial {\rho}}{\partial
# Line 181 | Line 270 | equation~\ref{Ineq:anotherFormofLiouville} is the tota
270   \end{equation}
271   It is easy to note that the left side of
272   equation~\ref{Ineq:anotherFormofLiouville} is the total derivative of
273 < $\rho$ with respect of $t$, which means
273 > $\rho$ with respect to $t$, which means
274   \begin{equation}
275   \frac{d \rho}{dt} = 0,
276   \label{Ineq:conservationofRho}
277   \end{equation}
278   and the rate of density change is zero in the neighborhood of any
279 < selected moving representive points in the phase space.
279 > selected moving representative points in the phase space.
280  
281 < The condition of the ensemble is determined by the density
281 > The type of thermodynamic ensemble is determined by the density
282   distribution. If we consider the density distribution as only a
283   function of $q$ and $p$, which means the rate of change of the phase
284 < space density in the neighborhood of all representive points in the
284 > space density in the neighborhood of all representative points in the
285   phase space is zero,
286   \begin{equation}
287   \left( \frac{\partial \rho}{\partial t} \right)_{q, p} = 0.
288   \label{Ineq:statEquilibrium}
289   \end{equation}
290   We may conclude that the ensemble is in {\it statistical equilibrium}. An
291 < ensemble in statistical equilibrium often means the system is also in
291 > ensemble in statistical equilibrium means the system is also in
292   macroscopic equilibrium. If $\left( \frac{\partial \rho}{\partial t}
293   \right)_{q, p} = 0$, then
294   \begin{equation}
# Line 211 | Line 300 | independent of time. For a conservative system, the en
300   \end{equation}
301   If $\rho$ is a function only of some constant of the motion, $\rho$ is
302   independent of time. For a conservative system, the energy of the
303 < system is one of the constants of the motion. Here are several
304 < examples: when the density distribution is constant everywhere in the
305 < phase space,
303 > system is one of the constants of the motion. There are many
304 > thermodynamically relevant ensembles: when the density distribution is
305 > constant everywhere in the phase space,
306   \begin{equation}
307   \rho = \mathrm{const.}
308   \label{Ineq:uniformEnsemble}
309   \end{equation}
310 < the ensemble is called {\it uniform ensemble}.  Another useful
311 < ensemble is called {\it microcanonical ensemble}, for which:
310 > the ensemble is called the {\it uniform ensemble}, but this ensemble
311 > has little relevance for physical chemistry. It is an ensemble with
312 > essentially infinite temperature.
313 >
314 > \subsubsection{The Microcanonical Ensemble\label{In:sssec:microcanonical}}
315 > The most useful ensemble for Molecular Dynamics is the {\it
316 > microcanonical ensemble}, for which:
317   \begin{equation}
318   \rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)}
319   \label{Ineq:microcanonicalEnsemble}
# Line 235 | Line 329 | where $k_B$ is the Boltzmann constant and $C^N$ is a n
329   \label{Ineq:gibbsEntropy}
330   \end{equation}
331   where $k_B$ is the Boltzmann constant and $C^N$ is a number which
332 < makes the argument of $\ln$ dimensionless, in this case, it is the
333 < total phase space volume of one state. The entropy in microcanonical
334 < ensemble is given by
332 > makes the argument of $\ln$ dimensionless. In this case, $C^N$ is the
333 > total phase space volume of one state which has the same units as
334 > $\Sigma(N, V, E)$. The entropy of a microcanonical ensemble is given
335 > by
336   \begin{equation}
337   S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
338   \label{Ineq:entropy}
339   \end{equation}
340 +
341 + \subsubsection{The Canonical Ensemble\label{In:sssec:canonical}}
342   If the density distribution $\rho$ is given by
343   \begin{equation}
344   \rho = \frac{1}{Z_N}e^{-H(q^N, p^N) / k_B T},
# Line 252 | Line 349 | Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N  e^{-H(
349   Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N  e^{-H(q^N, p^N) / k_B T},
350   \label{Ineq:partitionFunction}
351   \end{equation}
352 < which is also known as {\it partition function}. $\Gamma$ indicates
353 < that the integral is over all the phase space. In the canonical
354 < ensemble, $N$, the total number of particles, $V$, total volume, and
355 < $T$, the temperature are constants. The systems with the lowest
356 < energies hold the largest population. According to maximum principle,
357 < the thermodynamics maximizes the entropy $S$,
352 > which is also known as the canonical {\it partition
353 > function}. $\Gamma$ indicates that the integral is over all phase
354 > space. In the canonical ensemble, $N$, the total number of particles,
355 > $V$, total volume, and $T$, the temperature, are constants. The
356 > systems with the lowest energies hold the largest
357 > population. Thermodynamics maximizes the entropy, $S$, implying that
358   \begin{equation}
359   \begin{array}{ccc}
360   \delta S = 0 & \mathrm{and} & \delta^2 S < 0.
361   \end{array}
362   \label{Ineq:maximumPrinciple}
363   \end{equation}
364 < From Eq.~\ref{Ineq:maximumPrinciple} and two constrains of the canonical
365 < ensemble, {\it i.e.}, total probability and average energy conserved,
366 < the partition function is calculated as
364 > From Eq.~\ref{Ineq:maximumPrinciple} and two constraints on the
365 > canonical ensemble, {\it i.e.}, that total probability and average
366 > energy are conserved, the partition function can be shown to be
367 > equivalent to
368   \begin{equation}
369   Z_N = e^{-A/k_B T},
370   \label{Ineq:partitionFunctionWithFreeEnergy}
# Line 274 | Line 372 | that they serve as a connection between macroscopic pr
372   where $A$ is the Helmholtz free energy. The significance of
373   Eq.~\ref{Ineq:entropy} and~\ref{Ineq:partitionFunctionWithFreeEnergy} is
374   that they serve as a connection between macroscopic properties of the
375 < system and the distribution of the microscopic states.
375 > system and the distribution of microscopic states.
376  
377   There is an implicit assumption on which our arguments have been based so
378 < far. A representive point in the phase space is equally to be found in
379 < any same extent of the regions. In other words, all energetically
380 < accessible states are represented equally, the probabilities to find
381 < the system in any of the accessible states is equal. This is called
382 < equal a {\it priori} probabilities.
378 > far. Two representative points in phase space are equally likely to be
379 > found if they have the same energy. In other words, all energetically
380 > accessible states are represented, and the probability of finding the
381 > system in any of the accessible states is equal to that state's
382 > Boltzmann weight. This is called the principle of equal {\it a priori}
383 > probabilities.
384  
385 < \subsection{Statistical Average\label{In:ssec:average}}
386 < Given the density distribution $\rho$ in the phase space, the average
385 > \subsection{Statistical Averages\label{In:ssec:average}}
386 > Given a density distribution $\rho$ in phase space, the average
387   of any quantity ($F(q^N, p^N$)) which depends on the coordinates
388   ($q^N$) and the momenta ($p^N$) for all the systems in the ensemble
389   can be calculated based on the definition shown by
390   Eq.~\ref{Ineq:statAverage1}
391   \begin{equation}
392 < \langle F(q^N, p^N, t) \rangle = \frac{\int d \vec q~^N \int d \vec p~^N
393 < F(q^N, p^N, t) \rho}{\int d \vec q~^N \int d \vec p~^N \rho}.
392 > \langle F(q^N, p^N) \rangle = \frac{\int d \vec q~^N \int d \vec p~^N
393 > F(q^N, p^N) \rho}{\int d \vec q~^N \int d \vec p~^N \rho}.
394   \label{Ineq:statAverage1}
395   \end{equation}
396   Since the density distribution $\rho$ is normalized to unity, the mean
397   value of $F(q^N, p^N)$ is simplified to
398   \begin{equation}
399 < \langle F(q^N, p^N, t) \rangle = \int d \vec q~^N \int d \vec p~^N F(q^N,
400 < p^N, t) \rho,
399 > \langle F(q^N, p^N) \rangle = \int d \vec q~^N \int d \vec p~^N F(q^N,
400 > p^N) \rho,
401   \label{Ineq:statAverage2}
402   \end{equation}
403 < called {\it ensemble average}. However, the quantity is often averaged
404 < for a finite time in real experiments,
403 > called the {\it ensemble average}. However, the quantity is often
404 > averaged for a finite time in real experiments,
405   \begin{equation}
406   \langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
407 < \frac{1}{T} \int_{t_0}^{t_0+T} F(q^N, p^N, t) dt.
407 > \frac{1}{T} \int_{t_0}^{t_0+T} F[q^N(t), p^N(t)] dt.
408   \label{Ineq:timeAverage1}
409   \end{equation}
410   Usually this time average is independent of $t_0$ in statistical
411   mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes
412   \begin{equation}
413   \langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
414 < \frac{1}{T} \int_{0}^{T} F(q^N, p^N, t) dt
414 > \frac{1}{T} \int_{0}^{T} F[q^N(t), p^N(t)] dt
415   \label{Ineq:timeAverage2}
416   \end{equation}
417 < for an infinite time interval.
417 > for a finite time interval, $T$.
418  
419 < {\it ergodic hypothesis}, an important hypothesis from the statistical
420 < mechanics point of view, states that the system will eventually pass
419 > \subsubsection{Ergodicity\label{In:sssec:ergodicity}}
420 > The {\it ergodic hypothesis}, an important hypothesis governing modern
421 > computer simulations, states that the system will eventually pass
422   arbitrarily close to any point that is energetically accessible in
423   phase space. Mathematically, this leads to
424   \begin{equation}
425 < \langle F(q^N, p^N, t) \rangle = \langle F(q^N, p^N) \rangle_t.
425 > \langle F(q^N, p^N) \rangle = \langle F(q^N, p^N) \rangle_t.
426   \label{Ineq:ergodicity}
427   \end{equation}
428 < Eq.~\ref{Ineq:ergodicity} validates the Monte Carlo method which we will
429 < discuss in section~\ref{In:ssec:mc}. An ensemble average of a quantity
430 < can be related to the time average measured in the experiments.
428 > Eq.~\ref{Ineq:ergodicity} validates Molecular Dynamics as a form of
429 > averaging for sufficiently ergodic systems. Monte Carlo may also be
430 > used to obtain ensemble averages of a quantity, which are related to
431 > the time averages measured in experiments.
432  
433 < \subsection{Correlation Function\label{In:ssec:corr}}
434 < Thermodynamic properties can be computed by equillibrium statistical
435 < mechanics. On the other hand, {\it Time correlation function} is a
436 < powerful method to understand the evolution of a dynamic system in
437 < non-equillibrium statistical mechanics. Imagine a property $A(q^N,
438 < p^N, t)$ as a function of coordinates $q^N$ and momenta $p^N$ has an
439 < intial value at $t_0$, at a later time $t_0 + \tau$ this value is
440 < changed. If $\tau$ is very small, the change of the value is minor,
441 < and the later value of $A(q^N, p^N, t_0 +
442 < \tau)$ is correlated to its initial value. Howere, when $\tau$ is large,
443 < this correlation is lost. The correlation function is a measurement of
444 < this relationship and is defined by~\cite{Berne90}
433 > \subsection{Correlation Functions\label{In:ssec:corr}}
434 > Thermodynamic properties can be computed by equilibrium statistical
435 > mechanics. On the other hand, {\it time correlation functions} are a
436 > powerful tool for understanding the evolution of a dynamical
437 > system. Imagine that a property $A(q^N, p^N, t)$, a function of
438 > coordinates $q^N$ and momenta $p^N$, has an initial value at $t_0$, and
439 > at a later time $t_0 + \tau$ this value has changed. If $\tau$ is very
440 > small, the change in the value is minor, and the later value of
441 > $A(q^N, p^N, t_0 + \tau)$ is correlated to its initial value. However,
442 > when $\tau$ is large, this correlation is lost. A time correlation
443 > function measures this relationship and is defined
444 > by~\cite{Berne90}
445   \begin{equation}
446 < C(t) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarrow \infty}
446 > C_{AA}(\tau) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarrow
447 > \infty}
448   \frac{1}{T} \int_{0}^{T} dt A(t) A(t + \tau).
449   \label{Ineq:autocorrelationFunction}
450   \end{equation}
451 < Eq.~\ref{Ineq:autocorrelationFunction} is the correlation function of a
452 < single variable, called {\it autocorrelation function}. The defination
453 < of the correlation function for two different variables is similar to
454 < that of autocorrelation function, which is
451 > Eq.~\ref{Ineq:autocorrelationFunction} is the correlation function of
452 > a single variable, called an {\it autocorrelation function}. The
453 > definition of the correlation function for two different variables is
454 > similar to that of the autocorrelation function, which is
455   \begin{equation}
456 < C(t) = \langle A(0)B(\tau) \rangle = \lim_{T \rightarrow \infty}
456 > C_{AB}(\tau) = \langle A(0)B(\tau) \rangle = \lim_{T \rightarrow \infty}
457   \frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
458   \label{Ineq:crosscorrelationFunction}
459   \end{equation}
460 < and called {\it cross correlation function}.
460 > and is called a {\it cross correlation function}.
461  
462 < In section~\ref{In:ssec:average} we know from Eq.~\ref{Ineq:ergodicity}
463 < the relationship between time average and ensemble average. We can put
464 < the correlation function in a classical mechanics form,
462 > We know from the ergodic hypothesis that there is a relationship
463 > between time average and ensemble average. We can put the correlation
464 > function in a classical mechanical form,
465   \begin{equation}
466 < C(t) = \langle A(0)A(\tau) \rangle = \int d \vec q~^N \int d \vec p~^N A(t) A(t + \tau) \rho(q, p)
466 > C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N, p^N]
467 > A[q^N(\tau), p^N(\tau)] \rho(q^N, p^N)
468   \label{Ineq:autocorrelationFunctionCM}
469   \end{equation}
470 < and
470 > where $q^N(\tau)$, $p^N(\tau)$ is the phase space point that follows
471 > the classical evolution of the point $q^N$, $p^N$ after a time $\tau$ has
472 > elapsed, and
473   \begin{equation}
474 < C(t) = \langle A(0)B(\tau) \rangle = \int d \vec q~^N \int d \vec p~^N A(t) B(t + \tau)
475 < \rho(q, p)
474 > C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N, p^N]
475 > B[q^N(\tau), p^N(\tau)] \rho(q^N, p^N)
476   \label{Ineq:crosscorrelationFunctionCM}
477   \end{equation}
478 < as autocorrelation function and cross correlation function
478 > as the autocorrelation and cross correlation functions,
479   respectively. $\rho(q^N, p^N)$ is the equilibrium density distribution
480 < in phase space. In many cases, the correlation function decay is a
481 < single exponential
480 > in phase space. In many cases, correlation functions decay as a
481 > single exponential in time
482   \begin{equation}
483   C(t) \sim e^{-t / \tau_r},
484   \label{Ineq:relaxation}
485   \end{equation}
486 < where $\tau_r$ is known as relaxation time which discribes the rate of
486 > where $\tau_r$ is known as the relaxation time, which describes the
487   rate of decay.
488  
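As a concrete illustration of Eqs.~\ref{Ineq:autocorrelationFunction}
and~\ref{Ineq:relaxation}, the short Python sketch below (a toy
example with assumed parameters, not one of the analysis codes used in
this work) estimates a normalized autocorrelation function from a
discrete time series and extracts a relaxation time under the
single-exponential assumption:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Synthetic time series with a known relaxation time (an
# Ornstein-Uhlenbeck style update standing in for simulation output).
n, dt, tau_true = 20000, 0.01, 0.5
A = np.zeros(n)
for t in range(1, n):
    A[t] = A[t - 1] * (1.0 - dt / tau_true) \
           + np.sqrt(2.0 * dt / tau_true) * rng.normal()

def autocorrelation(a, max_lag):
    # C(k) = <A(0) A(k dt)>, normalized so that C(0) = 1
    a = a - a.mean()
    c = np.array([np.mean(a[:len(a) - k] * a[k:]) for k in range(max_lag)])
    return c / c[0]

C = autocorrelation(A, 200)
# For single-exponential decay, tau_r = -k dt / ln C(k) at any lag k.
print("estimated tau_r:", -50 * dt / np.log(C[50]))  # close to tau_true
\end{verbatim}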
489 < \section{Methodolody\label{In:sec:method}}
490 < The simulations performed in this dissertation are branched into two
491 < main catalog, Monte Carlo and Molecular Dynamics. There are two main
492 < difference between Monte Carlo and Molecular Dynamics simulations. One
493 < is that the Monte Carlo simulation is time independent, and Molecular
494 < Dynamics simulation is time involved. Another dissimilar is that the
495 < Monte Carlo is a stochastic process, the configuration of the system
496 < is not determinated by its past, however, using Moleuclar Dynamics,
497 < the system is propagated by Newton's equation of motion, the
498 < trajectory of the system evolved in the phase space is determined. A
499 < brief introduction of the two algorithms are given in
500 < section~\ref{In:ssec:mc} and~\ref{In:ssec:md}. An extension of the
501 < Molecular Dynamics, Langevin Dynamics, is introduced by
489 > \section{Methodology\label{In:sec:method}}
490 > The simulations performed in this dissertation branch into two main
491 > categories, Monte Carlo and Molecular Dynamics. There are two main
492 > differences between Monte Carlo and Molecular Dynamics
493 > simulations. One is that Monte Carlo simulations are time-independent
494 > methods of sampling structural features of an ensemble,
495 > while Molecular Dynamics simulations provide dynamic
496 > information. Additionally, Monte Carlo methods are stochastic
497 > processes; the future configurations of the system are not determined
498 > by its past. However, in Molecular Dynamics, the system is propagated
499 > by Hamilton's equations of motion, and the trajectory of the system
500 > evolving in phase space is deterministic. Brief introductions of the
501 > two algorithms are given in sections~\ref{In:ssec:mc}
502 > and~\ref{In:ssec:md}. Langevin Dynamics, an extension of Molecular
503 > Dynamics that includes implicit solvent effects, is introduced in
504   section~\ref{In:ssec:ld}.
505  
506   \subsection{Monte Carlo\label{In:ssec:mc}}
507 < Monte Carlo algorithm was first introduced by Metropolis {\it et
508 < al.}.~\cite{Metropolis53} Basic Monte Carlo algorithm is usually
509 < applied to the canonical ensemble, a Boltzmann-weighted ensemble, in
510 < which the $N$, the total number of particles, $V$, total volume, $T$,
511 < temperature are constants. The average energy is given by substituding
512 < Eq.~\ref{Ineq:canonicalEnsemble} into Eq.~\ref{Ineq:statAverage2},
507 > A Monte Carlo integration algorithm was first introduced by Metropolis
508 > {\it et al.}~\cite{Metropolis53} The basic Metropolis Monte Carlo
509 > algorithm is usually applied to the canonical ensemble, a
510 > Boltzmann-weighted ensemble, in which $N$, the total number of
511 > particles, $V$, the total volume, and $T$, the temperature, are
512 > constants. An average in this ensemble is given by
513   \begin{equation}
514 < \langle E \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N E e^{-H(q^N, p^N) / k_B T}.
514 > \langle A \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N
515 > A(q^N, p^N) e^{-H(q^N, p^N) / k_B T}.
516   \label{Ineq:energyofCanonicalEnsemble}
517   \end{equation}
518 < So are the other properties of the system. The Hamiltonian is the
519 < summation of Kinetic energy $K(p^N)$ as a function of momenta and
520 < Potential energy $U(q^N)$ as a function of positions,
518 > The Hamiltonian is the sum of the kinetic energy, $K(p^N)$, a function
519 > of momenta and the potential energy, $U(q^N)$, a function of
520 > positions,
521   \begin{equation}
522   H(q^N, p^N) = K(p^N) + U(q^N).
523   \label{Ineq:hamiltonian}
524   \end{equation}
525 < If the property $A$ is only a function of position ($ A = A(q^N)$),
526 < the mean value of $A$ is reduced to
525 > If the property $A$ is a function only of position ($ A = A(q^N)$),
526 > the mean value of $A$ can be reduced to
527   \begin{equation}
528 < \langle A \rangle = \frac{\int d \vec q~^N \int d \vec p~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N \int d \vec p~^N e^{-U(q^N) / k_B T}},
528 > \langle A \rangle = \frac{\int d \vec q~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}}.
529   \label{Ineq:configurationIntegral}
530   \end{equation}
531   The kinetic energy $K(p^N)$ is factored out in
532   Eq.~\ref{Ineq:configurationIntegral}. $\langle A
533 < \rangle$ is a configuration integral now, and the
533 > \rangle$ is now a configuration integral, and
534   Eq.~\ref{Ineq:configurationIntegral} is equivalent to
535   \begin{equation}
536 < \langle A \rangle = \int d \vec q~^N A \rho(q^N).
536 > \langle A \rangle = \int d \vec q~^N A \rho(q^N),
537   \label{Ineq:configurationAve}
538   \end{equation}
539 + where $\rho(q^N)$ is a configurational probability
540 + \begin{equation}
541 + \rho(q^N) = \frac{e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}}.
542 + \label{Ineq:configurationProb}
543 + \end{equation}
544  
545 < In a Monte Carlo simulation of canonical ensemble, the probability of
546 < the system being in a state $s$ is $\rho_s$, the change of this
547 < probability with time is given by
545 > In a Monte Carlo simulation of a system in the canonical ensemble, the
546 > probability of the system being in a state $s$ is $\rho_s$; the change
547 > of this probability with time is given by
548   \begin{equation}
549   \frac{d \rho_s}{dt} = \sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ],
550   \label{Ineq:timeChangeofProb}
# Line 442 | Line 555 | to state $s'$. Since $\rho_s$ is independent of time a
555   \frac{d \rho_{s}^{equilibrium}}{dt} = 0,
556   \label{Ineq:equiProb}
557   \end{equation}
558 < which means $\sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ]$ is $0$
559 < for all $s'$. So
558 > the sum $\sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ]$ must vanish. A
559 > sufficient condition (detailed balance) is that each term vanishes, so
560   \begin{equation}
561   \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
562   \label{Ineq:relationshipofRhoandW}
563   \end{equation}
564 < If
564 > If the ratio of state populations
565   \begin{equation}
453 \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T},
454 \label{Ineq:conditionforBoltzmannStatistics}
455 \end{equation}
456 then
457 \begin{equation}
566   \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T}.
567   \label{Ineq:satisfyofBoltzmannStatistics}
568   \end{equation}
569 < Eq.~\ref{Ineq:satisfyofBoltzmannStatistics} implies that
570 < $\rho^{equilibrium}$ satisfies Boltzmann statistics. An algorithm,
571 < shows how Monte Carlo simulation generates a transition probability
572 < governed by \ref{Ineq:conditionforBoltzmannStatistics}, is schemed as
569 > then the ratio of transition probabilities must satisfy
570 > \begin{equation}
571 > \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T}.
572 > \label{Ineq:conditionforBoltzmannStatistics}
573 > \end{equation}
574 > An algorithm showing how a Monte Carlo simulation generates transition
575 > probabilities governed by Eq.~\ref{Ineq:conditionforBoltzmannStatistics}
576 > is given schematically as follows (a code sketch appears after the list):
577   \begin{enumerate}
578 < \item\label{Initm:oldEnergy} Choose an particle randomly, calculate the energy.
579 < \item\label{Initm:newEnergy} Make a random displacement for particle,
580 < calculate the new energy.
578 > \item\label{Initm:oldEnergy} Choose a particle randomly, and calculate
579 > the energy of interaction between the particle at its current location
580 > and the rest of the system.
581 > \item\label{Initm:newEnergy} Make a random displacement of the particle,
582 > and calculate the new energy, taking the new location into account.
583    \begin{itemize}
584 <     \item Keep the new configuration and return to step~\ref{Initm:oldEnergy} if energy
585 < goes down.
472 <     \item Pick a random number between $[0,1]$ if energy goes up.
584 >     \item If the energy goes down, keep the new configuration.
585 >     \item If the energy goes up, pick a random number between $[0,1]$.
586          \begin{itemize}
587 <           \item Keep the new configuration and return to
588 < step~\ref{Initm:oldEnergy} if the random number smaller than
589 < $e^{-(U_{new} - U_{old})} / k_B T$.
590 <           \item Keep the old configuration and return to
478 < step~\ref{Initm:oldEnergy} if the random number larger than
479 < $e^{-(U_{new} - U_{old})} / k_B T$.
587 >           \item If the random number is smaller than
588 > $e^{-(U_{new} - U_{old}) / k_B T}$, keep the new configuration.
589 >           \item If the random number is larger than
590 > $e^{-(U_{new} - U_{old}) / k_B T}$, keep the old configuration.
591          \end{itemize}
592    \end{itemize}
593 < \item\label{Initm:accumulateAvg} Accumulate the average after it converges.
593 > \item\label{Initm:accumulateAvg} Accumulate the averages based on the
594 > current configuration.
595 > \item Go to step~\ref{Initm:oldEnergy}.
596   \end{enumerate}
597 < It is important to notice that the old configuration has to be sampled
598 < again if it is kept.
597 > It is important for sampling accuracy that the old configuration is
598 > sampled again if it is kept.
599  
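A minimal Python sketch of the scheme above, applied to a toy system
of independent one-dimensional harmonic oscillators (illustrative
reduced units with $k_B = 1$; this is not one of the simulation codes
used in this dissertation):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Toy system: N independent particles in 1-D harmonic wells,
# U = sum_i x_i^2 / 2, in reduced units.
N, kT, delta, nsteps = 100, 0.5, 1.0, 200000
x = np.zeros(N)
acc = 0
avg = []

for step in range(nsteps):
    i = rng.integers(N)                          # step 1: pick a particle
    x_new = x[i] + delta * (rng.random() - 0.5)  # step 2: random move
    dU = 0.5 * x_new**2 - 0.5 * x[i]**2
    # Metropolis criterion: always accept downhill moves, accept uphill
    # moves with probability exp(-dU / k_B T).
    if dU < 0.0 or rng.random() < np.exp(-dU / kT):
        x[i] = x_new
        acc += 1
    avg.append(np.mean(x * x))                   # step 3: accumulate
                                                 # (rejections count too)

print("acceptance ratio:", acc / nsteps)
print("<x^2> =", np.mean(avg[nsteps // 2:]), " exact:", kT)
\end{verbatim}
Note that the running average is accumulated at every step, so a
rejected move causes the old configuration to be counted again, as
required above.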
600   \subsection{Molecular Dynamics\label{In:ssec:md}}
601   Although some properties of the system can be calculated from the
602 < ensemble average in Monte Carlo simulations, due to the nature of
603 < lacking in the time dependence, it is impossible to gain information
604 < of those dynamic properties from Monte Carlo simulations. Molecular
605 < Dynamics is a measurement of the evolution of the positions and
606 < momenta of the particles in the system. The evolution of the system
607 < obeys laws of classical mechanics, in most situations, there is no
608 < need for the count of the quantum effects. For a real experiment, the
609 < instantaneous positions and momenta of the particles in the system are
610 < neither important nor measurable, the observable quantities are
611 < usually a average value for a finite time interval. These quantities
612 < are expressed as a function of positions and momenta in Melecular
613 < Dynamics simulations. Like the thermal temperature of the system is
501 < defined by
602 > ensemble average in Monte Carlo simulations, due to the absence of the
603 > time dependence, it is impossible to gain information on dynamic
604 > properties from Monte Carlo simulations. Molecular Dynamics evolves
605 > the positions and momenta of the particles in the system. The
606 > evolution of the system obeys the laws of classical mechanics, and in
607 > most situations, there is no need to account for quantum effects. In a
608 > real experiment, the instantaneous positions and momenta of the
609 > particles in the system are ofter neither important nor measurable,
610 > the observable quantities are usually an average value for a finite
611 > time interval. These quantities are expressed as a function of
612 > positions and momenta in Molecular Dynamics simulations. For example,
613 > temperature of the system is defined by
614   \begin{equation}
615 < \frac{1}{2} k_B T = \langle \frac{1}{2} m v_\alpha \rangle,
615 > \frac{3}{2} N k_B T = \langle \sum_{i=1}^N \frac{1}{2} m_i v_i^2 \rangle,
616   \label{Ineq:temperature}
617   \end{equation}
618 < here $m$ is the mass of the particle and $v_\alpha$ is the $\alpha$
619 < component of the velocity of the particle. The right side of
620 < Eq.~\ref{Ineq:temperature} is the average kinetic energy of the
509 < system. A simple Molecular Dynamics simulation scheme
510 < is:~\cite{Frenkel1996}
511 < \begin{enumerate}
512 < \item\label{Initm:initialize} Assign the initial positions and momenta
513 < for the particles in the system.
514 < \item\label{Initm:calcForce} Calculate the forces.
515 < \item\label{Initm:equationofMotion} Integrate the equation of motion.
516 <  \begin{itemize}
517 <     \item Return to step~\ref{Initm:calcForce} if the equillibrium is
518 < not achieved.
519 <     \item Go to step~\ref{Initm:calcAvg} if the equillibrium is
520 < achieved.
521 <  \end{itemize}
522 < \item\label{Initm:calcAvg} Compute the quantities we are interested in.
523 < \end{enumerate}
524 < The initial positions of the particles are chosen as that there is no
525 < overlap for the particles. The initial velocities at first are
526 < distributed randomly to the particles, and then shifted to make the
527 < momentum of the system $0$, at last scaled to the desired temperature
528 < of the simulation according Eq.~\ref{Ineq:temperature}.
618 > here $m_i$ is the mass of particle $i$ and $v_i$ is the velocity of
619 > particle $i$. The right side of Eq.~\ref{Ineq:temperature} is the
620 > average kinetic energy of the system.
621  
622 < The core of Molecular Dynamics simulations is step~\ref{Initm:calcForce}
623 < and~\ref{Initm:equationofMotion}. The calculation of the forces are
624 < often involved numerous effort, this is the most time consuming step
625 < in the Molecular Dynamics scheme. The evaluation of the forces is
626 < followed by
622 > The initial positions of the particles are chosen so that there is no
623 > overlap of the particles. The initial velocities are first
624 > distributed randomly to the particles using a Maxwell-Boltzmann
625 > distribution, and then shifted to make the total linear momentum of
626 > the system $0$.
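
A short Python sketch of this initialization (reduced units with
$k_B = 1$ and illustrative parameters are assumed), together with the
instantaneous temperature of Eq.~\ref{Ineq:temperature}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Maxwell-Boltzmann velocities for N particles in 3-D (reduced units;
# masses and target temperature are illustrative values).
N, kT = 256, 1.0
m = np.ones(N)                                    # per-particle masses
v = rng.normal(0.0, np.sqrt(kT / m)[:, None], (N, 3))

# Shift velocities so the total linear momentum is zero.
v -= np.sum(m[:, None] * v, axis=0) / np.sum(m)

# Instantaneous temperature: (3/2) N k_B T = sum_i (1/2) m_i v_i^2.
kinetic = 0.5 * np.sum(m[:, None] * v * v)
print("instantaneous k_B T:", 2.0 * kinetic / (3.0 * N))
\end{verbatim}
If desired, the velocities can then be rescaled so that the
instantaneous temperature matches the target value exactly.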
627 >
628 > The core of Molecular Dynamics simulations is the calculation of
629 > forces and the integration algorithm. Calculation of the forces often
630 > involves enormous effort. This is the most time consuming step in the
631 > Molecular Dynamics scheme. Evaluation of the forces is mathematically
632 > simple,
633   \begin{equation}
634   f(q) = - \frac{\partial U(q)}{\partial q},
635   \label{Ineq:force}
636   \end{equation}
637 < $U(q)$ is the potential of the system. Once the forces computed, are
638 < the positions and velocities updated by integrating Newton's equation
639 < of motion,
640 < \begin{equation}
641 < f(q) = \frac{dp}{dt} = \frac{m dv}{dt}.
637 > where $U(q)$ is the potential of the system. However, the numerical
638 > details of this computation are often quite complex. Once the forces
639 > are computed, the positions and velocities are updated by integrating
640 > Hamilton's equations of motion,
641 > \begin{eqnarray}
642 > \dot p_i & = & -\frac{\partial H}{\partial q_i} = -\frac{\partial
643 > U(q_i)}{\partial q_i} = f(q_i) \\
644 > \dot q_i & = & \frac{p_i}{m_i}
645   \label{Ineq:newton}
646 < \end{equation}
647 < Here is an example of integrating algorithms, Verlet algorithm, which
648 < is one of the best algorithms to integrate Newton's equation of
649 < motion. The Taylor expension of the position at time $t$ is
646 > \end{eqnarray}
647 > The classic example of an integrating algorithm is the Verlet
648 > algorithm, which is one of the simplest algorithms for integrating the
649 > equations of motion. The Taylor expansion of the position at time $t$
650 > is
651   \begin{equation}
652 < q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 +
652 > q(t+\Delta t)= q(t) + \frac{p(t)}{m} \Delta t + \frac{f(t)}{2m}\Delta t^2 +
653          \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
654          \mathcal{O}(\Delta t^4)
655   \label{Ineq:verletFuture}
656   \end{equation}
657   for a later time $t+\Delta t$, and
658   \begin{equation}
659 < q(t-\Delta t)= q(t) - v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 -
659 > q(t-\Delta t)= q(t) - \frac{p(t)}{m} \Delta t + \frac{f(t)}{2m}\Delta t^2 -
660          \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
661          \mathcal{O}(\Delta t^4) ,
662   \label{Ineq:verletPrevious}
663   \end{equation}
664 < for a previous time $t-\Delta t$. The summation of the
665 < Eq.~\ref{Ineq:verletFuture} and~\ref{Ineq:verletPrevious} gives
664 > for a previous time $t-\Delta t$. Adding Eq.~\ref{Ineq:verletFuture}
665 > and~\ref{Ineq:verletPrevious} gives
666   \begin{equation}
667   q(t+\Delta t)+q(t-\Delta t) =
668          2q(t) + \frac{f(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4),
# Line 572 | Line 674 | q(t+\Delta t) \approx
674          2q(t) - q(t-\Delta t) + \frac{f(t)}{m}\Delta t^2.
675   \label{Ineq:newPosition}
676   \end{equation}
677 < The higher order of the $\Delta t$ is omitted.
677 > The higher order terms in $\Delta t$ are omitted.
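
A minimal Python sketch of the position-Verlet update in
Eq.~\ref{Ineq:newPosition}, applied to a one-dimensional harmonic
oscillator in reduced units (an illustration only, with assumed
parameters):
\begin{verbatim}
import numpy as np

def force(q):          # harmonic force, f(q) = -dU/dq with U(q) = q^2 / 2
    return -q

m, dt, nsteps = 1.0, 0.01, 1000
q = 1.0                                               # q(0)
q_prev = q + 0.5 * force(q) / m * dt * dt             # q(-dt), at rest

for step in range(nsteps):
    q_next = 2.0 * q - q_prev + force(q) / m * dt * dt
    q_prev, q = q, q_next

print("q(t = 10):", q, " exact:", np.cos(10.0))
\end{verbatim}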
678  
679 < Numerous technics and tricks are applied to Molecular Dynamics
680 < simulation to gain more efficiency or more accuracy. The simulation
681 < engine used in this dissertation for the Molecular Dynamics
682 < simulations is {\sc oopse}, more details of the algorithms and
683 < technics can be found in~\cite{Meineke2005}.
679 > Numerous techniques and tricks have been applied to Molecular Dynamics
680 > simulations to gain greater efficiency or accuracy. The engine used in
681 > this dissertation for the Molecular Dynamics simulations is {\sc
682 > oopse}. More details of the algorithms and techniques used in this
683 > code can be found in Ref.~\cite{Meineke2005}.
684  
685   \subsection{Langevin Dynamics\label{In:ssec:ld}}
686 < In many cases, the properites of the solvent in a system, like the
687 < lipid-water system studied in this dissertation, are less important to
688 < the researchers. However, the major computational expense is spent on
689 < the solvent in the Molecular Dynamics simulations because of the large
690 < number of the solvent molecules compared to that of solute molecules,
691 < the ratio of the number of lipid molecules to the number of water
692 < molecules is $1:25$ in our lipid-water system. The efficiency of the
693 < Molecular Dynamics simulations is greatly reduced.
686 > In many cases, the properties of the solvent (like the water in the
687 > lipid-water system studied in this dissertation) are less interesting
688 > to the researchers than the behavior of the solute. However, the major
689 > computational expense is often the solvent-solvent interactions; this
690 > is due to the large number of solvent molecules when compared to
691 > the number of solute molecules. The ratio of the number of lipid
692 > molecules to the number of water molecules is $1:25$ in our
693 > lipid-water system. The efficiency of the Molecular Dynamics
694 > simulations is greatly reduced, with up to 85\% of CPU time spent
695 > calculating only water-water interactions.
696  
697 < As an extension of the Molecular Dynamics simulations, the Langevin
698 < Dynamics seeks a way to avoid integrating equation of motion for
699 < solvent particles without losing the Brownian properites of solute
700 < particles. A common approximation is that the coupling of the solute
701 < and solvent is expressed as a set of harmonic oscillators. So the
702 < Hamiltonian of such a system is written as
697 > As an extension of the Molecular Dynamics methodologies, Langevin
698 > Dynamics seeks a way to avoid integrating the equations of motion for
699 > solvent particles without losing the solvent effects on the solute
700 > particles. One common approximation is to express the coupling of the
701 > solute and solvent as a set of harmonic oscillators. The Hamiltonian
702 > of such a system is written as
703   \begin{equation}
704   H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
705   \label{Ineq:hamiltonianofCoupling}
706   \end{equation}
707 < where $H_B$ is the Hamiltonian of the bath which equals to
707 > where $H_B$ is the Hamiltonian of the bath, which is a set of $N$
708 > harmonic oscillators,
709   \begin{equation}
710   H_B = \sum_{\alpha = 1}^{N} \left\{ \frac{p_\alpha^2}{2m_\alpha} +
711   \frac{1}{2} m_\alpha \omega_\alpha^2 q_\alpha^2\right\},
712   \label{Ineq:hamiltonianofBath}
713   \end{equation}
714 < $\alpha$ is all the degree of freedoms of the bath, $\omega$ is the
715 < bath frequency, and $\Delta U(q)$ is the bilinear coupling given by
714 > $\alpha$ runs over all the degrees of freedom of the bath,
715 > $\omega_\alpha$ is the bath frequency of oscillator $\alpha$, and
716 > $\Delta U(q)$ is the bilinear coupling given by
717   \begin{equation}
718   \Delta U = -\sum_{\alpha = 1}^{N} g_\alpha q_\alpha q,
719   \label{Ineq:systemBathCoupling}
720   \end{equation}
721 < where $g$ is the coupling constant. By solving the Hamilton's equation
722 < of motion, the {\it Generalized Langevin Equation} for this system is
723 < derived to
721 > where $g_\alpha$ is the coupling constant for oscillator $\alpha$. By
722 > solving Hamilton's equations of motion, the {\it Generalized
723 > Langevin Equation} for this system is derived as
724   \begin{equation}
725   m \ddot q = -\frac{\partial W(q)}{\partial q} - \int_0^t \xi(t) \dot q(t-t')dt' + R(t),
726   \label{Ineq:gle}
# Line 622 | Line 728 | W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m
728   with mean force,
729   \begin{equation}
730   W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m_\alpha
731 < \omega_\alpha^2}q^2,
731 > \omega_\alpha^2}q^2.
732   \label{Ineq:meanForce}
733   \end{equation}
734 < being only a dependence of coordinates of the solute particles, {\it
735 < friction kernel},
734 > The {\it friction kernel}, $\xi(t)$, depends only on the coordinates
735 > of the solute particles,
736   \begin{equation}
737   \xi(t) = \sum_{\alpha = 1}^N \frac{-g_\alpha}{m_\alpha
738   \omega_\alpha} \cos(\omega_\alpha t),
739   \label{Ineq:xiforGLE}
740   \end{equation}
741 < and the random force,
741 > and a ``random'' force,
742   \begin{equation}
743   R(t) = \sum_{\alpha = 1}^N \left( g_\alpha q_\alpha(0)-\frac{g_\alpha}{m_\alpha
744   \omega_\alpha^2}q(0)\right) \cos(\omega_\alpha t) + \frac{\dot
745   q_\alpha(0)}{\omega_\alpha} \sin(\omega_\alpha t),
746   \label{Ineq:randomForceforGLE}
747   \end{equation}
748 < as only a dependence of the initial conditions. The relationship of
749 < friction kernel $\xi(t)$ and random force $R(t)$ is given by
748 > that depends only on the initial conditions. The relationship between
749 > the friction kernel $\xi(t)$ and the random force $R(t)$ is given by
750 > the second fluctuation-dissipation theorem,
751   \begin{equation}
752 < \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle
752 > \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle.
753   \label{Ineq:relationshipofXiandR}
754   \end{equation}
755 < from their definitions. In Langevin limit, the friction is treated
756 < static, which means
755 > In the harmonic bath this relation is exact and provable from the
756 > definitions of these quantities. In the limit of static friction,
757   \begin{equation}
758   \xi(t) = 2 \xi_0 \delta(t).
759   \label{Ineq:xiofStaticFriction}
760   \end{equation}
761 < After substitude $\xi(t)$ into Eq.~\ref{Ineq:gle} with
762 < Eq.~\ref{Ineq:xiofStaticFriction}, {\it Langevin Equation} is extracted
763 < to
761 > After substituting Eq.~\ref{Ineq:xiofStaticFriction} for $\xi(t)$ in
762 > Eq.~\ref{Ineq:gle}, the {\it Langevin Equation} is
763 > obtained,
764   \begin{equation}
765   m \ddot q = -\frac{\partial U(q)}{\partial q} - \xi \dot q(t) + R(t).
766   \label{Ineq:langevinEquation}
767   \end{equation}
768 < The applying of Langevin Equation to dynamic simulations is discussed
769 < in Ch.~\ref{chap:ld}.
768 > Application of the Langevin Equation to dynamic simulations is
769 > discussed in Ch.~\ref{chap:ld}.
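
One possible discretization of Eq.~\ref{Ineq:langevinEquation} for a
single degree of freedom is sketched below in Python (reduced units
with $k_B = 1$; the random force variance $2 \xi k_B T / \Delta t$
follows from Eq.~\ref{Ineq:relationshipofXiandR} in the
static-friction limit). This is an illustration only; the algorithm
actually used in this work is developed in Ch.~\ref{chap:ld}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

m, xi, kT, dt, nsteps = 1.0, 1.0, 1.0, 0.001, 200000

def force(q):                 # harmonic potential, U(q) = q^2 / 2
    return -q

q, v = 1.0, 0.0
qs = np.empty(nsteps)
for step in range(nsteps):
    # Random force with variance 2 xi k_B T / dt (fluctuation-dissipation).
    R = np.sqrt(2.0 * xi * kT / dt) * rng.normal()
    v += dt * (force(q) - xi * v + R) / m
    q += dt * v
    qs[step] = q

# The sampled <q^2> should approach the canonical value k_B T.
print("<q^2> =", np.mean(qs[nsteps // 2:]), " canonical:", kT)
\end{verbatim}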
