
Comparing trunk/xDissertation/Introduction.tex (file contents):
Revision 3372 by xsun, Wed Mar 19 21:04:47 2008 UTC vs.
Revision 3375 by xsun, Mon Mar 24 20:12:52 2008 UTC

# Line 3 | Line 3 | because of their critical role as the foundation of bi
3   \section{Background on the Problem\label{In:sec:pro}}
4   Phospholipid molecules are the primary topic of this dissertation
5   because of their critical role as the foundation of biological
6 < membranes. Lipids, when dispersed in water, self assemble into a
7 < mumber of topologically distinct bilayer structures. The phase
8 < behavior of lipid bilayers has been explored
9 < experimentally~\cite{Cevc87}, however, a complete understanding of the
10 < mechanism and driving forces behind the various phases has not been
11 < achieved.
6 > membranes. The chemical structure of phospholipids includes a head
7 > group with a large dipole moment, due to the large charge
8 > separation between the phosphate and the amino alcohol, and a nonpolar
9 > tail that contains fatty acid chains. Depending on the specific alcohol
10 > to which the phosphate and fatty acid chains are esterified, the
11 > phospholipids are divided into glycerophospholipids and
12 > sphingophospholipids.~\cite{Cevc80} The chemical structures are shown
13 > in figure~\ref{Infig:lipid}.
14 > \begin{figure}
15 > \centering
16 > \includegraphics[width=\linewidth]{./figures/inLipid.pdf}
17 > \caption{The chemical structure of glycerophospholipids (left) and
18 > sphingophospholipids (right).~\cite{Cevc80}}
19 > \label{Infig:lipid}
20 > \end{figure}
21 > Glycerophospholipids are the dominant phospholipids in biological
22 > membranes. The type of glycerophospholipid depends on the identity of
23 > the X group and the chains. For example, if X is choline
24 > [(CH$_3$)$_3$N$^+$CH$_2$CH$_2$OH], the lipids are known as
25 > phosphatidylcholine (PC), or if X is ethanolamine
26 > [H$_3$N$^+$CH$_2$CH$_2$OH], the lipids are known as
27 > phosphatidylethanolamine (PE). Table~\ref{Intab:pc} lists a number of
28 > types of phosphatidylcholine with different fatty acids as the lipid
29 > chains.
30 > \begin{table*}
31 > \begin{minipage}{\linewidth}
32 > \begin{center}
33 > \caption{A number of types of phosphatidylcholine.}
34 > \begin{tabular}{lll}
35 > \hline
36 >  & Fatty acid & Generic Name \\
37 > \hline
38 > \textcolor{red}{DMPC} & Myristic: CH$_3$(CH$_2$)$_{12}$COOH &
39 > \textcolor{red}{D}i\textcolor{red}{M}yristoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
40 > \textcolor{red}{DPPC} & Palmitic: CH$_3$(CH$_2$)$_{14}$COOH & \textcolor{red}{D}i\textcolor{red}{P}almitoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine
41 > \\
42 > \textcolor{red}{DSPC} & Stearic: CH$_3$(CH$_2$)$_{16}$COOH & \textcolor{red}{D}i\textcolor{red}{S}tearoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
43 > \end{tabular}
44 > \label{Intab:pc}
45 > \end{center}
46 > \end{minipage}
47 > \end{table*}
48 > When dispersed in water, lipids self-assemble into a number of
49 > topologically distinct bilayer structures. The phase behavior of lipid
50 > bilayers has been explored experimentally~\cite{Cevc80}; however, a
51 > complete understanding of the mechanism and driving forces behind the
52 > various phases has not been achieved.
53  
54 < \subsection{Ripple Phase\label{In:ssec:ripple}}
54 > \subsection{The Ripple Phase\label{In:ssec:ripple}}
55   The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named for the
56   periodic buckling of the membrane, is an intermediate phase which is
57   developed either from heating the gel phase $L_{\beta'}$ or cooling
58 < the fluid phase $L_\alpha$. A Sketch is shown in
59 < figure~\ref{Infig:phaseDiagram}.~\cite{Cevc87}
58 > the fluid phase $L_\alpha$. A sketch of the phases is shown in
59 > figure~\ref{Infig:phaseDiagram}.~\cite{Cevc80}
60   \begin{figure}
61   \centering
62   \includegraphics[width=\linewidth]{./figures/inPhaseDiagram.pdf}
63 < \caption{A phase diagram of lipid bilayer. With increasing the
64 < temperature, the bilayer can go through a gel ($L_{\beta'}$), ripple
65 < ($P_{\beta'}$) to fluid ($L_\alpha$) phase transition.}
63 > \caption{Phases of PC lipid bilayers. With increasing
64 > temperature, phosphatidylcholine (PC) bilayers can go through
65 > $L_{\beta'} \rightarrow P_{\beta'}$ (gel $\rightarrow$ ripple) and
66 > $P_{\beta'} \rightarrow L_\alpha$ (ripple $\rightarrow$ fluid) phase
67 > transitions.~\cite{Cevc80}}
68   \label{Infig:phaseDiagram}
69   \end{figure}
70 < Most structural information of the ripple phase has been obtained by
71 < the X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
70 > Most structural information about the ripple phase has been obtained
71 > by X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
72   electron microscopy (FFEM).~\cite{Copeland80,Meyer96} The X-ray
73   diffraction work by Katsaras {\it et al.} showed that a rich phase
74   diagram exhibiting both {\it asymmetric} and {\it symmetric} ripples
# Line 36 | Line 79 | mica.~\cite{Kaasgaard03}
79   \begin{figure}
80   \centering
81   \includegraphics[width=\linewidth]{./figures/inRipple.pdf}
82 < \caption{The experimental observed ripple phase. The top image is
83 < obtained by X-ray diffraction~\cite{Sun96}, and the bottom one is
84 < observed by AFM.~\cite{Kaasgaard03}}
82 > \caption{Experimental observations of the ripple phase. The top image
83 > is an electron density map obtained by Sun {\it et al.} using
84 > X-ray diffraction~\cite{Sun96}.  The lower figures show the surface
85 > topography of various ripple domains in bilayers supported on mica. The
86 > AFM images were obtained by Kaasgaard {\it et
87 > al.}.~\cite{Kaasgaard03}}
88   \label{Infig:ripple}
89   \end{figure}
90 < Figure~\ref{Infig:ripple} shows the ripple phase oberved by X-ray
90 > Figure~\ref{Infig:ripple} shows the ripple phase observed by both X-ray
91   diffraction and AFM. The experimental results provide strong support
92   for a 2-dimensional triangular packing lattice of the lipid molecules
93   within the ripple phase.  This is a notable change from the observed
94 < lipid packing within the gel phase,~\cite{Cevc87} although Tenchov
95 < {\it et al.} have recently observed near-hexagonal packing in some
94 > lipid packing within the gel phase,~\cite{Cevc80} although Tenchov
95 > {\it et al.} have recently observed near-triangular packing in some
96   phosphatidylcholine (PC) gel phases.~\cite{Tenchov2001} However, the
97   physical mechanism for the formation of the ripple phase has never
98   been explained and the microscopic structure of the ripple phase has
99   never been elucidated by experiments. Computational simulation is a
100   perfect tool to study the microscopic properties of a
101 < system. However, the large length scale the ripple structure and the
102 < long time scale of the formation of the ripples are crucial obstacles
103 < to performing the actual work. The principal ideas explored in this
104 < dissertation are attempts to break the computational task up by
101 > system. However, the large length scale of the ripple structures and
102 > the long time required for the formation of the ripples are crucial
103 > obstacles to direct simulation. The principal ideas explored
104 > in this dissertation are attempts to break the computational task up
105 > by
106   \begin{itemize}
107   \item Simplifying the lipid model.
108 < \item Improving algorithm for integrating the equations of motion.
108 > \item Improving the algorithm for integrating the equations of motion.
109   \end{itemize}
110   In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
111 < dipole spin model and a coarse-grained molecualr scale model to
111 > dipole spin model and a coarse-grained molecular scale model to
112   perform the Monte Carlo and Molecular Dynamics simulations
113   respectively, and in chapter~\ref{chap:ld}, we develop a Langevin
114   Dynamics algorithm which excludes the explicit solvent to improve the
115   efficiency of the simulations.
116  
117 < \subsection{Lattice Model\label{In:ssec:model}}
118 < The gel-like characteristic (relatively immobile molecules) exhibited
117 > \subsection{Lattice Models\label{In:ssec:model}}
118 > The gel-like characteristic (laterally immobile molecules) exhibited
119   by the ripple phase makes it feasible to apply a lattice model to
120   study the system. The popular $2$-dimensional lattice models, {\it
121   i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it frustration}
# Line 91 | Line 138 | represented by arrows. The multiple local minima of en
138   \includegraphics[width=3in]{./figures/inFrustration.pdf}
139   \caption{Frustration on a triangular lattice; the spins and dipoles are
140   represented by arrows. The multiple local minima of energy states
141 < induce the frustration for spins and dipoles picking the directions.}
141 > induce frustration for spins and dipoles resulting in disordered
142 > low-temperature phases.}
143   \label{Infig:frustration}
144   \end{figure}
145 < The spins in figure~\ref{Infig:frustration} shows an illustration of
146 < the frustration for $J < 0$ on a triangular lattice. There are
147 < multiple local minima energy states which are independent of the
148 < direction of the spin on top of the triangle, therefore infinite
149 < possibilities for the packing of spins which induces what is known as
150 < ``complete regular frustration'' which leads to disordered low
151 < temperature phases. The similarity goes to the dipoles on a hexagonal
152 < lattice, which are shown by the dipoles in
153 < figure~\ref{Infig:frustration}. In this circumstance, the dipoles want
154 < to be aligned, however, due to the long wave fluctuation, at low
155 < temperature, the aligned state becomes unstable, vortex is formed and
156 < results in multiple local minima of energy states. The dipole on the
109 < center of the hexagonal lattice is frustrated.
145 > The spins in figure~\ref{Infig:frustration} illustrate frustration for
146 > $J < 0$ on a triangular lattice. There are multiple local energy
147 > minima which are independent of the direction of the spin at the top
148 > of the triangle, and therefore infinite possibilities for orienting
149 > large numbers of spins. This induces what is known as ``complete regular
150 > frustration'' which leads to disordered low temperature phases. This
151 > behavior extends to dipoles on a triangular lattice, which are shown
152 > in the lower portion of figure~\ref{Infig:frustration}. In this case,
153 > dipole-aligned structures are energetically favorable; however, at low
154 > temperatures, vortices are easily formed, and this results in multiple
155 > local energy minima for a central dipole. The dipole at the center of
156 > the lattice is therefore frustrated.
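The ``complete regular frustration'' described here can be made concrete with a minimal sketch (a hypothetical toy calculation, not one of the models used in later chapters): for three Ising spins on a single triangle with antiferromagnetic coupling $J < 0$, six of the eight possible configurations are tied as degenerate energy minima.

```python
from itertools import product

# Three Ising spins on one triangle: H = -J (s1 s2 + s2 s3 + s3 s1).
# With J < 0, no assignment of the three spins satisfies all three bonds.
J = -1.0

def energy(spins):
    s1, s2, s3 = spins
    return -J * (s1 * s2 + s2 * s3 + s3 * s1)

energies = {s: energy(s) for s in product((-1, +1), repeat=3)}
e_min = min(energies.values())
ground_states = [s for s, e in energies.items() if e == e_min]

# Six of the eight configurations share the minimum energy, so the third
# spin of any minimum-energy triangle has no preferred direction.
print(len(ground_states), "degenerate minima at energy", e_min)
```

With $|J| = 1$, the six mixed configurations all sit at energy $-1$, while the two fully aligned states cost $+3$.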
157  
158   The lack of translational degrees of freedom in lattice models
159   prevents their utilization in models for surface buckling. In
# Line 120 | Line 167 | principles of statistical mechanics.~\cite{Tolman1979}
167   to key concepts of classical statistical mechanics that we used in
168   this dissertation. Tolman gives an excellent introduction to the
169   principles of statistical mechanics.~\cite{Tolman1979} A large part of
170 < section~\ref{In:sec:SM} will follow Tolman's notation.
170 > section~\ref{In:sec:SM} follows Tolman's notation.
171  
172   \subsection{Ensembles\label{In:ssec:ensemble}}
173   In classical mechanics, the state of the system is completely
# Line 230 | Line 277 | selected moving representative points in the phase spa
277   and the rate of density change is zero in the neighborhood of any
278   selected moving representative points in the phase space.
279  
280 < The condition of the ensemble is determined by the density
280 > The type of thermodynamic ensemble is determined by the density
281   distribution. If we consider the density distribution as only a
282   function of $q$ and $p$, which means the rate of change of the phase
283   space density in the neighborhood of all representative points in the
# Line 259 | Line 306 | constant everywhere in the phase space,
306   \rho = \mathrm{const.}
307   \label{Ineq:uniformEnsemble}
308   \end{equation}
309 < the ensemble is called {\it uniform ensemble}.
309 > the ensemble is called the {\it uniform ensemble}, but this ensemble
310 > has little relevance for physical chemistry; it corresponds to
311 > essentially infinite temperature.
312  
313   \subsubsection{The Microcanonical Ensemble\label{In:sssec:microcanonical}}
314 < Another useful ensemble is the {\it microcanonical ensemble}, for
315 < which:
314 > The most useful ensemble for Molecular Dynamics is the {\it
315 > microcanonical ensemble}, for which:
316   \begin{equation}
317   \rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)}
318   \label{Ineq:microcanonicalEnsemble}
# Line 280 | Line 329 | makes the argument of $\ln$ dimensionless. In this cas
329   \end{equation}
330   where $k_B$ is the Boltzmann constant and $C^N$ is a number which
331   makes the argument of $\ln$ dimensionless. In this case, $C^N$ is the
332 < total phase space volume of one state. The entropy of a microcanonical
333 < ensemble is given by
332 > total phase space volume of one state which has the same units as
333 > $\Sigma(N, V, E)$. The entropy of a microcanonical ensemble is given
334 > by
335   \begin{equation}
336   S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
337   \label{Ineq:entropy}
# Line 298 | Line 348 | Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N  e^{-H(
348   Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N  e^{-H(q^N, p^N) / k_B T},
349   \label{Ineq:partitionFunction}
350   \end{equation}
351 < which is also known as the canonical{\it partition function}. $\Gamma$
352 < indicates that the integral is over all phase space. In the canonical
353 < ensemble, $N$, the total number of particles, $V$, total volume, and
354 < $T$, the temperature, are constants. The systems with the lowest
355 < energies hold the largest population. According to maximum principle,
356 < thermodynamics maximizes the entropy $S$, implying that
351 > which is also known as the canonical {\it partition
352 > function}. $\Gamma$ indicates that the integral is over all phase
353 > space. In the canonical ensemble, $N$, the total number of particles,
354 > $V$, total volume, and $T$, the temperature, are constants. The
355 > systems with the lowest energies hold the largest
356 > population. Thermodynamics maximizes the entropy, $S$, implying that
357   \begin{equation}
358   \begin{array}{ccc}
359   \delta S = 0 & \mathrm{and} & \delta^2 S < 0.
# Line 324 | Line 374 | There is an implicit assumption that our arguments are
374   system and the distribution of microscopic states.
375  
376   There is an implicit assumption that our arguments are based on so
377 < far. A representative point in the phase space is equally likely to be
378 < found in any energetically allowed region. In other words, all
379 < energetically accessible states are represented equally, the
380 < probabilities to find the system in any of the accessible states is
381 < equal. This is called the principle of equal a {\it priori}
377 > far. Two representative points in phase space are equally likely to be
378 > found if they have the same energy. In other words, all energetically
379 > accessible states are represented, and the probability of finding the
380 > system in any of the accessible states is equal to that state's
381 > Boltzmann weight. This is called the principle of equal {\it a priori}
382   probabilities.
383  
384   \subsection{Statistical Averages\label{In:ssec:average}}
# Line 363 | Line 413 | mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes
413   \frac{1}{T} \int_{0}^{T} F[q^N(t), p^N(t)] dt
414   \label{Ineq:timeAverage2}
415   \end{equation}
416 < for an infinite time interval.
416 > for a finite time interval, $T$.
417  
418   \subsubsection{Ergodicity\label{In:sssec:ergodicity}}
419   The {\it ergodic hypothesis}, an important hypothesis governing modern
# Line 406 | Line 456 | C_{AB}(\tau) = \langle A(0)B(\tau) \rangle = \lim_{T \
456   \frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
457   \label{Ineq:crosscorrelationFunction}
458   \end{equation}
459 < and called {\it cross correlation function}.
459 > and is called a {\it cross correlation function}.
460  
461   We know from the ergodic hypothesis that there is a relationship
462   between time average and ensemble average. We can put the correlation
463 < function in a classical mechanics form,
463 > function in a classical mechanical form,
464   \begin{equation}
465 < C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[(q^N(t), p^N(t)]
466 < A[q^N(t+\tau), q^N(t+\tau)] \rho(q, p)
465 > C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N, p^N]
466 > A[q^N(\tau), p^N(\tau)] \rho(q^N, p^N)
467   \label{Ineq:autocorrelationFunctionCM}
468   \end{equation}
469 < and
469 > where $q^N(\tau)$, $p^N(\tau)$ is the phase space point that follows
470 > classical evolution of the point $q^N$, $p^N$ after a time $\tau$ has
471 > elapsed, and
472   \begin{equation}
473 < C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[(q^N(t), p^N(t)]
474 < B[q^N(t+\tau), q^N(t+\tau)] \rho(q, p)
473 > C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N, p^N]
474 > B[q^N(\tau), p^N(\tau)] \rho(q^N, p^N)
475   \label{Ineq:crosscorrelationFunctionCM}
476   \end{equation}
477 < as autocorrelation function and cross correlation function
477 > as the autocorrelation and cross correlation functions,
478   respectively. $\rho(q, p)$ is the density distribution at equilibrium
479 < in phase space. In many cases, the correlation function decay is a
480 < single exponential
479 > in phase space. In many cases, correlation functions decay as a
480 > single exponential in time
481   \begin{equation}
482   C(t) \sim e^{-t / \tau_r},
483   \label{Ineq:relaxation}
484   \end{equation}
485 < where $\tau_r$ is known as relaxation time which discribes the rate of
485 > where $\tau_r$ is known as the relaxation time, which describes the rate of
486   the decay.
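A time autocorrelation function of the kind in Eq.~\ref{Ineq:autocorrelationFunctionCM} can be estimated directly from a discrete trajectory and compared against the single-exponential decay of Eq.~\ref{Ineq:relaxation}. The sketch below uses a synthetic AR(1) signal as a hypothetical stand-in for a simulation observable, chosen because its relaxation time is known analytically ($\tau_r = -1/\ln c$):

```python
import numpy as np

def autocorrelation(a, max_lag):
    """Time-averaged C(tau) = <A(t) A(t + tau)> for a 1-D signal."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    return np.array([np.mean(a[: n - lag] * a[lag:]) for lag in range(max_lag)])

# The AR(1) process x_{t+1} = c x_t + noise has C(tau)/C(0) = c^tau,
# i.e. a single-exponential decay with relaxation time tau_r = -1 / ln(c).
rng = np.random.default_rng(0)
c, n_steps = 0.9, 200_000
x = np.empty(n_steps)
x[0] = 0.0
for t in range(n_steps - 1):
    x[t + 1] = c * x[t] + rng.normal()

C = autocorrelation(x, max_lag=50)
C /= C[0]                      # normalize so C(0) = 1
tau_r = -1.0 / np.log(C[1])    # relaxation time estimated from the first lag
print(tau_r)                   # close to the exact value -1 / ln(0.9)
```

For trajectories from an actual simulation, longer lags and an explicit exponential fit would replace the single-lag estimate used here.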
487  
488 < \section{Methodolody\label{In:sec:method}}
489 < The simulations performed in this dissertation are branched into two
490 < main catalog, Monte Carlo and Molecular Dynamics. There are two main
491 < difference between Monte Carlo and Molecular Dynamics simulations. One
492 < is that the Monte Carlo simulation is time independent, and Molecular
493 < Dynamics simulation is time involved. Another dissimilar is that the
494 < Monte Carlo is a stochastic process, the configuration of the system
495 < is not determinated by its past, however, using Moleuclar Dynamics,
496 < the system is propagated by Newton's equation of motion, the
497 < trajectory of the system evolved in the phase space is determined. A
498 < brief introduction of the two algorithms are given in
499 < section~\ref{In:ssec:mc} and~\ref{In:ssec:md}. An extension of the
500 < Molecular Dynamics, Langevin Dynamics, is introduced by
488 > \section{Methodology\label{In:sec:method}}
489 > The simulations performed in this dissertation branch into two main
490 > categories, Monte Carlo and Molecular Dynamics. There are two main
491 > differences between Monte Carlo and Molecular Dynamics
492 > simulations. One is that the Monte Carlo simulations are time
493 > independent methods of sampling structural features of an ensemble,
494 > while Molecular Dynamics simulations provide dynamic
495 > information. Additionally, Monte Carlo methods are stochastic
496 > processes; the future configurations of the system are not determined
497 > by its past. However, in Molecular Dynamics, the system is propagated
498 > by Hamilton's equations of motion, and the trajectory of the system
499 > evolving in phase space is deterministic. Brief introductions of the
500 > two algorithms are given in section~\ref{In:ssec:mc}
501 > and~\ref{In:ssec:md}. Langevin Dynamics, an extension of Molecular
502 > Dynamics that includes implicit solvent effects, is introduced in
503   section~\ref{In:ssec:ld}.
504  
505   \subsection{Monte Carlo\label{In:ssec:mc}}
506 < Monte Carlo algorithm was first introduced by Metropolis {\it et
507 < al.}.~\cite{Metropolis53} Basic Monte Carlo algorithm is usually
508 < applied to the canonical ensemble, a Boltzmann-weighted ensemble, in
509 < which the $N$, the total number of particles, $V$, total volume, $T$,
510 < temperature are constants. The average energy is given by substituding
511 < Eq.~\ref{Ineq:canonicalEnsemble} into Eq.~\ref{Ineq:statAverage2},
506 > A Monte Carlo integration algorithm was first introduced by Metropolis
507 > {\it et al.}~\cite{Metropolis53} The basic Metropolis Monte Carlo
508 > algorithm is usually applied to the canonical ensemble, a
509 > Boltzmann-weighted ensemble, in which $N$, the total number of
510 > particles, $V$, the total volume, and $T$, the temperature are
511 > constants. An average in this ensemble is given
512   \begin{equation}
513 < \langle E \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N E e^{-H(q^N, p^N) / k_B T}.
513 > \langle A \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N
514 > A(q^N, p^N) e^{-H(q^N, p^N) / k_B T}.
515   \label{Ineq:energyofCanonicalEnsemble}
516   \end{equation}
517 < So are the other properties of the system. The Hamiltonian is the
518 < summation of Kinetic energy $K(p^N)$ as a function of momenta and
519 < Potential energy $U(q^N)$ as a function of positions,
517 > The Hamiltonian is the sum of the kinetic energy, $K(p^N)$, a function
518 > of momenta and the potential energy, $U(q^N)$, a function of
519 > positions,
520   \begin{equation}
521   H(q^N, p^N) = K(p^N) + U(q^N).
522   \label{Ineq:hamiltonian}
523   \end{equation}
524 < If the property $A$ is only a function of position ($ A = A(q^N)$),
525 < the mean value of $A$ is reduced to
524 > If the property $A$ is a function only of position ($ A = A(q^N)$),
525 > the mean value of $A$ can be reduced to
526   \begin{equation}
527 < \langle A \rangle = \frac{\int d \vec q~^N \int d \vec p~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N \int d \vec p~^N e^{-U(q^N) / k_B T}},
527 > \langle A \rangle = \frac{\int d \vec q~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}}.
528   \label{Ineq:configurationIntegral}
529   \end{equation}
530   The kinetic energy $K(p^N)$ is factored out in
531   Eq.~\ref{Ineq:configurationIntegral}. $\langle A
532 < \rangle$ is a configuration integral now, and the
532 > \rangle$ is now a configuration integral, and
533   Eq.~\ref{Ineq:configurationIntegral} is equivalent to
534   \begin{equation}
535 < \langle A \rangle = \int d \vec q~^N A \rho(q^N).
535 > \langle A \rangle = \int d \vec q~^N A \rho(q^N),
536   \label{Ineq:configurationAve}
537   \end{equation}
538 + where $\rho(q^N)$ is a configurational probability
539 + \begin{equation}
540 + \rho(q^N) = \frac{e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}}.
541 + \label{Ineq:configurationProb}
542 + \end{equation}
543  
544 < In a Monte Carlo simulation of canonical ensemble, the probability of
545 < the system being in a state $s$ is $\rho_s$, the change of this
546 < probability with time is given by
544 > In a Monte Carlo simulation of a system in the canonical ensemble, the
545 > probability of the system being in a state $s$ is $\rho_s$; the change
546 > of this probability with time is given by
547   \begin{equation}
548   \frac{d \rho_s}{dt} = \sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ],
549   \label{Ineq:timeChangeofProb}
# Line 494 | Line 554 | to state $s'$. Since $\rho_s$ is independent of time a
554   \frac{d \rho_{s}^{equilibrium}}{dt} = 0,
555   \label{Ineq:equiProb}
556   \end{equation}
557 < which means $\sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ]$ is $0$
558 < for all $s'$. So
557 > the sum $\sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ]$ vanishes. Imposing
558 > {\it detailed balance}, each term is individually $0$, so
559   \begin{equation}
560   \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
561   \label{Ineq:relationshipofRhoandW}
562   \end{equation}
563 < If
563 > If the ratio of state populations satisfies
564   \begin{equation}
505 \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T},
506 \label{Ineq:conditionforBoltzmannStatistics}
507 \end{equation}
508 then
509 \begin{equation}
565   \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T}.
566   \label{Ineq:satisfyofBoltzmannStatistics}
567   \end{equation}
568 < Eq.~\ref{Ineq:satisfyofBoltzmannStatistics} implies that
569 < $\rho^{equilibrium}$ satisfies Boltzmann statistics. An algorithm,
570 < shows how Monte Carlo simulation generates a transition probability
571 < governed by \ref{Ineq:conditionforBoltzmannStatistics}, is schemed as
568 > then the transition probabilities must satisfy
569 > \begin{equation}
570 > \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T}.
571 > \label{Ineq:conditionforBoltzmannStatistics}
572 > \end{equation}
573 > An algorithm that shows how a Monte Carlo simulation generates
574 > transition probabilities governed by
575 > Eq.~\ref{Ineq:conditionforBoltzmannStatistics} is given schematically as:
576   \begin{enumerate}
577 < \item\label{Initm:oldEnergy} Choose an particle randomly, calculate the energy.
578 < \item\label{Initm:newEnergy} Make a random displacement for particle,
579 < calculate the new energy.
577 > \item\label{Initm:oldEnergy} Choose a particle randomly, and calculate
578 > the energy of the rest of the system due to the current location of
579 > the particle.
580 > \item\label{Initm:newEnergy} Make a random displacement of the particle,
581 > calculate the new energy taking the movement of the particle into account.
582    \begin{itemize}
583 <     \item Keep the new configuration and return to step~\ref{Initm:oldEnergy} if energy
584 < goes down.
524 <     \item Pick a random number between $[0,1]$ if energy goes up.
583 >     \item If the energy goes down, keep the new configuration.
584 >     \item If the energy goes up, pick a random number in $[0,1]$.
585          \begin{itemize}
586 <           \item Keep the new configuration and return to
587 < step~\ref{Initm:oldEnergy} if the random number smaller than
588 < $e^{-(U_{new} - U_{old})} / k_B T$.
589 <           \item Keep the old configuration and return to
530 < step~\ref{Initm:oldEnergy} if the random number larger than
531 < $e^{-(U_{new} - U_{old})} / k_B T$.
586 >           \item If the random number is smaller than
587 > $e^{-(U_{new} - U_{old}) / k_B T}$, keep the new configuration.
588 >           \item If the random number is larger than
589 > $e^{-(U_{new} - U_{old}) / k_B T}$, keep the old configuration.
590          \end{itemize}
591    \end{itemize}
592 < \item\label{Initm:accumulateAvg} Accumulate the average after it converges.
592 > \item\label{Initm:accumulateAvg} Accumulate the averages based on the
593 > current configuration.
594 > \item Go to step~\ref{Initm:oldEnergy}.
595   \end{enumerate}
596 < It is important to notice that the old configuration has to be sampled
597 < again if it is kept.
596 > It is important for sampling accuracy that the old configuration is
597 > sampled again if it is kept.
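The scheme above can be written out in a few lines. The following is a minimal sketch for a hypothetical single particle in a 1-D harmonic potential $U(q) = q^2/2$, chosen only because its exact canonical average $\langle q^2 \rangle = k_B T$ is known, not because it resembles the lipid models of later chapters:

```python
import math
import random

def metropolis(potential, n_steps, beta=1.0, step_size=1.0, seed=1):
    """Metropolis Monte Carlo for a single particle in one dimension."""
    rng = random.Random(seed)
    q, u = 0.0, potential(0.0)
    samples = []
    for _ in range(n_steps):
        q_new = q + rng.uniform(-step_size, step_size)  # random displacement
        u_new = potential(q_new)
        # Keep downhill moves; keep uphill moves with probability
        # exp(-beta (U_new - U_old)); otherwise retain the old configuration.
        if u_new <= u or rng.random() < math.exp(-beta * (u_new - u)):
            q, u = q_new, u_new
        # A rejected move counts the old configuration again in the averages.
        samples.append(q)
    return samples

qs = metropolis(lambda q: 0.5 * q * q, n_steps=200_000)
mean_q2 = sum(q * q for q in qs) / len(qs)
print(mean_q2)   # approaches k_B T = 1 for beta = 1
```

Note that the final `samples.append(q)` runs whether or not the move was accepted, which implements the requirement that a kept old configuration is sampled again.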
598  
599   \subsection{Molecular Dynamics\label{In:ssec:md}}
600   Although some properties of the system can be calculated from the
601 < ensemble average in Monte Carlo simulations, due to the nature of
602 < lacking in the time dependence, it is impossible to gain information
603 < of those dynamic properties from Monte Carlo simulations. Molecular
604 < Dynamics is a measurement of the evolution of the positions and
605 < momenta of the particles in the system. The evolution of the system
606 < obeys laws of classical mechanics, in most situations, there is no
607 < need for the count of the quantum effects. For a real experiment, the
608 < instantaneous positions and momenta of the particles in the system are
609 < neither important nor measurable, the observable quantities are
610 < usually a average value for a finite time interval. These quantities
611 < are expressed as a function of positions and momenta in Melecular
612 < Dynamics simulations. Like the thermal temperature of the system is
553 < defined by
601 > ensemble average in Monte Carlo simulations, due to the absence of the
602 > time dependence, it is impossible to gain information on dynamic
603 > properties from Monte Carlo simulations. Molecular Dynamics evolves
604 > the positions and momenta of the particles in the system. The
605 > evolution of the system obeys the laws of classical mechanics, and in
606 > most situations, there is no need to account for quantum effects. In a
607 > real experiment, the instantaneous positions and momenta of the
608 > particles in the system are often neither important nor measurable;
609 > the observable quantities are usually an average value for a finite
610 > time interval. These quantities are expressed as a function of
611 > positions and momenta in Molecular Dynamics simulations. For example,
612 > the temperature of the system is defined by
613   \begin{equation}
614 < \frac{1}{2} k_B T = \langle \frac{1}{2} m v_\alpha \rangle,
614 > \frac{3}{2} N k_B T = \langle \sum_{i=1}^N \frac{1}{2} m_i v_i^2 \rangle,
615   \label{Ineq:temperature}
616   \end{equation}
617 < here $m$ is the mass of the particle and $v_\alpha$ is the $\alpha$
618 < component of the velocity of the particle. The right side of
619 < Eq.~\ref{Ineq:temperature} is the average kinetic energy of the
561 < system. A simple Molecular Dynamics simulation scheme
562 < is:~\cite{Frenkel1996}
563 < \begin{enumerate}
564 < \item\label{Initm:initialize} Assign the initial positions and momenta
565 < for the particles in the system.
566 < \item\label{Initm:calcForce} Calculate the forces.
567 < \item\label{Initm:equationofMotion} Integrate the equation of motion.
568 <  \begin{itemize}
569 <     \item Return to step~\ref{Initm:calcForce} if the equillibrium is
570 < not achieved.
571 <     \item Go to step~\ref{Initm:calcAvg} if the equillibrium is
572 < achieved.
573 <  \end{itemize}
574 < \item\label{Initm:calcAvg} Compute the quantities we are interested in.
575 < \end{enumerate}
576 < The initial positions of the particles are chosen as that there is no
577 < overlap for the particles. The initial velocities at first are
578 < distributed randomly to the particles, and then shifted to make the
579 < momentum of the system $0$, at last scaled to the desired temperature
580 < of the simulation according Eq.~\ref{Ineq:temperature}.
617 > here $m_i$ is the mass of particle $i$ and $v_i$ is the speed of
618 > particle $i$. The right side of Eq.~\ref{Ineq:temperature} is the
619 > average kinetic energy of the system.
620  
621 < The core of Molecular Dynamics simulations is step~\ref{Initm:calcForce}
622 < and~\ref{Initm:equationofMotion}. The calculation of the forces are
623 < often involved numerous effort, this is the most time consuming step
624 < in the Molecular Dynamics scheme. The evaluation of the forces is
625 < followed by
621 > The initial positions of the particles are chosen so that there is no
622 > overlap of the particles. The initial velocities are first drawn
623 > randomly from a Maxwell-Boltzmann distribution, and then shifted to
624 > make the total linear momentum of the system $0$.
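A minimal sketch of this initialization, with one assumed extra step that is common in practice: after removing the net momentum, the velocities are rescaled so that the instantaneous kinetic temperature of Eq.~\ref{Ineq:temperature} matches a target value exactly ($k_B = 1$ here):

```python
import numpy as np

def initialize_velocities(masses, T_target, k_B=1.0, seed=0):
    """Maxwell-Boltzmann velocities with zero net momentum, rescaled to T_target."""
    rng = np.random.default_rng(seed)
    m = np.asarray(masses, dtype=float)[:, None]            # shape (N, 1)
    # Each Cartesian component is Gaussian with variance k_B T / m_i.
    v = rng.normal(scale=np.sqrt(k_B * T_target / m), size=(len(masses), 3))
    # Shift so the total linear momentum is zero.
    v -= (m * v).sum(axis=0) / m.sum()
    # Rescale so the kinetic temperature, 2 K / (3 N k_B), equals T_target.
    T_now = (m * v**2).sum() / (3.0 * len(masses) * k_B)
    return v * np.sqrt(T_target / T_now)

v = initialize_velocities(masses=np.ones(1000), T_target=2.5)
p_total = v.sum(axis=0)        # net momentum (unit masses): numerically zero
T = (v**2).sum() / (3 * 1000)  # kinetic temperature: equals 2.5 after rescaling
```

The momentum shift preserves the Gaussian character of the distribution, and the final rescaling leaves the zero net momentum intact.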
626 >
627 > The core of Molecular Dynamics simulations is the calculation of
628 > forces and the integration algorithm. Calculation of the forces often
629 > involves enormous effort. This is the most time consuming step in the
630 > Molecular Dynamics scheme. Evaluation of the forces is mathematically
631 > simple,
632   \begin{equation}
633   f(q) = - \frac{\partial U(q)}{\partial q},
634   \label{Ineq:force}
635   \end{equation}
636 < $U(q)$ is the potential of the system. Once the forces computed, are
637 < the positions and velocities updated by integrating Newton's equation
638 < of motion,
639 < \begin{equation}
640 < f(q) = \frac{dp}{dt} = \frac{m dv}{dt}.
636 > where $U(q)$ is the potential of the system. However, the numerical
637 > details of this computation are often quite complex. Once the forces
638 > are computed, the positions and velocities are updated by integrating
639 > Hamilton's equations of motion,
640 > \begin{eqnarray}
641 > \dot p_i & = & -\frac{\partial H}{\partial q_i} = -\frac{\partial
642 > U(q_i)}{\partial q_i} = f(q_i) \\
643 > \dot q_i & = & \frac{\partial H}{\partial p_i} = \frac{p_i}{m_i}
644   \label{Ineq:newton}
645 < \end{equation}
646 < Here is an example of integrating algorithms, Verlet algorithm, which
647 < is one of the best algorithms to integrate Newton's equation of
648 < motion. The Taylor expension of the position at time $t$ is
645 > \end{eqnarray}
646 > The classic example of an integrating algorithm is the Verlet
647 > algorithm, which is one of the simplest algorithms for integrating the
648 > equations of motion. The Taylor expansion of the position at time $t$
649 > is
650   \begin{equation}
651 < q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 +
651 > q(t+\Delta t)= q(t) + \frac{p(t)}{m} \Delta t + \frac{f(t)}{2m}\Delta t^2 +
652          \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
653          \mathcal{O}(\Delta t^4)
654   \label{Ineq:verletFuture}
655   \end{equation}
656   for a later time $t+\Delta t$, and
657   \begin{equation}
658 < q(t-\Delta t)= q(t) - v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 -
658 > q(t-\Delta t)= q(t) - \frac{p(t)}{m} \Delta t + \frac{f(t)}{2m}\Delta t^2 -
659          \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
660          \mathcal{O}(\Delta t^4) ,
661   \label{Ineq:verletPrevious}
662   \end{equation}
663 < for a previous time $t-\Delta t$. The summation of the
664 < Eq.~\ref{Ineq:verletFuture} and~\ref{Ineq:verletPrevious} gives
663 > for a previous time $t-\Delta t$. Adding Eq.~\ref{Ineq:verletFuture}
664 > and~\ref{Ineq:verletPrevious} gives
665   \begin{equation}
666   q(t+\Delta t)+q(t-\Delta t) =
667          2q(t) + \frac{f(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4),
# Line 624 | Line 673 | q(t+\Delta t) \approx
673          2q(t) - q(t-\Delta t) + \frac{f(t)}{m}\Delta t^2.
674   \label{Ineq:newPosition}
675   \end{equation}
676 < The higher order of the $\Delta t$ is omitted.
676 > The higher order terms in $\Delta t$ are omitted.
677  
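As a sketch, the update rule in Eq.~\ref{Ineq:newPosition} can be implemented directly (shown here for a one-dimensional harmonic oscillator with illustrative parameters, not a production integrator):

```python
import numpy as np

def verlet(q0, v0, force, m, dt, nsteps):
    """Position Verlet: q(t+dt) = 2 q(t) - q(t-dt) + f(t)/m * dt^2."""
    # Bootstrap the previous position from a Taylor expansion at t = 0.
    q_prev = q0 - v0 * dt + force(q0) / (2.0 * m) * dt**2
    q, traj = q0, [q0]
    for _ in range(nsteps):
        q_next = 2.0 * q - q_prev + force(q) / m * dt**2
        q_prev, q = q, q_next
        traj.append(q)
    return np.array(traj)

# Harmonic oscillator: U(q) = q^2 / 2, so f(q) = -q; exact period 2*pi.
traj = verlet(q0=1.0, v0=0.0, force=lambda q: -q, m=1.0, dt=0.01, nsteps=629)
```

Note that the velocities never appear in the update itself; they are only needed to start the trajectory, which is one reason velocity-explicit variants of this algorithm are often preferred in practice.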
678 < Numerous technics and tricks are applied to Molecular Dynamics
679 < simulation to gain more efficiency or more accuracy. The simulation
680 < engine used in this dissertation for the Molecular Dynamics
681 < simulations is {\sc oopse}, more details of the algorithms and
682 < technics can be found in~\cite{Meineke2005}.
678 > Numerous techniques and tricks have been applied to Molecular Dynamics
679 > simulations to gain greater efficiency or accuracy. The engine used in
680 > this dissertation for the Molecular Dynamics simulations is {\sc
681 > oopse}. More details of the algorithms and techniques used in this
682 > code can be found in Ref.~\cite{Meineke2005}.
683  
684   \subsection{Langevin Dynamics\label{In:ssec:ld}}
685 < In many cases, the properites of the solvent in a system, like the
686 < lipid-water system studied in this dissertation, are less important to
687 < the researchers. However, the major computational expense is spent on
688 < the solvent in the Molecular Dynamics simulations because of the large
689 < number of the solvent molecules compared to that of solute molecules,
690 < the ratio of the number of lipid molecules to the number of water
691 < molecules is $1:25$ in our lipid-water system. The efficiency of the
692 < Molecular Dynamics simulations is greatly reduced.
685 > In many cases, the properties of the solvent (like the water in the
686 > lipid-water system studied in this dissertation) are less interesting
687 > to the researchers than the behavior of the solute. However, the major
688 > computational expense is often the solvent-solvent interactions,
689 > because of the large number of solvent molecules compared to the
690 > number of solute molecules. The ratio of lipid molecules to water
691 > molecules is $1:25$ in our lipid-water system, and the efficiency of
692 > the Molecular Dynamics simulations is greatly reduced, with up to
693 > 85\% of CPU time spent calculating only water-water interactions.
695  
696 < As an extension of the Molecular Dynamics simulations, the Langevin
697 < Dynamics seeks a way to avoid integrating equation of motion for
698 < solvent particles without losing the Brownian properites of solute
699 < particles. A common approximation is that the coupling of the solute
700 < and solvent is expressed as a set of harmonic oscillators. So the
701 < Hamiltonian of such a system is written as
696 > As an extension of the Molecular Dynamics methodologies, Langevin
697 > Dynamics seeks a way to avoid integrating the equations of motion for
698 > solvent particles without losing the solvent effects on the solute
699 > particles. One common approximation is to express the coupling of the
700 > solute and solvent as a set of harmonic oscillators. The Hamiltonian
701 > of such a system is written as
702   \begin{equation}
703   H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
704   \label{Ineq:hamiltonianofCoupling}
705   \end{equation}
706 < where $H_B$ is the Hamiltonian of the bath which equals to
706 > where $H_B$ is the Hamiltonian of the bath, which is a set of $N$
707 > harmonic oscillators,
708   \begin{equation}
709   H_B = \sum_{\alpha = 1}^{N} \left\{ \frac{p_\alpha^2}{2m_\alpha} +
710   \frac{1}{2} m_\alpha \omega_\alpha^2 q_\alpha^2\right\},
711   \label{Ineq:hamiltonianofBath}
712   \end{equation}
713 < $\alpha$ is all the degree of freedoms of the bath, $\omega$ is the
714 < bath frequency, and $\Delta U(q)$ is the bilinear coupling given by
713 > where $\alpha$ runs over all the degrees of freedom of the bath,
714 > $\omega_\alpha$ is the bath frequency of oscillator $\alpha$, and
715 > $\Delta U(q)$ is the bilinear coupling given by
716   \begin{equation}
717   \Delta U = -\sum_{\alpha = 1}^{N} g_\alpha q_\alpha q,
718   \label{Ineq:systemBathCoupling}
719   \end{equation}
720 < where $g$ is the coupling constant. By solving the Hamilton's equation
721 < of motion, the {\it Generalized Langevin Equation} for this system is
722 < derived to
720 > where $g_\alpha$ is the coupling constant for oscillator $\alpha$. By
721 > solving Hamilton's equations of motion, the {\it Generalized
722 > Langevin Equation} for this system is derived as
723   \begin{equation}
724   m \ddot q = -\frac{\partial W(q)}{\partial q} - \int_0^t \xi(t') \dot q(t-t')dt' + R(t),
725   \label{Ineq:gle}
# Line 674 | Line 727 | W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m
727   with mean force,
728   \begin{equation}
729   W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m_\alpha
730 < \omega_\alpha^2}q^2,
730 > \omega_\alpha^2}q^2.
731   \label{Ineq:meanForce}
732   \end{equation}
733 < being only a dependence of coordinates of the solute particles, {\it
734 < friction kernel},
733 > The {\it friction kernel}, $\xi(t)$, depends only on the parameters
734 > of the bath oscillators,
735   \begin{equation}
736   \xi(t) = \sum_{\alpha = 1}^N \frac{g_\alpha^2}{m_\alpha
737   \omega_\alpha^2} \cos(\omega_\alpha t),
738   \label{Ineq:xiforGLE}
739   \end{equation}
740 < and the random force,
740 > and a ``random'' force,
741   \begin{equation}
742   R(t) = \sum_{\alpha = 1}^N \left( g_\alpha q_\alpha(0)-\frac{g_\alpha}{m_\alpha
743   \omega_\alpha^2}q(0)\right) \cos(\omega_\alpha t) + \frac{\dot
744   q_\alpha(0)}{\omega_\alpha} \sin(\omega_\alpha t),
745   \label{Ineq:randomForceforGLE}
746   \end{equation}
747 < as only a dependence of the initial conditions. The relationship of
748 < friction kernel $\xi(t)$ and random force $R(t)$ is given by
747 > that depends only on the initial conditions of the bath. The
748 > relationship between the friction kernel $\xi(t)$ and the random
749 > force $R(t)$ is given by the second fluctuation-dissipation theorem,
750   \begin{equation}
751 < \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle
751 > \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle.
752   \label{Ineq:relationshipofXiandR}
753   \end{equation}
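This relation can be verified numerically by sampling the bath's thermal initial conditions. The sketch below assumes the standard (Zwanzig) harmonic-bath form of the kernel, $\xi(t) = \sum_\alpha [g_\alpha^2/(m_\alpha\omega_\alpha^2)]\cos(\omega_\alpha t)$, with arbitrary illustrative bath parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 1.0
# A small, arbitrary harmonic bath (illustrative parameters only).
g = np.array([0.5, 0.8, 0.3])   # coupling constants g_alpha
m = np.array([1.0, 2.0, 1.5])   # bath masses m_alpha
w = np.array([1.0, 2.5, 4.0])   # bath frequencies omega_alpha
nsamp = 200_000

# Thermal initial conditions: displacements about the coupling-shifted
# minimum have variance kT/(m w^2); velocities have variance kT/m.
dq = rng.normal(scale=np.sqrt(kT / (m * w**2)), size=(nsamp, 3))
vq = rng.normal(scale=np.sqrt(kT / m), size=(nsamp, 3))

def R(t):
    """Random force at time t for every sampled initial condition."""
    return (g * (dq * np.cos(w * t) + vq / w * np.sin(w * t))).sum(axis=1)

def xi(t):
    """Friction kernel in its standard harmonic-bath form."""
    return (g**2 / (m * w**2) * np.cos(w * t)).sum()

# Second fluctuation-dissipation theorem: <R(t) R(0)> / kT == xi(t).
for t in (0.0, 0.5, 1.0):
    assert abs(np.mean(R(t) * R(0.0)) / kT - xi(t)) < 0.02
```

With $2 \times 10^5$ samples the Monte Carlo estimate of the correlation function agrees with the kernel to well under one percent of its initial value.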
754 < from their definitions. In Langevin limit, the friction is treated
755 < static, which means
754 > In the harmonic bath this relation is exact and provable from the
755 > definitions of these quantities. In the limit of static friction,
756   \begin{equation}
757   \xi(t) = 2 \xi_0 \delta(t).
758   \label{Ineq:xiofStaticFriction}
759   \end{equation}
760 < After substitude $\xi(t)$ into Eq.~\ref{Ineq:gle} with
761 < Eq.~\ref{Ineq:xiofStaticFriction}, {\it Langevin Equation} is extracted
762 < to
760 > After substituting Eq.~\ref{Ineq:xiofStaticFriction} into
761 > Eq.~\ref{Ineq:gle}, the {\it Langevin Equation} is
762 > obtained,
763   \begin{equation}
764   m \ddot q = -\frac{\partial U(q)}{\partial q} - \xi \dot q(t) + R(t).
765   \label{Ineq:langevinEquation}
766   \end{equation}
767 < The applying of Langevin Equation to dynamic simulations is discussed
768 < in Ch.~\ref{chap:ld}.
767 > Application of the Langevin Equation to dynamic simulations is
768 > discussed in Ch.~\ref{chap:ld}.
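As an illustrative sketch (not the scheme used in this work), the Langevin Equation can be integrated with a simple Euler-Maruyama step; for a free particle the velocities then thermalize to the equipartition value $\langle v^2 \rangle = k_B T/m$:

```python
import numpy as np

def langevin_step(q, v, force, m, xi, kT, dt, rng):
    """One Euler-Maruyama step of m dv/dt = f(q) - xi*v + R(t), with
    the random kick scaled so that <R(t)R(0)> = 2 xi kT delta(t)."""
    kick = np.sqrt(2.0 * xi * kT * dt) / m * rng.standard_normal(np.shape(v))
    v = v + (force(q) - xi * v) / m * dt + kick
    q = q + v * dt
    return q, v

rng = np.random.default_rng(2)
q = np.zeros(4000)            # 4000 independent free particles
v = np.zeros(4000)
for _ in range(2000):         # free particle: f(q) = 0
    q, v = langevin_step(q, v, lambda x: 0.0, m=1.0, xi=1.0,
                         kT=1.0, dt=0.01, rng=rng)
# After many friction times, v**2 averages to roughly kT/m = 1.
```

Production codes generally use more careful discretizations of the stochastic term than this first-order scheme, but the structure (deterministic force, drag, and a random kick obeying fluctuation-dissipation) is the same.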
