
Comparing trunk/xDissertation/Introduction.tex (file contents):
Revision 3372 by xsun, Wed Mar 19 21:04:47 2008 UTC vs.
Revision 3383 by xsun, Wed Apr 16 21:56:34 2008 UTC

# Line 3 | Line 3 | because of their critical role as the foundation of bi
3   \section{Background on the Problem\label{In:sec:pro}}
4   Phospholipid molecules are the primary topic of this dissertation
5   because of their critical role as the foundation of biological
6 < membranes. Lipids, when dispersed in water, self assemble into a
7 < mumber of topologically distinct bilayer structures. The phase
8 < behavior of lipid bilayers has been explored
9 < experimentally~\cite{Cevc87}, however, a complete understanding of the
10 < mechanism and driving forces behind the various phases has not been
11 < achieved.
6 > membranes. The chemical structure of a phospholipid includes a head
7 > group with a large dipole moment, due to the charge separation
8 > between the phosphate and the amino alcohol, and a nonpolar tail
9 > that contains fatty acid chains. Depending on the specific alcohol
10 > to which the phosphate and fatty acid chains are esterified,
11 > phospholipids are divided into glycerophospholipids and
12 > sphingophospholipids.~\cite{Cevc80} The chemical structures are shown
13 > in figure~\ref{Infig:lipid}.
14 > \begin{figure}
15 > \centering
16 > \includegraphics[width=\linewidth]{./figures/inLipid.pdf}
17 > \caption[The chemical structure of lipids]{The chemical structure of
18 > glycerophospholipids (left) and sphingophospholipids
19 > (right).\cite{Cevc80}}
20 > \label{Infig:lipid}
21 > \end{figure}
22 > Glycerophospholipids are the dominant phospholipids in biological
23 > membranes. The type of glycerophospholipid depends on the identity of
24 > the X group, and the chains. For example, if X is choline
25 > [(CH$_3$)$_3$N$^+$CH$_2$CH$_2$OH], the lipids are known as
26 > phosphatidylcholine (PC), or if X is ethanolamine
27 > [H$_3$N$^+$CH$_2$CH$_2$OH], the lipids are known as
28 > phosphatidylethanolamine (PE). Table~\ref{Intab:pc} lists several
29 > types of phosphatidylcholine with different fatty acids as the lipid
30 > chains.
31 > \begin{table*}
32 > \begin{minipage}{\linewidth}
33 > \begin{center}
34 > \caption{SEVERAL TYPES OF PHOSPHATIDYLCHOLINE}
35 > \begin{tabular}{lll}
36 > \hline
37 >  & Fatty acid & Generic Name \\
38 > \hline
39 > \textcolor{red}{DMPC} & Myristic: CH$_3$(CH$_2$)$_{12}$COOH &
40 > \textcolor{red}{D}i\textcolor{red}{M}yristoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
41 > \textcolor{red}{DPPC} & Palmitic: CH$_3$(CH$_2$)$_{14}$COOH & \textcolor{red}{D}i\textcolor{red}{P}almitoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine
42 > \\
43 > \textcolor{red}{DSPC} & Stearic: CH$_3$(CH$_2$)$_{16}$COOH & \textcolor{red}{D}i\textcolor{red}{S}tearoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
44 > \end{tabular}
45 > \label{Intab:pc}
46 > \end{center}
47 > \end{minipage}
48 > \end{table*}
49 > When dispersed in water, lipids self-assemble into a number of
50 > topologically distinct bilayer structures. The phase behavior of lipid
51 > bilayers has been explored experimentally~\cite{Cevc80}; however, a
52 > complete understanding of the mechanism and driving forces behind the
53 > various phases has not been achieved.
54  
55 < \subsection{Ripple Phase\label{In:ssec:ripple}}
55 > \subsection{The Ripple Phase\label{In:ssec:ripple}}
56   The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named from the
57   periodic buckling of the membrane, is an intermediate phase which is
58   developed either from heating the gel phase $L_{\beta'}$ or cooling
59 < the fluid phase $L_\alpha$. A Sketch is shown in
60 < figure~\ref{Infig:phaseDiagram}.~\cite{Cevc87}
59 > the fluid phase $L_\alpha$. A sketch of the phases is shown in
60 > figure~\ref{Infig:phaseDiagram}.~\cite{Cevc80}
61   \begin{figure}
62   \centering
63   \includegraphics[width=\linewidth]{./figures/inPhaseDiagram.pdf}
64 < \caption{A phase diagram of lipid bilayer. With increasing the
65 < temperature, the bilayer can go through a gel ($L_{\beta'}$), ripple
66 < ($P_{\beta'}$) to fluid ($L_\alpha$) phase transition.}
64 > \caption[Phases of PC lipid bilayers]{Phases of PC lipid
65 > bilayers. With increasing temperature, phosphatidylcholine (PC)
66 > bilayers can go through $L_{\beta'} \rightarrow P_{\beta'}$ (gel
67 > $\rightarrow$ ripple) and $P_{\beta'} \rightarrow L_\alpha$ (ripple
68 > $\rightarrow$ fluid) phase transitions.~\cite{Cevc80}}
69   \label{Infig:phaseDiagram}
70   \end{figure}
71 < Most structural information of the ripple phase has been obtained by
72 < the X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
71 > Most structural information about the ripple phase has been obtained
72 > by X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
73   electron microscopy (FFEM).~\cite{Copeland80,Meyer96} The X-ray
74   diffraction work by Katsaras {\it et al.} showed that a rich phase
75   diagram exhibiting both {\it asymmetric} and {\it symmetric} ripples
# Line 36 | Line 80 | mica.~\cite{Kaasgaard03}
80   \begin{figure}
81   \centering
82   \includegraphics[width=\linewidth]{./figures/inRipple.pdf}
83 < \caption{The experimental observed ripple phase. The top image is
84 < obtained by X-ray diffraction~\cite{Sun96}, and the bottom one is
85 < observed by AFM.~\cite{Kaasgaard03}}
83 > \caption[Experimental observations of the ripple phase]{Experimental
84 > observations of the ripple phase. The top image is an electrostatic
85 > density map obtained by Sun {\it et al.} using X-ray
86 > diffraction~\cite{Sun96}.  The lower figures are the surface topology
87 > of various ripple domains in bilayers supported on mica. The AFM
88 > images were observed by Kaasgaard {\it et al.}.~\cite{Kaasgaard03}}
89   \label{Infig:ripple}
90   \end{figure}
91 < Figure~\ref{Infig:ripple} shows the ripple phase oberved by X-ray
91 > Figure~\ref{Infig:ripple} shows the ripple phase observed by both X-ray
92   diffraction and AFM. The experimental results provide strong support
93   for a 2-dimensional triangular packing lattice of the lipid molecules
94   within the ripple phase.  This is a notable change from the observed
95 < lipid packing within the gel phase,~\cite{Cevc87} although Tenchov
96 < {\it et al.} have recently observed near-hexagonal packing in some
95 > lipid packing within the gel phase,~\cite{Cevc80} although Tenchov
96 > {\it et al.} have recently observed near-triangular packing in some
97   phosphatidylcholine (PC) gel phases.~\cite{Tenchov2001} However, the
98   physical mechanism for the formation of the ripple phase has never
99   been explained and the microscopic structure of the ripple phase has
100   never been elucidated by experiments. Computational simulation is a
101 < perfect tool to study the microscopic properties for a
102 < system. However, the large length scale the ripple structure and the
103 < long time scale of the formation of the ripples are crucial obstacles
104 < to performing the actual work. The principal ideas explored in this
105 < dissertation are attempts to break the computational task up by
101 > very good tool to study the microscopic properties of a
102 > system. However, the large length scale of the ripple structures and
103 > the long time required for the formation of the ripples are crucial
104 > obstacles to performing the actual work. The principal ideas explored
105 > in this dissertation are attempts to break the computational task up
106 > by
107   \begin{itemize}
108   \item Simplifying the lipid model.
109 < \item Improving algorithm for integrating the equations of motion.
109 > \item Improving the algorithm for integrating the equations of motion.
110   \end{itemize}
111   In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
112 < dipole spin model and a coarse-grained molecualr scale model to
112 > dipole spin model and a coarse-grained molecular scale model to
113   perform the Monte Carlo and Molecular Dynamics simulations
114   respectively, and in chapter~\ref{chap:ld}, we develop a Langevin
115   Dynamics algorithm which excludes the explicit solvent to improve the
116   efficiency of the simulations.
117  
118 < \subsection{Lattice Model\label{In:ssec:model}}
119 < The gel-like characteristic (relatively immobile molecules) exhibited
118 > \subsection{Lattice Models\label{In:ssec:model}}
119 > The gel-like characteristic (laterally immobile molecules) exhibited
120   by the ripple phase makes it feasible to apply a lattice model to
121   study the system. The popular $2$ dimensional lattice models, {\it
122   i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it frustration}
# Line 89 | Line 137 | anti-aligned structure.
137   \begin{figure}
138   \centering
139   \includegraphics[width=3in]{./figures/inFrustration.pdf}
140 < \caption{Frustration on triangular lattice, the spins and dipoles are
141 < represented by arrows. The multiple local minima of energy states
142 < induce the frustration for spins and dipoles picking the directions.}
140 > \caption[Frustration on a triangular lattice]{Frustration on a
141 > triangular lattice; the spins and dipoles are represented by
142 > arrows. The multiple local energy minima induce frustration for the
143 > spins and dipoles, resulting in disordered low-temperature phases.}
144   \label{Infig:frustration}
145   \end{figure}
146 < The spins in figure~\ref{Infig:frustration} shows an illustration of
147 < the frustration for $J < 0$ on a triangular lattice. There are
148 < multiple local minima energy states which are independent of the
149 < direction of the spin on top of the triangle, therefore infinite
150 < possibilities for the packing of spins which induces what is known as
151 < ``complete regular frustration'' which leads to disordered low
152 < temperature phases. The similarity goes to the dipoles on a hexagonal
153 < lattice, which are shown by the dipoles in
154 < figure~\ref{Infig:frustration}. In this circumstance, the dipoles want
155 < to be aligned, however, due to the long wave fluctuation, at low
156 < temperature, the aligned state becomes unstable, vortex is formed and
157 < results in multiple local minima of energy states. The dipole on the
109 < center of the hexagonal lattice is frustrated.
146 > The spins in figure~\ref{Infig:frustration} illustrate frustration
147 > for $J < 0$ on a triangular lattice. There are multiple local energy
148 > minima which are independent of the direction of the spin at the top
149 > of the triangle, and therefore infinitely many ways of orienting
150 > large numbers of spins. This induces what is known as ``complete
151 > regular frustration'' which leads to disordered low temperature
152 > phases. This behavior extends to dipoles on a triangular lattice,
153 > shown in the lower portion of figure~\ref{Infig:frustration}. In
154 > this case, dipole-aligned structures are energetically favorable;
155 > however, at low temperature, vortices are easily formed, and this
156 > results in multiple local energy minima for a central dipole. The
157 > dipole at the center of the lattice is therefore frustrated.
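The geometric argument above can be checked by direct enumeration. The following is a minimal sketch (not from this work) of three antiferromagnetically coupled Ising spins on a single triangle; the coupling $J$, the bond list, and the energy convention $E = -J\sum_{\langle ij \rangle} s_i s_j$ are illustrative assumptions:

```python
from itertools import product

# Antiferromagnetic Ising coupling (J < 0) on the three sites of a triangle.
J = -1.0
bonds = [(0, 1), (1, 2), (0, 2)]

def energy(spins):
    # E = -J * sum over bonds of s_i * s_j
    return -J * sum(spins[i] * spins[j] for i, j in bonds)

energies = {s: energy(s) for s in product((-1, +1), repeat=3)}
e_min = min(energies.values())
ground_states = [s for s, e in energies.items() if e == e_min]

# All three bonds cannot be simultaneously anti-aligned: one bond is always
# "frustrated", and the minimum energy is six-fold degenerate.
print(e_min)               # -1.0
print(len(ground_states))  # 6
```

Six of the eight configurations share the minimum energy, so no single ordered ground state is selected; this degeneracy is the frustration described above.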
158  
159   The lack of translational degrees of freedom in lattice models
160   prevents their utilization in models for surface buckling. In
# Line 120 | Line 168 | principles of statistical mechanics.~\cite{Tolman1979}
168   to key concepts of classical statistical mechanics that we used in
169   this dissertation. Tolman gives an excellent introduction to the
170   principles of statistical mechanics.~\cite{Tolman1979} A large part of
171 < section~\ref{In:sec:SM} will follow Tolman's notation.
171 > section~\ref{In:sec:SM} follows Tolman's notation.
172  
173   \subsection{Ensembles\label{In:ssec:ensemble}}
174   In classical mechanics, the state of the system is completely
# Line 230 | Line 278 | selected moving representative points in the phase spa
278   and the rate of density change is zero in the neighborhood of any
279   selected moving representative points in the phase space.
280  
281 < The condition of the ensemble is determined by the density
281 > The type of thermodynamic ensemble is determined by the density
282   distribution. If we consider the density distribution as only a
283   function of $q$ and $p$, which means the rate of change of the phase
284   space density in the neighborhood of all representative points in the
# Line 259 | Line 307 | constant everywhere in the phase space,
307   \rho = \mathrm{const.}
308   \label{Ineq:uniformEnsemble}
309   \end{equation}
310 > the ensemble is called the {\it uniform ensemble}, but this ensemble has
310 > the ensemble is called {\it uniform ensemble}, but this ensemble has
311 > little relevance for physical chemistry. It is an ensemble with
312 > essentially infinite temperature.
313  
314   \subsubsection{The Microcanonical Ensemble\label{In:sssec:microcanonical}}
315 < Another useful ensemble is the {\it microcanonical ensemble}, for
316 < which:
315 > The most useful ensemble for Molecular Dynamics is the {\it
316 > microcanonical ensemble}, for which:
317   \begin{equation}
318   \rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)}
319   \label{Ineq:microcanonicalEnsemble}
# Line 280 | Line 330 | makes the argument of $\ln$ dimensionless. In this cas
330   \end{equation}
331   where $k_B$ is the Boltzmann constant and $C^N$ is a number which
332   makes the argument of $\ln$ dimensionless. In this case, $C^N$ is the
333 < total phase space volume of one state. The entropy of a microcanonical
334 < ensemble is given by
333 > total phase space volume of one state which has the same units as
334 > $\Sigma(N, V, E)$. The entropy of a microcanonical ensemble is given
335 > by
336   \begin{equation}
337   S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
338   \label{Ineq:entropy}
# Line 298 | Line 349 | Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N  e^{-H(
349   Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N  e^{-H(q^N, p^N) / k_B T},
350   \label{Ineq:partitionFunction}
351   \end{equation}
352 < which is also known as the canonical{\it partition function}. $\Gamma$
353 < indicates that the integral is over all phase space. In the canonical
354 < ensemble, $N$, the total number of particles, $V$, total volume, and
355 < $T$, the temperature, are constants. The systems with the lowest
356 < energies hold the largest population. According to maximum principle,
357 < thermodynamics maximizes the entropy $S$, implying that
352 > which is also known as the canonical {\it partition
353 > function}. $\Gamma$ indicates that the integral is over all phase
354 > space. In the canonical ensemble, $N$, the total number of particles,
355 > $V$, total volume, and $T$, the temperature, are constants. The
356 > systems with the lowest energies hold the largest
357 > population. At equilibrium, the entropy, $S$, is maximized, implying that
358   \begin{equation}
359   \begin{array}{ccc}
360   \delta S = 0 & \mathrm{and} & \delta^2 S < 0.
# Line 324 | Line 375 | There is an implicit assumption that our arguments are
375   system and the distribution of microscopic states.
376  
377   There is an implicit assumption on which our arguments are based so
378 < far. A representative point in the phase space is equally likely to be
379 < found in any energetically allowed region. In other words, all
380 < energetically accessible states are represented equally, the
381 < probabilities to find the system in any of the accessible states is
382 < equal. This is called the principle of equal a {\it priori}
378 > far. Two representative points in phase space are equally likely to
379 > be found if they have the same energy. In other words, all states of
380 > a given energy are represented equally, and the probability of
381 > finding the system in a particular state is proportional to that
382 > state's Boltzmann weight. This is called the principle of equal a {\it priori}
383   probabilities.
384  
385   \subsection{Statistical Averages\label{In:ssec:average}}
# Line 363 | Line 414 | mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes
414   \frac{1}{T} \int_{0}^{T} F[q^N(t), p^N(t)] dt
415   \label{Ineq:timeAverage2}
416   \end{equation}
417 < for an infinite time interval.
417 > for a finite time interval, $T$.
418  
419   \subsubsection{Ergodicity\label{In:sssec:ergodicity}}
420   The {\it ergodic hypothesis}, an important hypothesis governing modern
# Line 406 | Line 457 | C_{AB}(\tau) = \langle A(0)B(\tau) \rangle = \lim_{T \
457   \frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
458   \label{Ineq:crosscorrelationFunction}
459   \end{equation}
460 < and called {\it cross correlation function}.
460 > and is called a {\it cross correlation function}.
461  
462   We know from the ergodic hypothesis that there is a relationship
463   between time average and ensemble average. We can put the correlation
464 < function in a classical mechanics form,
464 > function in a classical mechanical form,
465   \begin{equation}
466 < C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[(q^N(t), p^N(t)]
467 < A[q^N(t+\tau), q^N(t+\tau)] \rho(q, p)
466 > C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N, p^N]
467 > A[q^N(\tau), p^N(\tau)] \rho(q^N, p^N)
468   \label{Ineq:autocorrelationFunctionCM}
469   \end{equation}
470 < and
470 > where $q^N(\tau)$, $p^N(\tau)$ is the phase space point that follows
471 > classical evolution of the point $q^N$, $p^N$ after a time $\tau$ has
472 > elapsed, and
473   \begin{equation}
474 < C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[(q^N(t), p^N(t)]
475 < B[q^N(t+\tau), q^N(t+\tau)] \rho(q, p)
474 > C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N, p^N]
475 > B[q^N(\tau), p^N(\tau)] \rho(q^N, p^N)
476   \label{Ineq:crosscorrelationFunctionCM}
477   \end{equation}
478 < as autocorrelation function and cross correlation function
478 > as the autocorrelation and cross correlation functions,
479   respectively. $\rho(q^N, p^N)$ is the density distribution at equilibrium
480 < in phase space. In many cases, the correlation function decay is a
481 < single exponential
480 > in phase space. In many cases, correlation functions decay as a
481 > single exponential in time
482   \begin{equation}
483   C(t) \sim e^{-t / \tau_r},
484   \label{Ineq:relaxation}
485   \end{equation}
486 < where $\tau_r$ is known as relaxation time which discribes the rate of
486 > where $\tau_r$ is known as the relaxation time, which describes the rate of
487   the decay.
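The time-average estimate of an autocorrelation function and the extraction of a relaxation time can be illustrated numerically. The sketch below is not from this work: the signal is a hypothetical AR(1) process, whose autocorrelation decays as $e^{-t/\tau_r}$ by construction, and all parameters are assumptions:

```python
import math
import random

random.seed(42)

# Hypothetical data: an AR(1) process x[t+1] = a*x[t] + noise, whose
# autocorrelation decays as C(t) ~ exp(-t / tau_r) when a = exp(-1/tau_r).
tau_r = 20.0
a = math.exp(-1.0 / tau_r)
n = 200_000
x = [0.0]
for _ in range(n - 1):
    x.append(a * x[-1] + random.gauss(0.0, 1.0))

mean = sum(x) / n

def autocorr(lag):
    # Time-average estimate of C(lag) = <A(0) A(lag)>, normalized so C(0) = 1.
    num = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    den = sum((xi - mean) ** 2 for xi in x)
    return num / den

# The measured correlation time -lag / ln C(lag) should be close to tau_r.
c10 = autocorr(10)
tau_est = -10.0 / math.log(c10)
print(round(tau_est, 1))
```

The recovered `tau_est` falls close to the input $\tau_r = 20$, up to statistical noise in the finite time average.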
488  
489 < \section{Methodolody\label{In:sec:method}}
490 < The simulations performed in this dissertation are branched into two
491 < main catalog, Monte Carlo and Molecular Dynamics. There are two main
492 < difference between Monte Carlo and Molecular Dynamics simulations. One
493 < is that the Monte Carlo simulation is time independent, and Molecular
494 < Dynamics simulation is time involved. Another dissimilar is that the
495 < Monte Carlo is a stochastic process, the configuration of the system
496 < is not determinated by its past, however, using Moleuclar Dynamics,
497 < the system is propagated by Newton's equation of motion, the
498 < trajectory of the system evolved in the phase space is determined. A
499 < brief introduction of the two algorithms are given in
500 < section~\ref{In:ssec:mc} and~\ref{In:ssec:md}. An extension of the
501 < Molecular Dynamics, Langevin Dynamics, is introduced by
489 > \section{Methodology\label{In:sec:method}}
490 > The simulations performed in this dissertation branch into two main
491 > categories, Monte Carlo and Molecular Dynamics. There are two main
492 > differences between Monte Carlo and Molecular Dynamics
493 > simulations. One is that the Monte Carlo simulations are time
494 > independent methods of sampling structural features of an ensemble,
495 > while Molecular Dynamics simulations provide dynamic
496 > information. Additionally, Monte Carlo methods are stochastic
497 > processes; the future configurations of the system are not determined
498 > by its past. However, in Molecular Dynamics, the system is propagated
499 > by Hamilton's equations of motion, and the trajectory of the system
500 > evolving in phase space is deterministic. Brief introductions of the
501 > two algorithms are given in section~\ref{In:ssec:mc}
502 > and~\ref{In:ssec:md}. Langevin Dynamics, an extension of the Molecular
503 > Dynamics that includes implicit solvent effects, is introduced in
504   section~\ref{In:ssec:ld}.
505  
506   \subsection{Monte Carlo\label{In:ssec:mc}}
507 < Monte Carlo algorithm was first introduced by Metropolis {\it et
508 < al.}.~\cite{Metropolis53} Basic Monte Carlo algorithm is usually
509 < applied to the canonical ensemble, a Boltzmann-weighted ensemble, in
510 < which the $N$, the total number of particles, $V$, total volume, $T$,
511 < temperature are constants. The average energy is given by substituding
512 < Eq.~\ref{Ineq:canonicalEnsemble} into Eq.~\ref{Ineq:statAverage2},
507 > A Monte Carlo integration algorithm was first introduced by Metropolis
508 > {\it et al.}~\cite{Metropolis53} The basic Metropolis Monte Carlo
509 > algorithm is usually applied to the canonical ensemble, a
510 > Boltzmann-weighted ensemble, in which $N$, the total number of
511 > particles, $V$, the total volume, and $T$, the temperature are
512 > constants. An average in this ensemble is given by
513   \begin{equation}
514 < \langle E \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N E e^{-H(q^N, p^N) / k_B T}.
514 > \langle A \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N
515 > A(q^N, p^N) e^{-H(q^N, p^N) / k_B T}.
516   \label{Ineq:energyofCanonicalEnsemble}
517   \end{equation}
518 < So are the other properties of the system. The Hamiltonian is the
519 < summation of Kinetic energy $K(p^N)$ as a function of momenta and
520 < Potential energy $U(q^N)$ as a function of positions,
518 > The Hamiltonian is the sum of the kinetic energy, $K(p^N)$, a function
519 > of momenta and the potential energy, $U(q^N)$, a function of
520 > positions,
521   \begin{equation}
522   H(q^N, p^N) = K(p^N) + U(q^N).
523   \label{Ineq:hamiltonian}
524   \end{equation}
525 < If the property $A$ is only a function of position ($ A = A(q^N)$),
526 < the mean value of $A$ is reduced to
525 > If the property $A$ is a function only of position ($ A = A(q^N)$),
526 > the mean value of $A$ can be reduced to
527   \begin{equation}
528 < \langle A \rangle = \frac{\int d \vec q~^N \int d \vec p~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N \int d \vec p~^N e^{-U(q^N) / k_B T}},
528 > \langle A \rangle = \frac{\int d \vec q~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}},
529   \label{Ineq:configurationIntegral}
530   \end{equation}
531   The kinetic energy $K(p^N)$ is factored out in
532   Eq.~\ref{Ineq:configurationIntegral}. $\langle A
533 < \rangle$ is a configuration integral now, and the
533 > \rangle$ is now a configuration integral, and
534   Eq.~\ref{Ineq:configurationIntegral} is equivalent to
535   \begin{equation}
536 < \langle A \rangle = \int d \vec q~^N A \rho(q^N).
536 > \langle A \rangle = \int d \vec q~^N A \rho(q^N),
537   \label{Ineq:configurationAve}
538   \end{equation}
539 + where $\rho(q^N)$ is the configurational probability distribution,
540 + \begin{equation}
541 + \rho(q^N) = \frac{e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}}.
542 + \label{Ineq:configurationProb}
543 + \end{equation}
544  
545 < In a Monte Carlo simulation of canonical ensemble, the probability of
546 < the system being in a state $s$ is $\rho_s$, the change of this
547 < probability with time is given by
545 > In a Monte Carlo simulation of a system in the canonical ensemble, the
546 > probability of the system being in a state $s$ is $\rho_s$; the change
547 > of this probability with time is given by
548   \begin{equation}
549   \frac{d \rho_s}{dt} = \sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ],
550   \label{Ineq:timeChangeofProb}
# Line 494 | Line 555 | to state $s'$. Since $\rho_s$ is independent of time a
555   \frac{d \rho_{s}^{equilibrium}}{dt} = 0,
556   \label{Ineq:equiProb}
557   \end{equation}
558 < which means $\sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ]$ is $0$
559 < for all $s'$. So
558 > each term $-w_{ss'}\rho_s + w_{s's}\rho_{s'}$ in the sum can be
559 > taken to be $0$ for all $s'$ ({\it detailed balance}). So
560   \begin{equation}
561   \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
562   \label{Ineq:relationshipofRhoandW}
563   \end{equation}
564 < If
564 > If the equilibrium state populations are to satisfy Boltzmann statistics,
565   \begin{equation}
505 \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T},
506 \label{Ineq:conditionforBoltzmannStatistics}
507 \end{equation}
508 then
509 \begin{equation}
566   \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T}.
567   \label{Ineq:satisfyofBoltzmannStatistics}
568   \end{equation}
569 < Eq.~\ref{Ineq:satisfyofBoltzmannStatistics} implies that
570 < $\rho^{equilibrium}$ satisfies Boltzmann statistics. An algorithm,
571 < shows how Monte Carlo simulation generates a transition probability
572 < governed by \ref{Ineq:conditionforBoltzmannStatistics}, is schemed as
569 > then the ratio of transition probabilities must satisfy
570 > \begin{equation}
571 > \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T}.
572 > \label{Ineq:conditionforBoltzmannStatistics}
573 > \end{equation}
574 > An algorithm by which a Monte Carlo simulation generates transition
575 > probabilities governed by
576 > Eq.~\ref{Ineq:conditionforBoltzmannStatistics} is given schematically as:
577   \begin{enumerate}
578 < \item\label{Initm:oldEnergy} Choose an particle randomly, calculate the energy.
579 < \item\label{Initm:newEnergy} Make a random displacement for particle,
580 < calculate the new energy.
578 > \item\label{Initm:oldEnergy} Choose a particle randomly, and calculate
579 > the energy of the rest of the system due to the current location of
580 > the particle.
581 > \item\label{Initm:newEnergy} Make a random displacement of the particle,
582 > and calculate the new energy, taking the particle's new position into account.
583    \begin{itemize}
584 <     \item Keep the new configuration and return to step~\ref{Initm:oldEnergy} if energy
585 < goes down.
524 <     \item Pick a random number between $[0,1]$ if energy goes up.
584 >     \item If the energy goes down, keep the new configuration.
585 >     \item If the energy goes up, pick a random number between $[0,1]$.
586          \begin{itemize}
587 <           \item Keep the new configuration and return to
588 < step~\ref{Initm:oldEnergy} if the random number smaller than
589 < $e^{-(U_{new} - U_{old})} / k_B T$.
590 <           \item Keep the old configuration and return to
530 < step~\ref{Initm:oldEnergy} if the random number larger than
531 < $e^{-(U_{new} - U_{old})} / k_B T$.
587 >           \item If the random number is smaller than
588 > $e^{-(U_{new} - U_{old}) / k_B T}$, keep the new configuration.
589 >           \item If the random number is larger than
590 > $e^{-(U_{new} - U_{old}) / k_B T}$, keep the old configuration.
591          \end{itemize}
592    \end{itemize}
593 < \item\label{Initm:accumulateAvg} Accumulate the average after it converges.
593 > \item\label{Initm:accumulateAvg} Accumulate the averages based on the
594 > current configuration.
595 > \item Go to step~\ref{Initm:oldEnergy}.
596   \end{enumerate}
597 < It is important to notice that the old configuration has to be sampled
598 < again if it is kept.
597 > It is important for sampling accuracy that the old configuration is
598 > sampled again if it is kept.
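The scheme above can be sketched in code. This is a minimal illustration, not the simulation code used in this work; the single particle, the 1-D harmonic potential $U(q) = \frac{1}{2}kq^2$, and all parameters are hypothetical assumptions in reduced units with $k_B = 1$:

```python
import math
import random

random.seed(7)

# Hypothetical system: one particle in a 1-D harmonic well U(q) = 0.5*k*q^2.
k, kT = 1.0, 0.5

def U(q):
    return 0.5 * k * q * q

q = 0.0
samples = []
for step in range(200_000):
    q_new = q + random.uniform(-0.5, 0.5)   # random trial displacement
    dU = U(q_new) - U(q)
    # Accept downhill moves; accept uphill moves with probability e^{-dU/kT}.
    if dU <= 0.0 or random.random() < math.exp(-dU / kT):
        q = q_new
    # The current configuration is sampled every step -- including repeats
    # of the old configuration when a trial move is rejected.
    samples.append(q * q)

# Equipartition predicts <q^2> = k_B T / k = 0.5 for this potential.
mean_q2 = sum(samples) / len(samples)
print(round(mean_q2, 2))
```

Note that rejected moves contribute the old configuration to the running average, exactly as the final remark above requires.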
599  
600   \subsection{Molecular Dynamics\label{In:ssec:md}}
601   Although some properties of the system can be calculated from the
602 < ensemble average in Monte Carlo simulations, due to the nature of
603 < lacking in the time dependence, it is impossible to gain information
604 < of those dynamic properties from Monte Carlo simulations. Molecular
605 < Dynamics is a measurement of the evolution of the positions and
606 < momenta of the particles in the system. The evolution of the system
607 < obeys laws of classical mechanics, in most situations, there is no
608 < need for the count of the quantum effects. For a real experiment, the
609 < instantaneous positions and momenta of the particles in the system are
610 < neither important nor measurable, the observable quantities are
611 < usually a average value for a finite time interval. These quantities
612 < are expressed as a function of positions and momenta in Melecular
613 < Dynamics simulations. Like the thermal temperature of the system is
553 < defined by
602 > ensemble average in Monte Carlo simulations, due to the absence of the
603 > time dependence, it is impossible to gain information on dynamic
604 > properties from Monte Carlo simulations. Molecular Dynamics evolves
605 > the positions and momenta of the particles in the system. The
606 > evolution of the system obeys the laws of classical mechanics, and in
607 > most situations, there is no need to account for quantum effects. In a
608 > real experiment, the instantaneous positions and momenta of the
609 > particles in the system are often neither important nor measurable;
610 > the observable quantities are usually an average value for a finite
611 > time interval. These quantities are expressed as a function of
612 > positions and momenta in Molecular Dynamics simulations. For example,
613 > temperature of the system is defined by
614   \begin{equation}
615 < \frac{1}{2} k_B T = \langle \frac{1}{2} m v_\alpha \rangle,
615 > \frac{3}{2} N k_B T = \left\langle \sum_{i=1}^N \frac{1}{2} m_i v_i^2 \right\rangle,
616   \label{Ineq:temperature}
617   \end{equation}
618 < here $m$ is the mass of the particle and $v_\alpha$ is the $\alpha$
619 < component of the velocity of the particle. The right side of
620 < Eq.~\ref{Ineq:temperature} is the average kinetic energy of the
561 < system. A simple Molecular Dynamics simulation scheme
562 < is:~\cite{Frenkel1996}
563 < \begin{enumerate}
564 < \item\label{Initm:initialize} Assign the initial positions and momenta
565 < for the particles in the system.
566 < \item\label{Initm:calcForce} Calculate the forces.
567 < \item\label{Initm:equationofMotion} Integrate the equation of motion.
568 <  \begin{itemize}
569 <     \item Return to step~\ref{Initm:calcForce} if the equillibrium is
570 < not achieved.
571 <     \item Go to step~\ref{Initm:calcAvg} if the equillibrium is
572 < achieved.
573 <  \end{itemize}
574 < \item\label{Initm:calcAvg} Compute the quantities we are interested in.
575 < \end{enumerate}
576 < The initial positions of the particles are chosen as that there is no
577 < overlap for the particles. The initial velocities at first are
578 < distributed randomly to the particles, and then shifted to make the
579 < momentum of the system $0$, at last scaled to the desired temperature
580 < of the simulation according Eq.~\ref{Ineq:temperature}.
618 > here $m_i$ is the mass of particle $i$ and $v_i$ is the speed of
619 > particle $i$. The right side of Eq.~\ref{Ineq:temperature} is the
620 > average kinetic energy of the system.
621  
622 < The core of Molecular Dynamics simulations is step~\ref{Initm:calcForce}
623 < and~\ref{Initm:equationofMotion}. The calculation of the forces are
624 < often involved numerous effort, this is the most time consuming step
625 < in the Molecular Dynamics scheme. The evaluation of the forces is
626 < followed by
622 > The initial positions of the particles are chosen so that the
623 > particles do not overlap. The initial velocities are first drawn
624 > randomly for each particle from a Maxwell-Boltzmann
625 > distribution, and then shifted to make the total linear momentum of
626 > the system $0$.
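A minimal sketch of this initialization is given below (Python/NumPy, reduced units with $k_B = 1$; the function name and parameters are illustrative, not part of any simulation package). It draws Maxwell-Boltzmann velocities, removes the net linear momentum, and rescales to the desired kinetic temperature:

```python
import numpy as np

def initialize_velocities(masses, target_T, kB=1.0, seed=0):
    """Illustrative velocity initialization (reduced units):
    draw each component from a Maxwell-Boltzmann distribution,
    remove the net linear momentum, then rescale so that
    (3/2) N kB T = sum_i (1/2) m_i v_i^2 holds exactly."""
    rng = np.random.default_rng(seed)
    m = np.asarray(masses, dtype=float)[:, None]
    n = len(masses)
    # Maxwell-Boltzmann: each component ~ Normal(0, sqrt(kB T / m_i))
    v = rng.normal(0.0, np.sqrt(kB * target_T / m), size=(n, 3))
    # Shift so the total linear momentum is zero
    v -= (m * v).sum(axis=0) / m.sum()
    # Rescale to the desired temperature
    T_inst = (m * v**2).sum() / (3.0 * n * kB)
    return v * np.sqrt(target_T / T_inst)

v = initialize_velocities(np.ones(100), target_T=2.5)
```

Because the rescaling is a uniform factor, the zero total momentum is preserved while the instantaneous kinetic temperature matches the target exactly.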
627 >
628 > The core of Molecular Dynamics simulations is the calculation of
629 > forces and the integration algorithm. Calculating the forces often
630 > requires enormous effort and is the most time-consuming step in the
631 > Molecular Dynamics scheme. Evaluation of the forces is mathematically
632 > simple,
633   \begin{equation}
634   f(q) = - \frac{\partial U(q)}{\partial q},
635   \label{Ineq:force}
636   \end{equation}
637 < $U(q)$ is the potential of the system. Once the forces computed, are
638 < the positions and velocities updated by integrating Newton's equation
639 < of motion,
640 < \begin{equation}
641 < f(q) = \frac{dp}{dt} = \frac{m dv}{dt}.
637 > where $U(q)$ is the potential of the system. However, the numerical
638 > details of this computation are often quite complex. Once the forces
639 > are computed, the positions and velocities are updated by integrating
640 > Hamilton's equations of motion,
641 > \begin{eqnarray}
642 > \dot p_i & = & -\frac{\partial H}{\partial q_i} = -\frac{\partial
643 > U(q_i)}{\partial q_i} = f(q_i) \\
644 > \dot q_i & = & \frac{\partial H}{\partial p_i} = \frac{p_i}{m_i}
645   \label{Ineq:newton}
646 < \end{equation}
647 < Here is an example of integrating algorithms, Verlet algorithm, which
648 < is one of the best algorithms to integrate Newton's equation of
649 < motion. The Taylor expension of the position at time $t$ is
646 > \end{eqnarray}
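As a quick illustration of Eq.~\ref{Ineq:force}, the Python sketch below evaluates the analytic force for a hypothetical one-dimensional harmonic potential (a stand-in for the real system potential, not the dissertation's force field) and checks it against a central-difference derivative:

```python
def U(q, k=1.0):
    """Hypothetical 1-D harmonic potential, U(q) = k q^2 / 2,
    standing in for the real system potential."""
    return 0.5 * k * q * q

def force(q, k=1.0):
    """Analytic force, f(q) = -dU/dq = -k q."""
    return -k * q

def force_fd(q, h=1e-6):
    """Central-difference approximation to -dU/dq, as a consistency check."""
    return -(U(q + h) - U(q - h)) / (2.0 * h)
```

The finite-difference check is a useful sanity test whenever an analytic force routine is implemented by hand.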
647 > The classic example of an integrating algorithm is the Verlet
648 > algorithm, which is one of the simplest algorithms for integrating the
649 > equations of motion. The Taylor expansion of the position at time $t$
650 > is
651   \begin{equation}
652 < q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 +
652 > q(t+\Delta t)= q(t) + \frac{p(t)}{m} \Delta t + \frac{f(t)}{2m}\Delta t^2 +
653          \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
654          \mathcal{O}(\Delta t^4)
655   \label{Ineq:verletFuture}
656   \end{equation}
657   for a later time $t+\Delta t$, and
658   \begin{equation}
659 < q(t-\Delta t)= q(t) - v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 -
659 > q(t-\Delta t)= q(t) - \frac{p(t)}{m} \Delta t + \frac{f(t)}{2m}\Delta t^2 -
660          \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
661          \mathcal{O}(\Delta t^4) ,
662   \label{Ineq:verletPrevious}
663   \end{equation}
664 < for a previous time $t-\Delta t$. The summation of the
665 < Eq.~\ref{Ineq:verletFuture} and~\ref{Ineq:verletPrevious} gives
664 > for a previous time $t-\Delta t$. Adding Eq.~\ref{Ineq:verletFuture}
665 > and~\ref{Ineq:verletPrevious} gives
666   \begin{equation}
667   q(t+\Delta t)+q(t-\Delta t) =
668          2q(t) + \frac{f(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4),
# Line 624 | Line 674 | q(t+\Delta t) \approx
674          2q(t) - q(t-\Delta t) + \frac{f(t)}{m}\Delta t^2.
675   \label{Ineq:newPosition}
676   \end{equation}
677 < The higher order of the $\Delta t$ is omitted.
677 > The higher order terms in $\Delta t$ are omitted.
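The Verlet recursion above translates directly into code. The following illustrative Python implementation (one particle, one dimension; names are ours, not from any package) bootstraps the first step from the Taylor expansion in Eq.~\ref{Ineq:verletFuture}:

```python
def verlet_trajectory(q0, p0, m, force, dt, n_steps):
    """Propagate one coordinate with the position-Verlet recursion
    q(t+dt) = 2 q(t) - q(t-dt) + (f(t)/m) dt^2.
    The first step is bootstrapped from the Taylor expansion
    q(dt) = q(0) + (p0/m) dt + (f(0)/(2m)) dt^2."""
    q_prev = q0
    q = q0 + (p0 / m) * dt + force(q0) / (2.0 * m) * dt**2
    traj = [q0, q]
    for _ in range(n_steps - 1):
        q_next = 2.0 * q - q_prev + force(q) / m * dt**2
        q_prev, q = q, q_next
        traj.append(q)
    return traj

# Harmonic oscillator (m = k = 1): exact solution is q(t) = cos(t),
# so after roughly one period (t ~ 2*pi) the coordinate returns to ~1.
traj = verlet_trajectory(1.0, 0.0, 1.0, lambda q: -q, 0.001, 6283)
```

Note that velocities never appear in the recursion itself; when they are needed (e.g.\ for the temperature of Eq.~\ref{Ineq:temperature}) they are estimated from $[q(t+\Delta t)-q(t-\Delta t)]/2\Delta t$.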
678  
679 < Numerous technics and tricks are applied to Molecular Dynamics
680 < simulation to gain more efficiency or more accuracy. The simulation
681 < engine used in this dissertation for the Molecular Dynamics
682 < simulations is {\sc oopse}, more details of the algorithms and
683 < technics can be found in~\cite{Meineke2005}.
679 > Numerous techniques and tricks have been applied to Molecular Dynamics
680 > simulations to gain greater efficiency or accuracy. The engine used in
681 > this dissertation for the Molecular Dynamics simulations is {\sc
682 > oopse}. More details of the algorithms and techniques used in this
683 > code can be found in Ref.~\cite{Meineke2005}.
684  
685   \subsection{Langevin Dynamics\label{In:ssec:ld}}
686 < In many cases, the properites of the solvent in a system, like the
687 < lipid-water system studied in this dissertation, are less important to
688 < the researchers. However, the major computational expense is spent on
689 < the solvent in the Molecular Dynamics simulations because of the large
690 < number of the solvent molecules compared to that of solute molecules,
691 < the ratio of the number of lipid molecules to the number of water
692 < molecules is $1:25$ in our lipid-water system. The efficiency of the
693 < Molecular Dynamics simulations is greatly reduced.
686 > In many cases, the properties of the solvent (like the water in the
687 > lipid-water system studied in this dissertation) are less interesting
688 > to the researchers than the behavior of the solute. However, the major
689 > computational expense is often the solvent-solvent interactions; this
690 > is due to the large number of solvent molecules compared to
691 > the number of solute molecules. The ratio of the number of lipid
692 > molecules to the number of water molecules is $1:25$ in our
693 > lipid-water system. The efficiency of the Molecular Dynamics
694 > simulations is greatly reduced, with up to 85\% of CPU time spent
695 > calculating only water-water interactions.
696  
697 < As an extension of the Molecular Dynamics simulations, the Langevin
698 < Dynamics seeks a way to avoid integrating equation of motion for
699 < solvent particles without losing the Brownian properites of solute
700 < particles. A common approximation is that the coupling of the solute
701 < and solvent is expressed as a set of harmonic oscillators. So the
702 < Hamiltonian of such a system is written as
697 > As an extension of the Molecular Dynamics methodologies, Langevin
698 > Dynamics seeks a way to avoid integrating the equations of motion for
699 > solvent particles without losing the solvent effects on the solute
700 > particles. One common approximation is to express the coupling of the
701 > solute and solvent as a set of harmonic oscillators. The Hamiltonian
702 > of such a system is written as
703   \begin{equation}
704   H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
705   \label{Ineq:hamiltonianofCoupling}
706   \end{equation}
707 < where $H_B$ is the Hamiltonian of the bath which equals to
707 > where $H_B$ is the Hamiltonian of the bath, which is a set of $N$
708 > harmonic oscillators,
709   \begin{equation}
710   H_B = \sum_{\alpha = 1}^{N} \left\{ \frac{p_\alpha^2}{2m_\alpha} +
711   \frac{1}{2} m_\alpha \omega_\alpha^2 q_\alpha^2\right\},
712   \label{Ineq:hamiltonianofBath}
713   \end{equation}
714 < $\alpha$ is all the degree of freedoms of the bath, $\omega$ is the
715 < bath frequency, and $\Delta U(q)$ is the bilinear coupling given by
714 > $\alpha$ runs over all the degrees of freedom of the bath,
715 > $\omega_\alpha$ is the bath frequency of oscillator $\alpha$, and
716 > $\Delta U(q)$ is the bilinear coupling given by
717   \begin{equation}
718   \Delta U = -\sum_{\alpha = 1}^{N} g_\alpha q_\alpha q,
719   \label{Ineq:systemBathCoupling}
720   \end{equation}
721 < where $g$ is the coupling constant. By solving the Hamilton's equation
722 < of motion, the {\it Generalized Langevin Equation} for this system is
723 < derived to
721 > where $g_\alpha$ is the coupling constant for oscillator $\alpha$. By
722 > solving Hamilton's equations of motion, the {\it Generalized
723 > Langevin Equation} for this system is derived as
724   \begin{equation}
725   m \ddot q = -\frac{\partial W(q)}{\partial q} - \int_0^t \xi(t) \dot q(t-t')dt' + R(t),
726   \label{Ineq:gle}
# Line 674 | Line 728 | W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m
728   with the potential of mean force,
729   \begin{equation}
730   W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m_\alpha
731 < \omega_\alpha^2}q^2,
731 > \omega_\alpha^2}q^2.
732   \label{Ineq:meanForce}
733   \end{equation}
734 < being only a dependence of coordinates of the solute particles, {\it
735 < friction kernel},
734 > The {\it friction kernel}, $\xi(t)$, depends only on the bath
735 > frequencies and coupling constants,
736   \begin{equation}
737   \xi(t) = \sum_{\alpha = 1}^N \frac{g_\alpha^2}{m_\alpha
738   \omega_\alpha^2} \cos(\omega_\alpha t),
739   \label{Ineq:xiforGLE}
740   \end{equation}
741 < and the random force,
741 > and a ``random'' force,
742   \begin{equation}
743   R(t) = \sum_{\alpha = 1}^N \left( g_\alpha q_\alpha(0)-\frac{g_\alpha}{m_\alpha
744   \omega_\alpha^2}q(0)\right) \cos(\omega_\alpha t) + \frac{\dot
745   q_\alpha(0)}{\omega_\alpha} \sin(\omega_\alpha t),
746   \label{Ineq:randomForceforGLE}
747   \end{equation}
748 < as only a dependence of the initial conditions. The relationship of
749 < friction kernel $\xi(t)$ and random force $R(t)$ is given by
748 > that depends only on the initial conditions. The relationship between
749 > the friction kernel $\xi(t)$ and the random force $R(t)$ is given by
750 > the second fluctuation-dissipation theorem,
751   \begin{equation}
752 < \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle
752 > \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle.
753   \label{Ineq:relationshipofXiandR}
754   \end{equation}
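This relation can be checked numerically for a small harmonic bath. The sketch below (Python/NumPy) uses the standard harmonic-bath forms $\xi(t) = \sum_\alpha g_\alpha^2/(m_\alpha\omega_\alpha^2)\cos(\omega_\alpha t)$ with $m_\alpha = 1$, and samples bath initial conditions from the thermal distribution at fixed $q(0)$; all parameter values are arbitrary illustrative choices:

```python
import numpy as np

# Monte Carlo check of the second fluctuation-dissipation theorem,
# xi(t) = <R(t)R(0)> / (kB T), for a small harmonic bath (m_alpha = 1).
rng = np.random.default_rng(1)
N, kT, q0 = 40, 1.0, 0.7           # bath size, temperature, solute coordinate
g = rng.uniform(0.2, 1.0, N)       # coupling constants g_alpha (illustrative)
w = rng.uniform(0.5, 3.0, N)       # bath frequencies omega_alpha (illustrative)

def xi(t):
    # Friction kernel: sum_alpha (g_alpha^2 / omega_alpha^2) cos(omega_alpha t)
    return float(np.sum(g**2 / w**2 * np.cos(w * t)))

def random_force(t, qa0, va0):
    # R(t) for given bath initial conditions (q_alpha(0), qdot_alpha(0))
    return np.sum(g * ((qa0 - g * q0 / w**2) * np.cos(w * t)
                       + va0 / w * np.sin(w * t)), axis=-1)

# Sample bath initial conditions from the thermal distribution at fixed q(0):
# q_alpha(0) ~ N(g_alpha q0 / w_alpha^2, kT / w_alpha^2), qdot_alpha(0) ~ N(0, kT)
M = 40000
qa0 = rng.normal(g * q0 / w**2, np.sqrt(kT) / w, size=(M, N))
va0 = rng.normal(0.0, np.sqrt(kT), size=(M, N))

t = 0.5
corr = float(np.mean(random_force(t, qa0, va0) * random_force(0.0, qa0, va0)))
# corr / kT should approximate xi(t) to within sampling noise
```

The sample correlation $\langle R(t)R(0)\rangle/k_BT$ agrees with $\xi(t)$ up to Monte Carlo noise, which is the content of Eq.~\ref{Ineq:relationshipofXiandR} for this model.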
755 < from their definitions. In Langevin limit, the friction is treated
756 < static, which means
755 > In the harmonic bath this relation is exact and provable from the
756 > definitions of these quantities. In the limit of static friction,
757   \begin{equation}
758   \xi(t) = 2 \xi_0 \delta(t).
759   \label{Ineq:xiofStaticFriction}
760   \end{equation}
761 < After substitude $\xi(t)$ into Eq.~\ref{Ineq:gle} with
762 < Eq.~\ref{Ineq:xiofStaticFriction}, {\it Langevin Equation} is extracted
763 < to
761 > Substituting Eq.~\ref{Ineq:xiofStaticFriction} into
762 > Eq.~\ref{Ineq:gle} yields the {\it Langevin Equation},
764   \begin{equation}
765   m \ddot q = -\frac{\partial U(q)}{\partial q} - \xi \dot q(t) + R(t).
766   \label{Ineq:langevinEquation}
767   \end{equation}
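Ch.~\ref{chap:ld} develops the integrator actually used in this work; as a toy illustration only, a naive Euler-type discretization of the Langevin equation might look like the following Python sketch (function name and parameters are ours, not from any production code):

```python
import math
import random

def langevin_step(q, v, m, force, xi, kT, dt, rng=random):
    """One step of a naive Euler-type discretization of the Langevin
    equation  m dv/dt = f(q) - xi*v + R(t).  The random force is drawn
    with variance 2*xi*kT/dt so that, integrated over a step, it has
    the strength required by the fluctuation-dissipation relation.
    Illustrative only -- not a production integrator."""
    R = math.sqrt(2.0 * xi * kT / dt) * rng.gauss(0.0, 1.0)
    v = v + (force(q) - xi * v + R) / m * dt
    q = q + v * dt
    return q, v
```

Run long enough on a bound potential, such a scheme samples velocities with $\langle v^2 \rangle \approx k_BT/m$ (up to an $\mathcal{O}(\Delta t)$ discretization bias), which is the minimal thermostatting behavior expected of Langevin dynamics.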
768 < The applying of Langevin Equation to dynamic simulations is discussed
769 < in Ch.~\ref{chap:ld}.
768 > Application of the Langevin Equation to dynamic simulations is
769 > discussed in Ch.~\ref{chap:ld}.
