1 \chapter{\label{chap:intro}INTRODUCTION AND BACKGROUND}
2
3 \section{Background on the Problem\label{In:sec:pro}}
Phospholipid molecules are the primary topic of this dissertation
because of their critical role as the foundation of biological
membranes. Lipids, when dispersed in water, self-assemble into a
number of topologically distinct bilayer structures. The phase
behavior of lipid bilayers has been explored
experimentally,~\cite{Cevc87} but a complete understanding of the
mechanism and driving forces behind the various phases has not yet
been achieved.
12
13 \subsection{Ripple Phase\label{In:ssec:ripple}}
The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named for the
periodic buckling of the membrane, is an intermediate phase that
develops either upon heating the gel phase $L_{\beta'}$ or upon cooling
the fluid phase $L_\alpha$. A sketch is shown in
figure~\ref{Infig:phaseDiagram}.~\cite{Cevc87}
19 \begin{figure}
20 \centering
21 \includegraphics[width=\linewidth]{./figures/inPhaseDiagram.pdf}
\caption{A phase diagram of a lipid bilayer. With increasing
temperature, the bilayer passes from the gel ($L_{\beta'}$) phase
through the ripple ($P_{\beta'}$) phase to the fluid ($L_\alpha$) phase.}
25 \label{Infig:phaseDiagram}
26 \end{figure}
Most structural information on the ripple phase has been obtained by
X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
electron microscopy (FFEM).~\cite{Copeland80,Meyer96} The X-ray
30 diffraction work by Katsaras {\it et al.} showed that a rich phase
31 diagram exhibiting both {\it asymmetric} and {\it symmetric} ripples
32 is possible for lecithin bilayers.\cite{Katsaras00} Recently,
33 Kaasgaard {\it et al.} used atomic force microscopy (AFM) to observe
34 ripple phase morphology in bilayers supported on
35 mica.~\cite{Kaasgaard03}
36 \begin{figure}
37 \centering
38 \includegraphics[width=\linewidth]{./figures/inRipple.pdf}
\caption{The experimentally observed ripple phase. The top image was
obtained by X-ray diffraction,~\cite{Sun96} and the bottom one was
observed by AFM.~\cite{Kaasgaard03}}
42 \label{Infig:ripple}
43 \end{figure}
Figure~\ref{Infig:ripple} shows the ripple phase observed by X-ray
diffraction and AFM. The experimental results provide strong support
for a 2-dimensional triangular packing lattice of the lipid molecules
within the ripple phase. This is a notable change from the observed
lipid packing within the gel phase,~\cite{Cevc87} although Tenchov
{\it et al.} have recently observed near-hexagonal packing in some
phosphatidylcholine (PC) gel phases.~\cite{Tenchov2001} However, the
physical mechanism for the formation of the ripple phase has never
been explained, and the microscopic structure of the ripple phase has
never been elucidated by experiments. Computational simulation is an
ideal tool for studying the microscopic properties of such a
system. However, the large length scale of the ripple structure and the
long time scale of ripple formation are serious obstacles to
performing such simulations. The principal ideas explored in this
dissertation are attempts to make the computational task tractable by
59 \begin{itemize}
60 \item Simplifying the lipid model.
\item Improving the algorithms for integrating the equations of motion.
62 \end{itemize}
In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
dipole spin model and a coarse-grained molecular-scale model to
perform Monte Carlo and Molecular Dynamics simulations,
respectively, and in chapter~\ref{chap:ld}, we develop a Langevin
Dynamics algorithm which excludes the explicit solvent to improve the
efficiency of the simulations.
69
70 \subsection{Lattice Model\label{In:ssec:model}}
The gel-like character (relatively immobile molecules) exhibited
by the ripple phase makes it feasible to apply a lattice model to
study the system. The popular two-dimensional lattice models, {\it
i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it frustration}
on triangular lattices. The Hamiltonians of these systems are given by
\begin{equation}
H =
\begin{cases}
-J \sum_n \sum_{n'} s_n s_{n'} & \text{Ising}, \\
-J \sum_n \sum_{n'} \vec s_n \cdot \vec s_{n'} & \text{$X-Y$ and
Heisenberg},
\end{cases}
\end{equation}
where $J$ has a nonzero value only when spins $s_n$ ($\vec s_n$) and
$s_{n'}$ ($\vec s_{n'}$) are nearest neighbors. When $J > 0$, the spins
prefer an aligned structure, and when $J < 0$, they prefer an
anti-aligned structure.
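
As a concrete illustration of the frustration that arises for $J < 0$
on a triangular lattice (a minimal Python sketch assuming a single
triangular plaquette of Ising spins, not one of the models used later
in this dissertation), the snippet below enumerates all configurations
of three antiferromagnetically coupled spins and shows that the lowest
energy is shared by six configurations, each of which leaves one bond
unsatisfied:
\begin{verbatim}
import itertools

J = -1.0                          # antiferromagnetic coupling, J < 0
bonds = [(0, 1), (1, 2), (2, 0)]  # the three bonds of a triangular plaquette

# Enumerate all 2^3 Ising configurations and their energies H = -J sum s_i s_j
energies = {}
for spins in itertools.product([-1, 1], repeat=3):
    energies[spins] = -J * sum(spins[i] * spins[j] for i, j in bonds)

e_min = min(energies.values())
ground_states = [s for s, e in energies.items() if e == e_min]

print("minimum energy:", e_min)                          # one bond is always frustrated
print("degenerate ground states:", len(ground_states))   # 6
\end{verbatim}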
88
89 \begin{figure}
90 \centering
91 \includegraphics[width=3in]{./figures/inFrustration.pdf}
\caption{Frustration on a triangular lattice; the spins and dipoles are
represented by arrows. The multiple local energy minima frustrate
the choice of direction for the spins and dipoles.}
95 \label{Infig:frustration}
96 \end{figure}
The spins in figure~\ref{Infig:frustration} illustrate the
frustration for $J < 0$ on a triangular lattice. There are
multiple local energy minima that are independent of the
direction of the spin at the top of the triangle, and therefore
infinitely many possibilities for the packing of spins. This induces
what is known as ``complete regular frustration,'' which leads to
disordered low-temperature phases. A similar situation arises for
dipoles on a hexagonal lattice, also shown in
figure~\ref{Infig:frustration}. In this case the dipoles prefer to be
aligned; however, due to long-wavelength fluctuations, the aligned
state becomes unstable at low temperature, vortices form, and
multiple local energy minima result. The dipole at the
center of the hexagonal lattice is frustrated.
110
The lack of translational degrees of freedom in lattice models
prevents their use in models of surface buckling. In
113 chapter~\ref{chap:mc}, a modified lattice model is introduced to
114 tackle this specific situation.
115
116 \section{Overview of Classical Statistical Mechanics\label{In:sec:SM}}
117 Statistical mechanics provides a way to calculate the macroscopic
118 properties of a system from the molecular interactions used in
119 computational simulations. This section serves as a brief introduction
to the key concepts of classical statistical mechanics used in
121 this dissertation. Tolman gives an excellent introduction to the
122 principles of statistical mechanics.~\cite{Tolman1979} A large part of
123 section~\ref{In:sec:SM} will follow Tolman's notation.
124
125 \subsection{Ensembles\label{In:ssec:ensemble}}
In classical mechanics, the state of the system is completely
described by the positions and momenta of all particles. If we have an
$N$ particle system, there are $6N$ coordinates ($3N$ positions $(q_1,
q_2, \ldots, q_{3N})$ and $3N$ momenta $(p_1, p_2, \ldots, p_{3N})$)
that define the instantaneous state of the system. Each single set of
the $6N$ coordinates can be considered as a unique point in a $6N$
dimensional space where each perpendicular axis is one of
$q_{i\alpha}$ or $p_{i\alpha}$ ($i$ is the particle and $\alpha$ is
the spatial axis). This $6N$ dimensional space is known as phase
space. The instantaneous state of the system is a single point in
phase space, and a trajectory is a connected path of points in phase
space. An ensemble is a collection of systems described by the same
macroscopic observables but distributed over different microscopic
states. In phase space, an ensemble is a collection of
representative points, and a density distribution $\rho(q^N, p^N)$ of
these representative points describes the condition of an
ensemble of identical systems. Since this density may change with
time, it is also a function of time: $\rho(q^N, p^N, t)$ describes the
ensemble at a time $t$, and $\rho(q^N, p^N, t')$ describes the
ensemble at a later time $t'$. For convenience, we will use $\rho$
instead of $\rho(q^N, p^N, t)$ in the following discussion. If we
normalize $\rho$ to unity,
148 \begin{equation}
149 1 = \int d \vec q~^N \int d \vec p~^N \rho,
150 \label{Ineq:normalized}
151 \end{equation}
then the value of $\rho$ gives the probability per unit volume of
finding the system at a given point in phase space.
154
Liouville's theorem describes the change in the density $\rho$ with
time. The number of representative points in a given volume of phase
space at any instant can be written as:
158 \begin{equation}
159 \label{Ineq:deltaN}
160 \delta N = \rho~\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
161 \end{equation}
To calculate the change in the number of representative points in this
volume, let us first consider the flow along the $q_1$ axis alone. The
rate at which representative points enter the volume at $q_1$ per unit
time is:
167 \begin{equation}
168 \label{Ineq:deltaNatq1}
169 \rho~\dot q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
170 \end{equation}
and the rate at which representative points leave the volume
at the position $q_1 + \delta q_1$ is:
173 \begin{equation}
174 \label{Ineq:deltaNatq1plusdeltaq1}
175 \left( \rho + \frac{\partial \rho}{\partial q_1} \delta q_1 \right)\left(\dot q_1 +
176 \frac{\partial \dot q_1}{\partial q_1} \delta q_1 \right)\delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
177 \end{equation}
Here the higher-order differentials are neglected. The net change in
the number of representative points is the difference between
eq.~\ref{Ineq:deltaNatq1} and eq.~\ref{Ineq:deltaNatq1plusdeltaq1},
which gives:
182 \begin{equation}
183 \label{Ineq:deltaNatq1axis}
184 -\left(\rho \frac{\partial {\dot q_1}}{\partial q_1} + \frac{\partial {\rho}}{\partial q_1} \dot q_1 \right)\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
185 \end{equation}
where higher-order differentials have again been neglected. If we sum
over all the axes of phase space, we obtain the change in the number of
representative points in a given volume with time:
189 \begin{equation}
190 \label{Ineq:deltaNatGivenVolume}
191 \frac{d(\delta N)}{dt} = -\sum_{i=1}^N \left[\rho \left(\frac{\partial
192 {\dot q_i}}{\partial q_i} + \frac{\partial
193 {\dot p_i}}{\partial p_i}\right) + \left( \frac{\partial {\rho}}{\partial
194 q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i\right)\right]\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
195 \end{equation}
196 From Hamilton's equations of motion,
197 \begin{equation}
198 \frac{\partial {\dot q_i}}{\partial q_i} = - \frac{\partial
199 {\dot p_i}}{\partial p_i},
200 \label{Ineq:canonicalFormOfEquationOfMotion}
201 \end{equation}
202 this cancels out the first term on the right side of
203 eq.~\ref{Ineq:deltaNatGivenVolume}. If both sides of
204 eq.~\ref{Ineq:deltaNatGivenVolume} are divided by $\delta q_1 \delta q_2
205 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N$, then we
206 arrive at Liouville's theorem:
207 \begin{equation}
208 \left( \frac{\partial \rho}{\partial t} \right)_{q, p} = -\sum_{i} \left(
209 \frac{\partial {\rho}}{\partial
210 q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right).
211 \label{Ineq:simpleFormofLiouville}
212 \end{equation}
Liouville's theorem is one of the foundations of statistical
mechanics. If we move the right side of
equation~\ref{Ineq:simpleFormofLiouville} to the left, we obtain
216 \begin{equation}
217 \left( \frac{\partial \rho}{\partial t} \right)_{q, p} + \sum_{i} \left(
218 \frac{\partial {\rho}}{\partial
219 q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
220 = 0.
221 \label{Ineq:anotherFormofLiouville}
222 \end{equation}
Note that the left side of
equation~\ref{Ineq:anotherFormofLiouville} is the total derivative of
$\rho$ with respect to $t$, which means
226 \begin{equation}
227 \frac{d \rho}{dt} = 0,
228 \label{Ineq:conservationofRho}
229 \end{equation}
that is, the density is constant in the neighborhood of any
selected representative point as it moves through phase space.
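
To make Liouville's theorem concrete, the following Python sketch (an
illustration assuming a one-dimensional harmonic oscillator, not a
calculation taken from this work) compares the phase-space volume
change produced by one step of the velocity Verlet map, which preserves
the volume exactly, with that of an explicit Euler step, which does
not:
\begin{verbatim}
import numpy as np

# One-dimensional harmonic oscillator: H = p^2/(2m) + k q^2/2, force f(q) = -k q
m, k, dt = 1.0, 1.0, 0.1

# Velocity Verlet step written as a linear map (q, p) -> (q', p')
x = k * dt**2 / (2.0 * m)
verlet = np.array([[1.0 - x,                   dt / m],
                   [-k * dt * (1.0 - x / 2.0), 1.0 - x]])

# Explicit Euler step: q' = q + (p/m) dt, p' = p - k q dt
euler = np.array([[1.0,     dt / m],
                  [-k * dt, 1.0]])

# The Jacobian determinant measures how a small phase-space volume is scaled
print("velocity Verlet det:", np.linalg.det(verlet))  # 1.0: volume preserved
print("explicit Euler det: ", np.linalg.det(euler))   # 1 + k dt^2/m > 1: volume grows
\end{verbatim}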
232
The condition of the ensemble is determined by the density
distribution. If we consider the density distribution as a
function of $q$ and $p$ only, then the phase-space density at every
fixed point in phase space is constant in time,
238 \begin{equation}
239 \left( \frac{\partial \rho}{\partial t} \right)_{q, p} = 0.
240 \label{Ineq:statEquilibrium}
241 \end{equation}
We may then conclude that the ensemble is in {\it statistical
equilibrium}. An ensemble in statistical equilibrium implies that the
system is also in macroscopic equilibrium. If $\left( \frac{\partial
\rho}{\partial t} \right)_{q, p} = 0$, then
246 \begin{equation}
247 \sum_{i} \left(
248 \frac{\partial {\rho}}{\partial
249 q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
250 = 0.
251 \label{Ineq:constantofMotion}
252 \end{equation}
If $\rho$ is a function only of some constant of the motion, then $\rho$ is
independent of time. For a conservative system, the energy of the
system is one such constant of the motion. There are many
thermodynamically relevant ensembles. When the density distribution is
constant everywhere in phase space,
\begin{equation}
\rho = \mathrm{const.},
\label{Ineq:uniformEnsemble}
\end{equation}
the ensemble is called the {\it uniform ensemble}.
263
264 \subsubsection{The Microcanonical Ensemble\label{In:sssec:microcanonical}}
265 Another useful ensemble is the {\it microcanonical ensemble}, for
266 which:
267 \begin{equation}
268 \rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)}
269 \label{Ineq:microcanonicalEnsemble}
270 \end{equation}
271 where $\Sigma(N, V, E)$ is a normalization constant parameterized by
272 $N$, the total number of particles, $V$, the total physical volume and
273 $E$, the total energy. The physical meaning of $\Sigma(N, V, E)$ is
274 the phase space volume accessible to a microcanonical system with
275 energy $E$ evolving under Hamilton's equations. $H(q^N, p^N)$ is the
276 Hamiltonian of the system. The Gibbs entropy is defined as
277 \begin{equation}
278 S = - k_B \int d \vec q~^N \int d \vec p~^N \rho \ln [C^N \rho],
279 \label{Ineq:gibbsEntropy}
280 \end{equation}
281 where $k_B$ is the Boltzmann constant and $C^N$ is a number which
282 makes the argument of $\ln$ dimensionless. In this case, $C^N$ is the
283 total phase space volume of one state. The entropy of a microcanonical
284 ensemble is given by
285 \begin{equation}
286 S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
287 \label{Ineq:entropy}
288 \end{equation}
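
A minimal numerical illustration of this relation (assuming, purely for
illustration, a discrete toy system of independent two-state spins
rather than a continuous phase space) is sketched below; the number of
microstates with a given number of excited spins plays the role of
$\Sigma/C^N$:
\begin{verbatim}
import math

kB = 1.380649e-23  # Boltzmann constant in J/K

def microcanonical_entropy(n_spins, n_excited):
    """S = kB ln(Omega) for n_spins two-state spins with n_excited excitations."""
    omega = math.comb(n_spins, n_excited)   # number of accessible microstates
    return kB * math.log(omega)

# The entropy is largest when half of the spins are excited (most microstates)
for n_excited in (0, 25, 50, 75, 100):
    print(n_excited, microcanonical_entropy(100, n_excited))
\end{verbatim}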
289
290 \subsubsection{The Canonical Ensemble\label{In:sssec:canonical}}
291 If the density distribution $\rho$ is given by
292 \begin{equation}
293 \rho = \frac{1}{Z_N}e^{-H(q^N, p^N) / k_B T},
294 \label{Ineq:canonicalEnsemble}
295 \end{equation}
296 the ensemble is known as the {\it canonical ensemble}. Here,
297 \begin{equation}
298 Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N e^{-H(q^N, p^N) / k_B T},
299 \label{Ineq:partitionFunction}
300 \end{equation}
which is also known as the canonical {\it partition function}. $\Gamma$
indicates that the integral is over all of phase space. In the canonical
ensemble, $N$, the total number of particles, $V$, the total volume, and
$T$, the temperature, are constants. The states with the lowest
energies hold the largest populations. At equilibrium, thermodynamics
requires the entropy $S$ to be a maximum, implying that
307 \begin{equation}
308 \begin{array}{ccc}
309 \delta S = 0 & \mathrm{and} & \delta^2 S < 0.
310 \end{array}
311 \label{Ineq:maximumPrinciple}
312 \end{equation}
From Eq.~\ref{Ineq:maximumPrinciple} and the two constraints on the
canonical ensemble, {\it i.e.}, that the total probability and the
average energy are fixed, the partition function can be shown to be
equivalent to
317 \begin{equation}
318 Z_N = e^{-A/k_B T},
319 \label{Ineq:partitionFunctionWithFreeEnergy}
320 \end{equation}
where $A$ is the Helmholtz free energy. The significance of
Eqs.~\ref{Ineq:entropy} and~\ref{Ineq:partitionFunctionWithFreeEnergy} is
that they serve as a connection between the macroscopic properties of the
system and the distribution of microscopic states.
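
As a small worked example of this connection (a sketch assuming a
discrete two-level system with energy gap $\epsilon$ in reduced units,
rather than an integral over phase space), the snippet below evaluates
the partition function, the Helmholtz free energy $A = -k_B T \ln Z$,
and the Boltzmann populations directly:
\begin{verbatim}
import numpy as np

kB, T = 1.0, 1.0                  # reduced units
eps = 2.0                         # energy gap of the two-level system
energies = np.array([0.0, eps])

beta = 1.0 / (kB * T)
weights = np.exp(-beta * energies)       # Boltzmann factors e^{-E/kB T}
Z = weights.sum()                        # partition function
A = -kB * T * np.log(Z)                  # Helmholtz free energy, Z = e^{-A/kB T}
populations = weights / Z                # canonical probability of each state
E_mean = (populations * energies).sum()  # ensemble-average energy

print("Z =", Z, " A =", A)
print("populations:", populations)       # the lower-energy state dominates
print("<E> =", E_mean)
\end{verbatim}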
325
There is an implicit assumption on which our arguments have been based
so far: a representative point in phase space is equally likely to be
found in any energetically allowed region. In other words, all
energetically accessible states are represented equally; the
probability of finding the system in any one of the accessible states
is the same. This is called the principle of equal {\it a priori}
probabilities.
333
334 \subsection{Statistical Averages\label{In:ssec:average}}
Given a density distribution $\rho$ in phase space, the average over
all the systems in the ensemble of any quantity $F(q^N, p^N)$ that
depends on the coordinates ($q^N$) and the momenta ($p^N$) can be
calculated from the definition shown in
Eq.~\ref{Ineq:statAverage1},
340 \begin{equation}
341 \langle F(q^N, p^N) \rangle = \frac{\int d \vec q~^N \int d \vec p~^N
342 F(q^N, p^N) \rho}{\int d \vec q~^N \int d \vec p~^N \rho}.
343 \label{Ineq:statAverage1}
344 \end{equation}
345 Since the density distribution $\rho$ is normalized to unity, the mean
346 value of $F(q^N, p^N)$ is simplified to
347 \begin{equation}
348 \langle F(q^N, p^N) \rangle = \int d \vec q~^N \int d \vec p~^N F(q^N,
349 p^N) \rho,
350 \label{Ineq:statAverage2}
351 \end{equation}
which is called the {\it ensemble average}. In real experiments,
however, the quantity is averaged over a finite time,
354 \begin{equation}
355 \langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
356 \frac{1}{T} \int_{t_0}^{t_0+T} F[q^N(t), p^N(t)] dt.
357 \label{Ineq:timeAverage1}
358 \end{equation}
In statistical mechanics this time average is usually independent of
$t_0$, so Eq.~\ref{Ineq:timeAverage1} becomes
361 \begin{equation}
362 \langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
363 \frac{1}{T} \int_{0}^{T} F[q^N(t), p^N(t)] dt
364 \label{Ineq:timeAverage2}
365 \end{equation}
366 for an infinite time interval.
367
368 \subsubsection{Ergodicity\label{In:sssec:ergodicity}}
The {\it ergodic hypothesis}, an important assumption underlying modern
computer simulations, states that the system will eventually pass
arbitrarily close to any energetically accessible point in
phase space. Mathematically, this leads to
373 \begin{equation}
374 \langle F(q^N, p^N) \rangle = \langle F(q^N, p^N) \rangle_t.
375 \label{Ineq:ergodicity}
376 \end{equation}
Eq.~\ref{Ineq:ergodicity} validates Molecular Dynamics as a form of
averaging for sufficiently ergodic systems. It also allows Monte Carlo
to be used to obtain ensemble averages of a quantity that are related
to the time averages measured in experiments.
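
As an informal numerical check of this equivalence (a sketch assuming a
one-dimensional harmonic oscillator with fixed amplitude and random
phase, chosen only for illustration), the snippet below compares the
time average of $q^2$ along a single long trajectory with the average
of $q^2$ over an ensemble of trajectories sampled at one instant:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A, omega = 1.0, 2.0 * np.pi        # q(t) = A cos(omega t + phi)

# Time average of q^2 along one long trajectory (single phase)
t = np.linspace(0.0, 1000.0, 2_000_001)
time_avg = np.mean((A * np.cos(omega * t)) ** 2)

# Ensemble average of q^2 over many members with random phases, all at t = 0
phases = rng.uniform(0.0, 2.0 * np.pi, size=200_000)
ensemble_avg = np.mean((A * np.cos(phases)) ** 2)

print("time average    :", time_avg)      # both approach A^2 / 2 = 0.5
print("ensemble average:", ensemble_avg)
\end{verbatim}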
381
382 \subsection{Correlation Functions\label{In:ssec:corr}}
Thermodynamic properties can be computed by equilibrium statistical
mechanics. {\it Time correlation functions}, on the other hand, are a
powerful tool for understanding the evolution of a dynamical
system. Imagine that a property $A(q^N, p^N, t)$, as a function of the
coordinates $q^N$ and momenta $p^N$, has an initial value at $t_0$, and
that at a later time $t_0 + \tau$ this value has changed. If $\tau$ is
very small, the change in the value is minor, and the later value
$A(q^N, p^N, t_0 + \tau)$ is correlated with its initial value. However,
when $\tau$ is large, this correlation is lost. A time correlation
function measures this relationship and is defined
by~\cite{Berne90}
394 \begin{equation}
395 C_{AA}(\tau) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarrow
396 \infty}
397 \frac{1}{T} \int_{0}^{T} dt A(t) A(t + \tau).
398 \label{Ineq:autocorrelationFunction}
399 \end{equation}
Eq.~\ref{Ineq:autocorrelationFunction} is the correlation function of
a single variable, called an {\it autocorrelation function}. The
correlation function of two different variables is defined
similarly,
404 \begin{equation}
405 C_{AB}(\tau) = \langle A(0)B(\tau) \rangle = \lim_{T \rightarrow \infty}
406 \frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
407 \label{Ineq:crosscorrelationFunction}
408 \end{equation}
and is called the {\it cross correlation function}.
410
We know from the ergodic hypothesis that there is a relationship
between the time average and the ensemble average. We can therefore
write the correlation functions in a classical mechanical form,
414 \begin{equation}
C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N(t), p^N(t)]
A[q^N(t+\tau), p^N(t+\tau)] \rho(q, p)
417 \label{Ineq:autocorrelationFunctionCM}
418 \end{equation}
419 and
420 \begin{equation}
C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N(t), p^N(t)]
B[q^N(t+\tau), p^N(t+\tau)] \rho(q, p)
423 \label{Ineq:crosscorrelationFunctionCM}
424 \end{equation}
as the autocorrelation function and the cross correlation function,
respectively. Here $\rho(q, p)$ is the equilibrium density distribution
in phase space. In many cases, the decay of the correlation function is
a single exponential,
429 \begin{equation}
430 C(t) \sim e^{-t / \tau_r},
431 \label{Ineq:relaxation}
432 \end{equation}
where $\tau_r$ is known as the relaxation time, which describes the
rate of the decay.
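
The following Python sketch (a self-contained illustration using a
synthetic exponentially correlated signal, not data from this work)
shows how an autocorrelation function of the form of
Eq.~\ref{Ineq:autocorrelationFunction} can be estimated from a discrete
time series and how $\tau_r$ can be read off from its decay:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dt, tau_true, n = 0.01, 0.5, 200_000

# Generate an exponentially correlated signal A(t)
# (a discretized Ornstein-Uhlenbeck process with relaxation time tau_true)
phi = np.exp(-dt / tau_true)
noise = rng.normal(0.0, np.sqrt(1.0 - phi**2), size=n)
A = np.empty(n)
A[0] = rng.normal()
for i in range(1, n):
    A[i] = phi * A[i - 1] + noise[i]

def autocorrelation(x, max_lag):
    """C(tau) = <x(0) x(tau)> estimated as a time average over the series."""
    x = x - x.mean()
    return np.array([np.mean(x[:len(x) - lag] * x[lag:]) for lag in range(max_lag)])

C = autocorrelation(A, max_lag=300)
C /= C[0]                                # normalize so that C(0) = 1

# Estimate the relaxation time as the lag where C(tau) first drops below 1/e
tau_r = dt * np.argmax(C < np.exp(-1.0))
print("estimated tau_r:", tau_r, "  input value:", tau_true)
\end{verbatim}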
435
\section{Methodology\label{In:sec:method}}
The simulations performed in this dissertation fall into two main
categories, Monte Carlo and Molecular Dynamics. There are two main
differences between Monte Carlo and Molecular Dynamics simulations. One
is that a Monte Carlo simulation is time independent, while a Molecular
Dynamics simulation explicitly involves time. The other is that Monte
Carlo is a stochastic process in which the configuration of the system
is not determined by its past, whereas in Molecular Dynamics the
system is propagated by Newton's equations of motion, so the
trajectory of the system through phase space is deterministic. A
brief introduction to the two algorithms is given in
sections~\ref{In:ssec:mc} and~\ref{In:ssec:md}. An extension of
Molecular Dynamics, Langevin Dynamics, is introduced in
section~\ref{In:ssec:ld}.
450
451 \subsection{Monte Carlo\label{In:ssec:mc}}
The Monte Carlo algorithm was first introduced by Metropolis {\it et
al.}~\cite{Metropolis53} The basic Monte Carlo algorithm is usually
applied to the canonical ensemble, a Boltzmann-weighted ensemble, in
which $N$, the total number of particles, $V$, the total volume, and $T$,
the temperature, are constants. The average energy is given by substituting
Eq.~\ref{Ineq:canonicalEnsemble} into Eq.~\ref{Ineq:statAverage2},
458 \begin{equation}
\langle E \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N H(q^N, p^N) e^{-H(q^N, p^N) / k_B T}.
460 \label{Ineq:energyofCanonicalEnsemble}
461 \end{equation}
Other properties of the system are obtained in the same way. The
Hamiltonian is the sum of the kinetic energy $K(p^N)$, a function of the
momenta, and the potential energy $U(q^N)$, a function of the positions,
465 \begin{equation}
466 H(q^N, p^N) = K(p^N) + U(q^N).
467 \label{Ineq:hamiltonian}
468 \end{equation}
469 If the property $A$ is only a function of position ($ A = A(q^N)$),
470 the mean value of $A$ is reduced to
471 \begin{equation}
\langle A \rangle = \frac{\int d \vec q~^N \int d \vec p~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N \int d \vec p~^N e^{-U(q^N) / k_B T}}.
\label{Ineq:configurationIntegral}
\end{equation}
The kinetic energy $K(p^N)$ is factored out in
Eq.~\ref{Ineq:configurationIntegral}; $\langle A \rangle$ is now a
configuration integral, and Eq.~\ref{Ineq:configurationIntegral} is
equivalent to
479 \begin{equation}
480 \langle A \rangle = \int d \vec q~^N A \rho(q^N).
481 \label{Ineq:configurationAve}
482 \end{equation}
483
In a Monte Carlo simulation of the canonical ensemble, the probability
of the system being in a state $s$ is $\rho_s$, and the change of this
probability with time is given by
487 \begin{equation}
488 \frac{d \rho_s}{dt} = \sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ],
489 \label{Ineq:timeChangeofProb}
490 \end{equation}
where $w_{ss'}$ is the transition probability of going from state $s$
to state $s'$. Since $\rho_s$ is independent of time at equilibrium,
493 \begin{equation}
494 \frac{d \rho_{s}^{equilibrium}}{dt} = 0,
495 \label{Ineq:equiProb}
496 \end{equation}
which means that $\sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ] = 0$.
A sufficient condition is that each term in the sum vanishes
individually ({\it detailed balance}), so
499 \begin{equation}
500 \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
501 \label{Ineq:relationshipofRhoandW}
502 \end{equation}
503 If
504 \begin{equation}
505 \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T},
506 \label{Ineq:conditionforBoltzmannStatistics}
507 \end{equation}
508 then
509 \begin{equation}
510 \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T}.
511 \label{Ineq:satisfyofBoltzmannStatistics}
512 \end{equation}
Eq.~\ref{Ineq:satisfyofBoltzmannStatistics} implies that
$\rho^{equilibrium}$ satisfies Boltzmann statistics. An algorithm
showing how a Monte Carlo simulation generates transition probabilities
governed by Eq.~\ref{Ineq:conditionforBoltzmannStatistics} is outlined
below:
\begin{enumerate}
\item\label{Initm:oldEnergy} Choose a particle randomly and calculate its energy.
\item\label{Initm:newEnergy} Make a random displacement of the particle and
calculate the new energy.
\begin{itemize}
\item Keep the new configuration and return to step~\ref{Initm:oldEnergy} if the energy
goes down.
\item Pick a random number in $[0,1]$ if the energy goes up.
\begin{itemize}
\item Keep the new configuration and return to
step~\ref{Initm:oldEnergy} if the random number is smaller than
$e^{-(U_{new} - U_{old}) / k_B T}$.
\item Keep the old configuration and return to
step~\ref{Initm:oldEnergy} if the random number is larger than
$e^{-(U_{new} - U_{old}) / k_B T}$.
\end{itemize}
\end{itemize}
\item\label{Initm:accumulateAvg} Accumulate the averages once the simulation has converged.
\end{enumerate}
It is important to note that the old configuration has to be counted
again in the averages if it is kept.
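
The following Python sketch (an illustrative example assuming a single
particle in a one-dimensional harmonic potential in reduced units, not
the lipid models studied later) implements these steps, counts the
retained configuration again after a rejected move, and compares the
sampled $\langle x^2 \rangle$ with the analytic canonical value
$k_B T / k$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
kB_T, k_spring = 1.0, 4.0             # reduced units
U = lambda x: 0.5 * k_spring * x**2   # potential energy of the particle

x, max_step = 0.0, 0.5
samples = []
for step in range(200_000):
    x_new = x + rng.uniform(-max_step, max_step)   # random trial displacement
    dU = U(x_new) - U(x)
    # Metropolis criterion: always accept downhill moves; accept uphill
    # moves with probability exp(-dU / kB T)
    if dU <= 0.0 or rng.random() < np.exp(-dU / kB_T):
        x = x_new
    # The current configuration is counted even when the move was rejected
    if step >= 50_000:                              # discard equilibration steps
        samples.append(x * x)

print("sampled <x^2>  :", np.mean(samples))
print("analytic kB T/k:", kB_T / k_spring)
\end{verbatim}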
538
539 \subsection{Molecular Dynamics\label{In:ssec:md}}
Although some properties of the system can be calculated from ensemble
averages in Monte Carlo simulations, the lack of time dependence makes
it impossible to obtain dynamic properties from them. Molecular
Dynamics follows the evolution of the positions and
momenta of the particles in the system. This evolution
obeys the laws of classical mechanics; in most situations there is no
need to account for quantum effects. In a real experiment, the
instantaneous positions and momenta of the particles in the system are
neither important nor measurable; the observable quantities are
usually values averaged over a finite time interval. In Molecular
Dynamics simulations these quantities are expressed as functions of the
positions and momenta. For example, the temperature of the system is
defined by
554 \begin{equation}
\frac{1}{2} k_B T = \langle \frac{1}{2} m v_\alpha^2 \rangle,
556 \label{Ineq:temperature}
557 \end{equation}
where $m$ is the mass of the particle and $v_\alpha$ is the $\alpha$
component of its velocity. The right side of
Eq.~\ref{Ineq:temperature} is the average kinetic energy per degree of
freedom of the system. A simple Molecular Dynamics simulation scheme
is:~\cite{Frenkel1996}
\begin{enumerate}
\item\label{Initm:initialize} Assign the initial positions and momenta
of the particles in the system.
\item\label{Initm:calcForce} Calculate the forces.
\item\label{Initm:equationofMotion} Integrate the equations of motion.
\begin{itemize}
\item Return to step~\ref{Initm:calcForce} if equilibrium has
not been achieved.
\item Go to step~\ref{Initm:calcAvg} if equilibrium has been
achieved.
\end{itemize}
\item\label{Initm:calcAvg} Compute the quantities of interest.
\end{enumerate}
The initial positions of the particles are chosen so that the
particles do not overlap. The initial velocities are first assigned
randomly to the particles, then shifted so that the total momentum of
the system is $0$, and finally scaled to the desired temperature
of the simulation according to Eq.~\ref{Ineq:temperature}.
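
The snippet below (a minimal sketch in reduced units, with an arbitrary
particle number and target temperature chosen only for illustration)
assigns random velocities, removes the center-of-mass momentum, and
rescales to the desired temperature:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n_particles, mass, kB, T_target = 256, 1.0, 1.0, 2.5   # reduced units

# 1. Assign random initial velocities
v = rng.uniform(-1.0, 1.0, size=(n_particles, 3))

# 2. Shift so that the total momentum of the system is zero
v -= v.mean(axis=0)

# 3. Scale to the target temperature, using (1/2) kB T = <(1/2) m v_alpha^2>
T_current = mass * np.mean(v**2) / kB
v *= np.sqrt(T_target / T_current)

print("total momentum:", mass * v.sum(axis=0))       # ~ (0, 0, 0)
print("temperature   :", mass * np.mean(v**2) / kB)  # = T_target
\end{verbatim}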
581
The core of a Molecular Dynamics simulation is
steps~\ref{Initm:calcForce} and~\ref{Initm:equationofMotion}. The
calculation of the forces often involves considerable effort and is the
most time consuming step in the Molecular Dynamics scheme. The forces
are evaluated from
587 \begin{equation}
588 f(q) = - \frac{\partial U(q)}{\partial q},
589 \label{Ineq:force}
590 \end{equation}
where $U(q)$ is the potential energy of the system. Once the forces are
computed, the positions and velocities are updated by integrating
Newton's equations of motion,
594 \begin{equation}
f(q) = \frac{dp}{dt} = m \frac{dv}{dt}.
596 \label{Ineq:newton}
597 \end{equation}
An example of an integration algorithm is the Verlet algorithm, which
is one of the best algorithms for integrating Newton's equations of
motion. The Taylor expansion of the position about time $t$ is
601 \begin{equation}
602 q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 +
603 \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
604 \mathcal{O}(\Delta t^4)
605 \label{Ineq:verletFuture}
606 \end{equation}
607 for a later time $t+\Delta t$, and
608 \begin{equation}
609 q(t-\Delta t)= q(t) - v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 -
610 \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
611 \mathcal{O}(\Delta t^4) ,
612 \label{Ineq:verletPrevious}
613 \end{equation}
for an earlier time $t-\Delta t$. The sum of
Eqs.~\ref{Ineq:verletFuture} and~\ref{Ineq:verletPrevious} gives
616 \begin{equation}
617 q(t+\Delta t)+q(t-\Delta t) =
618 2q(t) + \frac{f(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4),
619 \label{Ineq:verletSum}
620 \end{equation}
so the new position can be expressed as
622 \begin{equation}
623 q(t+\Delta t) \approx
624 2q(t) - q(t-\Delta t) + \frac{f(t)}{m}\Delta t^2.
625 \label{Ineq:newPosition}
626 \end{equation}
The higher-order terms in $\Delta t$ are omitted.
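
The sketch below (assuming, purely for illustration, a single particle
in a harmonic potential in reduced units) advances the position with
Eq.~\ref{Ineq:newPosition} and shows that the total energy remains
essentially constant:
\begin{verbatim}
import numpy as np

m, k, dt = 1.0, 1.0, 0.01            # reduced units
force = lambda q: -k * q             # f(q) = -dU/dq for U(q) = k q^2 / 2

# Two starting positions are needed; q(dt) is generated from the initial velocity
q_prev, v0 = 1.0, 0.0
q = q_prev + v0 * dt + force(q_prev) / (2.0 * m) * dt**2

for step in range(10_000):
    # Verlet update: q(t + dt) = 2 q(t) - q(t - dt) + f(t)/m dt^2
    q_next = 2.0 * q - q_prev + force(q) / m * dt**2
    q_prev, q = q, q_next

# Backward-difference estimate of the velocity, then the total energy
v = (q - q_prev) / dt
E = 0.5 * m * v**2 + 0.5 * k * q**2
print("total energy after 10000 steps:", E)   # stays close to the initial 0.5
\end{verbatim}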
628
Numerous techniques and tricks are applied to Molecular Dynamics
simulations to gain efficiency or accuracy. The simulation
engine used in this dissertation for the Molecular Dynamics
simulations is {\sc oopse}; more details of the algorithms and
techniques can be found in Ref.~\cite{Meineke2005}.
634
635 \subsection{Langevin Dynamics\label{In:ssec:ld}}
In many cases, the properties of the solvent in a system, such as the
lipid-water system studied in this dissertation, are of little direct
interest. However, most of the computational expense of a Molecular
Dynamics simulation is spent on the solvent because of the large
number of solvent molecules relative to solute molecules; the ratio of
lipid molecules to water molecules is $1:25$ in our lipid-water
system. The efficiency of the Molecular Dynamics simulations is
therefore greatly reduced.
644
As an extension of Molecular Dynamics, Langevin
Dynamics seeks to avoid integrating the equations of motion for the
solvent particles without losing the Brownian properties of the solute
particles. A common approximation is to express the coupling of the
solute and solvent as a set of harmonic oscillators, so the
Hamiltonian of such a system is written as
651 \begin{equation}
652 H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
653 \label{Ineq:hamiltonianofCoupling}
654 \end{equation}
where $H_B$ is the Hamiltonian of the bath, which equals
656 \begin{equation}
657 H_B = \sum_{\alpha = 1}^{N} \left\{ \frac{p_\alpha^2}{2m_\alpha} +
658 \frac{1}{2} m_\alpha \omega_\alpha^2 q_\alpha^2\right\},
659 \label{Ineq:hamiltonianofBath}
660 \end{equation}
where $\alpha$ runs over all the degrees of freedom of the bath, $\omega_\alpha$ is a
bath frequency, and $\Delta U(q)$ is the bilinear coupling given by
663 \begin{equation}
664 \Delta U = -\sum_{\alpha = 1}^{N} g_\alpha q_\alpha q,
665 \label{Ineq:systemBathCoupling}
666 \end{equation}
where $g_\alpha$ is a coupling constant. By solving Hamilton's equations
of motion, the {\it Generalized Langevin Equation} for this system is
derived:
670 \begin{equation}
m \ddot q = -\frac{\partial W(q)}{\partial q} - \int_0^t \xi(t') \dot q(t-t')dt' + R(t),
672 \label{Ineq:gle}
673 \end{equation}
with the potential of mean force,
675 \begin{equation}
676 W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m_\alpha
677 \omega_\alpha^2}q^2,
678 \label{Ineq:meanForce}
679 \end{equation}
which depends only on the coordinates of the solute particles, the {\it
friction kernel},
682 \begin{equation}
\xi(t) = \sum_{\alpha = 1}^N \frac{g_\alpha^2}{m_\alpha
\omega_\alpha^2} \cos(\omega_\alpha t),
685 \label{Ineq:xiforGLE}
686 \end{equation}
687 and the random force,
688 \begin{equation}
R(t) = \sum_{\alpha = 1}^N g_\alpha \left[ \left( q_\alpha(0)-\frac{g_\alpha}{m_\alpha
\omega_\alpha^2}q(0)\right) \cos(\omega_\alpha t) + \frac{\dot
q_\alpha(0)}{\omega_\alpha} \sin(\omega_\alpha t) \right],
692 \label{Ineq:randomForceforGLE}
693 \end{equation}
which depends only on the initial conditions of the bath. The relationship
between the friction kernel $\xi(t)$ and the random force $R(t)$ is given by
696 \begin{equation}
697 \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle
698 \label{Ineq:relationshipofXiandR}
699 \end{equation}
from their definitions. In the Langevin limit, the friction is treated
as static (memoryless), which means
702 \begin{equation}
703 \xi(t) = 2 \xi_0 \delta(t).
704 \label{Ineq:xiofStaticFriction}
705 \end{equation}
After substituting Eq.~\ref{Ineq:xiofStaticFriction} for $\xi(t)$ in
Eq.~\ref{Ineq:gle}, the {\it Langevin Equation} is
obtained:
709 \begin{equation}
m \ddot q = -\frac{\partial U(q)}{\partial q} - \xi_0 \dot q(t) + R(t).
711 \label{Ineq:langevinEquation}
712 \end{equation}
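
A minimal numerical sketch of Eq.~\ref{Ineq:langevinEquation} is given
below (assuming a free particle, a simple Euler-type discretization,
and reduced units chosen only for illustration); the random force is
drawn with the variance $2 \xi_0 k_B T/\Delta t$ implied by
Eq.~\ref{Ineq:relationshipofXiandR}, and the long-time average kinetic
energy approaches $\frac{1}{2} k_B T$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
m, xi0, kB_T, dt = 1.0, 5.0, 1.0, 0.001   # reduced units
n_steps = 200_000

# For a free particle U(q) = 0, so only friction and the random force act
sigma = np.sqrt(2.0 * xi0 * kB_T / dt)    # fluctuation-dissipation relation
q, v, v2_sum = 0.0, 0.0, 0.0
for step in range(n_steps):
    R = sigma * rng.normal()              # random force, <R(t)R(0)> = 2 xi0 kB_T delta(t)
    a = (-xi0 * v + R) / m                # m dv/dt = -xi0 v + R(t)
    v += a * dt
    q += v * dt
    v2_sum += v * v

print("<(1/2) m v^2>:", 0.5 * m * v2_sum / n_steps)   # approaches 0.5 kB_T
print("target       :", 0.5 * kB_T)
\end{verbatim}
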
The application of the Langevin Equation to dynamic simulations is
discussed in Ch.~\ref{chap:ld}.