1 \chapter{\label{chap:intro}INTRODUCTION AND BACKGROUND}
2
3 \section{Background on the Problem\label{In:sec:pro}}
Phospholipid molecules are the primary topic of this dissertation
because of their critical role as the foundation of biological
membranes. Lipids, when dispersed in water, self-assemble into bilayer
structures. The phase behavior of lipid bilayers has been explored
experimentally~\cite{Cevc87}; however, a complete understanding of the
mechanism for self-assembly has not been achieved.
10
11 \subsection{Ripple Phase\label{In:ssec:ripple}}
The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named for the
periodic buckling of the membrane, is an intermediate phase which
develops either by heating the gel phase $L_{\beta'}$ or by cooling
the fluid phase $L_\alpha$. Although the ripple phase has been
observed in several
experiments~\cite{Sun96,Katsaras00,Copeland80,Meyer96,Kaasgaard03},
the physical mechanism for its formation has never been explained,
and its microscopic structure has never been elucidated
experimentally. Computational simulation is a natural tool for
studying the microscopic properties of a system. However, the large
length scale of the ripple structure and the long time scale of
ripple formation are significant obstacles to performing such
simulations. The principal ideas explored in this dissertation are
attempts to make the computational task tractable by
\begin{itemize}
\item Simplifying the lipid model.
\item Improving the algorithms for integrating the equations of motion.
\end{itemize}
In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
dipole model and a coarse-grained model to perform Monte Carlo and
Molecular Dynamics simulations, respectively, and in
chapter~\ref{chap:ld}, we develop a Langevin Dynamics algorithm that
excludes explicit solvent to improve the efficiency of the
simulations.
36
37 \subsection{Lattice Model\label{In:ssec:model}}
The gel-like character of the ripple phase makes it feasible to
apply a lattice model to study the system. The packing of the lipid
molecules in the ripple phase has been reported to be
hexagonal~\cite{Cevc87}. The popular two-dimensional lattice models,
{\it i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it
frustration} on triangular lattices. The Hamiltonians of these
systems are given by
45 \begin{equation}
46 H =
47 \begin{cases}
-J \sum_n \sum_{n'} s_n s_{n'} & \text{Ising}, \\
49 -J \sum_n \sum_{n'} \vec s_n \cdot \vec s_{n'} & \text{$X-Y$ and
50 Heisenberg},
51 \end{cases}
52 \end{equation}
where $J$ is nonzero only when the spins $s_n$ ($\vec s_n$) and
$s_{n'}$ ($\vec s_{n'}$) are nearest neighbors. When $J > 0$, the
spins prefer to align with each other, and when $J < 0$, the spins
prefer to be anti-aligned.
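
As a concrete illustration of this Hamiltonian, the short Python
sketch below (not part of this dissertation's codebase; the coupling
and the single triangular plaquette are chosen for illustration only)
enumerates the eight spin states of an antiferromagnetic ($J < 0$)
Ising triangle and shows that six of them share the minimum energy,
the degeneracy underlying the frustration discussed next.
\begin{verbatim}
# Minimal sketch (not from this dissertation): energies of an
# antiferromagnetic (J < 0) Ising model on one triangular plaquette,
# H = -J * sum over nearest-neighbor pairs of s_n * s_n'.
from itertools import product

J = -1.0                             # antiferromagnetic coupling (illustrative)
bonds = [(0, 1), (1, 2), (0, 2)]     # the three nearest-neighbor pairs

def energy(spins):
    return -J * sum(spins[i] * spins[j] for i, j in bonds)

states = list(product([-1, 1], repeat=3))
energies = {s: energy(s) for s in states}
ground = min(energies.values())
degenerate = [s for s, e in energies.items() if e == ground]

# Six of the eight states share the minimum energy: at least one bond
# is always unsatisfied, which is the frustration shown in the figure.
print(ground, len(degenerate))
\end{verbatim}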
57
58 \begin{figure}
59 \centering
60 \includegraphics[width=3in]{./figures/inFrustration.pdf}
\caption{Frustration on a triangular lattice; the spins and dipoles
are represented by arrows. The multiple local energy minima frustrate
the choice of direction for the spins and dipoles.}
64 \label{Infig:frustration}
65 \end{figure}
The spins in figure~\ref{Infig:frustration} illustrate the
frustration for $J < 0$ on a triangular lattice. There are multiple
local energy minima that are independent of the direction of the spin
at the top of the triangle; the resulting multitude of equivalent
spin packings induces what is known as ``complete regular
frustration,'' which leads to disordered low temperature phases. A
similar situation arises for the dipoles on a hexagonal lattice,
which are also shown in figure~\ref{Infig:frustration}. In this case
the dipoles want to align; however, due to long wavelength
fluctuations, the aligned state becomes unstable at low temperature,
vortices form, and multiple local energy minima result. The dipole at
the center of the hexagonal lattice is frustrated.
79
The lack of translational degrees of freedom in lattice models
prevents their use in modeling the surface buckling that would
correspond to ripple formation. In chapter~\ref{chap:mc}, a modified
lattice model is introduced to address this specific situation.
84
85 \section{Overview of Classical Statistical Mechanics\label{In:sec:SM}}
86 Statistical mechanics provides a way to calculate the macroscopic
87 properties of a system from the molecular interactions used in
88 computational simulations. This section serves as a brief introduction
to the key concepts of classical statistical mechanics used in
this dissertation. Tolman gives an excellent introduction to the
91 principles of statistical mechanics~\cite{Tolman1979}. A large part of
92 section~\ref{In:sec:SM} will follow Tolman's notation.
93
94 \subsection{Ensembles\label{In:ssec:ensemble}}
In classical mechanics, the state of the system is completely
described by the positions and momenta of all particles. If we have
an $N$ particle system, there are $6N$ coordinates ($3N$ positions
$(q_1, q_2, \ldots, q_{3N})$ and $3N$ momenta $(p_1, p_2, \ldots,
p_{3N})$) defining the instantaneous state of the system. Each single
set of the $6N$ coordinates can be considered a unique point in a
$6N$ dimensional space in which each perpendicular axis is one of the
$q_{i\alpha}$ or $p_{i\alpha}$ ($i$ labels the particle and $\alpha$
the spatial axis). This $6N$ dimensional space is known as phase
space. The instantaneous state of the system is a single point in
phase space, and a trajectory is a connected path of such points. An
ensemble is a collection of systems described by the same macroscopic
observables but which differ in their microscopic states. In phase
space, an ensemble is represented by a collection of representative
points. A density distribution $\rho(q^N, p^N)$ of the representative
points in phase space describes the condition of an ensemble of
identical systems. Since this density may change with time, it is
also a function of time: $\rho(q^N, p^N, t)$ describes the ensemble
at a time $t$, and $\rho(q^N, p^N, t')$ describes the ensemble at a
later time $t'$. For convenience, we will write $\rho$ instead of
$\rho(q^N, p^N, t)$ in the following discussion. If we normalize
$\rho$ to unity,
117 \begin{equation}
118 1 = \int d \vec q~^N \int d \vec p~^N \rho,
119 \label{Ineq:normalized}
120 \end{equation}
then the value of $\rho$ gives the probability per unit volume of
finding the system at a given point in phase space.
123
Liouville's theorem describes the change in the density $\rho$ with
time. The number of representative points in a given volume of phase
space at any instant can be written as:
127 \begin{equation}
128 \label{Ineq:deltaN}
129 \delta N = \rho~\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
130 \end{equation}
To calculate the change in the number of representative points in
this volume, let us first consider the simplest case: the change in
the number of representative points along the $q_1$ axis. The rate at
which representative points enter the volume at $q_1$ per unit time
is:
135 \begin{equation}
136 \label{Ineq:deltaNatq1}
137 \rho~\dot q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
138 \end{equation}
and the rate at which representative points leave the volume at
another position $q_1 + \delta q_1$ is:
141 \begin{equation}
142 \label{Ineq:deltaNatq1plusdeltaq1}
143 \left( \rho + \frac{\partial \rho}{\partial q_1} \delta q_1 \right)\left(\dot q_1 +
144 \frac{\partial \dot q_1}{\partial q_1} \delta q_1 \right)\delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
145 \end{equation}
The change in the number of representative points is then the
difference of eq.~\ref{Ineq:deltaNatq1} and
eq.~\ref{Ineq:deltaNatq1plusdeltaq1}, which gives:
150 \begin{equation}
151 \label{Ineq:deltaNatq1axis}
152 -\left(\rho \frac{\partial {\dot q_1}}{\partial q_1} + \frac{\partial {\rho}}{\partial q_1} \dot q_1 \right)\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
153 \end{equation}
where the higher order differentials have been neglected. Summing
over all the axes of phase space gives the change with time of the
number of representative points in the given volume:
157 \begin{equation}
158 \label{Ineq:deltaNatGivenVolume}
159 \frac{d(\delta N)}{dt} = -\sum_{i=1}^N \left[\rho \left(\frac{\partial
160 {\dot q_i}}{\partial q_i} + \frac{\partial
161 {\dot p_i}}{\partial p_i}\right) + \left( \frac{\partial {\rho}}{\partial
162 q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i\right)\right]\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
163 \end{equation}
From Hamilton's equations of motion,
165 \begin{equation}
166 \frac{\partial {\dot q_i}}{\partial q_i} = - \frac{\partial
167 {\dot p_i}}{\partial p_i},
168 \label{Ineq:canonicalFormOfEquationOfMotion}
169 \end{equation}
and this cancels the first term on the right side of
eq.~\ref{Ineq:deltaNatGivenVolume}. Dividing both sides of
eq.~\ref{Ineq:deltaNatGivenVolume} by $\delta q_1 \delta q_2
\ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N$, we
obtain Liouville's theorem:
175 \begin{equation}
176 \left( \frac{\partial \rho}{\partial t} \right)_{q, p} = -\sum_{i} \left(
177 \frac{\partial {\rho}}{\partial
178 q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right).
179 \label{Ineq:simpleFormofLiouville}
180 \end{equation}
181 This is the basis of statistical mechanics. If we move the right
182 side of equation~\ref{Ineq:simpleFormofLiouville} to the left, we
183 will obtain
184 \begin{equation}
185 \left( \frac{\partial \rho}{\partial t} \right)_{q, p} + \sum_{i} \left(
186 \frac{\partial {\rho}}{\partial
187 q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
188 = 0.
189 \label{Ineq:anotherFormofLiouville}
190 \end{equation}
It is easy to see that the left side of
equation~\ref{Ineq:anotherFormofLiouville} is the total derivative of
$\rho$ with respect to $t$, which means
194 \begin{equation}
195 \frac{d \rho}{dt} = 0,
196 \label{Ineq:conservationofRho}
197 \end{equation}
and the rate of density change is zero in the neighborhood of any
selected representative point moving through phase space.
200
The condition of the ensemble is determined by the density
distribution. If the density distribution is a function of $q$ and
$p$ only, the rate of change of the phase space density in the
neighborhood of every representative point is zero,
206 \begin{equation}
207 \left( \frac{\partial \rho}{\partial t} \right)_{q, p} = 0.
208 \label{Ineq:statEquilibrium}
209 \end{equation}
and we may conclude that the ensemble is in {\it statistical
equilibrium}. An ensemble in statistical equilibrium usually implies
that the system is also in macroscopic equilibrium. If $\left(
\frac{\partial \rho}{\partial t} \right)_{q, p} = 0$, then
214 \begin{equation}
215 \sum_{i} \left(
216 \frac{\partial {\rho}}{\partial
217 q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
218 = 0.
219 \label{Ineq:constantofMotion}
220 \end{equation}
If $\rho$ is a function only of constants of the motion, $\rho$ is
independent of time. For a conservative system, the energy of the
system is one such constant of the motion. Several examples follow.
When the density distribution is constant everywhere in phase space,
226 \begin{equation}
227 \rho = \mathrm{const.}
228 \label{Ineq:uniformEnsemble}
229 \end{equation}
the ensemble is called the {\it uniform ensemble}. Another useful
ensemble is the {\it microcanonical ensemble}, for which:
232 \begin{equation}
233 \rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)}
234 \label{Ineq:microcanonicalEnsemble}
235 \end{equation}
236 where $\Sigma(N, V, E)$ is a normalization constant parameterized by
237 $N$, the total number of particles, $V$, the total physical volume and
238 $E$, the total energy. The physical meaning of $\Sigma(N, V, E)$ is
239 the phase space volume accessible to a microcanonical system with
240 energy $E$ evolving under Hamilton's equations. $H(q^N, p^N)$ is the
241 Hamiltonian of the system. The Gibbs entropy is defined as
242 \begin{equation}
243 S = - k_B \int d \vec q~^N \int d \vec p~^N \rho \ln [C^N \rho],
244 \label{Ineq:gibbsEntropy}
245 \end{equation}
where $k_B$ is the Boltzmann constant and $C^N$ is a number that
makes the argument of $\ln$ dimensionless; in this case, it is the
phase space volume of a single state. The entropy in the
microcanonical ensemble is given by
250 \begin{equation}
251 S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
252 \label{Ineq:entropy}
253 \end{equation}
254 If the density distribution $\rho$ is given by
255 \begin{equation}
256 \rho = \frac{1}{Z_N}e^{-H(q^N, p^N) / k_B T},
257 \label{Ineq:canonicalEnsemble}
258 \end{equation}
259 the ensemble is known as the {\it canonical ensemble}. Here,
260 \begin{equation}
261 Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N e^{-H(q^N, p^N) / k_B T},
262 \label{Ineq:partitionFunction}
263 \end{equation}
which is known as the {\it partition function}. $\Gamma$ indicates
that the integral runs over all of phase space. In the canonical
ensemble, $N$, the total number of particles, $V$, the total volume,
and $T$, the temperature, are constant. The systems with the lowest
energies hold the largest population. According to the maximum
entropy principle, the equilibrium distribution maximizes the entropy
$S$,
270 \begin{equation}
271 \begin{array}{ccc}
272 \delta S = 0 & \mathrm{and} & \delta^2 S < 0.
273 \end{array}
274 \label{Ineq:maximumPrinciple}
275 \end{equation}
From Eq.~\ref{Ineq:maximumPrinciple} and the two constraints of the
canonical ensemble, {\it i.e.}, fixed total probability and fixed
average energy, the partition function can be written as
279 \begin{equation}
280 Z_N = e^{-A/k_B T},
281 \label{Ineq:partitionFunctionWithFreeEnergy}
282 \end{equation}
where $A$ is the Helmholtz free energy. The significance of
Eqs.~\ref{Ineq:entropy} and~\ref{Ineq:partitionFunctionWithFreeEnergy}
is that they connect the macroscopic properties of the system to the
distribution of its microscopic states.
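
The step from Eq.~\ref{Ineq:maximumPrinciple} to
Eq.~\ref{Ineq:partitionFunctionWithFreeEnergy} can be sketched with
the standard Lagrange multiplier argument (the multipliers $\lambda$
and $\beta$ below are introduced here purely for illustration):
maximizing the Gibbs entropy of Eq.~\ref{Ineq:gibbsEntropy} subject
to fixed normalization and fixed average energy requires
\[
\frac{\delta}{\delta \rho} \left[ -k_B \rho \ln (C^N \rho)
- \lambda \rho - k_B \beta \rho H(q^N, p^N) \right] = 0,
\]
whose solution is $\rho \propto e^{-\beta H}$. Identifying $\beta$
with $1/k_B T$ and the normalization with $1/Z_N$ recovers
Eq.~\ref{Ineq:canonicalEnsemble}; substituting this $\rho$ back into
Eq.~\ref{Ineq:gibbsEntropy} gives $S = (\langle E \rangle - A)/T$
with $A = -k_B T \ln Z_N$ (the constant $C^N$ being absorbed into
$Z_N$), which is the content of
Eq.~\ref{Ineq:partitionFunctionWithFreeEnergy}.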
287
There is an implicit assumption on which our arguments have been
based so far: a representative point is equally likely to be found in
any region of phase space of equal extent. In other words, all
energetically accessible states are represented equally; the
probability of finding the system in any one of the accessible states
is the same. This is the assumption of equal {\it a priori}
probabilities.
294
295 \subsection{Statistical Average\label{In:ssec:average}}
Given the density distribution $\rho$ in phase space, the average
over all systems in the ensemble of any quantity $F(q^N, p^N, t)$
that depends on the coordinates $q^N$ and the momenta $p^N$ can be
calculated from the definition in Eq.~\ref{Ineq:statAverage1},
301 \begin{equation}
302 \langle F(q^N, p^N, t) \rangle = \frac{\int d \vec q~^N \int d \vec p~^N
303 F(q^N, p^N, t) \rho}{\int d \vec q~^N \int d \vec p~^N \rho}.
304 \label{Ineq:statAverage1}
305 \end{equation}
Since the density distribution $\rho$ is normalized to unity, the mean
value of $F(q^N, p^N, t)$ simplifies to
308 \begin{equation}
309 \langle F(q^N, p^N, t) \rangle = \int d \vec q~^N \int d \vec p~^N F(q^N,
310 p^N, t) \rho,
311 \label{Ineq:statAverage2}
312 \end{equation}
which is called the {\it ensemble average}. In real experiments,
however, a quantity is measured as an average over a finite time,
315 \begin{equation}
316 \langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
317 \frac{1}{T} \int_{t_0}^{t_0+T} F(q^N, p^N, t) dt.
318 \label{Ineq:timeAverage1}
319 \end{equation}
320 Usually this time average is independent of $t_0$ in statistical
321 mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes
322 \begin{equation}
323 \langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
324 \frac{1}{T} \int_{0}^{T} F(q^N, p^N, t) dt
325 \label{Ineq:timeAverage2}
326 \end{equation}
327 for an infinite time interval.
328
The {\it ergodic hypothesis}, an important hypothesis in statistical
mechanics, states that the system will eventually pass arbitrarily
close to any energetically accessible point in phase space.
Mathematically, this leads to
333 \begin{equation}
334 \langle F(q^N, p^N, t) \rangle = \langle F(q^N, p^N) \rangle_t.
335 \label{Ineq:ergodicity}
336 \end{equation}
Eq.~\ref{Ineq:ergodicity} underpins the Monte Carlo method, which we
will discuss in section~\ref{In:ssec:mc}: an ensemble average of a
quantity can be related to the time average measured in experiments.
340
341 \subsection{Correlation Function\label{In:ssec:corr}}
Thermodynamic properties can be computed from equilibrium statistical
mechanics. The {\it time correlation function}, on the other hand, is
a powerful tool for understanding the evolution of a dynamic system
in non-equilibrium statistical mechanics. Imagine a property $A(q^N,
p^N, t)$, a function of the coordinates $q^N$ and momenta $p^N$,
which has an initial value at $t_0$ and a different value at a later
time $t_0 + \tau$. If $\tau$ is very small, the change in the value
is minor, and the later value $A(q^N, p^N, t_0 + \tau)$ is correlated
to its initial value. However, when $\tau$ is large, this correlation
is lost. The correlation function is a measure of this relationship
and is defined by~\cite{Berne90}
353 \begin{equation}
C(\tau) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{0}^{T} dt\, A(t) A(t + \tau).
356 \label{Ineq:autocorrelationFunction}
357 \end{equation}
Eq.~\ref{Ineq:autocorrelationFunction} is the correlation function of
a single variable, called the {\it autocorrelation function}. The
definition of the correlation function of two different variables is
similar:
362 \begin{equation}
C(\tau) = \langle A(0)B(\tau) \rangle = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{0}^{T} dt\, A(t) B(t + \tau),
365 \label{Ineq:crosscorrelationFunction}
366 \end{equation}
and is called the {\it cross correlation function}.
368
In section~\ref{In:ssec:average}, Eq.~\ref{Ineq:ergodicity} gave the
relationship between time averages and ensemble averages, so the
correlation functions can also be written in classical mechanical form,
372 \begin{equation}
C(\tau) = \langle A(0)A(\tau) \rangle = \int d \vec q~^N \int d \vec p~^N A(t) A(t + \tau) \rho(q, p)
374 \label{Ineq:autocorrelationFunctionCM}
375 \end{equation}
376 and
377 \begin{equation}
C(\tau) = \langle A(0)B(\tau) \rangle = \int d \vec q~^N \int d \vec p~^N A(t) B(t + \tau)
379 \rho(q, p)
380 \label{Ineq:crosscorrelationFunctionCM}
381 \end{equation}
for the autocorrelation function and the cross correlation function,
respectively. $\rho(q, p)$ is the equilibrium density distribution in
phase space. In many cases, the correlation function decays as a
single exponential,
386 \begin{equation}
C(\tau) \sim e^{-\tau / \tau_r},
388 \label{Ineq:relaxation}
389 \end{equation}
where $\tau_r$ is known as the relaxation time, which describes the
rate of the decay.
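
As an illustration of Eq.~\ref{Ineq:autocorrelationFunction} and
Eq.~\ref{Ineq:relaxation}, the minimal numerical sketch below (in
Python) estimates an autocorrelation function from a discretely
sampled time series and reads off a relaxation time; the relaxing
signal it analyzes is synthetic and purely illustrative.
\begin{verbatim}
# Minimal sketch: estimate a normalized autocorrelation function
# C(tau) = <A(0) A(tau)> from a uniformly sampled time series and read
# off the relaxation time where C(tau) drops to 1/e.  The relaxing
# signal generated below is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, dt, tau_true = 50000, 0.01, 0.5
a = np.zeros(n)
for i in range(1, n):                          # simple relaxing signal
    a[i] = a[i-1] - (dt / tau_true) * a[i-1] + np.sqrt(dt) * rng.normal()

a -= a.mean()                                  # correlate the fluctuations
max_lag = 300
c = np.array([np.mean(a[:n-k] * a[k:]) for k in range(max_lag)])
c /= c[0]                                      # normalize so C(0) = 1

first_below = np.argmax(c < np.exp(-1.0))      # first lag with C < 1/e
print("estimated relaxation time:", first_below * dt)
\end{verbatim}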
392
\section{Methodology\label{In:sec:method}}
The simulations performed in this dissertation fall into two main
categories, Monte Carlo and Molecular Dynamics. There are two main
differences between Monte Carlo and Molecular Dynamics simulations.
The first is that a Monte Carlo simulation is time independent,
whereas a Molecular Dynamics simulation explicitly involves time. The
second is that Monte Carlo is a stochastic process in which the
configuration of the system is not determined by its past, whereas in
Molecular Dynamics the system is propagated by Newton's equations of
motion, so the trajectory of the system through phase space is
deterministic. A brief introduction to the two algorithms is given in
sections~\ref{In:ssec:mc} and~\ref{In:ssec:md}. An extension of
Molecular Dynamics, Langevin Dynamics, is introduced in
section~\ref{In:ssec:ld}.
407
408 \subsection{Monte Carlo\label{In:ssec:mc}}
The Monte Carlo algorithm was first introduced by Metropolis {\it et
al.}~\cite{Metropolis53} The basic Monte Carlo algorithm is usually
applied to the canonical ensemble, a Boltzmann-weighted ensemble, in
which $N$, the total number of particles, $V$, the total volume, and
$T$, the temperature, are constant. The average energy is given by
substituting Eq.~\ref{Ineq:canonicalEnsemble} into
Eq.~\ref{Ineq:statAverage2},
415 \begin{equation}
\langle E \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N H(q^N, p^N) e^{-H(q^N, p^N) / k_B T}.
417 \label{Ineq:energyofCanonicalEnsemble}
418 \end{equation}
Other properties of the system follow in the same way. The
Hamiltonian is the sum of the kinetic energy $K(p^N)$, a function of
the momenta, and the potential energy $U(q^N)$, a function of the
positions,
422 \begin{equation}
423 H(q^N, p^N) = K(p^N) + U(q^N).
424 \label{Ineq:hamiltonian}
425 \end{equation}
If the property $A$ is a function of position only ($A = A(q^N)$),
the mean value of $A$ reduces to
\begin{equation}
\langle A \rangle = \frac{\int d \vec q~^N \int d \vec p~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N \int d \vec p~^N e^{-U(q^N) / k_B T}}.
\label{Ineq:configurationIntegral}
\end{equation}
The kinetic energy $K(p^N)$ has been factored out of
Eq.~\ref{Ineq:configurationIntegral}. $\langle A \rangle$ is now a
configuration integral, and Eq.~\ref{Ineq:configurationIntegral} is
equivalent to
436 \begin{equation}
437 \langle A \rangle = \int d \vec q~^N A \rho(q^N).
438 \label{Ineq:configurationAve}
439 \end{equation}
440
In a Monte Carlo simulation of the canonical ensemble, the
probability of the system being in a state $s$ is $\rho_s$, and the
change of this probability with time is given by
444 \begin{equation}
445 \frac{d \rho_s}{dt} = \sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ],
446 \label{Ineq:timeChangeofProb}
447 \end{equation}
where $w_{ss'}$ is the transition probability of going from state $s$
to state $s'$. Since $\rho_s$ is independent of time at equilibrium,
450 \begin{equation}
451 \frac{d \rho_{s}^{equilibrium}}{dt} = 0,
452 \label{Ineq:equiProb}
453 \end{equation}
which is satisfied when each term vanishes separately, {\it i.e.},
when $w_{ss'}\rho_s = w_{s's}\rho_{s'}$ for all $s'$ (detailed
balance). So
456 \begin{equation}
457 \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
458 \label{Ineq:relationshipofRhoandW}
459 \end{equation}
460 If
461 \begin{equation}
462 \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T},
463 \label{Ineq:conditionforBoltzmannStatistics}
464 \end{equation}
465 then
466 \begin{equation}
467 \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T}.
468 \label{Ineq:satisfyofBoltzmannStatistics}
469 \end{equation}
Eq.~\ref{Ineq:satisfyofBoltzmannStatistics} implies that
$\rho^{equilibrium}$ satisfies Boltzmann statistics. An algorithm
that shows how a Monte Carlo simulation generates transition
probabilities governed by
Eq.~\ref{Ineq:conditionforBoltzmannStatistics} is outlined below:
\begin{enumerate}
\item\label{Initm:oldEnergy} Choose a particle at random and
calculate its energy.
\item\label{Initm:newEnergy} Make a random displacement of the
particle and calculate the new energy.
\begin{itemize}
\item Keep the new configuration and return to
step~\ref{Initm:oldEnergy} if the energy goes down.
\item Pick a random number in $[0,1]$ if the energy goes up.
\begin{itemize}
\item Keep the new configuration and return to
step~\ref{Initm:oldEnergy} if the random number is smaller than
$e^{-(U_{new} - U_{old}) / k_B T}$.
\item Keep the old configuration and return to
step~\ref{Initm:oldEnergy} if the random number is larger than
$e^{-(U_{new} - U_{old}) / k_B T}$.
\end{itemize}
\end{itemize}
\item\label{Initm:accumulateAvg} Accumulate the averages after the
system has converged.
\end{enumerate}
It is important to note that if the old configuration is kept, it
must be counted again in the accumulated averages.
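
A minimal sketch of a single Metropolis move, written in Python for
illustration, is given below; the harmonic ``potential,'' the number
of particles, and the displacement size are placeholders, not the
models used in this dissertation.
\begin{verbatim}
# Minimal Metropolis Monte Carlo move, illustrating the acceptance
# rule above.  The harmonic "potential" and parameters are
# placeholders chosen only so the sketch is self-contained.
import numpy as np

rng = np.random.default_rng()
kT = 1.0                  # k_B T in reduced units
max_disp = 0.1            # maximum trial displacement

def potential(q):
    """Placeholder potential: particles in a harmonic trap."""
    return 0.5 * np.sum(q**2)

def metropolis_move(q):
    """Attempt one trial move; return the (possibly unchanged) configuration."""
    i = rng.integers(len(q))                      # pick a particle at random
    trial = q.copy()
    trial[i] += rng.uniform(-max_disp, max_disp, size=q.shape[1])
    dU = potential(trial) - potential(q)
    if dU <= 0.0 or rng.random() < np.exp(-dU / kT):
        return trial                              # accept the move
    return q                                      # reject: keep the old state

q = rng.normal(size=(10, 3))                      # 10 particles in 3D
samples = []
for step in range(1000):
    q = metropolis_move(q)
    samples.append(potential(q))                  # old state counted again if rejected
print("mean potential energy:", np.mean(samples[200:]))
\end{verbatim}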
495
496 \subsection{Molecular Dynamics\label{In:ssec:md}}
Although some properties of the system can be calculated from
ensemble averages in Monte Carlo simulations, the lack of time
dependence makes it impossible to obtain dynamic properties from
them. Molecular Dynamics follows the evolution of the positions and
momenta of the particles in the system. The evolution of the system
obeys the laws of classical mechanics; in most situations there is no
need to account for quantum effects. In a real experiment, the
instantaneous positions and momenta of the particles are neither
important nor measurable; the observable quantities are usually
averages over a finite time interval. In Molecular Dynamics
simulations these quantities are expressed as functions of the
positions and momenta. For example, the temperature of the system is
defined by
511 \begin{equation}
\frac{1}{2} k_B T = \langle \frac{1}{2} m v_\alpha^2 \rangle,
513 \label{Ineq:temperature}
514 \end{equation}
where $m$ is the mass of the particle and $v_\alpha$ is the $\alpha$
component of its velocity. The right side of
Eq.~\ref{Ineq:temperature} is the average kinetic energy per degree
of freedom. A simple Molecular Dynamics simulation scheme
is:~\cite{Frenkel1996}
520 \begin{enumerate}
521 \item\label{Initm:initialize} Assign the initial positions and momenta
522 for the particles in the system.
523 \item\label{Initm:calcForce} Calculate the forces.
\item\label{Initm:equationofMotion} Integrate the equations of motion.
525 \begin{itemize}
\item Return to step~\ref{Initm:calcForce} if equilibrium has not
been reached.
\item Go to step~\ref{Initm:calcAvg} once equilibrium has been
reached.
530 \end{itemize}
531 \item\label{Initm:calcAvg} Compute the quantities we are interested in.
532 \end{enumerate}
The initial positions of the particles are chosen so that the
particles do not overlap. The initial velocities are first assigned
to the particles at random, then shifted so that the total momentum
of the system is zero, and finally scaled to the desired temperature
of the simulation according to Eq.~\ref{Ineq:temperature}.
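
A minimal sketch of this initialization, in Python (reduced units
with $k_B = 1$ and a single uniform particle mass are assumed for
illustration; this is not the initialization routine of the
simulation engine used here):
\begin{verbatim}
# Minimal sketch of velocity initialization: random velocities,
# remove the net momentum, then rescale to the target temperature
# using (1/2) k_B T = <(1/2) m v_alpha^2>.  Reduced units (k_B = 1)
# and a uniform particle mass are assumed.
import numpy as np

def initialize_velocities(n_particles, target_T, mass=1.0, seed=None):
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n_particles, 3))        # random initial velocities
    v -= v.mean(axis=0)                          # zero the total momentum
    # instantaneous temperature from the average kinetic energy per DOF
    current_T = mass * np.mean(v**2)
    v *= np.sqrt(target_T / current_T)           # rescale to the target T
    return v

v = initialize_velocities(500, target_T=1.2, seed=42)
print("net momentum:", v.sum(axis=0))            # ~ (0, 0, 0)
print("temperature:", np.mean(v**2))             # ~ 1.2
\end{verbatim}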
538
The core of a Molecular Dynamics simulation is
steps~\ref{Initm:calcForce} and~\ref{Initm:equationofMotion}. The
calculation of the forces usually requires the most effort and is the
most time consuming step in the Molecular Dynamics scheme. The forces
are evaluated from
544 \begin{equation}
545 f(q) = - \frac{\partial U(q)}{\partial q},
546 \label{Ineq:force}
547 \end{equation}
where $U(q)$ is the potential of the system. Once the forces are
computed, the positions and velocities are updated by integrating
Newton's equation of motion,
551 \begin{equation}
f(q) = \frac{dp}{dt} = m\frac{dv}{dt}.
553 \label{Ineq:newton}
554 \end{equation}
An example of an integration algorithm is the Verlet algorithm, one
of the most widely used algorithms for integrating Newton's equations
of motion. The Taylor expansion of the position about time $t$ is
558 \begin{equation}
559 q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 +
560 \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
561 \mathcal{O}(\Delta t^4)
562 \label{Ineq:verletFuture}
563 \end{equation}
564 for a later time $t+\Delta t$, and
565 \begin{equation}
566 q(t-\Delta t)= q(t) - v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 -
567 \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
568 \mathcal{O}(\Delta t^4) ,
569 \label{Ineq:verletPrevious}
570 \end{equation}
for the earlier time $t-\Delta t$. The sum of
Eqs.~\ref{Ineq:verletFuture} and~\ref{Ineq:verletPrevious} gives
573 \begin{equation}
574 q(t+\Delta t)+q(t-\Delta t) =
575 2q(t) + \frac{f(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4),
576 \label{Ineq:verletSum}
577 \end{equation}
578 so, the new position can be expressed as
579 \begin{equation}
580 q(t+\Delta t) \approx
581 2q(t) - q(t-\Delta t) + \frac{f(t)}{m}\Delta t^2.
582 \label{Ineq:newPosition}
583 \end{equation}
The terms of higher order in $\Delta t$ have been omitted.
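
A minimal sketch of the position Verlet update of
Eq.~\ref{Ineq:newPosition}, in Python; a one-dimensional harmonic
oscillator and its parameters are assumed purely for illustration.
\begin{verbatim}
# Minimal sketch of the Verlet update
# q(t+dt) = 2 q(t) - q(t-dt) + f(t)/m * dt^2,
# applied to a 1D harmonic oscillator (force and parameters illustrative).
import numpy as np

def force(q, k=1.0):
    return -k * q                    # f = -dU/dq for U = (1/2) k q^2

m, dt, n_steps = 1.0, 0.01, 2000
q_prev, q = 1.0, 1.0                 # lowest-order start from rest: q(t-dt) = q(t)

trajectory = []
for step in range(n_steps):
    q_next = 2.0 * q - q_prev + (force(q) / m) * dt**2
    q_prev, q = q, q_next
    trajectory.append(q)

# For k = m = 1 the period is 2*pi, so q(t) should approximately follow cos(t).
t = dt * np.arange(1, n_steps + 1)
print("max deviation from cos(t):", np.max(np.abs(np.array(trajectory) - np.cos(t))))
\end{verbatim}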
585
Numerous techniques and tricks are applied to Molecular Dynamics
simulations to gain efficiency or accuracy. The simulation engine
used for the Molecular Dynamics simulations in this dissertation is
{\sc oopse}; more details of the algorithms and techniques can be
found in~\cite{Meineke2005}.
591
592 \subsection{Langevin Dynamics\label{In:ssec:ld}}
In many cases, the properties of the solvent in a system, such as the
lipid-water system studied in this dissertation, are of little
interest to the researcher. However, most of the computational
expense in a Molecular Dynamics simulation is spent on the solvent,
because solvent molecules greatly outnumber solute molecules; the
ratio of lipid molecules to water molecules in our lipid-water system
is $1:25$. The efficiency of the Molecular Dynamics simulations is
therefore greatly reduced.
601
As an extension of Molecular Dynamics, Langevin Dynamics seeks to
avoid integrating the equations of motion for the solvent particles
without losing the Brownian properties of the solute particles. A
common approximation is to express the coupling between solute and
solvent as a set of harmonic oscillators, so that the Hamiltonian of
such a system is written as
608 \begin{equation}
609 H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
610 \label{Ineq:hamiltonianofCoupling}
611 \end{equation}
where $H_B$ is the Hamiltonian of the bath, given by
613 \begin{equation}
614 H_B = \sum_{\alpha = 1}^{N} \left\{ \frac{p_\alpha^2}{2m_\alpha} +
615 \frac{1}{2} m_\alpha \omega_\alpha^2 q_\alpha^2\right\},
616 \label{Ineq:hamiltonianofBath}
617 \end{equation}
Here $\alpha$ runs over all the degrees of freedom of the bath,
$\omega_\alpha$ are the bath frequencies, and $\Delta U(q)$ is the
bilinear coupling given by
620 \begin{equation}
621 \Delta U = -\sum_{\alpha = 1}^{N} g_\alpha q_\alpha q,
622 \label{Ineq:systemBathCoupling}
623 \end{equation}
where $g_\alpha$ are the coupling constants. By solving Hamilton's
equations of motion, the {\it Generalized Langevin Equation} for this
system is derived as
627 \begin{equation}
m \ddot q = -\frac{\partial W(q)}{\partial q} - \int_0^t \xi(t') \dot q(t-t')dt' + R(t),
629 \label{Ineq:gle}
630 \end{equation}
with the potential of mean force,
632 \begin{equation}
633 W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m_\alpha
634 \omega_\alpha^2}q^2,
635 \label{Ineq:meanForce}
636 \end{equation}
which depends only on the coordinates of the solute particles, the
{\it friction kernel},
639 \begin{equation}
\xi(t) = \sum_{\alpha = 1}^N \frac{g_\alpha^2}{m_\alpha
\omega_\alpha^2} \cos(\omega_\alpha t),
642 \label{Ineq:xiforGLE}
643 \end{equation}
644 and the random force,
645 \begin{equation}
R(t) = \sum_{\alpha = 1}^N g_\alpha \left[ \left( q_\alpha(0)-\frac{g_\alpha}{m_\alpha
\omega_\alpha^2}q(0)\right) \cos(\omega_\alpha t) + \frac{\dot
q_\alpha(0)}{\omega_\alpha} \sin(\omega_\alpha t) \right],
649 \label{Ineq:randomForceforGLE}
650 \end{equation}
which depends only on the initial conditions. The relationship
between the friction kernel $\xi(t)$ and the random force $R(t)$ is
given by
653 \begin{equation}
654 \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle
655 \label{Ineq:relationshipofXiandR}
656 \end{equation}
from their definitions. In the Langevin limit, the friction is
treated as static, which means
659 \begin{equation}
660 \xi(t) = 2 \xi_0 \delta(t).
661 \label{Ineq:xiofStaticFriction}
662 \end{equation}
After substituting Eq.~\ref{Ineq:xiofStaticFriction} for $\xi(t)$ in
Eq.~\ref{Ineq:gle}, the {\it Langevin Equation} is obtained:
666 \begin{equation}
667 m \ddot q = -\frac{\partial U(q)}{\partial q} - \xi \dot q(t) + R(t).
668 \label{Ineq:langevinEquation}
669 \end{equation}
The application of the Langevin Equation to dynamic simulations is
discussed in chapter~\ref{chap:ld}.
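
A minimal sketch of how Eq.~\ref{Ineq:langevinEquation} can be
integrated numerically is shown below in Python: a simple
Euler-Maruyama step with an illustrative one-dimensional harmonic
potential and illustrative parameters. This is not the Langevin
algorithm developed in chapter~\ref{chap:ld}.
\begin{verbatim}
# Minimal Euler-Maruyama sketch of the Langevin equation
# m dv = [-dU/dq - xi*v] dt + sqrt(2*xi*kT) dW, with an illustrative
# 1D harmonic potential.  This is not the Langevin algorithm
# developed later in this dissertation.
import numpy as np

rng = np.random.default_rng(1)
m, xi, kT, dt = 1.0, 2.0, 1.0, 1e-3
n_steps = 200000

def dUdq(q, k=1.0):
    return k * q                       # U(q) = (1/2) k q^2

q, v = 0.0, 0.0
qs = np.empty(n_steps)
for i in range(n_steps):
    # random force with variance set by the fluctuation-dissipation
    # relation between xi and R given above
    R = np.sqrt(2.0 * xi * kT / dt) * rng.normal()
    v += (dt / m) * (-dUdq(q) - xi * v + R)
    q += dt * v
    qs[i] = q

# Equipartition check: <q^2> should approach kT/k = 1 for the harmonic well.
print("<q^2> =", np.mean(qs[n_steps // 2:]))
\end{verbatim}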