1 \chapter{\label{chap:intro}INTRODUCTION AND BACKGROUND}
2
3 \section{Background on the Problem\label{In:sec:pro}}
4 Phospholipid molecules are the primary topic of this dissertation
5 because of their critical role as the foundation of biological
membranes. Lipids, when dispersed in water, self-assemble into bilayer
structures. The phase behavior of lipid bilayers has been explored
experimentally~\cite{Cevc87}; however, a complete understanding of the
mechanism for self-assembly has not been achieved.
10
11 \subsection{Ripple Phase\label{In:ssec:ripple}}
The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named for the
periodic buckling of the membrane, is an intermediate phase which
develops either upon heating the gel phase $L_{\beta'}$ or upon cooling
the fluid phase $L_\alpha$. Although the ripple phase has been
observed in several
experiments~\cite{Sun96,Katsaras00,Copeland80,Meyer96,Kaasgaard03},
the physical mechanism for its formation has never been
explained, and its microscopic structure has never
been elucidated experimentally. Computational simulation is an ideal
tool for studying the microscopic properties of a system. However, the
large length scale of the ripple structure and the long time scale
of ripple formation are major obstacles to direct simulation. The
principal ideas explored in this dissertation are attempts to break
the computational task up by
26 \begin{itemize}
27 \item Simplifying the lipid model.
\item Improving the algorithms used to integrate the equations of motion.
29 \end{itemize}
In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
dipole model and a coarse-grained model to perform Monte Carlo and
Molecular Dynamics simulations, respectively, and in
chapter~\ref{chap:ld}, we develop a Langevin Dynamics algorithm that
excludes explicit solvent in order to improve the efficiency of the
simulations.
36
37 \subsection{Lattice Model\label{In:ssec:model}}
The gel-like character of the ripple phase makes it feasible to
apply a lattice model to study the system. The packing of the lipid
molecules in the ripple phase has been reported to be
hexagonal~\cite{Cevc87}. The popular two-dimensional lattice models,
{\it i.e.}, the Ising, $X$-$Y$, and Heisenberg models, exhibit {\it
frustration} on triangular lattices. The Hamiltonians of these systems
are given by
45 \begin{equation}
46 H =
47 \begin{cases}
-J \sum_n \sum_{n'} s_n s_{n'} & \text{Ising}, \\
-J \sum_n \sum_{n'} \vec s_n \cdot \vec s_{n'} & \text{$X$-$Y$ and
Heisenberg},
52 \end{equation}
where $J$ is non-zero only when the spins $s_n$ ($\vec s_n$) and
$s_{n'}$ ($\vec s_{n'}$) are nearest neighbours.
55 \begin{figure}
56 \centering
57 \includegraphics[width=\linewidth]{./figures/inFrustration.pdf}
\caption{Frustration on a triangular lattice; the spins are
represented by arrows. No matter which direction the spin at the top
of the triangle points, the energy of the system increases.}
61 \label{Infig:frustration}
62 \end{figure}
Figure~\ref{Infig:frustration} illustrates frustration on a
triangular lattice. When $J < 0$, the spins prefer to be
anti-aligned, but the spin at the top of the triangle raises the
energy of the system no matter which direction it points. The
resulting endless possibilities for packing the spins induce what is
known as ``complete regular frustration,'' which leads to disordered
low-temperature phases.
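
As a minimal illustration of this degeneracy (a Python sketch in
arbitrary units, independent of any simulation code used in this
work), the Ising Hamiltonian can be evaluated for a single triangle of
three spins with $J < 0$; enumerating all $2^3$ configurations shows
that at most two of the three bonds can be satisfied at once, so the
ground state is six-fold degenerate.
\begin{verbatim}
from itertools import product

J = -1.0                          # J < 0: anti-aligned neighbors are favored
bonds = [(0, 1), (1, 2), (2, 0)]  # the three nearest-neighbor pairs of one triangle

def ising_energy(spins):
    """H = -J * sum over nearest-neighbor bonds of s_i * s_j."""
    return -J * sum(spins[i] * spins[j] for i, j in bonds)

energies = {s: ising_energy(s) for s in product((-1, +1), repeat=3)}
e_min = min(energies.values())
ground_states = [s for s, e in energies.items() if e == e_min]

print("minimum energy         :", e_min)               # -|J|: only two bonds satisfied
print("ground-state degeneracy:", len(ground_states))  # 6 of the 8 configurations
\end{verbatim}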
70
The lack of translational degrees of freedom in lattice models
prevents their use in modeling the surface buckling that corresponds
to ripple formation. In chapter~\ref{chap:mc}, a modified lattice
model is introduced to address this specific issue.
75
76 \section{Overview of Classical Statistical Mechanics\label{In:sec:SM}}
77 Statistical mechanics provides a way to calculate the macroscopic
78 properties of a system from the molecular interactions used in
79 computational simulations. This section serves as a brief introduction
to key concepts of classical statistical mechanics that are used in
this dissertation. Tolman gives an excellent introduction to the
82 principles of statistical mechanics~\cite{Tolman1979}. A large part of
83 section~\ref{In:sec:SM} will follow Tolman's notation.
84
85 \subsection{Ensembles\label{In:ssec:ensemble}}
86 In classical mechanics, the state of the system is completely
87 described by the positions and momenta of all particles. If we have an
88 $N$ particle system, there are $6N$ coordinates ($3N$ position $(q_1,
89 q_2, \ldots, q_{3N})$ and $3N$ momenta $(p_1, p_2, \ldots, p_{3N})$)
90 to define the instantaneous state of the system. Each single set of
91 the $6N$ coordinates can be considered as a unique point in a $6N$
92 dimensional space where each perpendicular axis is one of
93 $q_{i\alpha}$ or $p_{i\alpha}$ ($i$ is the particle and $\alpha$ is
94 the spatial axis). This $6N$ dimensional space is known as phase
95 space. The instantaneous state of the system is a single point in
96 phase space. A trajectory is a connected path of points in phase
space. An ensemble is a collection of systems that share the same
macroscopic observables but are distributed over different
microscopic states. In phase space, an ensemble is a collection of
representative points. A density distribution $\rho(q^N, p^N)$ of the
representative points in phase space describes the condition of an
ensemble of identical systems. Since this density may change with
time, it is also a function of time: $\rho(q^N, p^N, t)$ describes the
ensemble at a time $t$, and $\rho(q^N, p^N, t')$ describes the
ensemble at a later time $t'$. For convenience, we will use $\rho$
instead of $\rho(q^N, p^N, t)$ in the following discussion. If we
normalize $\rho$ to unity,
108 \begin{equation}
109 1 = \int d \vec q~^N \int d \vec p~^N \rho,
110 \label{Ineq:normalized}
111 \end{equation}
then the value of $\rho$ gives the probability per unit volume of
finding the system at a given point in phase space.
114
Liouville's theorem describes the change in the density $\rho$ with
time. The number of representative points in a given volume of phase
space at any instant can be written as:
118 \begin{equation}
119 \label{Ineq:deltaN}
120 \delta N = \rho~\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
121 \end{equation}
To calculate the change in the number of representative points in this
volume, let us first consider a simpler problem: the change in the number
of representative points along the $q_1$ axis. The rate at which
representative points enter the volume at $q_1$ per unit time is:
126 \begin{equation}
127 \label{Ineq:deltaNatq1}
128 \rho~\dot q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
129 \end{equation}
and the rate at which representative points leave the volume
at the position $q_1 + \delta q_1$ is:
132 \begin{equation}
133 \label{Ineq:deltaNatq1plusdeltaq1}
134 \left( \rho + \frac{\partial \rho}{\partial q_1} \delta q_1 \right)\left(\dot q_1 +
135 \frac{\partial \dot q_1}{\partial q_1} \delta q_1 \right)\delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
136 \end{equation}
Here the higher-order differentials are neglected. The change in
the number of representative points is then the difference between
eq.~\ref{Ineq:deltaNatq1} and eq.~\ref{Ineq:deltaNatq1plusdeltaq1}, which
gives us:
141 \begin{equation}
142 \label{Ineq:deltaNatq1axis}
143 -\left(\rho \frac{\partial {\dot q_1}}{\partial q_1} + \frac{\partial {\rho}}{\partial q_1} \dot q_1 \right)\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
144 \end{equation}
where higher-order differentials have again been neglected. If we sum over
all the axes of phase space, we obtain the change with time of the number of
representative points in a given volume:
148 \begin{equation}
149 \label{Ineq:deltaNatGivenVolume}
150 \frac{d(\delta N)}{dt} = -\sum_{i=1}^N \left[\rho \left(\frac{\partial
151 {\dot q_i}}{\partial q_i} + \frac{\partial
152 {\dot p_i}}{\partial p_i}\right) + \left( \frac{\partial {\rho}}{\partial
153 q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i\right)\right]\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
154 \end{equation}
From Hamilton's equations of motion,
156 \begin{equation}
157 \frac{\partial {\dot q_i}}{\partial q_i} = - \frac{\partial
158 {\dot p_i}}{\partial p_i},
159 \label{Ineq:canonicalFormOfEquationOfMotion}
160 \end{equation}
which cancels the first term on the right side of
162 eq.~\ref{Ineq:deltaNatGivenVolume}. If both sides of
163 eq.~\ref{Ineq:deltaNatGivenVolume} are divided by $\delta q_1 \delta q_2
164 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N$, then we
165 can derive Liouville's theorem:
166 \begin{equation}
167 \left( \frac{\partial \rho}{\partial t} \right)_{q, p} = -\sum_{i} \left(
168 \frac{\partial {\rho}}{\partial
169 q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right).
170 \label{Ineq:simpleFormofLiouville}
171 \end{equation}
172 This is the basis of statistical mechanics. If we move the right
173 side of equation~\ref{Ineq:simpleFormofLiouville} to the left, we
174 will obtain
175 \begin{equation}
176 \left( \frac{\partial \rho}{\partial t} \right)_{q, p} + \sum_{i} \left(
177 \frac{\partial {\rho}}{\partial
178 q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
179 = 0.
180 \label{Ineq:anotherFormofLiouville}
181 \end{equation}
Note that the left side of
equation~\ref{Ineq:anotherFormofLiouville} is the total derivative of
$\rho$ with respect to $t$, which means
185 \begin{equation}
186 \frac{d \rho}{dt} = 0,
187 \label{Ineq:conservationofRho}
188 \end{equation}
that is, the rate of density change is zero in the neighborhood of any
selected moving representative point in phase space.
191
The condition of the ensemble is determined by the density
distribution. If we consider the density distribution as a function
of $q$ and $p$ only, then the rate of change of the phase-space
density at every fixed point in phase space is zero,
\begin{equation}
\left( \frac{\partial \rho}{\partial t} \right)_{q, p} = 0,
\label{Ineq:statEquilibrium}
\end{equation}
and we may conclude that the ensemble is in {\it statistical equilibrium}. An
202 ensemble in statistical equilibrium often means the system is also in
203 macroscopic equilibrium. If $\left( \frac{\partial \rho}{\partial t}
204 \right)_{q, p} = 0$, then
205 \begin{equation}
206 \sum_{i} \left(
207 \frac{\partial {\rho}}{\partial
208 q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
209 = 0.
210 \label{Ineq:constantofMotion}
211 \end{equation}
212 If $\rho$ is a function only of some constant of the motion, $\rho$ is
213 independent of time. For a conservative system, the energy of the
system is one of the constants of the motion. Several examples
follow. When the density distribution is constant everywhere in
phase space,
217 \begin{equation}
218 \rho = \mathrm{const.}
219 \label{Ineq:uniformEnsemble}
220 \end{equation}
the ensemble is called the {\it uniform ensemble}. Another useful
ensemble is the {\it microcanonical ensemble}, for which:
223 \begin{equation}
224 \rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)}
225 \label{Ineq:microcanonicalEnsemble}
226 \end{equation}
227 where $\Sigma(N, V, E)$ is a normalization constant parameterized by
228 $N$, the total number of particles, $V$, the total physical volume and
229 $E$, the total energy. The physical meaning of $\Sigma(N, V, E)$ is
230 the phase space volume accessible to a microcanonical system with
231 energy $E$ evolving under Hamilton's equations. $H(q^N, p^N)$ is the
232 Hamiltonian of the system. The Gibbs entropy is defined as
233 \begin{equation}
234 S = - k_B \int d \vec q~^N \int d \vec p~^N \rho \ln [C^N \rho],
235 \label{Ineq:gibbsEntropy}
236 \end{equation}
where $k_B$ is the Boltzmann constant and $C^N$ is a number that
makes the argument of the logarithm dimensionless; in this case, it is the
phase space volume of a single state. The entropy in the microcanonical
ensemble is given by
241 \begin{equation}
242 S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
243 \label{Ineq:entropy}
244 \end{equation}
245 If the density distribution $\rho$ is given by
246 \begin{equation}
247 \rho = \frac{1}{Z_N}e^{-H(q^N, p^N) / k_B T},
248 \label{Ineq:canonicalEnsemble}
249 \end{equation}
250 the ensemble is known as the {\it canonical ensemble}. Here,
251 \begin{equation}
252 Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N e^{-H(q^N, p^N) / k_B T},
253 \label{Ineq:partitionFunction}
254 \end{equation}
which is also known as the {\it partition function}. $\Gamma$ indicates
that the integral is taken over all of phase space. In the canonical
ensemble, $N$, the total number of particles, $V$, the total volume, and
$T$, the temperature, are constants. The states with the lowest
energies carry the largest statistical weight. According to the maximum
entropy principle, the equilibrium distribution maximizes the entropy $S$,
261 \begin{equation}
262 \begin{array}{ccc}
263 \delta S = 0 & \mathrm{and} & \delta^2 S < 0.
264 \end{array}
265 \label{Ineq:maximumPrinciple}
266 \end{equation}
From Eq.~\ref{Ineq:maximumPrinciple} and the two constraints of the canonical
ensemble, {\it i.e.}, fixed total probability and fixed average energy,
the partition function can be written as
270 \begin{equation}
271 Z_N = e^{-A/k_B T},
272 \label{Ineq:partitionFunctionWithFreeEnergy}
273 \end{equation}
where $A$ is the Helmholtz free energy. The significance of
Eqs.~\ref{Ineq:entropy} and~\ref{Ineq:partitionFunctionWithFreeEnergy} is
that they serve as a connection between the macroscopic properties of the
system and the distribution of its microscopic states.
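
As a simple numerical check of Eqs.~\ref{Ineq:canonicalEnsemble}
and~\ref{Ineq:partitionFunction} (an illustrative Python sketch in
reduced units with $m = \omega = k_B = 1$, chosen purely for this
example), the canonical average energy of a one-dimensional harmonic
oscillator, evaluated by direct quadrature over phase space,
reproduces the equipartition value $\langle H \rangle = k_B T$.
\begin{verbatim}
import numpy as np

# 1D harmonic oscillator in reduced units (m = omega = k_B = 1).
m, omega, kB, T = 1.0, 1.0, 1.0, 2.5

# Phase-space grid wide enough that the Boltzmann factor is negligible at the edges.
q = np.linspace(-15.0, 15.0, 1501)
p = np.linspace(-15.0, 15.0, 1501)
Q, P = np.meshgrid(q, p)
H = P**2 / (2.0 * m) + 0.5 * m * omega**2 * Q**2

dq, dp = q[1] - q[0], p[1] - p[0]
boltzmann = np.exp(-H / (kB * T))
Z = boltzmann.sum() * dq * dp      # the partition function of the text
rho = boltzmann / Z                # canonical density distribution
avg_E = (H * rho).sum() * dq * dp  # ensemble average of the energy

print("numerical <H> :", avg_E)    # ~ 2.5
print("equipartition :", kB * T)   # k_B T for one q and one p degree of freedom
\end{verbatim}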
278
There is an implicit assumption on which our arguments so far are
based: a representative point is equally likely to be found in any
region of phase space of the same extent. In other words, all
energetically accessible states are represented equally; the
probability of finding the system in any one of the accessible states
is the same. This is called the assumption of equal {\it a priori}
probabilities.
285
286 \subsection{Statistical Average\label{In:ssec:average}}
Given the density distribution $\rho$ in phase space, the average
of any quantity $F(q^N, p^N)$ that depends on the coordinates
($q^N$) and momenta ($p^N$), taken over all the systems in the ensemble,
can be calculated from the definition shown in
Eq.~\ref{Ineq:statAverage1}:
292 \begin{equation}
293 \langle F(q^N, p^N, t) \rangle = \frac{\int d \vec q~^N \int d \vec p~^N
294 F(q^N, p^N, t) \rho}{\int d \vec q~^N \int d \vec p~^N \rho}.
295 \label{Ineq:statAverage1}
296 \end{equation}
297 Since the density distribution $\rho$ is normalized to unity, the mean
298 value of $F(q^N, p^N)$ is simplified to
299 \begin{equation}
300 \langle F(q^N, p^N, t) \rangle = \int d \vec q~^N \int d \vec p~^N F(q^N,
301 p^N, t) \rho,
302 \label{Ineq:statAverage2}
303 \end{equation}
which is called the {\it ensemble average}. In real experiments, however,
a quantity is usually averaged over a finite time,
306 \begin{equation}
307 \langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
308 \frac{1}{T} \int_{t_0}^{t_0+T} F(q^N, p^N, t) dt.
309 \label{Ineq:timeAverage1}
310 \end{equation}
311 Usually this time average is independent of $t_0$ in statistical
312 mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes
313 \begin{equation}
314 \langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
315 \frac{1}{T} \int_{0}^{T} F(q^N, p^N, t) dt
316 \label{Ineq:timeAverage2}
317 \end{equation}
318 for an infinite time interval.
319
The {\it ergodic hypothesis}, an important hypothesis in statistical
mechanics, states that the system will eventually pass
arbitrarily close to any energetically accessible point in
phase space. Mathematically, this leads to
324 \begin{equation}
325 \langle F(q^N, p^N, t) \rangle = \langle F(q^N, p^N) \rangle_t.
326 \label{Ineq:ergodicity}
327 \end{equation}
Eq.~\ref{Ineq:ergodicity} underpins the Monte Carlo method, which we will
discuss in section~\ref{In:ssec:mc}: the ensemble average of a quantity
can be related to the time average measured in experiments.
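
A minimal numerical sketch of Eq.~\ref{Ineq:ergodicity} (Python,
reduced units; the one-dimensional harmonic oscillator is used only
because it covers its entire energy shell and its averages are known
exactly) compares the time average of $q^2$ along a single trajectory
with the microcanonical ensemble average over the shell $H = E$; for
this trivially ergodic system the two agree.
\begin{verbatim}
import numpy as np

# Harmonic oscillator in reduced units (m = omega = 1): q(t) = A cos(omega t).
m, omega, A = 1.0, 1.0, 1.3
E = 0.5 * m * omega**2 * A**2          # conserved energy of the trajectory

# Time average of q^2 along a single trajectory (finite upper time limit).
t = np.linspace(0.0, 500.0, 500001)
q_t = A * np.cos(omega * t)
time_avg = np.mean(q_t**2)

# Microcanonical ensemble average of q^2 over the shell H = E: the shell is
# sampled uniformly in the angle variable theta, with q = A cos(theta).
theta = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
ens_avg = np.mean((A * np.cos(theta))**2)

print("time average of q^2     :", time_avg)   # -> A^2/2
print("ensemble average of q^2 :", ens_avg)    # -> A^2/2 = E/(m omega^2)
\end{verbatim}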
331
332 \subsection{Correlation Function\label{In:ssec:corr}}
Thermodynamic properties can be computed from equilibrium statistical
mechanics. The {\it time correlation function}, on the other hand, is a
powerful tool for understanding the evolution of a dynamic system in
non-equilibrium statistical mechanics. Imagine a property $A(q^N,
p^N, t)$, a function of the coordinates $q^N$ and momenta $p^N$, that
has an initial value at $t_0$ and a different value at a later time
$t_0 + \tau$. If $\tau$ is very small, the change in the value is minor,
and the later value $A(q^N, p^N, t_0 +
\tau)$ is correlated with its initial value. However, when $\tau$ is large,
this correlation is lost. The correlation function is a measure of
this relationship and is defined by~\cite{Berne90}
344 \begin{equation}
345 C(t) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarrow \infty}
346 \frac{1}{T} \int_{0}^{T} dt A(t) A(t + \tau).
347 \label{Ineq:autocorrelationFunction}
348 \end{equation}
Eq.~\ref{Ineq:autocorrelationFunction} is the correlation function of a
single variable, called the {\it autocorrelation function}. The definition
of the correlation function for two different variables is similar to
that of the autocorrelation function:
353 \begin{equation}
354 C(t) = \langle A(0)B(\tau) \rangle = \lim_{T \rightarrow \infty}
355 \frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
356 \label{Ineq:crosscorrelationFunction}
357 \end{equation}
and is called the {\it cross correlation function}.
359
Section~\ref{In:ssec:average} established, through Eq.~\ref{Ineq:ergodicity},
the relationship between the time average and the ensemble average. We can
therefore write the correlation functions in classical statistical-mechanical form,
363 \begin{equation}
364 C(t) = \langle A(0)A(\tau) \rangle = \int d \vec q~^N \int d \vec p~^N A(t) A(t + \tau) \rho(q, p)
365 \label{Ineq:autocorrelationFunctionCM}
366 \end{equation}
367 and
368 \begin{equation}
369 C(t) = \langle A(0)B(\tau) \rangle = \int d \vec q~^N \int d \vec p~^N A(t) B(t + \tau)
370 \rho(q, p)
371 \label{Ineq:crosscorrelationFunctionCM}
372 \end{equation}
as the autocorrelation function and the cross correlation function,
respectively. Here $\rho(q, p)$ is the equilibrium density distribution
in phase space. In many cases, the decay of the correlation function is a
single exponential,
377 \begin{equation}
378 C(t) \sim e^{-t / \tau_r},
379 \label{Ineq:relaxation}
380 \end{equation}
where $\tau_r$ is known as the relaxation time and describes the rate of
the decay.
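
The following short Python sketch shows how
Eq.~\ref{Ineq:autocorrelationFunction} is estimated from a discrete
time series and how $\tau_r$ in Eq.~\ref{Ineq:relaxation} can be read
off from the decay; the synthetic AR(1) series used here is an
arbitrary choice, made only because its autocorrelation is exponential
by construction.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# AR(1) process x_{n+1} = phi*x_n + noise has C(k) ~ phi^k = exp(-k*dt/tau_r).
dt, phi, n_steps = 0.01, 0.98, 200000
tau_exact = -dt / np.log(phi)

x = np.empty(n_steps)
x[0] = 0.0
for n in range(n_steps - 1):
    x[n + 1] = phi * x[n] + rng.normal(scale=np.sqrt(1.0 - phi**2))
x -= x.mean()                      # correlate fluctuations about the mean

def autocorrelation(x, max_lag):
    """C(k) = <x(0) x(k)> estimated as a time average over the series."""
    return np.array([np.mean(x[:len(x) - k] * x[k:]) for k in range(max_lag)])

c = autocorrelation(x, max_lag=500)
c /= c[0]                          # normalize so that C(0) = 1

k_e = np.argmax(c < np.exp(-1.0))  # first lag where C has decayed below 1/e
print("estimated tau_r:", k_e * dt)
print("exact tau_r    :", tau_exact)
\end{verbatim}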
383
\section{Methodology\label{In:sec:method}}
The simulations performed in this dissertation fall into two
main categories, Monte Carlo and Molecular Dynamics. There are two main
differences between Monte Carlo and Molecular Dynamics simulations. One
is that a Monte Carlo simulation is time-independent, whereas a Molecular
Dynamics simulation evolves explicitly in time. The other is that
Monte Carlo is a stochastic process, so the configuration of the system
is not determined by its past; in Molecular Dynamics, by contrast,
the system is propagated by Newton's equations of motion, so the
trajectory the system follows through phase space is deterministic. A
brief introduction to the two algorithms is given in
sections~\ref{In:ssec:mc} and~\ref{In:ssec:md}. An extension of
Molecular Dynamics, Langevin Dynamics, is introduced in
section~\ref{In:ssec:ld}.
398
399 \subsection{Monte Carlo\label{In:ssec:mc}}
The Monte Carlo algorithm was first introduced by Metropolis {\it et
al.}~\cite{Metropolis53} The basic Monte Carlo algorithm is usually
applied to the canonical ensemble, a Boltzmann-weighted ensemble, in
which $N$, the total number of particles, $V$, the total volume, and $T$,
the temperature, are constants. The average energy is given by substituting
Eq.~\ref{Ineq:canonicalEnsemble} into Eq.~\ref{Ineq:statAverage2},
406 \begin{equation}
\langle E \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N H(q^N, p^N) e^{-H(q^N, p^N) / k_B T}.
408 \label{Ineq:energyofCanonicalEnsemble}
409 \end{equation}
Other properties of the system follow similarly. The Hamiltonian is the
sum of the kinetic energy $K(p^N)$, a function of the momenta, and the
potential energy $U(q^N)$, a function of the positions,
413 \begin{equation}
414 H(q^N, p^N) = K(p^N) + U(q^N).
415 \label{Ineq:hamiltonian}
416 \end{equation}
If the property $A$ is a function of position only ($A = A(q^N)$),
the mean value of $A$ reduces to
\begin{equation}
\langle A \rangle = \frac{\int d \vec q~^N \int d \vec p~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N \int d \vec p~^N e^{-U(q^N) / k_B T}}.
\label{Ineq:configurationIntegral}
\end{equation}
The kinetic energy $K(p^N)$ factors out of the numerator and
denominator and cancels in Eq.~\ref{Ineq:configurationIntegral}. $\langle A
\rangle$ is now a configuration integral, and
Eq.~\ref{Ineq:configurationIntegral} is equivalent to
427 \begin{equation}
428 \langle A \rangle = \int d \vec q~^N A \rho(q^N).
429 \label{Ineq:configurationAve}
430 \end{equation}
431
In a Monte Carlo simulation of the canonical ensemble, the probability of
the system being in a state $s$ is $\rho_s$, and the change of this
434 probability with time is given by
435 \begin{equation}
436 \frac{d \rho_s}{dt} = \sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ],
437 \label{Ineq:timeChangeofProb}
438 \end{equation}
where $w_{ss'}$ is the transition probability of going from state $s$
440 to state $s'$. Since $\rho_s$ is independent of time at equilibrium,
441 \begin{equation}
442 \frac{d \rho_{s}^{equilibrium}}{dt} = 0,
443 \label{Ineq:equiProb}
444 \end{equation}
which means that $\sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ] = 0$. A
sufficient condition for this ({\it detailed balance}) is that each term in
the sum vanishes separately, so
447 \begin{equation}
448 \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
449 \label{Ineq:relationshipofRhoandW}
450 \end{equation}
451 If
452 \begin{equation}
453 \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T},
454 \label{Ineq:conditionforBoltzmannStatistics}
455 \end{equation}
456 then
457 \begin{equation}
458 \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T}.
459 \label{Ineq:satisfyofBoltzmannStatistics}
460 \end{equation}
461 Eq.~\ref{Ineq:satisfyofBoltzmannStatistics} implies that
$\rho^{equilibrium}$ satisfies Boltzmann statistics. An algorithm that
shows how a Monte Carlo simulation generates transition probabilities
governed by Eq.~\ref{Ineq:conditionforBoltzmannStatistics} is outlined below:
465 \begin{enumerate}
\item\label{Initm:oldEnergy} Choose a particle randomly and calculate its energy.
\item\label{Initm:newEnergy} Make a random displacement of the particle and
calculate the new energy.
\begin{itemize}
\item Keep the new configuration and return to step~\ref{Initm:oldEnergy} if the energy
goes down.
\item Pick a random number in $[0,1]$ if the energy goes up.
\begin{itemize}
\item Keep the new configuration and return to
step~\ref{Initm:oldEnergy} if the random number is smaller than
$e^{-(U_{new} - U_{old}) / k_B T}$.
\item Keep the old configuration and return to
step~\ref{Initm:oldEnergy} if the random number is larger than
$e^{-(U_{new} - U_{old}) / k_B T}$.
480 \end{itemize}
481 \end{itemize}
\item\label{Initm:accumulateAvg} Accumulate the average after the simulation has equilibrated.
483 \end{enumerate}
It is important to note that the old configuration must be counted
again in the averages whenever a trial move is rejected.
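
A minimal Python sketch of this scheme is given below for a single
particle in a one-dimensional harmonic well (reduced units, $k_B =
1$); it is an illustration only, not the Monte Carlo code used in
chapter~\ref{chap:mc}. Uphill moves are accepted with probability
$e^{-(U_{new}-U_{old})/k_B T}$, rejected moves re-count the old
configuration, and the accumulated average potential energy approaches
the equipartition value $k_B T/2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

kB, T, k_spring = 1.0, 1.5, 1.0
beta = 1.0 / (kB * T)

def potential(q):
    return 0.5 * k_spring * q**2

q = 0.0                       # current configuration
u = potential(q)              # step 1: energy of the current configuration
samples = []

for step in range(200000):
    q_new = q + rng.uniform(-0.5, 0.5)   # step 2: random trial displacement
    u_new = potential(q_new)
    # Accept downhill moves, and uphill moves with probability exp(-dU/kT).
    if u_new <= u or rng.random() < np.exp(-beta * (u_new - u)):
        q, u = q_new, u_new
    # A rejected move leaves (q, u) unchanged, so the old configuration
    # is counted again in the average, as required.
    if step > 50000:                      # step 3: accumulate after equilibration
        samples.append(u)

print("<U> from Metropolis sampling:", np.mean(samples))
print("equipartition value k_B T/2 :", 0.5 * kB * T)
\end{verbatim}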
486
487 \subsection{Molecular Dynamics\label{In:ssec:md}}
Although some properties of the system can be calculated from
ensemble averages in Monte Carlo simulations, the lack of time
dependence makes it impossible to obtain dynamic properties from
Monte Carlo simulations. Molecular Dynamics follows the evolution of
the positions and momenta of the particles in the system. The
evolution of the system obeys the laws of classical mechanics; in
most situations there is no need to account for quantum effects. In a
real experiment, the instantaneous positions and momenta of the
particles are neither important nor measurable; the observable
quantities are usually values averaged over a finite time
interval. These quantities are expressed as functions of the positions
and momenta in Molecular Dynamics simulations. For example, the
temperature of the system is defined by
502 \begin{equation}
\frac{1}{2} k_B T = \langle \frac{1}{2} m v_\alpha^2 \rangle,
504 \label{Ineq:temperature}
505 \end{equation}
where $m$ is the mass of the particle and $v_\alpha$ is the $\alpha$
component of its velocity. The right side of
Eq.~\ref{Ineq:temperature} is the average kinetic energy per degree of
freedom. A simple Molecular Dynamics simulation scheme
is:~\cite{Frenkel1996}
511 \begin{enumerate}
512 \item\label{Initm:initialize} Assign the initial positions and momenta
513 for the particles in the system.
514 \item\label{Initm:calcForce} Calculate the forces.
515 \item\label{Initm:equationofMotion} Integrate the equation of motion.
516 \begin{itemize}
\item Return to step~\ref{Initm:calcForce} if equilibrium has
not been reached.
\item Go to step~\ref{Initm:calcAvg} if equilibrium has been
reached.
521 \end{itemize}
522 \item\label{Initm:calcAvg} Compute the quantities we are interested in.
523 \end{enumerate}
The initial positions of the particles are chosen so that there is no
overlap between particles. The initial velocities are first
distributed randomly to the particles, then shifted so that the total
momentum of the system is $0$, and finally scaled to the desired
temperature of the simulation according to Eq.~\ref{Ineq:temperature}.
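
A brief Python sketch of this initialization is shown below (arbitrary
reduced units; the particle number and array layout are assumptions of
the example, not of any particular simulation package): velocities are
drawn at random, shifted to remove the net momentum, and rescaled to
the target temperature using Eq.~\ref{Ineq:temperature}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

kB, T_target, n_particles = 1.0, 0.8, 256
mass = np.full(n_particles, 1.0)

# Random initial velocities (three components per particle).
v = rng.normal(size=(n_particles, 3))

# Shift so that the total momentum of the system is zero.
v -= (mass[:, None] * v).sum(axis=0) / mass.sum()

# Rescale so that <(1/2) m v_alpha^2> = (1/2) k_B T per degree of freedom
# (the three center-of-mass degrees of freedom are ignored for simplicity).
kinetic_per_dof = 0.5 * (mass[:, None] * v**2).sum() / (3 * n_particles)
v *= np.sqrt(0.5 * kB * T_target / kinetic_per_dof)

print("net momentum    :", (mass[:, None] * v).sum(axis=0))
print("instantaneous T :", (mass[:, None] * v**2).sum() / (3 * n_particles * kB))
\end{verbatim}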
529
The core of a Molecular Dynamics simulation is steps~\ref{Initm:calcForce}
and~\ref{Initm:equationofMotion}. The calculation of the forces
often requires considerable effort and is the most time-consuming step
in the Molecular Dynamics scheme. The forces are
evaluated from
535 \begin{equation}
536 f(q) = - \frac{\partial U(q)}{\partial q},
537 \label{Ineq:force}
538 \end{equation}
where $U(q)$ is the potential energy of the system. Once the forces are
computed, the positions and velocities are updated by integrating Newton's
equation of motion,
542 \begin{equation}
f(q) = \frac{dp}{dt} = m \frac{dv}{dt}.
544 \label{Ineq:newton}
545 \end{equation}
As an example of an integration algorithm, consider the Verlet algorithm,
which is one of the most widely used algorithms for integrating Newton's
equation of motion. The Taylor expansion of the position about time $t$ gives
549 \begin{equation}
550 q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 +
551 \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
552 \mathcal{O}(\Delta t^4)
553 \label{Ineq:verletFuture}
554 \end{equation}
555 for a later time $t+\Delta t$, and
556 \begin{equation}
557 q(t-\Delta t)= q(t) - v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 -
558 \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
559 \mathcal{O}(\Delta t^4) ,
560 \label{Ineq:verletPrevious}
561 \end{equation}
for the earlier time $t-\Delta t$. The sum of
Eqs.~\ref{Ineq:verletFuture} and~\ref{Ineq:verletPrevious} gives
564 \begin{equation}
565 q(t+\Delta t)+q(t-\Delta t) =
566 2q(t) + \frac{f(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4),
567 \label{Ineq:verletSum}
568 \end{equation}
569 so, the new position can be expressed as
570 \begin{equation}
571 q(t+\Delta t) \approx
572 2q(t) - q(t-\Delta t) + \frac{f(t)}{m}\Delta t^2.
573 \label{Ineq:newPosition}
574 \end{equation}
Terms of higher order in $\Delta t$ have been omitted.
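
A short Python sketch of the update in Eq.~\ref{Ineq:newPosition} is
given below for a single harmonic degree of freedom in reduced units;
it illustrates the position-Verlet recursion only and is not code from
the {\sc oopse} engine. The velocity needed to monitor the energy is
estimated from the central difference $(q(t+\Delta t) - q(t-\Delta
t))/2\Delta t$.
\begin{verbatim}
import numpy as np

# Harmonic oscillator in reduced units: m = k = 1, exact solution q(t) = cos(t).
m, k = 1.0, 1.0
omega = np.sqrt(k / m)
dt, n_steps = 0.01, 5000

def force(q):
    return -k * q                  # f = -dU/dq

# Starting data: q(0) = 1, v(0) = 0, and a Taylor-expanded q(-dt) to prime the recursion.
q_curr, v0 = 1.0, 0.0
q_prev = q_curr - v0 * dt + 0.5 * force(q_curr) / m * dt**2

positions, velocities = [], []
for step in range(n_steps):
    # q(t+dt) = 2 q(t) - q(t-dt) + (f(t)/m) dt^2
    q_next = 2.0 * q_curr - q_prev + (force(q_curr) / m) * dt**2
    velocities.append((q_next - q_prev) / (2.0 * dt))   # central-difference velocity
    positions.append(q_curr)
    q_prev, q_curr = q_curr, q_next

q = np.array(positions)
v = np.array(velocities)
energy = 0.5 * m * v**2 + 0.5 * k * q**2
print("relative energy fluctuation:", (energy.max() - energy.min()) / energy.mean())
print("q at t = %.2f: %.6f (exact %.6f)"
      % ((n_steps - 1) * dt, q[-1], np.cos(omega * (n_steps - 1) * dt)))
\end{verbatim}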
576
Numerous techniques and tricks are applied to Molecular Dynamics
simulations to gain efficiency or accuracy. The simulation
engine used for the Molecular Dynamics simulations in this
dissertation is {\sc oopse}; more details of the algorithms and
techniques can be found in~\cite{Meineke2005}.
582
583 \subsection{Langevin Dynamics\label{In:ssec:ld}}
In many cases, the properties of the solvent in a system, such as the
lipid-water system studied in this dissertation, are of little interest to
the researcher. However, most of the computational expense of a
Molecular Dynamics simulation is spent on the solvent because of the large
number of solvent molecules relative to solute molecules;
the ratio of the number of lipid molecules to the number of water
molecules is $1:25$ in our lipid-water system. The efficiency of the
Molecular Dynamics simulations is therefore greatly reduced.
592
As an extension of Molecular Dynamics, Langevin
Dynamics seeks to avoid integrating the equations of motion for the
solvent particles without losing the Brownian properties of the solute
particles. A common approximation is to express the coupling of the solute
to the solvent as a set of harmonic oscillators, so the
Hamiltonian of such a system is written as
599 \begin{equation}
600 H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
601 \label{Ineq:hamiltonianofCoupling}
602 \end{equation}
where $H_B$ is the Hamiltonian of the bath, given by
604 \begin{equation}
605 H_B = \sum_{\alpha = 1}^{N} \left\{ \frac{p_\alpha^2}{2m_\alpha} +
606 \frac{1}{2} m_\alpha \omega_\alpha^2 q_\alpha^2\right\},
607 \label{Ineq:hamiltonianofBath}
608 \end{equation}
where $\alpha$ runs over all the degrees of freedom of the bath, $\omega_\alpha$
is the frequency of bath mode $\alpha$, and $\Delta U(q)$ is the bilinear coupling given by
611 \begin{equation}
612 \Delta U = -\sum_{\alpha = 1}^{N} g_\alpha q_\alpha q,
613 \label{Ineq:systemBathCoupling}
614 \end{equation}
where $g_\alpha$ is the coupling constant for bath mode $\alpha$. By solving
Hamilton's equations of motion, the {\it Generalized Langevin Equation} for
this system is derived as
618 \begin{equation}
m \ddot q = -\frac{\partial W(q)}{\partial q} - \int_0^t \xi(t') \dot q(t-t')dt' + R(t),
620 \label{Ineq:gle}
621 \end{equation}
where the potential of mean force is
623 \begin{equation}
624 W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m_\alpha
625 \omega_\alpha^2}q^2,
626 \label{Ineq:meanForce}
627 \end{equation}
which depends only on the coordinates of the solute particles; the {\it
friction kernel} is
630 \begin{equation}
\xi(t) = \sum_{\alpha = 1}^N \frac{g_\alpha^2}{m_\alpha
\omega_\alpha^2} \cos(\omega_\alpha t),
633 \label{Ineq:xiforGLE}
634 \end{equation}
and the random force is
636 \begin{equation}
R(t) = \sum_{\alpha = 1}^N \left[ \left( g_\alpha q_\alpha(0)-\frac{g_\alpha^2}{m_\alpha
\omega_\alpha^2}q(0)\right) \cos(\omega_\alpha t) + \frac{g_\alpha \dot
q_\alpha(0)}{\omega_\alpha} \sin(\omega_\alpha t) \right],
640 \label{Ineq:randomForceforGLE}
641 \end{equation}
which depends only on the initial conditions. The relationship between the
friction kernel $\xi(t)$ and the random force $R(t)$,
644 \begin{equation}
645 \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle
646 \label{Ineq:relationshipofXiandR}
647 \end{equation}
follows from their definitions. In the Langevin limit, the friction is
treated as static, which means
650 \begin{equation}
651 \xi(t) = 2 \xi_0 \delta(t).
652 \label{Ineq:xiofStaticFriction}
653 \end{equation}
After substituting Eq.~\ref{Ineq:xiofStaticFriction} for $\xi(t)$ in
Eq.~\ref{Ineq:gle}, the {\it Langevin Equation} is obtained:
657 \begin{equation}
m \ddot q = -\frac{\partial U(q)}{\partial q} - \xi_0 \dot q(t) + R(t).
659 \label{Ineq:langevinEquation}
660 \end{equation}
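
As a minimal illustration of Eq.~\ref{Ineq:langevinEquation} (a Python
sketch in reduced units; the simple Euler-type discretization used
here is chosen for brevity and is not the algorithm developed in
chapter~\ref{chap:ld}), the random force is drawn at each step from a
Gaussian whose variance, $2 \xi_0 k_B T / \Delta t$, follows from
Eqs.~\ref{Ineq:relationshipofXiandR} and~\ref{Ineq:xiofStaticFriction};
the long-time average of the potential energy in a harmonic well then
approaches the canonical value $k_B T / 2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

# Langevin dynamics in a harmonic well, reduced units (m = k = k_B = 1).
m, k_spring, kB, T = 1.0, 1.0, 1.0, 1.2
xi0 = 2.0                        # static friction coefficient xi_0
dt, n_steps, n_equil = 0.005, 400000, 40000

def force(q):
    return -k_spring * q         # -dU/dq

# Discretized random force: <R(t)R(0)> = 2 xi_0 k_B T delta(t) becomes a
# Gaussian of variance 2 xi_0 k_B T / dt when held constant over one step.
sigma_R = np.sqrt(2.0 * xi0 * kB * T / dt)

q, v = 0.0, 0.0
u_samples = []
for step in range(n_steps):
    R = rng.normal(scale=sigma_R)
    a = (force(q) - xi0 * v + R) / m   # Langevin equation of motion
    v += a * dt                        # simple Euler-type update, illustration only
    q += v * dt
    if step >= n_equil:
        u_samples.append(0.5 * k_spring * q**2)

print("<U> from the Langevin trajectory:", np.mean(u_samples))
print("canonical value k_B T / 2       :", 0.5 * kB * T)
\end{verbatim}
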
The application of the Langevin Equation to dynamic simulations is discussed
in Ch.~\ref{chap:ld}.