% root/group/trunk/xDissertation/Introduction.tex
% Revision: 3383
% Committed: Wed Apr 16 21:56:34 2008 UTC by xsun
% Log message: formatting check
\chapter{\label{chap:intro}INTRODUCTION AND BACKGROUND}

\section{Background on the Problem\label{In:sec:pro}}
Phospholipid molecules are the primary topic of this dissertation
because of their critical role as the foundation of biological
membranes. The chemical structure of a phospholipid includes a head
group with a large dipole moment, arising from the large charge
separation between the phosphate and the amino alcohol, and a nonpolar
tail containing fatty acid chains. Depending on the specific alcohol
to which the phosphate and fatty acid chains are esterified,
phospholipids are divided into glycerophospholipids and
sphingophospholipids.~\cite{Cevc80} The chemical structures are shown
in figure~\ref{Infig:lipid}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/inLipid.pdf}
\caption[The chemical structure of lipids]{The chemical structure of
glycerophospholipids (left) and sphingophospholipids
(right).\cite{Cevc80}}
\label{Infig:lipid}
\end{figure}
Glycerophospholipids are the dominant phospholipids in biological
membranes. The type of glycerophospholipid depends on the identity of
the X group and the chains. For example, if X is choline
[(CH$_3$)$_3$N$^+$CH$_2$CH$_2$OH], the lipids are known as
phosphatidylcholine (PC), and if X is ethanolamine
[H$_3$N$^+$CH$_2$CH$_2$OH], the lipids are known as
phosphatidylethanolamine (PE). Table~\ref{Intab:pc} lists several
types of phosphatidylcholine with different fatty acids as the lipid
chains.
\begin{table*}
\begin{minipage}{\linewidth}
\begin{center}
\caption{SEVERAL TYPES OF PHOSPHATIDYLCHOLINE}
\begin{tabular}{lll}
\hline
& Fatty acid & Generic Name \\
\hline
\textcolor{red}{DMPC} & Myristic: CH$_3$(CH$_2$)$_{12}$COOH &
\textcolor{red}{D}i\textcolor{red}{M}yristoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
\textcolor{red}{DPPC} & Palmitic: CH$_3$(CH$_2$)$_{14}$COOH & \textcolor{red}{D}i\textcolor{red}{P}almitoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine
\\
\textcolor{red}{DSPC} & Stearic: CH$_3$(CH$_2$)$_{16}$COOH & \textcolor{red}{D}i\textcolor{red}{S}tearoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
\hline
\end{tabular}
\label{Intab:pc}
\end{center}
\end{minipage}
\end{table*}
When dispersed in water, lipids self-assemble into a number of
topologically distinct bilayer structures. The phase behavior of lipid
bilayers has been explored experimentally;~\cite{Cevc80} however, a
complete understanding of the mechanism and driving forces behind the
various phases has not been achieved.

\subsection{The Ripple Phase\label{In:ssec:ripple}}
The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named for the
periodic buckling of the membrane, is an intermediate phase which
develops either from heating the gel phase $L_{\beta'}$ or cooling
the fluid phase $L_\alpha$. A sketch of the phases is shown in
figure~\ref{Infig:phaseDiagram}.~\cite{Cevc80}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/inPhaseDiagram.pdf}
\caption[Phases of PC lipid bilayers]{Phases of PC lipid
bilayers. With increasing temperature, phosphatidylcholine (PC)
bilayers can go through $L_{\beta'} \rightarrow P_{\beta'}$ (gel
$\rightarrow$ ripple) and $P_{\beta'} \rightarrow L_\alpha$ (ripple
$\rightarrow$ fluid) phase transitions.~\cite{Cevc80}}
\label{Infig:phaseDiagram}
\end{figure}
Most structural information about the ripple phase has been obtained
by X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
electron microscopy (FFEM).~\cite{Copeland80,Meyer96} The X-ray
diffraction work by Katsaras {\it et al.} showed that a rich phase
diagram exhibiting both {\it asymmetric} and {\it symmetric} ripples
is possible for lecithin bilayers.~\cite{Katsaras00} Recently,
Kaasgaard {\it et al.} used atomic force microscopy (AFM) to observe
ripple phase morphology in bilayers supported on
mica.~\cite{Kaasgaard03}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/inRipple.pdf}
\caption[Experimental observations of the ripple phase]{Experimental
observations of the ripple phase. The top image is an electrostatic
density map obtained by Sun {\it et al.} using X-ray
diffraction.~\cite{Sun96} The lower figures show the surface topology
of various ripple domains in bilayers supported on mica. The AFM
images were obtained by Kaasgaard {\it et al.}~\cite{Kaasgaard03}}
\label{Infig:ripple}
\end{figure}
Figure~\ref{Infig:ripple} shows the ripple phase observed by both X-ray
diffraction and AFM. The experimental results provide strong support
for a 2-dimensional triangular packing lattice of the lipid molecules
within the ripple phase. This is a notable change from the observed
lipid packing within the gel phase,~\cite{Cevc80} although Tenchov
{\it et al.} have recently observed near-triangular packing in some
phosphatidylcholine (PC) gel phases.~\cite{Tenchov2001} However, the
physical mechanism for the formation of the ripple phase has never
been explained, and the microscopic structure of the ripple phase has
never been elucidated by experiments. Computer simulation is a
powerful tool for studying the microscopic properties of a
system. However, the large length scale of the ripple structures and
the long times required for the formation of the ripples are crucial
obstacles to direct simulation. The principal ideas explored in this
dissertation are attempts to break the computational task up by
\begin{itemize}
\item Simplifying the lipid model.
\item Improving the algorithm for integrating the equations of motion.
\end{itemize}
In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
dipole spin model and a coarse-grained molecular scale model to
perform Monte Carlo and Molecular Dynamics simulations,
respectively, and in chapter~\ref{chap:ld}, we develop a Langevin
Dynamics algorithm which excludes the explicit solvent to improve the
efficiency of the simulations.

\subsection{Lattice Models\label{In:ssec:model}}
The gel-like characteristic (laterally immobile molecules) exhibited
by the ripple phase makes it feasible to apply a lattice model to
study the system. The popular two-dimensional lattice models, {\it
i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it frustration}
on triangular lattices. The Hamiltonians of these systems are given by
\begin{equation}
H =
\begin{cases}
-J \sum_n \sum_{n'} s_n s_{n'} & \text{Ising}, \\
-J \sum_n \sum_{n'} \vec s_n \cdot \vec s_{n'} & \text{$X-Y$ and
Heisenberg},
\end{cases}
\end{equation}
where $J$ has a nonzero value only when spins $s_n$ ($\vec s_n$) and
$s_{n'}$ ($\vec s_{n'}$) are nearest neighbors. When $J > 0$, spins
prefer an aligned structure, and when $J < 0$, spins prefer an
anti-aligned structure.
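To make the frustration concrete, consider a single triangular plaquette of Ising spins with antiferromagnetic coupling ($J < 0$). A brief Python sketch (an illustration, not code from this work) enumerates all eight configurations and shows that no configuration can anti-align all three bonds, so the ground state is degenerate:

```python
from itertools import product

J = -1.0  # J < 0: anti-aligned neighbor pairs are favored

def triangle_energy(s1, s2, s3):
    # H = -J (s1 s2 + s2 s3 + s3 s1): the three bonds of one triangle
    return -J * (s1 * s2 + s2 * s3 + s3 * s1)

energies = {s: triangle_energy(*s) for s in product((-1, +1), repeat=3)}
e_min = min(energies.values())
ground_states = [s for s, e in energies.items() if e == e_min]

# One bond is always frustrated, and six of the eight configurations
# share the minimum energy, so the ground state is highly degenerate.
print(e_min, len(ground_states))  # -> -1.0 6
```

The same counting, repeated over every triangle of the lattice, produces the macroscopic ground-state degeneracy responsible for the disordered low-temperature phases discussed below.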

\begin{figure}
\centering
\includegraphics[width=3in]{./figures/inFrustration.pdf}
\caption[Frustration on a triangular lattice]{Frustration on a
triangular lattice; the spins and dipoles are represented by
arrows. The multiple local energy minima induce frustration for spins
and dipoles, resulting in disordered low-temperature phases.}
\label{Infig:frustration}
\end{figure}
The spins in figure~\ref{Infig:frustration} illustrate frustration for
$J < 0$ on a triangular lattice. There are multiple local energy
minima that are independent of the direction of the spin at the top of
the triangle, and therefore an enormous number of possible
orientations for a large collection of spins. This induces what is
known as ``complete regular frustration,'' which leads to disordered
low-temperature phases. This behavior extends to dipoles on a
triangular lattice, shown in the lower portion of
figure~\ref{Infig:frustration}. In this case, dipole-aligned
structures are energetically favorable; however, at low temperature,
vortices form easily, and this results in multiple local energy
minima for a central dipole. The dipole at the center of the
hexagonal arrangement is therefore frustrated.

The lack of translational degrees of freedom in lattice models
prevents their use in models of surface buckling. In
chapter~\ref{chap:mc}, a modified lattice model is introduced to
tackle this specific situation.

\section{Overview of Classical Statistical Mechanics\label{In:sec:SM}}
Statistical mechanics provides a way to calculate the macroscopic
properties of a system from the molecular interactions used in
computational simulations. This section serves as a brief introduction
to the key concepts of classical statistical mechanics used in this
dissertation. Tolman gives an excellent introduction to the
principles of statistical mechanics.~\cite{Tolman1979} A large part of
section~\ref{In:sec:SM} follows Tolman's notation.

\subsection{Ensembles\label{In:ssec:ensemble}}
In classical mechanics, the state of the system is completely
described by the positions and momenta of all particles. If we have an
$N$ particle system, there are $6N$ coordinates ($3N$ positions $(q_1,
q_2, \ldots, q_{3N})$ and $3N$ momenta $(p_1, p_2, \ldots, p_{3N})$)
defining the instantaneous state of the system. Each single set of
the $6N$ coordinates can be considered as a unique point in a $6N$
dimensional space where each perpendicular axis is one of
$q_{i\alpha}$ or $p_{i\alpha}$ ($i$ is the particle and $\alpha$ is
the spatial axis). This $6N$ dimensional space is known as phase
space. The instantaneous state of the system is a single point in
phase space, and a trajectory is a connected path of points in phase
space. An ensemble is a collection of systems described by the same
macroscopic observables but distributed over different microscopic
states. In phase space, an ensemble is a collection of representative
points. A density distribution $\rho(q^N, p^N)$ of the representative
points in phase space describes the condition of an ensemble of
identical systems. Since this density may change with time, it is also
a function of time: $\rho(q^N, p^N, t)$ describes the ensemble at a
time $t$, and $\rho(q^N, p^N, t')$ describes the ensemble at a later
time $t'$. For convenience, we will use $\rho$ instead of
$\rho(q^N, p^N, t)$ in the following discussion. If we normalize
$\rho$ to unity,
\begin{equation}
1 = \int d \vec q~^N \int d \vec p~^N \rho,
\label{Ineq:normalized}
\end{equation}
then the value of $\rho$ gives the probability of finding the system
in a unit volume in phase space.

Liouville's theorem describes the change in the density $\rho$ with
time. The number of representative points in a given volume of phase
space at any instant can be written as:
\begin{equation}
\label{Ineq:deltaN}
\delta N = \rho~\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
To calculate the change in the number of representative points in this
volume, let us first consider a simple case: the change in the number
of representative points along the $q_1$ axis. The rate at which
representative points enter the volume at $q_1$ per unit time is:
\begin{equation}
\label{Ineq:deltaNatq1}
\rho~\dot q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
\end{equation}
and the rate at which representative points leave the volume at the
position $q_1 + \delta q_1$ is:
\begin{equation}
\label{Ineq:deltaNatq1plusdeltaq1}
\left( \rho + \frac{\partial \rho}{\partial q_1} \delta q_1 \right)\left(\dot q_1 +
\frac{\partial \dot q_1}{\partial q_1} \delta q_1 \right)\delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
The change in the number of representative points is the difference
between eq.~\ref{Ineq:deltaNatq1} and
eq.~\ref{Ineq:deltaNatq1plusdeltaq1}, which gives us:
\begin{equation}
\label{Ineq:deltaNatq1axis}
-\left(\rho \frac{\partial {\dot q_1}}{\partial q_1} + \frac{\partial {\rho}}{\partial q_1} \dot q_1 \right)\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
\end{equation}
where higher-order differentials are neglected. If we sum over all the
axes in phase space, we obtain the change with time in the number of
representative points in a given volume:
\begin{equation}
\label{Ineq:deltaNatGivenVolume}
\frac{d(\delta N)}{dt} = -\sum_{i=1}^N \left[\rho \left(\frac{\partial
{\dot q_i}}{\partial q_i} + \frac{\partial
{\dot p_i}}{\partial p_i}\right) + \left( \frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i\right)\right]\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
From Hamilton's equations of motion,
\begin{equation}
\frac{\partial {\dot q_i}}{\partial q_i} = - \frac{\partial
{\dot p_i}}{\partial p_i},
\label{Ineq:canonicalFormOfEquationOfMotion}
\end{equation}
so the first term on the right side of
eq.~\ref{Ineq:deltaNatGivenVolume} cancels. If both sides of
eq.~\ref{Ineq:deltaNatGivenVolume} are divided by $\delta q_1 \delta q_2
\ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N$, we
arrive at Liouville's theorem:
\begin{equation}
\left( \frac{\partial \rho}{\partial t} \right)_{q, p} = -\sum_{i} \left(
\frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right).
\label{Ineq:simpleFormofLiouville}
\end{equation}
This is the basis of statistical mechanics. Moving the right side of
equation~\ref{Ineq:simpleFormofLiouville} to the left gives
\begin{equation}
\left( \frac{\partial \rho}{\partial t} \right)_{q, p} + \sum_{i} \left(
\frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
= 0.
\label{Ineq:anotherFormofLiouville}
\end{equation}
Note that the left side of
equation~\ref{Ineq:anotherFormofLiouville} is the total derivative of
$\rho$ with respect to $t$, which means
\begin{equation}
\frac{d \rho}{dt} = 0,
\label{Ineq:conservationofRho}
\end{equation}
i.e., the rate of density change is zero in the neighborhood of any
selected representative point moving through phase space.

The type of thermodynamic ensemble is determined by the density
distribution. If the density distribution is a function of $q$ and $p$
only, then the rate of change of the phase space density in the
neighborhood of every representative point is zero,
\begin{equation}
\left( \frac{\partial \rho}{\partial t} \right)_{q, p} = 0,
\label{Ineq:statEquilibrium}
\end{equation}
and we may conclude that the ensemble is in {\it statistical
equilibrium}. An ensemble in statistical equilibrium means the system
is also in macroscopic equilibrium. If $\left( \frac{\partial
\rho}{\partial t} \right)_{q, p} = 0$, then
\begin{equation}
\sum_{i} \left(
\frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
= 0.
\label{Ineq:constantofMotion}
\end{equation}
If $\rho$ is a function only of some constant of the motion, $\rho$ is
independent of time. For a conservative system, the energy of the
system is one of the constants of the motion. There are many
thermodynamically relevant ensembles. When the density distribution is
constant everywhere in phase space,
\begin{equation}
\rho = \mathrm{const.},
\label{Ineq:uniformEnsemble}
\end{equation}
the ensemble is called the {\it uniform ensemble}, but this ensemble
has little relevance for physical chemistry, as it is an ensemble with
essentially infinite temperature.

\subsubsection{The Microcanonical Ensemble\label{In:sssec:microcanonical}}
The most useful ensemble for Molecular Dynamics is the {\it
microcanonical ensemble}, for which:
\begin{equation}
\rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)},
\label{Ineq:microcanonicalEnsemble}
\end{equation}
where $\Sigma(N, V, E)$ is a normalization constant parameterized by
$N$, the total number of particles, $V$, the total physical volume, and
$E$, the total energy. The physical meaning of $\Sigma(N, V, E)$ is
the phase space volume accessible to a microcanonical system with
energy $E$ evolving under Hamilton's equations. $H(q^N, p^N)$ is the
Hamiltonian of the system. The Gibbs entropy is defined as
\begin{equation}
S = - k_B \int d \vec q~^N \int d \vec p~^N \rho \ln [C^N \rho],
\label{Ineq:gibbsEntropy}
\end{equation}
where $k_B$ is the Boltzmann constant and $C^N$ is a number which
makes the argument of $\ln$ dimensionless. In this case, $C^N$ is the
total phase space volume of one state, which has the same units as
$\Sigma(N, V, E)$. The entropy of a microcanonical ensemble is given
by
\begin{equation}
S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
\label{Ineq:entropy}
\end{equation}

\subsubsection{The Canonical Ensemble\label{In:sssec:canonical}}
If the density distribution $\rho$ is given by
\begin{equation}
\rho = \frac{1}{Z_N}e^{-H(q^N, p^N) / k_B T},
\label{Ineq:canonicalEnsemble}
\end{equation}
the ensemble is known as the {\it canonical ensemble}. Here,
\begin{equation}
Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N e^{-H(q^N, p^N) / k_B T},
\label{Ineq:partitionFunction}
\end{equation}
which is also known as the canonical {\it partition
function}. $\Gamma$ indicates that the integral is over all phase
space. In the canonical ensemble, $N$, the total number of particles,
$V$, the total volume, and $T$, the temperature, are constants. The
systems with the lowest energies hold the largest populations. At
equilibrium, the entropy, $S$, is maximized, implying that
\begin{equation}
\begin{array}{ccc}
\delta S = 0 & \mathrm{and} & \delta^2 S < 0.
\end{array}
\label{Ineq:maximumPrinciple}
\end{equation}
From Eq.~\ref{Ineq:maximumPrinciple} and two constraints on the
canonical ensemble, {\it i.e.}, that total probability and average
energy must be conserved, the partition function can be shown to be
equivalent to
\begin{equation}
Z_N = e^{-A/k_B T},
\label{Ineq:partitionFunctionWithFreeEnergy}
\end{equation}
where $A$ is the Helmholtz free energy. The significance of
Eqs.~\ref{Ineq:entropy} and~\ref{Ineq:partitionFunctionWithFreeEnergy}
is that they serve as a connection between macroscopic properties of
the system and the distribution of microscopic states.

There is an implicit assumption on which our arguments have been based
so far: two representative points in phase space are equally likely to
be found if they have the same energy. In other words, all
energetically accessible states are represented, and the probability
of finding the system in any of the accessible states is equal to that
state's Boltzmann weight. This is called the principle of equal
{\it a priori} probabilities.

\subsection{Statistical Averages\label{In:ssec:average}}
Given a density distribution $\rho$ in phase space, the average of any
quantity ($F(q^N, p^N)$) which depends on the coordinates ($q^N$) and
the momenta ($p^N$) for all the systems in the ensemble can be
calculated from the definition shown in Eq.~\ref{Ineq:statAverage1},
\begin{equation}
\langle F(q^N, p^N) \rangle = \frac{\int d \vec q~^N \int d \vec p~^N
F(q^N, p^N) \rho}{\int d \vec q~^N \int d \vec p~^N \rho}.
\label{Ineq:statAverage1}
\end{equation}
Since the density distribution $\rho$ is normalized to unity, the mean
value of $F(q^N, p^N)$ simplifies to
\begin{equation}
\langle F(q^N, p^N) \rangle = \int d \vec q~^N \int d \vec p~^N F(q^N,
p^N) \rho,
\label{Ineq:statAverage2}
\end{equation}
called the {\it ensemble average}. However, in real experiments the
quantity is often averaged over a finite time,
\begin{equation}
\langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{t_0}^{t_0+T} F[q^N(t), p^N(t)] dt.
\label{Ineq:timeAverage1}
\end{equation}
Usually this time average is independent of $t_0$ in statistical
mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes
\begin{equation}
\langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{0}^{T} F[q^N(t), p^N(t)] dt,
\label{Ineq:timeAverage2}
\end{equation}
which in practice is evaluated over a finite time interval, $T$.

\subsubsection{Ergodicity\label{In:sssec:ergodicity}}
The {\it ergodic hypothesis}, an important hypothesis governing modern
computer simulations, states that the system will eventually pass
arbitrarily close to any energetically accessible point in phase
space. Mathematically, this leads to
\begin{equation}
\langle F(q^N, p^N) \rangle = \langle F(q^N, p^N) \rangle_t.
\label{Ineq:ergodicity}
\end{equation}
Eq.~\ref{Ineq:ergodicity} validates Molecular Dynamics as a form of
averaging for sufficiently ergodic systems. Likewise, Monte Carlo may
be used to obtain ensemble averages of a quantity that are related to
the time averages measured in experiments.
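As a toy numerical illustration of Eq.~\ref{Ineq:ergodicity} (a Python sketch, not part of this work; the two-state Markov chain and its transition probabilities are assumptions chosen for simplicity), the time average along one long trajectory of an ergodic system converges to the average over its stationary (ensemble) distribution:

```python
import random

random.seed(42)

# Ergodic two-state Markov chain with transition probabilities
#   P(0 -> 1) = 0.3 and P(1 -> 0) = 0.6.
# Its stationary distribution is pi = (2/3, 1/3), so <state> = 1/3.
p01, p10 = 0.3, 0.6

state, total, steps = 0, 0, 200_000
for _ in range(steps):
    if state == 0:
        state = 1 if random.random() < p01 else 0
    else:
        state = 0 if random.random() < p10 else 1
    total += state

time_avg = total / steps          # time average along one trajectory
ensemble_avg = p01 / (p01 + p10)  # stationary (ensemble) average = 1/3

print(abs(time_avg - ensemble_avg) < 0.02)  # the two averages agree
```

For a non-ergodic system (e.g., a chain with an unreachable state), the two averages would not agree, which is why the hypothesis must be checked in practice.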

\subsection{Correlation Functions\label{In:ssec:corr}}
Thermodynamic properties can be computed from equilibrium statistical
mechanics. {\it Time correlation functions}, on the other hand, are a
powerful tool for understanding the evolution of a dynamical
system. Imagine that a property $A(q^N, p^N, t)$, as a function of
coordinates $q^N$ and momenta $p^N$, has an initial value at $t_0$,
and that at a later time $t_0 + \tau$ this value has changed. If
$\tau$ is very small, the change in the value is minor, and the later
value of $A(q^N, p^N, t_0 + \tau)$ is correlated with its initial
value. However, when $\tau$ is large, this correlation is lost. A time
correlation function measures this relationship and is defined
by~\cite{Berne90}
\begin{equation}
C_{AA}(\tau) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarrow
\infty}
\frac{1}{T} \int_{0}^{T} dt A(t) A(t + \tau).
\label{Ineq:autocorrelationFunction}
\end{equation}
Eq.~\ref{Ineq:autocorrelationFunction} is the correlation function of
a single variable, called an {\it autocorrelation function}. The
definition of the correlation function for two different variables is
similar to that of the autocorrelation function:
\begin{equation}
C_{AB}(\tau) = \langle A(0)B(\tau) \rangle = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
\label{Ineq:crosscorrelationFunction}
\end{equation}
called a {\it cross correlation function}.

We know from the ergodic hypothesis that there is a relationship
between the time average and the ensemble average. We can therefore
put the correlation functions in classical mechanical form,
\begin{equation}
C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N, p^N]
A[q^N(\tau), p^N(\tau)] \rho(q^N, p^N),
\label{Ineq:autocorrelationFunctionCM}
\end{equation}
where $q^N(\tau)$, $p^N(\tau)$ is the phase space point that follows
the classical evolution of the point $q^N$, $p^N$ after a time $\tau$
has elapsed, and
\begin{equation}
C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N, p^N]
B[q^N(\tau), p^N(\tau)] \rho(q^N, p^N),
\label{Ineq:crosscorrelationFunctionCM}
\end{equation}
for the autocorrelation and cross correlation functions,
respectively. $\rho(q^N, p^N)$ is the equilibrium density distribution
in phase space. In many cases, correlation functions decay as a single
exponential in time,
\begin{equation}
C(t) \sim e^{-t / \tau_r},
\label{Ineq:relaxation}
\end{equation}
where $\tau_r$ is known as the relaxation time, which describes the
rate of the decay.
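A discrete-time estimator of the autocorrelation function in Eq.~\ref{Ineq:autocorrelationFunction} can be sketched as follows (a minimal Python illustration, assuming samples of $A(t)$ taken at uniform intervals; this is not the analysis code used in this dissertation):

```python
def autocorrelation(a, max_lag):
    """Estimate C_AA(tau) = <A(t) A(t + tau)> from uniformly sampled data."""
    n = len(a)
    return [
        sum(a[t] * a[t + lag] for t in range(n - lag)) / (n - lag)
        for lag in range(max_lag + 1)
    ]

# Sanity check: for a periodic signal the autocorrelation is periodic too,
# so the lag-0 and lag-4 values coincide for this period-4 signal.
signal = [1.0, 0.0, -1.0, 0.0] * 100
c = autocorrelation(signal, 4)
print(c[0], c[4])  # -> 0.5 0.5
```

For a decaying signal, fitting $\ln C_{AA}(\tau)$ against $\tau$ would give the relaxation time $\tau_r$ of Eq.~\ref{Ineq:relaxation}.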

\section{Methodology\label{In:sec:method}}
The simulations performed in this dissertation branch into two main
categories, Monte Carlo and Molecular Dynamics. There are two main
differences between Monte Carlo and Molecular Dynamics
simulations. One is that Monte Carlo simulations are time-independent
methods of sampling the structural features of an ensemble, while
Molecular Dynamics simulations provide dynamic
information. Additionally, Monte Carlo methods are stochastic
processes; the future configurations of the system are not determined
by its past. In Molecular Dynamics, however, the system is propagated
by Hamilton's equations of motion, and the trajectory of the system
evolving in phase space is deterministic. Brief introductions to the
two algorithms are given in sections~\ref{In:ssec:mc}
and~\ref{In:ssec:md}. Langevin Dynamics, an extension of Molecular
Dynamics that treats solvent effects implicitly, is introduced in
section~\ref{In:ssec:ld}.

\subsection{Monte Carlo\label{In:ssec:mc}}
A Monte Carlo integration algorithm was first introduced by Metropolis
{\it et al.}~\cite{Metropolis53} The basic Metropolis Monte Carlo
algorithm is usually applied to the canonical ensemble, a
Boltzmann-weighted ensemble, in which $N$, the total number of
particles, $V$, the total volume, and $T$, the temperature, are
constants. An average in this ensemble is given by
\begin{equation}
\langle A \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N
A(q^N, p^N) e^{-H(q^N, p^N) / k_B T}.
\label{Ineq:energyofCanonicalEnsemble}
\end{equation}
The Hamiltonian is the sum of the kinetic energy, $K(p^N)$, a function
of the momenta, and the potential energy, $U(q^N)$, a function of the
positions,
\begin{equation}
H(q^N, p^N) = K(p^N) + U(q^N).
\label{Ineq:hamiltonian}
\end{equation}
If the property $A$ is a function only of position ($A = A(q^N)$),
the mean value of $A$ reduces to
\begin{equation}
\langle A \rangle = \frac{\int d \vec q~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}}.
\label{Ineq:configurationIntegral}
\end{equation}
The kinetic energy $K(p^N)$ is factored out in
Eq.~\ref{Ineq:configurationIntegral}. $\langle A
\rangle$ is now a configuration integral, and
Eq.~\ref{Ineq:configurationIntegral} is equivalent to
\begin{equation}
\langle A \rangle = \int d \vec q~^N A \rho(q^N),
\label{Ineq:configurationAve}
\end{equation}
where $\rho(q^N)$ is the configurational probability
\begin{equation}
\rho(q^N) = \frac{e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}}.
\label{Ineq:configurationProb}
\end{equation}

In a Monte Carlo simulation of a system in the canonical ensemble, the
probability of the system being in a state $s$ is $\rho_s$, and the
change of this probability with time is given by
\begin{equation}
\frac{d \rho_s}{dt} = \sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ],
\label{Ineq:timeChangeofProb}
\end{equation}
where $w_{ss'}$ is the transition probability of going from state $s$
to state $s'$. Since $\rho_s$ is independent of time at equilibrium,
\begin{equation}
\frac{d \rho_{s}^{equilibrium}}{dt} = 0,
\label{Ineq:equiProb}
\end{equation}
the sum of transition probabilities $\sum_{s'} [ -w_{ss'}\rho_s +
w_{s's}\rho_{s'} ]$ is $0$. Imposing the stronger condition of
detailed balance, each term in the sum vanishes separately, so
\begin{equation}
\frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
\label{Ineq:relationshipofRhoandW}
\end{equation}
If the ratio of state populations is
\begin{equation}
\frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T},
\label{Ineq:satisfyofBoltzmannStatistics}
\end{equation}
then the ratio of transition probabilities is
\begin{equation}
\frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T}.
\label{Ineq:conditionforBoltzmannStatistics}
\end{equation}
An algorithm by which a Monte Carlo simulation generates transition
probabilities governed by
Eq.~\ref{Ineq:conditionforBoltzmannStatistics} is given schematically
as:
\begin{enumerate}
\item\label{Initm:oldEnergy} Choose a particle randomly, and calculate
the energy of the rest of the system due to the current location of
the particle.
\item\label{Initm:newEnergy} Make a random displacement of the
particle, and calculate the new energy taking the movement of the
particle into account.
\begin{itemize}
\item If the energy goes down, keep the new configuration.
\item If the energy goes up, pick a random number in $[0,1]$.
\begin{itemize}
\item If the random number is smaller than
$e^{-(U_{new} - U_{old}) / k_B T}$, keep the new configuration.
\item If the random number is larger than
$e^{-(U_{new} - U_{old}) / k_B T}$, keep the old configuration.
\end{itemize}
\end{itemize}
\item\label{Initm:accumulateAvg} Accumulate the averages based on the
current configuration.
\item Go to step~\ref{Initm:oldEnergy}.
\end{enumerate}
It is important for sampling accuracy that the old configuration is
sampled again if it is kept.
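The steps above can be sketched as a minimal Metropolis loop. This Python illustration samples a single coordinate in an assumed harmonic well at $k_B T = 1$ (the potential, step size, and run length are assumptions chosen for the sketch; this is not the simulation code used in this dissertation):

```python
import math
import random

random.seed(1)

kB_T = 1.0                       # work in reduced units where k_B T = 1
U = lambda q: 0.5 * q * q        # assumed potential: harmonic well

q, step, samples = 0.0, 1.0, []
for _ in range(100_000):
    q_new = q + random.uniform(-step, step)   # random trial displacement
    dU = U(q_new) - U(q)
    # Metropolis criterion: always accept downhill moves; accept uphill
    # moves with probability exp(-dU / kB_T).
    if dU <= 0 or random.random() < math.exp(-dU / kB_T):
        q = q_new
    samples.append(q * q)        # the old configuration is counted again
                                 # whenever the trial move is rejected

# For U = q^2/2 at k_B T = 1, the exact canonical average <q^2> is 1.
mean_q2 = sum(samples) / len(samples)
print(abs(mean_q2 - 1.0) < 0.1)
```

Note that the accumulator is updated on every iteration, rejected moves included, which is exactly the re-sampling requirement stated above.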

\subsection{Molecular Dynamics\label{In:ssec:md}}
Although some properties of the system can be calculated from
ensemble averages in Monte Carlo simulations, the absence of time
dependence makes it impossible to gain information about dynamic
properties from Monte Carlo simulations. Molecular Dynamics evolves
the positions and momenta of the particles in the system. The
evolution of the system obeys the laws of classical mechanics, and in
most situations, there is no need to account for quantum effects. In a
real experiment, the instantaneous positions and momenta of the
particles in the system are often neither important nor measurable;
the observable quantities are usually average values over a finite
time interval. These quantities are expressed as functions of
positions and momenta in Molecular Dynamics simulations. For example,
the temperature of the system is defined by
\begin{equation}
\frac{3}{2} N k_B T = \left\langle \sum_{i=1}^N \frac{1}{2} m_i v_i^2 \right\rangle,
\label{Ineq:temperature}
\end{equation}
where $m_i$ is the mass of particle $i$ and $v_i$ is the velocity of
particle $i$. The right side of Eq.~\ref{Ineq:temperature} is the
average kinetic energy of the system.
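Eq.~\ref{Ineq:temperature} translates directly into a kinetic-temperature estimator. The following Python sketch works in reduced units with $k_B = 1$; the masses and velocities are placeholder values, not data from any simulation in this work:

```python
kB = 1.0  # reduced units

def kinetic_temperature(masses, velocities):
    """Solve (3/2) N kB T = sum_i m_i v_i^2 / 2 for T.

    velocities: list of (vx, vy, vz) tuples, one per particle.
    """
    n = len(masses)
    kinetic = sum(
        0.5 * m * (vx * vx + vy * vy + vz * vz)
        for m, (vx, vy, vz) in zip(masses, velocities)
    )
    return 2.0 * kinetic / (3.0 * n * kB)

# Two particles: kinetic energy = 0.5 + 2.0 = 2.5, so T = 2(2.5)/(3*2) = 5/6.
masses = [1.0, 1.0]
velocities = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(kinetic_temperature(masses, velocities))  # -> 0.8333...
```

In a real simulation this instantaneous value fluctuates, and the temperature of Eq.~\ref{Ineq:temperature} is its average over the trajectory.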

The initial positions of the particles are chosen so that there is no
overlap of the particles. The initial velocities are first distributed
randomly to the particles using a Maxwell-Boltzmann distribution, and
then shifted to make the total linear momentum of the system $0$.
627
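The velocity-initialization step above can be sketched as follows. This is an illustrative example, not code from this dissertation; it assumes reduced units in which each Cartesian velocity component of particle $i$ is Gaussian with variance $k_B T / m_i$.

```python
import random

def init_velocities(masses, kT, seed=0):
    """Draw 3D velocities from a Maxwell-Boltzmann distribution, then
    shift them so the total linear momentum is zero."""
    rng = random.Random(seed)
    vels = [[rng.gauss(0.0, (kT / m) ** 0.5) for _ in range(3)]
            for m in masses]
    m_tot = sum(masses)
    # Total momentum in each Cartesian direction.
    p = [sum(m * v[k] for m, v in zip(masses, vels)) for k in range(3)]
    # Subtract the center-of-mass velocity from every particle.
    for v in vels:
        for k in range(3):
            v[k] -= p[k] / m_tot
    return vels
```

After the shift, the system has no overall drift, so all kinetic energy resides in internal motion.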
The core of a Molecular Dynamics simulation is the calculation of
forces and the integration algorithm. Calculating the forces often
involves enormous effort, and it is the most time-consuming step in the
Molecular Dynamics scheme. Evaluation of the forces is mathematically
simple,
\begin{equation}
f(q) = - \frac{\partial U(q)}{\partial q},
\label{Ineq:force}
\end{equation}
where $U(q)$ is the potential of the system. However, the numerical
details of this computation are often quite complex. Once the forces
are computed, the positions and velocities are updated by integrating
Hamilton's equations of motion,
\begin{eqnarray}
\dot p_i & = & -\frac{\partial H}{\partial q_i} = -\frac{\partial
U(q_i)}{\partial q_i} = f(q_i) \\
\dot q_i & = & \frac{p_i}{m_i}.
\label{Ineq:newton}
\end{eqnarray}
The classic example of an integration algorithm is the Verlet
algorithm, one of the simplest schemes for integrating the
equations of motion. The Taylor expansion of the position about time $t$
is
\begin{equation}
q(t+\Delta t)= q(t) + \frac{p(t)}{m} \Delta t + \frac{f(t)}{2m}\Delta t^2 +
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4)
\label{Ineq:verletFuture}
\end{equation}
for a later time $t+\Delta t$, and
\begin{equation}
q(t-\Delta t)= q(t) - \frac{p(t)}{m} \Delta t + \frac{f(t)}{2m}\Delta t^2 -
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4) ,
\label{Ineq:verletPrevious}
\end{equation}
for an earlier time $t-\Delta t$. Adding Eq.~\ref{Ineq:verletFuture}
and~\ref{Ineq:verletPrevious} gives
\begin{equation}
q(t+\Delta t)+q(t-\Delta t) =
2q(t) + \frac{f(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4),
\label{Ineq:verletSum}
\end{equation}
so the new position can be expressed as
\begin{equation}
q(t+\Delta t) \approx
2q(t) - q(t-\Delta t) + \frac{f(t)}{m}\Delta t^2.
\label{Ineq:newPosition}
\end{equation}
The odd-order terms in $\Delta t$ cancel upon addition, so the
truncation error of each step is of order $\Delta t^4$.

Numerous techniques and tricks have been applied to Molecular Dynamics
simulations to gain greater efficiency or accuracy. The engine used
for the Molecular Dynamics simulations in this dissertation is {\sc
oopse}. More details of the algorithms and techniques used in this
code can be found in Ref.~\cite{Meineke2005}.

\subsection{Langevin Dynamics\label{In:ssec:ld}}
In many cases, the properties of the solvent (like the water in the
lipid-water system studied in this dissertation) are less interesting
to researchers than the behavior of the solute. However, the major
computational expense is often the solvent-solvent interactions,
because of the large number of solvent molecules compared to the
number of solute molecules. The ratio of the number of lipid
molecules to the number of water molecules is $1:25$ in our
lipid-water system. The efficiency of the Molecular Dynamics
simulations is greatly reduced, with up to 85\% of CPU time spent
calculating only water-water interactions.

As an extension of the Molecular Dynamics methodology, Langevin
Dynamics seeks a way to avoid integrating the equations of motion for
the solvent particles without losing the solvent effects on the solute
particles. One common approximation is to express the coupling of the
solute and solvent as a set of harmonic oscillators. The Hamiltonian
of such a system is written as
\begin{equation}
H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
\label{Ineq:hamiltonianofCoupling}
\end{equation}
where $H_B$ is the Hamiltonian of the bath, a set of $N$
harmonic oscillators,
\begin{equation}
H_B = \sum_{\alpha = 1}^{N} \left\{ \frac{p_\alpha^2}{2m_\alpha} +
\frac{1}{2} m_\alpha \omega_\alpha^2 q_\alpha^2\right\},
\label{Ineq:hamiltonianofBath}
\end{equation}
$\alpha$ runs over all the degrees of freedom of the bath,
$\omega_\alpha$ is the frequency of bath oscillator $\alpha$, and
$\Delta U(q)$ is the bilinear coupling given by
\begin{equation}
\Delta U = -\sum_{\alpha = 1}^{N} g_\alpha q_\alpha q,
\label{Ineq:systemBathCoupling}
\end{equation}
where $g_\alpha$ is the coupling constant for oscillator $\alpha$. By
solving Hamilton's equations of motion, the {\it Generalized
Langevin Equation} for this system is derived as
\begin{equation}
m \ddot q = -\frac{\partial W(q)}{\partial q} - \int_0^t \xi(t') \dot q(t-t')dt' + R(t),
\label{Ineq:gle}
\end{equation}
with the potential of mean force,
\begin{equation}
W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m_\alpha
\omega_\alpha^2}q^2.
\label{Ineq:meanForce}
\end{equation}
The {\it friction kernel}, $\xi(t)$, depends only on the bath
frequencies and coupling constants,
\begin{equation}
\xi(t) = \sum_{\alpha = 1}^N \frac{g_\alpha^2}{m_\alpha
\omega_\alpha^2} \cos(\omega_\alpha t),
\label{Ineq:xiforGLE}
\end{equation}
and the ``random'' force,
\begin{equation}
R(t) = \sum_{\alpha = 1}^N \left[ g_\alpha \left( q_\alpha(0)-\frac{g_\alpha}{m_\alpha
\omega_\alpha^2}q(0)\right) \cos(\omega_\alpha t) + \frac{g_\alpha \dot
q_\alpha(0)}{\omega_\alpha} \sin(\omega_\alpha t) \right],
\label{Ineq:randomForceforGLE}
\end{equation}
depends only on the initial conditions. The relationship between the
friction kernel $\xi(t)$ and the random force $R(t)$ is given by the
second fluctuation-dissipation theorem,
\begin{equation}
\xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle.
\label{Ineq:relationshipofXiandR}
\end{equation}
For the harmonic bath this relation is exact and provable from the
definitions of these quantities. In the limit of static friction,
\begin{equation}
\xi(t) = 2 \xi_0 \delta(t).
\label{Ineq:xiofStaticFriction}
\end{equation}
Substituting Eq.~\ref{Ineq:xiofStaticFriction} for $\xi(t)$ in
Eq.~\ref{Ineq:gle} yields the {\it Langevin Equation},
\begin{equation}
m \ddot q = -\frac{\partial U(q)}{\partial q} - \xi \dot q(t) + R(t).
\label{Ineq:langevinEquation}
\end{equation}
Application of the Langevin Equation to dynamic simulations is
discussed in Ch.~\ref{chap:ld}.
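As a minimal illustration of how Eq.~\ref{Ineq:langevinEquation} can be integrated numerically, the sketch below takes one Euler-Maruyama step in one dimension. This is not the integrator used in this dissertation, merely a simple scheme consistent with the static-friction fluctuation-dissipation relation $\langle R(t)R(0)\rangle = 2\xi k_B T\,\delta(t)$; the function name and reduced units are assumptions of the example.

```python
import math
import random

def langevin_step(q, p, force, m, xi, kT, dt, rng):
    """One Euler-Maruyama step of m q'' = -dU/dq - xi q' + R(t).
    The random kick has variance 2 xi kT dt, the discrete analogue of
    <R(t)R(0)> = 2 xi kT delta(t)."""
    kick = math.sqrt(2.0 * xi * kT * dt) * rng.gauss(0.0, 1.0)
    # Deterministic force, friction drag, and stochastic kick.
    p = p + (force(q) - xi * p / m) * dt + kick
    q = q + (p / m) * dt
    return q, p
```

Iterating this step on a harmonic potential should equilibrate the particle to the bath temperature, with average kinetic energy near $k_B T / 2$ per degree of freedom.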