Comparing trunk/xDissertation/Introduction.tex (file contents):
Revision 3336 by xsun, Wed Jan 30 16:01:02 2008 UTC vs.
Revision 3374 by xsun, Thu Mar 20 22:21:43 2008 UTC

\chapter{\label{chap:intro}INTRODUCTION AND BACKGROUND}

\section{Background on the Problem\label{In:sec:pro}}
Phospholipid molecules are the primary topic of this dissertation
because of their critical role as the foundation of biological
membranes. The chemical structure of a phospholipid includes a polar
head group, which arises from the large charge separation between the
phosphate and the amino alcohol, and nonpolar tails consisting of
fatty acid chains. Depending on the alcohol to which the phosphate and
fatty acid chains are esterified, phospholipids are divided into
glycerophospholipids and sphingophospholipids.~\cite{Cevc80} The
chemical structures are shown in figure~\ref{Infig:lipid}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/inLipid.pdf}
\caption{The chemical structure of glycerophospholipids (left) and
sphingophospholipids (right).~\cite{Cevc80}}
\label{Infig:lipid}
\end{figure}
The glycerophospholipid is the dominant phospholipid in membranes. The
type of glycerophospholipid depends on the X group and on the
chains. For example, if X is choline, the lipids are known as
phosphatidylcholine (PC), and if X is ethanolamine, they are known as
phosphatidylethanolamine (PE). Table~\ref{Intab:pc} lists several
types of phosphatidylcholine with different fatty acids as the lipid
chains.
\begin{table*}
\begin{minipage}{\linewidth}
\begin{center}
\caption{Several types of phosphatidylcholine.}
\begin{tabular}{lll}
\hline
 & Fatty acid & Generic Name \\
\hline
\textcolor{red}{DMPC} & Myristic: CH$_3$(CH$_2$)$_{12}$COOH &
\textcolor{red}{D}i\textcolor{red}{M}yristoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
\textcolor{red}{DPPC} & Palmitic: CH$_3$(CH$_2$)$_{14}$COOH &
\textcolor{red}{D}i\textcolor{red}{P}almitoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
\textcolor{red}{DSPC} & Stearic: CH$_3$(CH$_2$)$_{16}$COOH &
\textcolor{red}{D}i\textcolor{red}{S}tearoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
\hline
\end{tabular}
\label{Intab:pc}
\end{center}
\end{minipage}
\end{table*}
When dispersed in water, lipids self-assemble into a number of
topologically distinct bilayer structures. The phase behavior of lipid
bilayers has been explored experimentally,~\cite{Cevc80} but a
complete understanding of the mechanism and driving forces behind the
various phases has not been achieved.

\subsection{Ripple Phase\label{In:ssec:ripple}}
The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named for the
periodic buckling of the membrane, is an intermediate phase which
develops either from heating the gel phase $L_{\beta'}$ or from
cooling the fluid phase $L_\alpha$. A sketch is shown in
figure~\ref{Infig:phaseDiagram}.~\cite{Cevc80}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/inPhaseDiagram.pdf}
\caption{A phase diagram of a lipid bilayer. With increasing
temperature, the bilayer can go through the gel ($L_{\beta'}$), ripple
($P_{\beta'}$), and fluid ($L_\alpha$) phase transitions.~\cite{Cevc80}}
\label{Infig:phaseDiagram}
\end{figure}
Most structural information about the ripple phase has been obtained
by X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
electron microscopy (FFEM).~\cite{Copeland80,Meyer96} The X-ray
diffraction work by Katsaras {\it et al.} showed that a rich phase
diagram exhibiting both {\it asymmetric} and {\it symmetric} ripples
is possible for lecithin bilayers.~\cite{Katsaras00} Recently,
Kaasgaard {\it et al.} used atomic force microscopy (AFM) to observe
ripple phase morphology in bilayers supported on
mica.~\cite{Kaasgaard03}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/inRipple.pdf}
\caption{The experimentally observed ripple phase. The top image was
obtained by Sun {\it et al.} using X-ray diffraction,~\cite{Sun96}
and the bottom one was observed by Kaasgaard {\it et al.} using
AFM.~\cite{Kaasgaard03}}
\label{Infig:ripple}
\end{figure}
Figure~\ref{Infig:ripple} shows the ripple phase observed by X-ray
diffraction and AFM. The experimental results provide strong support
for a 2-dimensional triangular packing lattice of the lipid molecules
within the ripple phase. This is a notable change from the observed
lipid packing within the gel phase,~\cite{Cevc80} although Tenchov
{\it et al.} have recently observed near-hexagonal packing in some
phosphatidylcholine (PC) gel phases.~\cite{Tenchov2001} However, the
physical mechanism for the formation of the ripple phase has never
been explained, and the microscopic structure of the ripple phase has
never been elucidated by experiments. Computational simulation is an
ideal tool for studying the microscopic properties of a
system. However, the large length scale of the ripple structure and
the long time scale of ripple formation are crucial obstacles to
performing such simulations. The principal ideas explored in this
dissertation are attempts to break the computational task up by
\begin{itemize}
\item Simplifying the lipid model.
\item Improving the algorithms for integrating the equations of motion.
\end{itemize}
In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
dipole spin model and a coarse-grained molecular scale model to
perform Monte Carlo and Molecular Dynamics simulations, respectively,
and in chapter~\ref{chap:ld}, we develop a Langevin Dynamics algorithm
which excludes the explicit solvent to improve the efficiency of the
simulations.

\subsection{Lattice Model\label{In:ssec:model}}
The gel-like characteristic (relatively immobile molecules) exhibited
by the ripple phase makes it feasible to apply a lattice model to
study the system. The popular two-dimensional lattice models, {\it
i.e.}, the Ising, $X$-$Y$, and Heisenberg models, show {\it
frustration} on triangular lattices. The Hamiltonians of these models
are given by
\begin{equation}
H =
 \begin{cases}
   -J \sum_n \sum_{n'} s_n s_{n'} & \text{Ising}, \\
   -J \sum_n \sum_{n'} \vec s_n \cdot \vec s_{n'} & \text{$X$-$Y$ and
Heisenberg},
 \end{cases}
\end{equation}
where $J$ is nonzero only when the spins $s_n$ ($\vec s_n$) and
$s_{n'}$ ($\vec s_{n'}$) are nearest neighbors. When $J > 0$, spins
prefer an aligned structure, and when $J < 0$, spins prefer an
anti-aligned structure.
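Frustration for $J < 0$ can be made concrete by direct enumeration. The following short Python sketch (an illustrative aside, not part of any simulation code used in this dissertation) computes the energy of all eight Ising configurations on a single triangular plaquette and shows that six of them tie for the ground state, so no assignment satisfies all three antiferromagnetic bonds at once:

```python
from itertools import product

def ising_energy(spins, J=-1.0):
    """Energy of three Ising spins on one triangular plaquette.

    Every pair of the three sites is a nearest-neighbor bond, so
    H = -J * (s0*s1 + s1*s2 + s2*s0); J < 0 favors anti-alignment.
    """
    s0, s1, s2 = spins
    return -J * (s0 * s1 + s1 * s2 + s2 * s0)

# Enumerate all 2^3 spin configurations on the triangle.
energies = {s: ising_energy(s) for s in product([-1, 1], repeat=3)}
ground = min(energies.values())
degenerate = [s for s, e in energies.items() if e == ground]

# With J < 0, at least one of the three bonds is always unsatisfied,
# and six of the eight states tie for the minimum energy: frustration.
print(ground, len(degenerate))
```

The same counting argument extends plaquette by plaquette across the lattice, which is the origin of the macroscopic ground-state degeneracy described above.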

\begin{figure}
\centering
\includegraphics[width=3in]{./figures/inFrustration.pdf}
\caption{Frustration on a triangular lattice; the spins and dipoles
are represented by arrows. The multiple local minima of the energy
leave the spins and dipoles frustrated in choosing a direction.}
\label{Infig:frustration}
\end{figure}
The spins in figure~\ref{Infig:frustration} illustrate the frustration
for $J < 0$ on a triangular lattice. There are multiple local energy
minima that are independent of the direction of the spin at the top of
the triangle, and therefore infinitely many possibilities for packing
the spins. This induces what is known as ``complete regular
frustration,'' which leads to disordered low temperature phases. A
similar situation arises for dipoles on a hexagonal lattice, shown by
the dipoles in figure~\ref{Infig:frustration}. In this circumstance
the dipoles want to align, but due to long wavelength fluctuations at
low temperature the aligned state becomes unstable; vortices form,
resulting in multiple local energy minima. The dipole at the center of
the hexagonal lattice is frustrated.

The lack of translational degrees of freedom in lattice models
prevents their use in models of surface buckling. In
chapter~\ref{chap:mc}, a modified lattice model is introduced to
tackle this specific situation.

\section{Overview of Classical Statistical Mechanics\label{In:sec:SM}}
Statistical mechanics provides a way to calculate the macroscopic
properties of a system from the molecular interactions used in
computational simulations. This section serves as a brief introduction
to the key concepts of classical statistical mechanics used in this
dissertation. Tolman gives an excellent introduction to the principles
of statistical mechanics.~\cite{Tolman1979} A large part of
section~\ref{In:sec:SM} follows Tolman's notation.

\subsection{Ensembles\label{In:ssec:ensemble}}
In classical mechanics, the state of the system is completely
described by the positions and momenta of all particles. For an $N$
particle system, there are $6N$ coordinates ($3N$ positions $(q_1,
q_2, \ldots, q_{3N})$ and $3N$ momenta $(p_1, p_2, \ldots, p_{3N})$)
defining the instantaneous state of the system. Each single set of the
$6N$ coordinates can be considered as a unique point in a $6N$
dimensional space in which each perpendicular axis is one of
$q_{i\alpha}$ or $p_{i\alpha}$ ($i$ is the particle and $\alpha$ is
the spatial axis). This $6N$ dimensional space is known as phase
space. The instantaneous state of the system is a single point in
phase space, and a trajectory is a connected path of points in phase
space. An ensemble is a collection of systems described by the same
macroscopic observables but distributed over different microscopic
states. In phase space, an ensemble is a collection of representative
points. A density distribution $\rho(q^N, p^N)$ of the representative
points in phase space describes the condition of an ensemble of
identical systems. Since this density may change with time, it is also
a function of time: $\rho(q^N, p^N, t)$ describes the ensemble at a
time $t$, and $\rho(q^N, p^N, t')$ describes the ensemble at a later
time $t'$. For convenience, we will write $\rho$ instead of
$\rho(q^N, p^N, t)$ in the following discussion. If we normalize
$\rho$ to unity,
\begin{equation}
1 = \int d \vec q~^N \int d \vec p~^N \rho,
\label{Ineq:normalized}
\end{equation}
then the value of $\rho$ gives the probability of finding the system
in a unit volume of phase space.

Liouville's theorem describes the change in the density $\rho$ with
time. The number of representative points in a given volume of phase
space at any instant can be written as
\begin{equation}
\label{Ineq:deltaN}
\delta N = \rho~\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
To calculate the change in the number of representative points in this
volume, let us first consider a simple case: the change in the number
of representative points along the $q_1$ axis. The rate at which
representative points enter the volume at $q_1$ per unit time is
\begin{equation}
\label{Ineq:deltaNatq1}
\rho~\dot q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
\end{equation}
and the rate at which representative points leave the volume at
$q_1 + \delta q_1$ is
\begin{equation}
\label{Ineq:deltaNatq1plusdeltaq1}
\left( \rho + \frac{\partial \rho}{\partial q_1} \delta q_1 \right)\left(\dot q_1 +
\frac{\partial \dot q_1}{\partial q_1} \delta q_1 \right)\delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
Neglecting higher order differentials, the change in the number of
representative points is the difference between
eq.~\ref{Ineq:deltaNatq1} and eq.~\ref{Ineq:deltaNatq1plusdeltaq1},
which gives us
\begin{equation}
\label{Ineq:deltaNatq1axis}
-\left(\rho \frac{\partial {\dot q_1}}{\partial q_1} + \frac{\partial {\rho}}{\partial q_1} \dot q_1 \right)\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
If we sum over all the axes of phase space, we obtain the change with
time in the number of representative points in a given volume:
\begin{equation}
\label{Ineq:deltaNatGivenVolume}
\frac{d(\delta N)}{dt} = -\sum_{i=1}^N \left[\rho \left(\frac{\partial
{\dot q_i}}{\partial q_i} + \frac{\partial
{\dot p_i}}{\partial p_i}\right) + \left( \frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i\right)\right]\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
From Hamilton's equations of motion,
\begin{equation}
\frac{\partial {\dot q_i}}{\partial q_i} = - \frac{\partial
{\dot p_i}}{\partial p_i},
\label{Ineq:canonicalFormOfEquationOfMotion}
\end{equation}
this cancels out the first term on the right side of
eq.~\ref{Ineq:deltaNatGivenVolume}. If both sides of
eq.~\ref{Ineq:deltaNatGivenVolume} are divided by $\delta q_1 \delta q_2
\ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N$, then we
arrive at Liouville's theorem:
\begin{equation}
\left( \frac{\partial \rho}{\partial t} \right)_{q, p} = -\sum_{i} \left(
\frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right).
\label{Ineq:simpleFormofLiouville}
\end{equation}
This is the basis of statistical mechanics. If we move the right
side of equation~\ref{Ineq:simpleFormofLiouville} to the left, we
obtain
\begin{equation}
\left( \frac{\partial \rho}{\partial t} \right)_{q, p} + \sum_{i} \left(
\frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
= 0.
\label{Ineq:anotherFormofLiouville}
\end{equation}
The left side of equation~\ref{Ineq:anotherFormofLiouville} is the
total derivative of $\rho$ with respect to $t$, which means
\begin{equation}
\frac{d \rho}{dt} = 0,
\label{Ineq:conservationofRho}
\end{equation}
and the rate of density change is zero in the neighborhood of any
selected moving representative point in phase space.
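As a brief check of eq.~\ref{Ineq:conservationofRho}, consider a
one-dimensional harmonic oscillator with $H = p^2/2m + m\omega^2
q^2/2$. Hamilton's equations give
\begin{displaymath}
\dot q = \frac{\partial H}{\partial p} = \frac{p}{m}, \qquad
\dot p = -\frac{\partial H}{\partial q} = -m\omega^2 q,
\end{displaymath}
so that $\partial \dot q/\partial q + \partial \dot p/\partial p = 0$:
the flow in phase space is incompressible, and the density $\rho$ is
carried unchanged along the elliptical trajectories.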

The condition of the ensemble is determined by the density
distribution. If the density distribution is a function of $q$ and $p$
only, then the rate of change of the phase space density in the
neighborhood of every representative point in phase space is zero,
\begin{equation}
\left( \frac{\partial \rho}{\partial t} \right)_{q, p} = 0,
\label{Ineq:statEquilibrium}
\end{equation}
and we may conclude that the ensemble is in {\it statistical
equilibrium}. An ensemble in statistical equilibrium means the system
is also in macroscopic equilibrium. If $\left( \frac{\partial
\rho}{\partial t} \right)_{q, p} = 0$, then
\begin{equation}
\sum_{i} \left(
\frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right)
= 0.
\label{Ineq:constantofMotion}
\end{equation}
If $\rho$ is a function only of some constant of the motion, then
$\rho$ is independent of time. For a conservative system, the energy
of the system is one such constant of the motion. There are many
thermodynamically relevant ensembles. When the density distribution is
constant everywhere in phase space,
\begin{equation}
\rho = \mathrm{const.},
\label{Ineq:uniformEnsemble}
\end{equation}
the ensemble is called the {\it uniform ensemble}.

\subsubsection{The Microcanonical Ensemble\label{In:sssec:microcanonical}}
Another useful ensemble is the {\it microcanonical ensemble}, for
which
\begin{equation}
\rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)},
\label{Ineq:microcanonicalEnsemble}
\end{equation}
where $\Sigma(N, V, E)$ is a normalization constant parameterized by
$N$, the total number of particles, $V$, the total physical volume,
and $E$, the total energy. The physical meaning of $\Sigma(N, V, E)$
is the phase space volume accessible to a microcanonical system with
energy $E$ evolving under Hamilton's equations. $H(q^N, p^N)$ is the
Hamiltonian of the system. The Gibbs entropy is defined as
\begin{equation}
S = - k_B \int d \vec q~^N \int d \vec p~^N \rho \ln [C^N \rho],
\label{Ineq:gibbsEntropy}
\end{equation}
where $k_B$ is the Boltzmann constant and $C^N$ is a number which
makes the argument of $\ln$ dimensionless. In this case, $C^N$ is the
total phase space volume of one state. The entropy of a microcanonical
ensemble is given by
\begin{equation}
S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
\label{Ineq:entropy}
\end{equation}

\subsubsection{The Canonical Ensemble\label{In:sssec:canonical}}
If the density distribution $\rho$ is given by
\begin{equation}
\rho = \frac{1}{Z_N}e^{-H(q^N, p^N) / k_B T},
\label{Ineq:canonicalEnsemble}
\end{equation}
the ensemble is known as the {\it canonical ensemble}. Here,
\begin{equation}
Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N  e^{-H(q^N, p^N) / k_B T},
\label{Ineq:partitionFunction}
\end{equation}
which is known as the canonical {\it partition function}; $\Gamma$
indicates that the integral is over all of phase space. In the
canonical ensemble, $N$, the total number of particles, $V$, the total
volume, and $T$, the temperature, are constants. The systems with the
lowest energies hold the largest populations. According to the maximum
entropy principle, equilibrium thermodynamics maximizes the entropy
$S$, implying that
\begin{equation}
\begin{array}{ccc}
\delta S = 0 & \mathrm{and} & \delta^2 S < 0.
\end{array}
\label{Ineq:maximumPrinciple}
\end{equation}
From Eq.~\ref{Ineq:maximumPrinciple} and two constraints on the
canonical ensemble, {\it i.e.}, that the total probability and the
average energy must be conserved, the partition function can be shown
to be equivalent to
\begin{equation}
Z_N = e^{-A/k_B T},
\label{Ineq:partitionFunctionWithFreeEnergy}
\end{equation}
where $A$ is the Helmholtz free energy. The significance of
Eqs.~\ref{Ineq:entropy} and~\ref{Ineq:partitionFunctionWithFreeEnergy}
is that they serve as a connection between the macroscopic properties
of the system and the distribution of microscopic states.

There is an implicit assumption on which our arguments so far are
based: a representative point in phase space is equally likely to be
found in any energetically allowed region. In other words, all
energetically accessible states are represented equally; the
probabilities of finding the system in any of the accessible states
are equal. This is called the principle of equal {\it a priori}
probabilities.

\subsection{Statistical Averages\label{In:ssec:average}}
Given a density distribution $\rho$ in phase space, the average of any
quantity $F(q^N, p^N)$ which depends on the coordinates ($q^N$) and
the momenta ($p^N$) over all the systems in the ensemble can be
calculated from the definition shown in Eq.~\ref{Ineq:statAverage1},
\begin{equation}
\langle F(q^N, p^N) \rangle = \frac{\int d \vec q~^N \int d \vec p~^N
F(q^N, p^N) \rho}{\int d \vec q~^N \int d \vec p~^N \rho}.
\label{Ineq:statAverage1}
\end{equation}
Since the density distribution $\rho$ is normalized to unity, the mean
value of $F(q^N, p^N)$ simplifies to
\begin{equation}
\langle F(q^N, p^N) \rangle = \int d \vec q~^N \int d \vec p~^N F(q^N,
p^N) \rho,
\label{Ineq:statAverage2}
\end{equation}
called the {\it ensemble average}. In real experiments, however, a
quantity is usually averaged over a finite time,
\begin{equation}
\langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{t_0}^{t_0+T} F[q^N(t), p^N(t)] dt.
\label{Ineq:timeAverage1}
\end{equation}
Usually this time average is independent of $t_0$ in statistical
mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes
\begin{equation}
\langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{0}^{T} F[q^N(t), p^N(t)] dt
\label{Ineq:timeAverage2}
\end{equation}
for an infinite time interval.

\subsubsection{Ergodicity\label{In:sssec:ergodicity}}
The {\it ergodic hypothesis}, an important hypothesis governing modern
computer simulations, states that the system will eventually pass
arbitrarily close to any energetically accessible point in phase
space. Mathematically, this leads to
\begin{equation}
\langle F(q^N, p^N) \rangle = \langle F(q^N, p^N) \rangle_t.
\label{Ineq:ergodicity}
\end{equation}
Eq.~\ref{Ineq:ergodicity} validates Molecular Dynamics as a form of
averaging for sufficiently ergodic systems. It also allows Monte Carlo
methods to obtain ensemble averages of a quantity that are related to
the time averages measured in experiments.

\subsection{Correlation Functions\label{In:ssec:corr}}
Thermodynamic properties can be computed by equilibrium statistical
mechanics. {\it Time correlation functions}, on the other hand, are a
powerful tool for understanding the evolution of a dynamical
system. Imagine that a property $A(q^N, p^N, t)$, as a function of the
coordinates $q^N$ and momenta $p^N$, has an initial value at $t_0$,
and that at a later time $t_0 + \tau$ this value has changed. If
$\tau$ is very small, the change in the value is minor, and the later
value of $A(q^N, p^N, t_0 + \tau)$ is correlated to its initial
value. However, when $\tau$ is large, this correlation is lost. A time
correlation function measures this relationship and is defined
by~\cite{Berne90}
\begin{equation}
C_{AA}(\tau) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarrow
\infty}
\frac{1}{T} \int_{0}^{T} dt A(t) A(t + \tau).
\label{Ineq:autocorrelationFunction}
\end{equation}
Eq.~\ref{Ineq:autocorrelationFunction} is the correlation function of
a single variable, called an {\it autocorrelation function}. The
definition of the correlation function for two different variables is
similar,
\begin{equation}
C_{AB}(\tau) = \langle A(0)B(\tau) \rangle = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
\label{Ineq:crosscorrelationFunction}
\end{equation}
and is called the {\it cross correlation function}.

We know from the ergodic hypothesis that there is a relationship
between the time average and the ensemble average. We can put the
correlation functions in classical mechanical form,
\begin{equation}
C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N(t), p^N(t)]
A[q^N(t+\tau), p^N(t+\tau)] \rho(q, p)
\label{Ineq:autocorrelationFunctionCM}
\end{equation}
and
\begin{equation}
C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N(t), p^N(t)]
B[q^N(t+\tau), p^N(t+\tau)] \rho(q, p),
\label{Ineq:crosscorrelationFunctionCM}
\end{equation}
as the autocorrelation and cross correlation functions, respectively,
where $\rho(q, p)$ is the equilibrium density distribution in phase
space. In many cases, correlation functions decay as a single
exponential in time,
\begin{equation}
C(t) \sim e^{-t / \tau_r},
\label{Ineq:relaxation}
\end{equation}
where $\tau_r$ is known as the relaxation time, which describes the
rate of the decay.
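As a numerical illustration of Eqs.~\ref{Ineq:autocorrelationFunction} and~\ref{Ineq:relaxation} (a minimal Python sketch, independent of the simulation codes used in this dissertation), the autocorrelation function of a synthetic sequence with known exponential memory can be estimated from a finite time series by averaging over time origins:

```python
import math
import random

def autocorrelation(a, tau):
    """Estimate C(tau) = <A(t) A(t + tau)> from a finite time series,
    replacing the infinite-time average by a sum over time origins."""
    n = len(a) - tau
    return sum(a[t] * a[t + tau] for t in range(n)) / n

# Synthetic data with known memory: an AR(1) sequence,
# x_{t+1} = c * x_t + noise, has C(tau)/C(0) = c**tau, i.e. an
# exponential decay exp(-tau / tau_r) with tau_r = -1 / ln(c).
random.seed(42)
c = 0.9
sigma = math.sqrt(1.0 - c * c)   # keeps the variance of x near 1
x = [random.gauss(0.0, 1.0)]
for _ in range(200_000):
    x.append(c * x[-1] + sigma * random.gauss(0.0, 1.0))

c0 = autocorrelation(x, 0)
c10 = autocorrelation(x, 10)
# The normalized correlation should lie close to c**10 (about 0.35).
print(c10 / c0)
```

Exactly the same estimator, applied to a quantity recorded along a Molecular Dynamics trajectory, yields the relaxation time $\tau_r$ by fitting the decay of $C(\tau)/C(0)$.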

\section{Methodology\label{In:sec:method}}
The simulations performed in this dissertation fall into two main
categories, Monte Carlo and Molecular Dynamics. There are two main
differences between Monte Carlo and Molecular Dynamics
simulations. One is that Monte Carlo simulations are time-independent
methods of sampling the structural features of an ensemble, while
Molecular Dynamics simulations provide dynamic
information. Additionally, Monte Carlo methods are stochastic
processes: the future configurations of the system are not determined
by its past. In Molecular Dynamics, however, the system is propagated
by Newton's equations of motion, and the trajectory of the system
evolving in phase space is deterministic. Brief introductions to the
two algorithms are given in sections~\ref{In:ssec:mc}
and~\ref{In:ssec:md}. Langevin Dynamics, an extension of Molecular
Dynamics that includes implicit solvent effects, is introduced in
section~\ref{In:ssec:ld}.

\subsection{Monte Carlo\label{In:ssec:mc}}
The Monte Carlo integration algorithm was first introduced by
Metropolis {\it et al.}~\cite{Metropolis53} The basic Metropolis Monte
Carlo algorithm is usually applied to the canonical ensemble, a
Boltzmann-weighted ensemble in which $N$, the total number of
particles, $V$, the total volume, and $T$, the temperature, are
constants. An average in this ensemble is given by
\begin{equation}
\langle A \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N
A(q^N, p^N) e^{-H(q^N, p^N) / k_B T}.
\label{Ineq:energyofCanonicalEnsemble}
\end{equation}
The Hamiltonian is the sum of the kinetic energy, $K(p^N)$, a function
of the momenta, and the potential energy, $U(q^N)$, a function of the
positions,
\begin{equation}
H(q^N, p^N) = K(p^N) + U(q^N).
\label{Ineq:hamiltonian}
\end{equation}
If the property $A$ is a function only of position ($A = A(q^N)$),
the mean value of $A$ can be reduced to
\begin{equation}
\langle A \rangle = \frac{\int d \vec q~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}};
\label{Ineq:configurationIntegral}
\end{equation}
the kinetic energy $K(p^N)$ has been factored out of
Eq.~\ref{Ineq:configurationIntegral}. $\langle A
\rangle$ is now a configuration integral, and
Eq.~\ref{Ineq:configurationIntegral} is equivalent to
\begin{equation}
\langle A \rangle = \int d \vec q~^N A \rho(q^N).
\label{Ineq:configurationAve}
\end{equation}

In a Monte Carlo simulation of a system in the canonical ensemble, the
probability of the system being in a state $s$ is $\rho_s$, and the
change of this probability with time is given by
\begin{equation}
\frac{d \rho_s}{dt} = \sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ],
\label{Ineq:timeChangeofProb}
\end{equation}
where $w_{ss'}$ is the transition probability of going from state $s$
to state $s'$. Since $\rho_s$ is independent of time at equilibrium,
\begin{equation}
\frac{d \rho_{s}^{equilibrium}}{dt} = 0,
\label{Ineq:equiProb}
\end{equation}
a sufficient condition ({\it detailed balance}) is that each term
$-w_{ss'}\rho_s + w_{s's}\rho_{s'}$ vanishes individually, so
\begin{equation}
\frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
\label{Ineq:relationshipofRhoandW}
\end{equation}
If the ratio of the state populations satisfies Boltzmann statistics,
\begin{equation}
\frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T},
\label{Ineq:satisfyofBoltzmannStatistics}
\end{equation}
then the ratio of the transition probabilities must be
\begin{equation}
\frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T}.
\label{Ineq:conditionforBoltzmannStatistics}
\end{equation}
An algorithm by which a Monte Carlo simulation generates transition
probabilities governed by
Eq.~\ref{Ineq:conditionforBoltzmannStatistics} is given schematically
as follows:
\begin{enumerate}
\item\label{Initm:oldEnergy} Choose a particle randomly, and calculate
the energy of its interaction with the rest of the system at its
current location.
\item\label{Initm:newEnergy} Make a random displacement of the
particle, and calculate the new energy, taking the movement of the
particle into account.
 \begin{itemize}
    \item If the energy goes down, keep the new configuration.
    \item If the energy goes up, pick a random number in $[0,1]$.
       \begin{itemize}
          \item If the random number is smaller than
$e^{-(U_{new} - U_{old}) / k_B T}$, keep the new configuration.
          \item If the random number is larger than
$e^{-(U_{new} - U_{old}) / k_B T}$, keep the old configuration.
       \end{itemize}
 \end{itemize}
\item\label{Initm:accumulateAvg} Accumulate the averages based on the
current configuration.
\item Go to step~\ref{Initm:oldEnergy}.
\end{enumerate}
For accurate sampling, it is important that the old configuration is
counted again in the averages whenever a trial move is rejected.
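The scheme above can be sketched in a few lines of Python. The following toy sampler (a hypothetical illustration; a one-dimensional harmonic potential stands in for a real lipid force field) implements exactly this acceptance rule and recovers the Boltzmann average $\langle q^2 \rangle = k_B T$ for $U(q) = q^2/2$:

```python
import math
import random

def metropolis(potential, q0, n_steps, beta=1.0, max_disp=1.0, seed=1):
    """Toy Metropolis Monte Carlo for a single coordinate q.

    Follows the scheme above: make a trial displacement, always accept
    downhill moves, accept uphill moves with probability
    exp(-beta * dU), and count the old configuration again whenever a
    trial move is rejected.
    """
    rng = random.Random(seed)
    q, u = q0, potential(q0)
    samples = []
    for _ in range(n_steps):
        q_new = q + rng.uniform(-max_disp, max_disp)
        u_new = potential(q_new)
        # Accept with probability min(1, exp(-beta * (U_new - U_old))).
        if u_new <= u or rng.random() < math.exp(-beta * (u_new - u)):
            q, u = q_new, u_new
        samples.append(q)  # rejected moves re-sample the old state
    return samples

# Harmonic potential U = q^2 / 2: at beta = 1 the Boltzmann
# distribution is a unit Gaussian, so <q^2> should approach 1.
samples = metropolis(lambda q: 0.5 * q * q, q0=0.0, n_steps=200_000)
mean_q2 = sum(q * q for q in samples) / len(samples)
print(mean_q2)
```

Note that no momenta appear anywhere in the sampler, which is the configurational-integral simplification of Eq.~\ref{Ineq:configurationAve} in action.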

\subsection{Molecular Dynamics\label{In:ssec:md}}
Although some properties of the system can be calculated from ensemble
averages in Monte Carlo simulations, the absence of time dependence
makes it impossible to gain information about dynamic properties from
them. Molecular Dynamics evolves the positions and momenta of the
particles in the system. The evolution of the system obeys the laws of
classical mechanics, and in most situations there is no need to
account for quantum effects. In a real experiment, the instantaneous
positions and momenta of the particles in the system are neither
important nor measurable; the observable quantities are usually
averages over a finite time interval. These quantities are expressed
as functions of the positions and momenta in Molecular Dynamics
simulations. For example, the temperature of the system is defined by
\begin{equation}
\frac{3}{2} N k_B T = \langle \sum_{i=1}^N \frac{1}{2} m_i v_i^2 \rangle,
\label{Ineq:temperature}
\end{equation}
where $m_i$ is the mass of particle $i$ and $v_i$ is the velocity of
particle $i$. The right side of Eq.~\ref{Ineq:temperature} is the
average kinetic energy of the system.
604 +
The initial positions of the particles are chosen so that no particles
overlap. The initial velocities are first assigned randomly from a
Maxwell-Boltzmann distribution, and then shifted to make the total
linear momentum of the system $0$.
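The initialization just described can be sketched as follows. This is a
generic illustration in reduced units (with $k_B = 1$), not the
initialization routine of any particular package; the particle number and
target temperature are arbitrary.

```python
import math
import random

def init_velocities(masses, kT, rng=random):
    """Maxwell-Boltzmann velocities with zero total momentum (k_B = 1)."""
    # Each Cartesian component is Gaussian with variance kT/m_i.
    v = [[rng.gauss(0.0, math.sqrt(kT / m)) for _ in range(3)] for m in masses]
    total_m = sum(masses)
    for k in range(3):
        # Shift every velocity component so the total momentum vanishes.
        p_k = sum(m * vi[k] for m, vi in zip(masses, v))
        for vi in v:
            vi[k] -= p_k / total_m
    return v

def temperature(masses, v):
    """Instantaneous T from 3/2 N k_B T = <sum 1/2 m_i v_i^2> (k_B = 1)."""
    ke = sum(0.5 * m * sum(c * c for c in vi) for m, vi in zip(masses, v))
    return 2.0 * ke / (3.0 * len(masses))

random.seed(1)
masses = [1.0] * 500
v = init_velocities(masses, kT=1.2)
# Total momentum is zero by construction; temperature(masses, v) is
# close to (but fluctuates around) the target value 1.2.
```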
610 +
The core of a Molecular Dynamics simulation is the calculation of the
forces and the integration algorithm. The force calculation usually
dominates the computational cost of the Molecular Dynamics
scheme. Evaluation of the forces is mathematically simple,
\begin{equation}
f(q) = - \frac{\partial U(q)}{\partial q},
\label{Ineq:force}
\end{equation}
where $U(q)$ is the potential of the system. However, the numerical
details of this computation are often quite complex. Once the forces
are computed, the positions and velocities are updated by integrating
Hamilton's equations of motion,
\begin{eqnarray}
\dot p_i & = & -\frac{\partial H}{\partial q_i} = -\frac{\partial
U(q_i)}{\partial q_i} = f(q_i) \\
\dot q_i & = & \frac{p_i}{m_i}
\label{Ineq:newton}
\end{eqnarray}
The classic example of an integration algorithm is the Verlet
algorithm, one of the simplest methods for integrating the equations
of motion. The Taylor expansion of the position at time $t$ is
\begin{equation}
q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 +
       \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
       \mathcal{O}(\Delta t^4)
\label{Ineq:verletFuture}
\end{equation}
for a later time $t+\Delta t$, and
\begin{equation}
q(t-\Delta t)= q(t) - v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 -
       \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
       \mathcal{O}(\Delta t^4) ,
\label{Ineq:verletPrevious}
\end{equation}
for a previous time $t-\Delta t$. Adding Eq.~\ref{Ineq:verletFuture}
and~\ref{Ineq:verletPrevious} gives
\begin{equation}
q(t+\Delta t)+q(t-\Delta t) =
       2q(t) + \frac{f(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4),
\label{Ineq:verletSum}
\end{equation}
so the new position can be expressed as
\begin{equation}
q(t+\Delta t) \approx
       2q(t) - q(t-\Delta t) + \frac{f(t)}{m}\Delta t^2.
\label{Ineq:newPosition}
\end{equation}
The higher order terms in $\Delta t$ are omitted.
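The Verlet recursion above translates directly into code. The sketch
below propagates a single coordinate; the harmonic force and time step
are hypothetical test inputs, chosen because the exact trajectory
($q(t)=\cos t$ for unit mass and frequency) is known.

```python
import math

def verlet(q, q_prev, force, m, dt, n_steps):
    """Position-Verlet recursion:
    q(t+dt) = 2 q(t) - q(t-dt) + f(t)/m * dt^2."""
    for _ in range(n_steps):
        q_next = 2.0 * q - q_prev + (force(q) / m) * dt**2
        q_prev, q = q, q_next
    return q_prev, q

# Harmonic oscillator test: U = q^2/2, so f(q) = -q.
f = lambda q: -q
dt = 1e-3
# Seed the two starting positions from the exact solution q(t) = cos(t).
q_prev, q = math.cos(0.0), math.cos(dt)
_, q_end = verlet(q, q_prev, f, m=1.0, dt=dt, n_steps=999)
# q_end approximates cos(1.0), with a global error of order dt^2.
```

Note that the recursion needs two previous positions rather than a
position and a velocity; velocities, when needed, are recovered by a
finite difference such as $v(t) \approx [q(t+\Delta t)-q(t-\Delta t)]/2\Delta t$.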
661 +
Numerous techniques and tricks have been applied to Molecular Dynamics
simulations to gain greater efficiency or accuracy. The engine used in
this dissertation for the Molecular Dynamics simulations is {\sc
oopse}. More details of the algorithms and techniques used in this
code can be found in Ref.~\cite{Meineke2005}.
667 +
\subsection{Langevin Dynamics\label{In:ssec:ld}}
In many cases, the properties of the solvent in a system, like the
water in the lipid-water system studied in this dissertation, are of
less interest to the researcher than the behavior of the
solute. However, the major computational expense is often the
solvent-solvent interaction, because the solvent molecules vastly
outnumber the solute molecules; the ratio of lipid molecules to water
molecules is $1:25$ in our lipid-water system. The efficiency of the
Molecular Dynamics simulations is therefore greatly reduced, with up
to 85\% of CPU time spent calculating only water-water interactions.
679 +
As an extension of the Molecular Dynamics methodologies, Langevin
Dynamics seeks a way to avoid integrating the equations of motion for
the solvent particles without losing the solvent effects on the solute
particles. One common approximation is to express the coupling of the
solute and solvent as a set of harmonic oscillators. The Hamiltonian
of such a system is written as
\begin{equation}
H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
\label{Ineq:hamiltonianofCoupling}
\end{equation}
where $H_B$ is the Hamiltonian of the bath, a set of $N$ harmonic
oscillators,
\begin{equation}
H_B = \sum_{\alpha = 1}^{N} \left\{ \frac{p_\alpha^2}{2m_\alpha} +
\frac{1}{2} m_\alpha \omega_\alpha^2 q_\alpha^2\right\},
\label{Ineq:hamiltonianofBath}
\end{equation}
where $\alpha$ runs over all the degrees of freedom of the bath,
$\omega_\alpha$ is the frequency of bath oscillator $\alpha$, and
$\Delta U(q)$ is the bilinear coupling given by
\begin{equation}
\Delta U = -\sum_{\alpha = 1}^{N} g_\alpha q_\alpha q,
\label{Ineq:systemBathCoupling}
\end{equation}
where $g_\alpha$ is the coupling constant for oscillator $\alpha$. By
solving Hamilton's equations of motion, the {\it Generalized Langevin
Equation} for this system is derived as
\begin{equation}
m \ddot q = -\frac{\partial W(q)}{\partial q} - \int_0^t \xi(t') \dot q(t-t')dt' + R(t),
\label{Ineq:gle}
\end{equation}
with the potential of mean force,
\begin{equation}
W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m_\alpha
\omega_\alpha^2}q^2.
\label{Ineq:meanForce}
\end{equation}
The {\it friction kernel}, $\xi(t)$, depends only on the bath
frequencies and coupling constants,
\begin{equation}
\xi(t) = \sum_{\alpha = 1}^N \frac{g_\alpha^2}{m_\alpha
\omega_\alpha^2} \cos(\omega_\alpha t),
\label{Ineq:xiforGLE}
\end{equation}
and the ``random'' force,
\begin{equation}
R(t) = \sum_{\alpha = 1}^N g_\alpha \left[ \left(
q_\alpha(0)-\frac{g_\alpha}{m_\alpha \omega_\alpha^2}q(0)\right)
\cos(\omega_\alpha t) + \frac{\dot q_\alpha(0)}{\omega_\alpha}
\sin(\omega_\alpha t) \right],
\label{Ineq:randomForceforGLE}
\end{equation}
depends only on the initial conditions of the bath. The relationship
between the friction kernel $\xi(t)$ and the random force $R(t)$ is
given by the second fluctuation dissipation theorem,
\begin{equation}
\xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle.
\label{Ineq:relationshipofXiandR}
\end{equation}
For a harmonic bath this relation is exact and provable from the
definitions of these quantities. In the limit of static friction,
\begin{equation}
\xi(t) = 2 \xi_0 \delta(t).
\label{Ineq:xiofStaticFriction}
\end{equation}
Substituting Eq.~\ref{Ineq:xiofStaticFriction} into
Eq.~\ref{Ineq:gle} yields the {\it Langevin Equation},
\begin{equation}
m \ddot q = -\frac{\partial U(q)}{\partial q} - \xi_0 \dot q(t) + R(t).
\label{Ineq:langevinEquation}
\end{equation}
Application of the Langevin Equation to dynamic simulations is
discussed in Ch.~\ref{chap:ld}.
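One common way to integrate the Langevin Equation numerically is a
splitting scheme that treats the friction and random force exactly over
each step. The sketch below uses such a splitting (often called BAOAB)
in reduced units with $k_B = 1$; it is a generic illustration under those
assumptions, not necessarily the algorithm developed in Ch.~\ref{chap:ld},
and the harmonic force is a hypothetical test case.

```python
import math
import random

def langevin_step(q, v, force, m, xi, kT, dt, rng=random):
    """One Langevin step, m dv = (f(q) - xi*v) dt + random kick, via a
    kick/drift/friction+noise/drift/kick splitting.  The noise amplitude
    follows the fluctuation-dissipation relation."""
    v += 0.5 * dt * force(q) / m          # half kick (systematic force)
    q += 0.5 * dt * v                     # half drift
    c = math.exp(-xi * dt / m)            # exact friction + random force
    v = c * v + math.sqrt(kT / m * (1.0 - c * c)) * rng.gauss(0.0, 1.0)
    q += 0.5 * dt * v                     # half drift
    v += 0.5 * dt * force(q) / m          # half kick
    return q, v

# Harmonic solute, U = q^2/2: equipartition gives <v^2> = kT/m in the
# long-time limit, which provides a simple consistency check.
random.seed(7)
f = lambda q: -q
q, v, acc, n = 0.0, 0.0, 0.0, 200000
for _ in range(n):
    q, v = langevin_step(q, v, f, m=1.0, xi=1.0, kT=1.0, dt=0.01)
    acc += v * v
# acc / n fluctuates around kT/m = 1.0
```

The friction constant $\xi$ plays the role of $\xi_0$ in the static
friction limit above; no solvent particles appear in the update at all,
which is the source of the method's efficiency.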
