\chapter{\label{chapt:intro}INTRODUCTION AND THEORETICAL BACKGROUND}

The techniques used in the course of this research fall under the two
main classes of molecular simulation: Molecular Dynamics and Monte
Carlo. Molecular Dynamics simulations integrate the equations of
motion for a given system of particles, allowing the researcher to
gain insight into the time dependent evolution of a system. Diffusion
phenomena are readily studied with this simulation technique, making
Molecular Dynamics the main simulation technique used in this
research. Other aspects of the research fall under the Monte Carlo
class of simulations. In Monte Carlo, the configuration space
available to the collection of particles is sampled stochastically,
or randomly. Each configuration is chosen with a probability given by
the Maxwell-Boltzmann distribution. These types of simulations are
best used to probe properties of a system that depend only on the
state of the system. Structural information about a system is most
readily obtained through these types of methods.

Although the two techniques employed seem dissimilar, they are both
linked by the overarching principles of Statistical
Mechanics. Statistical Mechanics governs the behavior of
both classes of simulations and dictates what each method can and
cannot do. When investigating a system, one must first analyze what
thermodynamic properties of the system are being probed, then choose
which method best suits that objective.

\section{\label{introSec:statThermo}Statistical Mechanics}

The following section serves as a brief introduction to some of the
Statistical Mechanics concepts present in this dissertation. What
follows is a brief derivation of Boltzmann weighted statistics and an
explanation of how one can use the information to calculate an
observable for a system. This section then concludes with a brief
discussion of the ergodic hypothesis and its relevance to this
research.

\subsection{\label{introSec:boltzman}Boltzmann weighted statistics}

Consider a system, $\gamma$, with total energy $E_{\gamma}$. Let
$\Omega(E_{\gamma})$ represent the number of degenerate ways
$\boldsymbol{\Gamma}\{r_1,r_2,\ldots r_n,p_1,p_2,\ldots p_n\}$, the
collection of positions and conjugate momenta of system $\gamma$, can
be configured to give $E_{\gamma}$. Further, if $\gamma$ is a subset
of a larger system, $\boldsymbol{\Lambda}\{E_1,E_2,\ldots
E_{\gamma},\ldots E_n\}$, the total degeneracy of the system can be
expressed as,
\begin{equation}
\Omega(\boldsymbol{\Lambda}) = \Omega(E_1) \times \Omega(E_2) \times \ldots
	\Omega(E_{\gamma}) \times \ldots \Omega(E_n)
\label{introEq:SM0.1}
\end{equation}
This multiplicative combination of degeneracies is illustrated in
Fig.~\ref{introFig:degenProd}.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{omegaFig.eps}
\caption[An explanation of the combination of degeneracies]{Systems A and B both have three energy levels and two indistinguishable particles. When the total energy is 2, there are two ways for each system to disperse the energy. However, for system C, the superset of A and B, the total degeneracy is the product of the degeneracy of each system. In this case $\Omega(\text{C})$ is 4.}
\label{introFig:degenProd}
\end{figure}

Next, consider the specific case of $\gamma$ in contact with a
bath. Exchange of energy is allowed between the bath and the system,
subject to the constraint that the total energy
($E_{\text{bath}}+E_{\gamma}$) remain constant. $\Omega(E)$, where $E$
is the total energy of both systems, can be represented as
\begin{equation}
\Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma})
\label{introEq:SM1}
\end{equation}
Or additively as
\begin{equation}
\ln \Omega(E) = \ln \Omega(E_{\gamma}) + \ln \Omega(E - E_{\gamma})
\label{introEq:SM2}
\end{equation}

The solution to Eq.~\ref{introEq:SM2} maximizes the number of
degenerate configurations in $E$. \cite{Frenkel1996}
This gives
\begin{equation}
\frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 =
	\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
	+
	\frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
	\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}
\label{introEq:SM3}
\end{equation}
Where $E_{\text{bath}}$ is $E-E_{\gamma}$, and
$\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}$ is
$-1$. Eq.~\ref{introEq:SM3} becomes
\begin{equation}
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} =
	\frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
\label{introEq:SM4}
\end{equation}

At this point, one can draw a relationship between the maximization of
degeneracy in Eq.~\ref{introEq:SM3} and the second law of
thermodynamics. Namely, that for a closed system, entropy will
increase for an irreversible process.\cite{chandler:1987} Here the
process is the partitioning of energy among the two systems. This
allows the following definition of entropy:
\begin{equation}
S = k_B \ln \Omega(E)
\label{introEq:SM5}
\end{equation}
Where $k_B$ is the Boltzmann constant. Having defined entropy, one can
also define the temperature of the system using the relation
\begin{equation}
\frac{1}{T} = \biggl ( \frac{\partial S}{\partial E} \biggr )_{N,V}
\label{introEq:SM6}
\end{equation}
The temperature in the system $\gamma$ is then
\begin{equation}
\beta( E_{\gamma} ) = \frac{1}{k_B T} =
	\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
\label{introEq:SM7}
\end{equation}
Applying this to Eq.~\ref{introEq:SM4} gives the following
\begin{equation}
\beta( E_{\gamma} ) = \beta( E_{\text{bath}} )
\label{introEq:SM8}
\end{equation}
This shows that the partitioning of energy between the two systems is
actually a process of thermal equilibration.\cite{Frenkel1996}

An application of these results is to formulate the form of an
expectation value of an observable, $A$, in the canonical ensemble. In
the canonical ensemble, the number of particles, $N$, the volume, $V$,
and the temperature, $T$, are all held constant while the energy, $E$,
is allowed to fluctuate. Returning to the previous example, the bath
system is now an infinitely large thermal bath, whose exchange of
energy with the system $\gamma$ holds the temperature constant. The
partitioning of energy in the bath system is then related to the total
energy of both systems and the fluctuations in $E_{\gamma}$:
\begin{equation}
\Omega( E_{\text{bath}} ) = \Omega( E - E_{\gamma} )
\label{introEq:SM9}
\end{equation}
The expectation value of the observable can then be defined as
\begin{equation}
\langle A \rangle =
	\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
	P_{\gamma} A(\boldsymbol{\Gamma})
\label{introEq:SM10}
\end{equation}
Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes
an integration over all accessible phase space, $P_{\gamma}$ is the
probability of being in a given phase state and
$A(\boldsymbol{\Gamma})$ is some observable that is a function of the
phase state.

Because entropy seeks to maximize the number of degenerate states at a
given energy, the probability of being in a particular state in
$\gamma$ will be directly proportional to the number of allowable
states the coupled system is able to assume. Namely,
\begin{equation}
P_{\gamma} \propto \Omega( E_{\text{bath}} ) =
	e^{\ln \Omega( E - E_{\gamma})}
\label{introEq:SM11}
\end{equation}
With $E_{\gamma} \ll E$, $\ln \Omega$ can be expanded in a Taylor series:
\begin{equation}
\ln \Omega ( E - E_{\gamma}) =
	\ln \Omega (E) -
	E_{\gamma} \frac{\partial \ln \Omega }{\partial E}
	+ \ldots
\label{introEq:SM12}
\end{equation}
Higher order terms are omitted as $E$ is an infinite thermal
bath. Further, using Eq.~\ref{introEq:SM7}, Eq.~\ref{introEq:SM11} can
be rewritten:
\begin{equation}
P_{\gamma} \propto e^{-\beta E_{\gamma}}
\label{introEq:SM13}
\end{equation}
Where $\ln \Omega(E)$ has been factored out of the proportionality as a
constant. Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
\begin{equation}
P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
\label{introEq:SM14}
\end{equation}
This result is the standard Boltzmann statistical distribution.
Applying it to Eq.~\ref{introEq:SM10} one can obtain the following relation for ensemble averages:
\begin{equation}
\langle A \rangle =
\frac{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
	A( \boldsymbol{\Gamma} ) e^{-\beta E_{\gamma}}}
{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
\label{introEq:SM15}
\end{equation}

\subsection{\label{introSec:ergodic}The Ergodic Hypothesis}

In the case of a Molecular Dynamics simulation, rather than
calculating an ensemble average integral over phase space as in
Eq.~\ref{introEq:SM15}, it becomes easier to calculate the time
average of an observable. Namely,
\begin{equation}
\langle A \rangle_t = \frac{1}{\tau}
	\int_0^{\tau} A[\boldsymbol{\Gamma}(t)]\,dt
\label{introEq:SM16}
\end{equation}
Where the value of an observable is averaged over the length of time
that the simulation is run. This type of measurement mirrors the
experimental measurement of an observable. In an experiment, the
instrument analyzing the system must average its observation over the
finite time of the measurement. What is required, then, is a principle
to relate the time average to the ensemble average. This is the
ergodic hypothesis.

Ergodicity is the assumption that given an infinite amount of time, a
system will visit every available point in phase
space.\cite{Frenkel1996} For most systems, this is a valid assumption,
except in cases where the system may be trapped in a local feature
(\emph{e.g.}~glasses). When valid, ergodicity allows the unification
of a time averaged observation and an ensemble averaged one. If an
observation is averaged over a sufficiently long time, the system is
assumed to visit all appropriately available points in phase space,
giving a properly weighted statistical average. This allows the
researcher freedom of choice when deciding how best to measure a given
observable. When an ensemble averaged approach seems most logical,
the Monte Carlo techniques described in Sec.~\ref{introSec:monteCarlo}
can be utilized. Conversely, if a problem lends itself to a time
averaging approach, the Molecular Dynamics techniques in
Sec.~\ref{introSec:MD} can be employed.

\section{\label{introSec:monteCarlo}Monte Carlo Simulations}

The Monte Carlo method was developed by Metropolis and Ulam for their
work on fissionable material.\cite{metropolis:1949} The method is so
named because it heavily uses random numbers in its
solution.\cite{allen87:csl} The Monte Carlo method allows for the
solution of integrals through the stochastic sampling of the values
within the integral. In the simplest case, the evaluation of an
integral would follow a brute force method of
sampling.\cite{Frenkel1996} Consider the following single dimensional
integral:
\begin{equation}
I = \int^b_a f(x)\,dx
\label{eq:MCex1}
\end{equation}
The equation can be recast as:
\begin{equation}
I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
\label{introEq:Importance1}
\end{equation}
Where $\rho(x)$ is an arbitrary probability distribution in $x$. If
one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
from the distribution $\rho(x)$ on the interval $[a,b]$, then
Eq.~\ref{introEq:Importance1} becomes
\begin{equation}
I= \lim_{\tau \rightarrow \infty}\biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}[a,b]}
\label{introEq:Importance2}
\end{equation}
If $\rho(x)$ is uniformly distributed over the interval $[a,b]$,
\begin{equation}
\rho(x) = \frac{1}{b-a}
\label{introEq:importance2b}
\end{equation}
then the integral can be rewritten as
\begin{equation}
I = (b-a)\lim_{\tau \rightarrow \infty}
	\langle f(x) \rangle_{\text{trials}[a,b]}
\label{eq:MCex2}
\end{equation}

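To make Eq.~\ref{eq:MCex2} concrete, the following minimal Python
sketch (an illustration only, not part of the simulation code used in
this research; the integrand and interval are arbitrary choices)
estimates a one dimensional integral by uniform random sampling:
\begin{verbatim}
import math
import random

def mc_integrate(f, a, b, trials=100000):
    # Uniform sampling: rho(x) = 1/(b-a), so I ~ (b-a) <f(x)>
    total = 0.0
    for _ in range(trials):
        total += f(random.uniform(a, b))
    return (b - a) * total / trials

# Example: integrate sin(x) over [0, pi]; the exact answer is 2.
print(mc_integrate(math.sin, 0.0, math.pi))
\end{verbatim}
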
However, in Statistical Mechanics, one is typically interested in
integrals of the form:
\begin{equation}
\langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
	e^{-\beta V(\mathbf{r}^N)}}%
	{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
\label{eq:mcEnsAvg}
\end{equation}
Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
and $A$ is some observable that is only dependent on position. This is
the ensemble average of $A$ as presented in
Sec.~\ref{introSec:statThermo}, except here $A$ is independent of
momentum. Therefore the momentum contribution of the integral can be
factored out, leaving the configurational integral. Application of the
brute force method to this system would yield highly inefficient
results. Due to the Boltzmann weighting of this integral, most random
configurations will have a near zero contribution to the ensemble
average. This is where importance sampling comes into
play.\cite{allen87:csl}

Importance sampling is a method where the distribution, from which the
random configurations are chosen, is selected in such a way as to
efficiently sample the integral in question. Looking at
Eq.~\ref{eq:mcEnsAvg}, and recognizing the Boltzmann distribution,
\begin{equation}
\rho_{kT}(\mathbf{r}^N) =
	\frac{e^{-\beta V(\mathbf{r}^N)}}
	{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
\label{introEq:MCboltzman}
\end{equation}
the ensemble average can be rewritten as
\begin{equation}
\langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
	\rho_{kT}(\mathbf{r}^N)
\label{introEq:Importance3}
\end{equation}
Applying Eq.~\ref{introEq:Importance1} one obtains
\begin{equation}
\langle A \rangle = \biggl \langle
	\frac{ A \rho_{kT}(\mathbf{r}^N) }
	{\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}
\label{introEq:Importance4}
\end{equation}
By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$,
Eq.~\ref{introEq:Importance4} becomes
\begin{equation}
\langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{kT}
\label{introEq:Importance5}
\end{equation}
The difficulty is selecting points $\mathbf{r}^N$ such that they are
sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$. A solution
was proposed by Metropolis \emph{et al.}\cite{metropolis:1953} which involved
the use of a Markov chain whose limiting distribution was
$\rho_{kT}(\mathbf{r}^N)$.

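The following Python sketch (again purely illustrative; a one
dimensional Gaussian stands in for $\rho_{kT}$) shows the content of
Eq.~\ref{introEq:Importance5}: when configurations can be drawn
directly from the weighting distribution, the weighted integral
reduces to a simple average of $A$ over the sampled points.
\begin{verbatim}
import random

def weighted_average(A, sample_rho, trials=100000):
    # If x_i are drawn from rho(x), then <A> ~ (1/N) sum A(x_i)
    return sum(A(sample_rho()) for _ in range(trials)) / trials

# Example: rho is a unit Gaussian and A(x) = x^2; exact answer is 1.
print(weighted_average(lambda x: x * x,
                       lambda: random.gauss(0.0, 1.0)))
\end{verbatim}
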
\subsection{\label{introSec:markovChains}Markov Chains}

A Markov chain is a chain of states satisfying the following
conditions:\cite{leach01:mm}
\begin{enumerate}
\item The outcome of each trial depends only on the outcome of the previous trial.
\item Each trial belongs to a finite set of outcomes called the state space.
\end{enumerate}
Given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
$\rho_m$ and $\rho_n$ are the probabilities of being in state
$\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively. Further, the two
states are linked by a transition probability, $\pi_{mn}$, which is the
probability of going from state $m$ to state $n$.

\newcommand{\accMe}{\operatorname{acc}}

The transition probability is given by the following:
\begin{equation}
\pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n)
\label{introEq:MCpi}
\end{equation}
Where $\alpha_{mn}$ is the probability of attempting the move $m
\rightarrow n$, and $\accMe$ is the probability of accepting the move
$m \rightarrow n$. Defining a probability vector,
$\boldsymbol{\rho}$, such that
\begin{equation}
\boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
	\ldots \rho_N \}
\label{introEq:MCrhoVector}
\end{equation}
a transition matrix $\boldsymbol{\Pi}$ can be defined,
whose elements are $\pi_{mn}$, for each given transition. The
limiting distribution of the Markov chain can then be found by
applying the transition matrix an infinite number of times to the
distribution vector.
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} =
	\lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
	\boldsymbol{\Pi}^N
\label{introEq:MCmarkovLimit}
\end{equation}
The limiting distribution of the chain is independent of the starting
distribution, and successive applications of the transition matrix
will only yield the limiting distribution again.
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{limit}}
	\boldsymbol{\Pi}
\label{introEq:MCmarkovEquil}
\end{equation}

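A small numerical illustration of Eq.~\ref{introEq:MCmarkovLimit} in
Python is given below (a hedged sketch; the three state transition
matrix is an arbitrary example, not drawn from any simulation in this
work):
\begin{verbatim}
import numpy as np

# A toy 3-state transition matrix; each row sums to one.
Pi = np.array([[0.5, 0.4, 0.1],
               [0.2, 0.6, 0.2],
               [0.1, 0.4, 0.5]])

rho = np.array([1.0, 0.0, 0.0])   # arbitrary starting distribution
for _ in range(100):              # repeated application of Pi
    rho = rho @ Pi

print(rho)        # the limiting distribution
print(rho @ Pi)   # one more application returns the same vector
\end{verbatim}
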
\subsection{\label{introSec:metropolisMethod}The Metropolis Method}

In the Metropolis method\cite{metropolis:1953}
Eq.~\ref{introEq:MCmarkovEquil} is solved such that
$\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
of states. The method accomplishes this by imposing the strong
condition of microscopic reversibility on the equilibrium
distribution. This means that at equilibrium, the probability of going
from $m$ to $n$ is the same as that of going from $n$ to $m$.
\begin{equation}
\rho_m\pi_{mn} = \rho_n\pi_{nm}
\label{introEq:MCmicroReverse}
\end{equation}
Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
the Metropolis method. Using Eq.~\ref{introEq:MCpi},
Eq.~\ref{introEq:MCmicroReverse} becomes
\begin{equation}
\frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
	\frac{\rho_n}{\rho_m}
\label{introEq:MCmicro2}
\end{equation}
For a Boltzmann limiting distribution,
\begin{equation}
\frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
	= e^{-\beta \Delta \mathcal{U}}
\label{introEq:MCmicro3}
\end{equation}
This allows the following set of acceptance rules to be defined:
\begin{equation}
\accMe( m \rightarrow n ) =
	\begin{cases}
	1& \text{if $\Delta \mathcal{U} \leq 0$,} \\
	e^{-\beta \Delta \mathcal{U}}& \text{if $\Delta \mathcal{U} > 0$.}
	\end{cases}
\label{introEq:accRules}
\end{equation}

Using the acceptance criteria from Eq.~\ref{introEq:accRules}, the
Metropolis method proceeds as follows:
\begin{enumerate}
\item Generate an initial configuration $\mathbf{r}^N$ which has some finite probability in $\rho_{kT}$.
\item Modify $\mathbf{r}^N$, to generate configuration $\mathbf{r^{\prime}}^N$.
\item If the new configuration lowers the energy of the system, accept the move ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$). Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
\item Accumulate the average for the configurational observable of interest.
\item Repeat from step 2 until the average converges.
\end{enumerate}
One important note is that the average is accumulated whether the move
is accepted or not; this ensures proper weighting of the average.
Using Eq.~\ref{introEq:Importance4} it becomes clear that the
accumulated averages are the ensemble averages, as this method ensures
that the limiting distribution is the Boltzmann distribution.

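The following Python sketch condenses this procedure for a single
coordinate (illustrative only; the harmonic potential stands in for
the configurational energy $\mathcal{U}$, and the step size is an
arbitrary choice):
\begin{verbatim}
import math
import random

def metropolis_average(U, A, x0, beta=1.0, step=0.5, trials=100000):
    x, acc_sum = x0, 0.0
    for _ in range(trials):
        x_new = x + random.uniform(-step, step)   # trial move
        dU = U(x_new) - U(x)
        if dU <= 0.0 or random.random() < math.exp(-beta * dU):
            x = x_new                             # accept the move
        acc_sum += A(x)   # accumulate whether accepted or not
    return acc_sum / trials

# Example: harmonic well U = x^2/2 at beta = 1; <x^2> is close to 1.
print(metropolis_average(lambda x: 0.5 * x * x, lambda x: x * x, 0.0))
\end{verbatim}
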
\section{\label{introSec:MD}Molecular Dynamics Simulations}

The main simulation tool used in this research is Molecular Dynamics.
In Molecular Dynamics, the equations of motion for a system are
integrated in order to obtain information about both the positions and
momenta of a system, allowing the calculation of not only
configurational observables, but momentum dependent ones as well:
diffusion constants, relaxation events, folding/unfolding
events, etc. With the time dependent information gained from a
Molecular Dynamics simulation, one can also calculate time correlation
functions of the form\cite{Hansen86}
\begin{equation}
\langle A(t)\,A(0)\rangle = \lim_{\tau\rightarrow\infty} \frac{1}{\tau}
	\int_0^{\tau} A(t+t^{\prime})\,A(t^{\prime})\,dt^{\prime}
\label{introEq:timeCorr}
\end{equation}
These correlations can be used to measure fundamental time constants
of a system, such as diffusion constants from the velocity
autocorrelation or dipole relaxation times from the dipole
autocorrelation. Due to the principle of ergodicity,
Sec.~\ref{introSec:ergodic}, the average of these observables over the
time period of the simulation is taken to be the ensemble average
for the system.

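In discrete form, such a correlation function can be estimated
directly from a stored time series. The following Python sketch
(illustrative; the damped test signal is arbitrary) evaluates
Eq.~\ref{introEq:timeCorr} by averaging
$A(t^{\prime}+t)A(t^{\prime})$ over all available time origins
$t^{\prime}$:
\begin{verbatim}
import numpy as np

def time_correlation(a, max_lag):
    # <A(t)A(0)> estimated by averaging over time origins.
    n = len(a)
    return np.array([np.mean(a[lag:] * a[:n - lag])
                     for lag in range(max_lag)])

# Example: autocorrelation of a damped oscillation.
t = np.arange(2000)
a = np.exp(-t / 200.0) * np.cos(t / 20.0)
print(time_correlation(a, 5))
\end{verbatim}
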
The choice of when to use molecular dynamics over Monte Carlo
techniques is normally decided by the observables in which the
researcher is interested. If the observables depend on time in
any fashion, then the only choice is molecular dynamics in some form.
However, when the observable is dependent only on the configuration,
then for most small systems, Monte Carlo techniques will be more efficient.

The research in the second half of this dissertation centers on the
dynamic properties of phospholipid bilayers, making molecular dynamics
the key technique for simulating those properties.

\subsection{\label{introSec:mdAlgorithm}The Molecular Dynamics Algorithm}

To illustrate how the molecular dynamics technique is applied, the
following sections will describe the sequence involved in a
simulation. Sec.~\ref{introSec:mdInit} deals with the initialization
of a simulation. Sec.~\ref{introSec:mdForce} discusses issues
involved with the calculation of the forces.
Sec.~\ref{introSec:mdIntegrate} concludes the algorithm discussion
with the integration of the equations of motion.\cite{Frenkel1996}

\subsection{\label{introSec:mdInit}Initialization}

When selecting the initial configuration for the simulation it is
important to consider what dynamics one is hoping to observe.
Ch.~\ref{chapt:lipid} deals with the formation and equilibrium dynamics of
phospholipid membranes. Therefore in these simulations initial
positions were selected that in some cases dispersed the lipids in
water, and in other cases structured the lipids into preformed
bilayers. Important considerations at this stage of the simulation are:
\begin{itemize}
\item There are no major overlaps of molecules or atoms.
\item Velocities are chosen in such a way as to not give the system a non-zero total momentum or angular momentum.
\item It is also sometimes desirable to select the velocities to correctly sample the target temperature.
\end{itemize}

The first point is important due to the amount of potential energy
generated by having two particles too close together. If overlap
occurs, the first evaluation of forces will return numbers so large as
to render the numerical integration of the motion meaningless. The
second consideration keeps the system from drifting or rotating as a
whole. This arises from the fact that most simulations are of systems
in equilibrium in the absence of outside forces. Therefore any net
movement would be unphysical and an artifact of the simulation method
used. The final point addresses the selection of the magnitude of the
initial velocities. For many simulations it is convenient to use
this opportunity to scale the amount of kinetic energy to reflect the
desired thermal distribution of the system. However, it must be noted
that most systems will require further velocity rescaling after the
first few initial simulation steps due to either loss or gain of
kinetic energy from energy stored in potential degrees of freedom.

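A minimal Python sketch of this initialization step is given below
(an illustration under simplifying assumptions: equal particle masses,
linear momentum removal only, and the simple convention of $3N$
kinetic degrees of freedom):
\begin{verbatim}
import numpy as np

def initial_velocities(n, mass, kB_T, seed=None):
    # Sample velocities from the Maxwell-Boltzmann distribution,
    # then remove the net linear momentum so the system cannot drift.
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, np.sqrt(kB_T / mass), size=(n, 3))
    v -= v.mean(axis=0)               # zero total momentum
    # Rescale the kinetic energy to match the target temperature.
    kinetic = 0.5 * mass * np.sum(v * v)
    return v * np.sqrt(1.5 * n * kB_T / kinetic)

v = initial_velocities(100, mass=1.0, kB_T=1.0)
print(v.sum(axis=0))   # net momentum is ~ zero
\end{verbatim}
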
\subsection{\label{introSec:mdForce}Force Evaluation}

The evaluation of forces is the most computationally expensive portion
of a given molecular dynamics simulation. This is due entirely to the
evaluation of long range forces in a simulation, typically pair-wise.
These forces are most commonly the van der Waals force, and sometimes
Coulombic forces as well. For a pair-wise force, there are $N(N-1)/2$
pairs to be evaluated, where $N$ is the number of particles in the
system. This leads to the calculations scaling as $N^2$, making large
simulations prohibitive in the absence of any computation saving
techniques.

Another consideration one must resolve is that in a given simulation
a disproportionate number of the particles will feel the effects of
the surface.\cite{allen87:csl} For a cubic system of 1000 particles
arranged in a $10 \times 10 \times 10$ cube, 488 particles will be
exposed to the surface. Unless one is simulating an isolated particle
group in a vacuum, the behavior of the system will be far from the
desired bulk characteristics. To offset this, simulations employ
periodic boundary images.\cite{born:1912}

The technique involves the use of an algorithm that replicates the
simulation box on an infinite lattice in Cartesian space. Any given
particle leaving the simulation box on one side will have an image of
itself enter on the opposite side (see Fig.~\ref{introFig:pbc}). In
addition, this ensures that any two particles have an image, real or
periodic, within $\text{box}/2$ of each other. A discussion of the
method used to calculate the periodic image can be found in
Sec.~\ref{oopseSec:pbc}.

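The heart of that calculation is the minimum image convention,
sketched below in Python (a hedged illustration of the general idea
only; the actual implementation used in this work is described in
Sec.~\ref{oopseSec:pbc}):
\begin{verbatim}
import numpy as np

def minimum_image(dr, box):
    # Wrap each component of a displacement vector into
    # [-box/2, box/2): the nearest image, real or periodic.
    return dr - box * np.round(dr / box)

box = 10.0
r_i = np.array([9.5, 0.2, 5.0])
r_j = np.array([0.5, 9.8, 5.0])
print(minimum_image(r_i - r_j, box))   # [-1.   0.4  0. ]
\end{verbatim}
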
\begin{figure}
\centering
\includegraphics[width=\linewidth]{pbcFig.eps}
\caption[An illustration of periodic boundary conditions]{A 2-D illustration of periodic boundary conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
\label{introFig:pbc}
\end{figure}

Returning to the topic of the computational scale of the force
evaluation, the use of periodic boundary conditions requires that a
cutoff radius be employed. Using a cutoff radius improves the
efficiency of the force evaluation, as particles farther than a
predetermined distance, $r_{\text{cut}}$, are not included in the
calculation.\cite{Frenkel1996} In a simulation with periodic images,
$r_{\text{cut}}$ has a maximum value of $\text{box}/2$.
Fig.~\ref{introFig:rMax} illustrates how when using an
$r_{\text{cut}}$ larger than this value, or in the extreme limit of no
$r_{\text{cut}}$ at all, the corners of the simulation box are
unequally weighted due to the lack of particle images in the $x$, $y$,
or $z$ directions past a distance of $\text{box} / 2$.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{rCutMaxFig.eps}
\caption[An explanation of $r_{\text{cut}}$]{The yellow atom has all other images wrapped to itself as the center. If $r_{\text{cut}}=\text{box}/2$, then the distribution is uniform (blue atoms). However, when $r_{\text{cut}}>\text{box}/2$ the corners are disproportionately weighted (green atoms) vs the axial directions (shaded regions).}
\label{introFig:rMax}
\end{figure}

With the use of an $r_{\text{cut}}$, however, comes a discontinuity in
the potential energy curve (Fig.~\ref{introFig:shiftPot}). To fix this
discontinuity, one calculates the potential energy at
$r_{\text{cut}}$, and subtracts that value from the potential, causing
the function to go continuously to zero at the cutoff radius. This
shifted potential ensures conservation of energy when integrating the
Newtonian equations of motion.

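A short Python sketch of such a shifted potential (using the
Lennard-Jones form of Fig.~\ref{introFig:shiftPot}; the reduced units
and cutoff value are arbitrary choices) follows:
\begin{verbatim}
def lj(r, epsilon=1.0, sigma=1.0):
    # Standard Lennard-Jones potential in reduced units.
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lj_shifted(r, r_cut, epsilon=1.0, sigma=1.0):
    # Shift by the value at r_cut so the energy is zero there.
    if r >= r_cut:
        return 0.0
    return lj(r, epsilon, sigma) - lj(r_cut, epsilon, sigma)

print(lj_shifted(2.4999, 2.5))    # ~ 0 at the cutoff
print(lj_shifted(1.12246, 2.5))   # near the minimum of the well
\end{verbatim}
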
\begin{figure}
\centering
\includegraphics[width=\linewidth]{shiftedPot.eps}
\caption[Shifting the Lennard-Jones Potential]{The Lennard-Jones potential (blue line) is shifted (red line) to remove the discontinuity at $r_{\text{cut}}$.}
\label{introFig:shiftPot}
\end{figure}

The second main simplification used in this research is the Verlet
neighbor list.\cite{allen87:csl} In the Verlet method, one generates
a list of all neighbor atoms, $j$, surrounding atom $i$ within some
cutoff $r_{\text{list}}$, where $r_{\text{list}}>r_{\text{cut}}$.
This list is created the first time forces are evaluated; on
subsequent force evaluations, pair calculations are drawn only
from the neighbor lists. The lists are updated if any given particle
in the system moves farther than $r_{\text{list}}-r_{\text{cut}}$,
giving rise to the possibility that a particle has left or joined a
neighbor list.

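A hedged Python sketch of the list construction and update test is
given below (illustrative only; it uses the conservative criterion of
rebuilding once the maximum displacement exceeds half the shell width
$r_{\text{list}}-r_{\text{cut}}$):
\begin{verbatim}
import numpy as np

def build_neighbor_list(r, box, r_list):
    # All pairs (i, j) with minimum-image separation under r_list.
    n, pairs = len(r), []
    for i in range(n - 1):
        dr = r[i + 1:] - r[i]
        dr -= box * np.round(dr / box)          # minimum image
        dist = np.sqrt(np.sum(dr * dr, axis=1))
        pairs += [(i, i + 1 + j) for j in np.nonzero(dist < r_list)[0]]
    return pairs

def needs_rebuild(r, r_ref, box, r_list, r_cut):
    # Rebuild once any particle may have crossed the neighbor shell.
    dr = r - r_ref
    dr -= box * np.round(dr / box)
    moved = np.sqrt(np.sum(dr * dr, axis=1)).max()
    return moved > 0.5 * (r_list - r_cut)
\end{verbatim}
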
\subsection{\label{introSec:mdIntegrate} Integration of the equations of motion}

A starting point for the discussion of molecular dynamics integrators
is the Verlet algorithm.\cite{Frenkel1996} It begins with a Taylor
expansion of position in time:
\begin{equation}
q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 +
	\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
	\mathcal{O}(\Delta t^4)
\label{introEq:verletForward}
\end{equation}
As well as,
\begin{equation}
q(t-\Delta t)= q(t) - v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 -
	\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
	\mathcal{O}(\Delta t^4)
\label{introEq:verletBack}
\end{equation}
Where $m$ is the mass of the particle, $q(t)$ is the position at time
$t$, $v(t)$ the velocity, and $F(t)$ the force acting on the
particle. Adding together Eq.~\ref{introEq:verletForward} and
Eq.~\ref{introEq:verletBack} results in,
\begin{equation}
q(t+\Delta t)+q(t-\Delta t) =
	2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
\label{introEq:verletSum}
\end{equation}
Or equivalently,
\begin{equation}
q(t+\Delta t) \approx
	2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2
\label{introEq:verletFinal}
\end{equation}
This estimate of the new positions contains an error on the
order of $\Delta t^4$.

In practice, however, the simulations in this research were integrated
with a velocity reformulation of the Verlet method.\cite{allen87:csl}
\begin{align}
q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 %
\label{introEq:MDvelVerletPos} \\%
%
v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] %
\label{introEq:MDvelVerletVel}
\end{align}
The original Verlet algorithm can be regained by substituting the
velocity back into Eq.~\ref{introEq:MDvelVerletPos}. The Verlet
formulations are chosen in this research because the algorithms have
very little long term drift in energy conservation. Energy
conservation in a molecular dynamics simulation is of extreme
importance, as it is a measure of how closely one is following the
``true'' trajectory with the finite integration scheme. An exact
solution to the integration will conserve area in phase space, as well
as be reversible in time, that is, the trajectory integrated forward
or backwards will exactly match itself. Having a finite algorithm
that both conserves area in phase space and is time reversible
therefore increases, but does not guarantee, the ``correctness'' of the
integrated trajectory.

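A compact Python sketch of one velocity Verlet step (an illustration
of Eqs.~\ref{introEq:MDvelVerletPos} and \ref{introEq:MDvelVerletVel},
not the production integrator used in this work; the harmonic force is
a test case) is:
\begin{verbatim}
import numpy as np

def velocity_verlet_step(q, v, F, force, m, dt):
    v_half = v + 0.5 * dt * F / m    # half-step velocity update
    q_new = q + dt * v_half          # full-step position update
    F_new = force(q_new)             # forces at the new positions
    v_new = v_half + 0.5 * dt * F_new / m
    return q_new, v_new, F_new

# Example: a harmonic oscillator, F = -q, over roughly one period.
q, v = np.array([1.0]), np.array([0.0])
force = lambda x: -x
F, dt = force(q), 0.01
for _ in range(628):
    q, v, F = velocity_verlet_step(q, v, F, force, 1.0, dt)
print(q, v)   # close to the initial conditions (1, 0)
\end{verbatim}
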
It can be shown,\cite{Frenkel1996} that although the Verlet algorithm
does not rigorously preserve the actual Hamiltonian, it does preserve
a pseudo-Hamiltonian which shadows the real one in phase space. This
pseudo-Hamiltonian is provably area-conserving as well as time
reversible. The fact that it shadows the true Hamiltonian in phase
space is acceptable in actual simulations as one is interested in the
ensemble average of the observable being measured. From the ergodic
hypothesis (Sec.~\ref{introSec:ergodic}), it is known that the time
average will match the ensemble average, therefore two similar
trajectories in phase space should give matching statistical averages.

\subsection{\label{introSec:MDfurther}Further Considerations}

In the simulations presented in this research, a few additional
parameters are needed to describe the motions. The simulations
involving water and phospholipids in Ch.~\ref{chapt:lipid} require
the integration of the equations of motion for dipoles on atoms. This
requires that an additional three parameters be specified for each
dipolar atom: $\phi$, $\theta$, and $\psi$. These three angles are
taken to be the Euler angles, where $\phi$ is a rotation about the
$z$-axis, $\theta$ is a rotation about the new $x$-axis, and
$\psi$ is a final rotation about the new $z$-axis (see
Fig.~\ref{introFig:eulerAngles}). This sequence of rotations can be
accumulated into a single $3 \times 3$ matrix, $\mathbf{A}$,
defined as follows:
\begin{equation}
\mathbf{A} =
\begin{bmatrix}
	\cos\phi\cos\psi-\sin\phi\cos\theta\sin\psi &%
	\sin\phi\cos\psi+\cos\phi\cos\theta\sin\psi &%
	\sin\theta\sin\psi \\%
	%
	-\cos\phi\sin\psi-\sin\phi\cos\theta\cos\psi &%
	-\sin\phi\sin\psi+\cos\phi\cos\theta\cos\psi &%
	\sin\theta\cos\psi \\%
	%
	\sin\phi\sin\theta &%
	-\cos\phi\sin\theta &%
	\cos\theta
\end{bmatrix}
\label{introEq:EulerRotMat}
\end{equation}

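Written out in Python, the construction of this matrix (a direct
transcription of Eq.~\ref{introEq:EulerRotMat}, included here only as
a readability aid) looks like:
\begin{verbatim}
import numpy as np

def euler_to_matrix(phi, theta, psi):
    # Build the rotation matrix of Eq. (EulerRotMat) row by row.
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [ cph*cps - sph*cth*sps,  sph*cps + cph*cth*sps,  sth*sps],
        [-cph*sps - sph*cth*cps, -sph*sps + cph*cth*cps,  sth*cps],
        [ sph*sth,               -cph*sth,                cth    ]])

A = euler_to_matrix(0.3, 0.2, 0.1)
print(np.allclose(A @ A.T, np.eye(3)))   # True: A is orthogonal
\end{verbatim}
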
\begin{figure}
\centering
\includegraphics[width=\linewidth]{eulerRotFig.eps}
\caption[Euler rotation of Cartesian coordinates]{The rotation scheme for Euler angles. First is a rotation of $\phi$ about the $z$ axis (blue rotation). Next is a rotation of $\theta$ about the new $x$ axis (green rotation). Lastly is a final rotation of $\psi$ about the new $z$ axis (red rotation).}
\label{introFig:eulerAngles}
\end{figure}

The equations of motion for Euler angles can be written down
as\cite{allen87:csl}
\begin{align}
\dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} +
	\omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} +
	\omega^s_z
\label{introEq:MDeulerPhi} \\%
%
\dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi
\label{introEq:MDeulerTheta} \\%
%
\dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} -
	\omega^s_y \frac{\cos\phi}{\sin\theta}
\label{introEq:MDeulerPsi}
\end{align}
Where $\omega^s_i$ is the angular velocity in the lab space frame
along Cartesian coordinate $i$. However, a difficulty arises when
attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
both equations means there is a non-physical instability present when
$\theta$ is 0 or $\pi$. To correct for this, the simulations integrate
the rotation matrix, $\mathbf{A}$, directly, thus avoiding the
instability. This method was proposed by Dullweber
\emph{et al.},\cite{Dullweber1997} and is presented in
Sec.~\ref{introSec:MDsymplecticRot}.

\subsection{\label{introSec:MDliouville}Liouville Propagator}

Before discussing the integration of the rotation matrix, it is
necessary to understand the construction of a ``good'' integration
scheme. It has been previously
discussed (Sec.~\ref{introSec:mdIntegrate}) how it is desirable for an
integrator to be symplectic and time reversible. The following is an
outline of the Trotter factorization of the Liouville Propagator as a
scheme for generating symplectic integrators.\cite{Tuckerman92}

For a system with $f$ degrees of freedom the Liouville operator can be
defined as,
\begin{equation}
iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} +
	F_j\frac{\partial}{\partial p_j} \biggr ]
\label{introEq:LiouvilleOperator}
\end{equation}
Here, $q_j$ and $p_j$ are the position and conjugate momenta of a
degree of freedom, and $F_j$ is the force on that degree of freedom.
$\Gamma$ is defined as the set of all positions and conjugate momenta,
$\{q_j,p_j\}$, and the propagator, $U(t)$, is defined
\begin{equation}
U(t) = e^{iLt}
\label{introEq:Lpropagator}
\end{equation}
This allows the specification of $\Gamma$ at any time $t$ as
\begin{equation}
\Gamma(t) = U(t)\Gamma(0)
\label{introEq:Lp2}
\end{equation}
It is important to note that $U(t)$ is a unitary operator, meaning
\begin{equation}
U(-t)=U^{-1}(t)
\label{introEq:Lp3}
\end{equation}

Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
Trotter theorem to yield
\begin{align}
e^{iLt} &= e^{i(L_1 + L_2)t} \notag \\%
%
	&= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P \notag \\%
%
	&= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
	e^{iL_1\frac{\Delta t}{2}} \biggr ]^P +
	\mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ) \label{introEq:Lp4}
\end{align}
Where $\Delta t = t/P$.
With this, a discrete time operator $G(\Delta t)$ can be defined:
\begin{align}
G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
	e^{iL_1\frac{\Delta t}{2}} \notag \\%
%
	&= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\,
	U_1 \biggl ( \frac{\Delta t}{2} \biggr )
\label{introEq:Lp5}
\end{align}
Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
unitary, meaning an integrator based on this factorization will be
reversible in time.

As an example, consider the following decomposition of $L$:
\begin{align}
iL_1 &= \dot{q}\frac{\partial}{\partial q}%
\label{introEq:Lp6a} \\%
%
iL_2 &= F(q)\frac{\partial}{\partial p}%
\label{introEq:Lp6b}
\end{align}
This leads to the propagator $G( \Delta t )$,
\begin{equation}
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
	e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \,
	e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
\label{introEq:Lp7}
\end{equation}
Operating $G(\Delta t)$ on $\Gamma(0)$, and utilizing the operator property
\begin{equation}
e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c)
\label{introEq:Lp8}
\end{equation}
where $c$ is independent of $x$, one obtains the following:
\begin{align}
\dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
	\dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9a}\\%
%
q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr )%
\label{introEq:Lp9b}\\%
%
\dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
	\frac{\Delta t}{2m}\, F[q(\Delta t)] \label{introEq:Lp9c}
\end{align}
Or written another way,
\begin{align}
q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
	\frac{F[q(0)]}{m}\frac{\Delta t^2}{2} %
\label{introEq:Lp10a} \\%
%
\dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
	\biggl [F[q(0)] + F[q(\Delta t)] \biggr] %
\label{introEq:Lp10b}
\end{align}
This is the velocity Verlet formulation presented in
Sec.~\ref{introSec:mdIntegrate}. Because this integration scheme is
comprised of unitary propagators, it is symplectic, and therefore area
preserving in phase space. From the preceding factorization, one can
see that the integration of the equations of motion would follow:
\begin{enumerate}
\item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.

\item Use the half step velocities to move positions one whole step, $\Delta t$.

\item Evaluate the forces at the new positions, $\mathbf{r}(\Delta t)$, and use the new forces to complete the velocity move.

\item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
\end{enumerate}

\subsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}

Based on the factorization from the previous section,
Dullweber \emph{et al.}\cite{Dullweber1997} proposed a scheme for the
symplectic propagation of the rotation matrix, $\mathbf{A}$, as an
alternative method for the integration of orientational degrees of
freedom. The method starts with a straightforward splitting of the
Liouville operator:
\begin{align}
iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} +
	\mathbf{\dot{A}}\frac{\partial}{\partial \mathbf{A}}
\label{introEq:SR1a} \\%
%
iL_F &= F(q)\frac{\partial}{\partial p}
\label{introEq:SR1b} \\%
iL_{\tau} &= \tau(\mathbf{A})\frac{\partial}{\partial \pi}
\label{introEq:SR1c}
\end{align}
Where $\tau(\mathbf{A})$ is the torque of the system
due to the configuration, and $\pi$ is the conjugate
angular momentum of the system. The propagator, $G(\Delta t)$, becomes
\begin{equation}
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
	e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
	e^{\Delta t\,iL_{\text{pos}}} \,
	e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
	e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
\label{introEq:SR2}
\end{equation}
Propagation of the linear and angular momenta follows as in the Verlet
scheme. The propagation of positions also follows the Verlet scheme
with the addition of a further symplectic splitting of the rotation
matrix propagation, $\mathcal{U}_{\text{rot}}(\Delta t)$, within
$U_{\text{pos}}(\Delta t)$.
\begin{equation}
\mathcal{U}_{\text{rot}}(\Delta t) =
	\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
	\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
	\mathcal{U}_z (\Delta t)\,
	\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
	\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)
\label{introEq:SR3}
\end{equation}
Where $\mathcal{U}_j$ is a unitary rotation of $\mathbf{A}$ and
$\pi$ about each axis $j$. As all propagations are now
unitary and symplectic, the entire integration scheme is also
symplectic and time reversible.

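A rough Python sketch of this splitting is given below. It is only a
schematic of the structure of Eq.~\ref{introEq:SR3}: the frame and
sign conventions (whether $\mathbf{A}$ maps the lab frame to the body
frame, and the handedness of each axis rotation), along with the
principal moments of inertia \verb|I|, are assumptions that would have
to be matched to the definitions of Eq.~\ref{introEq:EulerRotMat} in a
real implementation.
\begin{verbatim}
import numpy as np

def axis_rotation(axis, angle):
    # Rotation matrix about a single body-fixed axis (0=x, 1=y, 2=z).
    c, s = np.cos(angle), np.sin(angle)
    R = np.eye(3)
    i, j = [(1, 2), (2, 0), (0, 1)][axis]
    R[i, i], R[i, j], R[j, i], R[j, j] = c, -s, s, c
    return R

def free_rotor_substep(A, pi, I, dt, axis):
    # Free rotation about one body axis: the body-frame angular
    # momentum pi and the rotation matrix A are rotated together.
    R = axis_rotation(axis, dt * pi[axis] / I[axis])
    return R @ A, R @ pi

def rotate(A, pi, I, dt):
    # The x-y-z-y-x splitting of Eq. (SR3).
    for axis, h in [(0, dt/2), (1, dt/2), (2, dt), (1, dt/2), (0, dt/2)]:
        A, pi = free_rotor_substep(A, pi, I, h, axis)
    return A, pi
\end{verbatim}
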
\section{\label{introSec:layout}Dissertation Layout}

This dissertation is divided as follows: Ch.~\ref{chapt:RSA}
presents the random sequential adsorption simulations of related
phthalocyanines on a gold (111) surface. Ch.~\ref{chapt:OOPSE}
is about the writing of the molecular dynamics simulation package
{\sc oopse}. Ch.~\ref{chapt:lipid} regards the simulations of
phospholipid bilayers using a mesoscale model. Lastly,
Ch.~\ref{chapt:conclusion} concludes this dissertation with a
summary of all results. The chapters are arranged in chronological
order, and reflect the progression of techniques I employed during my
research.

The chapter concerning random sequential adsorption simulations is a
study in applying Statistical Mechanics simulation techniques in order
to obtain a simple model capable of explaining the results. My
advisor, Dr. Gezelter, and I were approached by a colleague,
Dr. Lieberman, about possible explanations for the partial coverage of a
gold surface by a particular compound of hers. We suggested it might
be due to the statistical packing fraction of disks on a plane, and
set about to simulate this system. As the events in our model were
not dynamic in nature, a Monte Carlo method was employed. Here, if a
molecule landed on the surface without overlapping another, then its
landing was accepted. However, if there was overlap, the landing was
rejected and a new random landing location was chosen. This defined
our acceptance rules and allowed us to construct a Markov chain whose
limiting distribution was the surface coverage in which we were
interested.

The following chapter, about the simulation package {\sc oopse},
describes in detail the large body of scientific code that had to be
written in order to study phospholipid bilayers. Although there are
pre-existing molecular dynamics simulation packages available, none
were capable of implementing the models we were developing. {\sc oopse}
is a unique package capable of not only integrating the equations of
motion in Cartesian space, but is also able to integrate the
rotational motion of rigid bodies and dipoles. Add to this the
ability to perform calculations across parallel processors and a
flexible script syntax for creating systems, and {\sc oopse} becomes a
very powerful scientific instrument for the exploration of our model.

This brings us to Ch.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
able to parameterize a mesoscale model for phospholipid simulations.
This model retains information about solvent ordering around the
bilayer, as well as information regarding the interaction of the
phospholipid head groups' dipoles with each other and the surrounding
solvent. These simulations give us insight into the dynamic events
that lead to the formation of phospholipid bilayers, as well as
provide the foundation for future exploration of bilayer phase
behavior with this model.

This leads into the last chapter, where I discuss future directions
for both {\sc oopse} and this mesoscale model. Additionally, I will
give a summary of results for this dissertation.