root/group/trunk/mattDisertation/Introduction.tex

Comparing trunk/mattDisertation/Introduction.tex (file contents):
Revision 914 by mmeineke, Fri Jan 9 02:41:45 2004 UTC vs.
Revision 1013 by mmeineke, Tue Feb 3 21:24:17 2004 UTC

# Line 3 | Line 3
3   \chapter{\label{chapt:intro}Introduction and Theoretical Background}
4  
5  
6 + The techniques used in the course of this research fall under the two
7 + main classes of molecular simulation: Molecular Dynamics and Monte
8 + Carlo. Molecular Dynamics simulations integrate the equations of motion
9 + for a given system of particles, allowing the researcher to gain
10 + insight into the time dependent evolution of a system. Diffusion
11 + phenomena are readily studied with this simulation technique, making
12 + Molecular Dynamics the main simulation technique used in this
13 + research. Other aspects of the research fall under the Monte Carlo
14 + class of simulations. In Monte Carlo, the configuration space
15 + available to the collection of particles is sampled stochastically,
16 + or randomly. Each configuration is chosen with a given probability
17 + based on the Maxwell-Boltzmann distribution. These types of simulations
18 + are best used to probe properties of a system that depend only
19 + on the state of the system. Structural information about a system
20 + is most readily obtained through these types of methods.
21  
22 < \section{\label{introSec:theory}Theoretical Background}
22 > Although the two techniques employed seem dissimilar, they are both
23 > linked by the overarching principles of Statistical
24 > Thermodynamics. Statistical Thermodynamics governs the behavior of
25 > both classes of simulations and dictates what each method can and
26 > cannot do. When investigating a system, one must first analyze what
27 > thermodynamic properties of the system are being probed, then choose
28 > which method best suits that objective.
29  
30 < The techniques used in the course of this research fall under the two main classes of
10 < molecular simulation: Molecular Dynamics and Monte Carlo. Molecular Dynamic simulations
11 < integrate the equations of motion for a given system of particles, allowing the researher
12 < to gain insight into the time dependent evolution of a system. Diffusion phenomena are
13 < readily studied with this simulation technique, making Molecular Dynamics the main simulation
14 < technique used in this research. Other aspects of the research fall under the Monte Carlo
15 < class of simulations. In Monte Carlo, the configuration space available to the collection
16 < of particles is sampled stochastichally, or randomly. Each configuration is chosen with
17 < a given probability based on the Maxwell Boltzman distribution. These types of simulations
18 < are best used to probe properties of a system that are only dependent only on the state of
19 < the system. Structural information about a system is most readily obtained through
20 < these types of methods.
30 > \section{\label{introSec:statThermo}Statistical Mechanics}
31  
32 < Although the two techniques employed seem dissimilar, they are both linked by the overarching
33 < principles of Statistical Thermodynamics. Statistical Thermodynamics governs the behavior of
34 < both classes of simulations and dictates what each method can and cannot do. When investigating
35 < a system, one most first analyze what thermodynamic properties of the system are being probed,
36 < then chose which method best suits that objective.
32 > The following section serves as a brief introduction to some of the
33 > Statistical Mechanics concepts present in this dissertation.  What
34 > follows is a brief derivation of Boltzmann weighted statistics, and an
35 > explanation of how one can use the information to calculate an
36 > observable for a system.  This section then concludes with a brief
37 > discussion of the ergodic hypothesis and its relevance to this
38 > research.
39  
40 < \subsection{\label{introSec:statThermo}Statistical Thermodynamics}
40 > \subsection{\label{introSec:boltzman}Boltzmann Weighted Statistics}
41  
42 < ergodic hypothesis
42 > Consider a system, $\gamma$, with some total energy, $E_{\gamma}$.
43 > Let $\Omega(E_{\gamma})$ represent the number of degenerate ways
44 > $\boldsymbol{\Gamma}$, the collection of positions and conjugate
45 > momenta of system $\gamma$, can be configured to give
46 > $E_{\gamma}$. Further, if $\gamma$ is in contact with a bath system
47 > where energy is exchanged between the two systems, $\Omega(E)$, where
48 > $E$ is the total energy of both systems, can be represented as
49 > \begin{equation}
50 > \Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma})
51 > \label{introEq:SM1}
52 > \end{equation}
53 > Or additively as
54 > \begin{equation}
55 > \ln \Omega(E) = \ln \Omega(E_{\gamma}) + \ln \Omega(E - E_{\gamma})
56 > \label{introEq:SM2}
57 > \end{equation}
58  
59 < enesemble averages
59 > The equilibrium partitioning of energy maximizes the number of
60 > degenerate configurations in $E$.\cite{Frenkel1996}
61 > This gives
62 > \begin{equation}
63 > \frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 =
64 >        \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
65 >         +
66 >        \frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
67 >        \frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}
68 > \label{introEq:SM3}
69 > \end{equation}
70 > Where $E_{\text{bath}}$ is $E-E_{\gamma}$, and
71 > $\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}$ is
72 > $-1$. Eq.~\ref{introEq:SM3} becomes
73 > \begin{equation}
74 > \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} =
75 > \frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
76 > \label{introEq:SM4}
77 > \end{equation}
78  
79 < \subsection{\label{introSec:monteCarlo}Monte Carlo Simulations}
79 > At this point, one can draw a relationship between the maximization of
80 > degeneracy in Eq.~\ref{introEq:SM3} and the second law of
81 > thermodynamics.  Namely, that for a closed system, entropy will
82 > increase for an irreversible process.\cite{chandler:1987} Here the
83 > process is the partitioning of energy between the two systems.  This
84 > allows the following definition of entropy:
85 > \begin{equation}
86 > S = k_B \ln \Omega(E)
87 > \label{introEq:SM5}
88 > \end{equation}
89 > Where $k_B$ is the Boltzmann constant.  Having defined entropy, one can
90 > also define the temperature of the system using the relation
91 > \begin{equation}
92 > \frac{1}{T} = \biggl ( \frac{\partial S}{\partial E} \biggr )_{N,V}
93 > \label{introEq:SM6}
94 > \end{equation}
95 > The temperature in the system $\gamma$ is then
96 > \begin{equation}
97 > \beta( E_{\gamma} ) = \frac{1}{k_B T} =
98 >        \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
99 > \label{introEq:SM7}
100 > \end{equation}
101 > Applying this to Eq.~\ref{introEq:SM4} gives the following
102 > \begin{equation}
103 > \beta( E_{\gamma} ) = \beta( E_{\text{bath}} )
104 > \label{introEq:SM8}
105 > \end{equation}
106 > This shows that the partitioning of energy between the two systems is
107 > actually a process of thermal equilibration.\cite{Frenkel1996}
108  
109 < Stochastic sampling
109 > An application of these results is to formulate the form of an
110 > expectation value of an observable, $A$, in the canonical ensemble. In
111 > the canonical ensemble, the number of particles, $N$, the volume, $V$,
112 > and the temperature, $T$, are all held constant while the energy, $E$,
113 > is allowed to fluctuate. Returning to the previous example, the bath
114 > system is now an infinitely large thermal bath, whose exchange of
115 > energy with the system $\gamma$ holds the temperature constant.  The
116 > partitioning of energy in the bath system is then related to the total
117 > energy of both systems and the fluctuations in $E_{\gamma}$:
118 > \begin{equation}
119 > \Omega( E_{\text{bath}} ) = \Omega( E - E_{\gamma} )
120 > \label{introEq:SM9}
121 > \end{equation}
122 > As for the expectation value, it can be defined
123 > \begin{equation}
124 > \langle A \rangle =
125 >        \int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
126 >        P_{\gamma} A(\boldsymbol{\Gamma})
127 > \label{introEq:SM10}
128 > \end{equation}
129 > Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes
130 > an integration over all accessible phase space, $P_{\gamma}$ is the
131 > probability of being in a given phase state and
132 > $A(\boldsymbol{\Gamma})$ is some observable that is a function of the
133 > phase state.
134  
135 < detatiled balance
135 > Because entropy seeks to maximize the number of degenerate states at a
136 > given energy, the probability of being in a particular state in
137 > $\gamma$ will be directly proportional to the number of allowable
138 > states the coupled system is able to assume. Namely,
139 > \begin{equation}
140 > P_{\gamma} \propto \Omega( E_{\text{bath}} ) =
141 >        e^{\ln \Omega( E - E_{\gamma})}
142 > \label{introEq:SM11}
143 > \end{equation}
144 > With $E_{\gamma} \ll E$, $\ln \Omega$ can be expanded in a Taylor series:
145 > \begin{equation}
146 > \ln \Omega ( E - E_{\gamma}) =
147 >        \ln \Omega (E) -
148 >        E_{\gamma}  \frac{\partial \ln \Omega }{\partial E}
149 >        + \ldots
150 > \label{introEq:SM12}
151 > \end{equation}
152 > Higher order terms are omitted because the thermal bath is effectively
153 > infinite. Further, using Eq.~\ref{introEq:SM7}, Eq.~\ref{introEq:SM11} can
154 > be rewritten:
155 > \begin{equation}
156 > P_{\gamma} \propto e^{-\beta E_{\gamma}}
157 > \label{introEq:SM13}
158 > \end{equation}
159 > Where $\ln \Omega(E)$ has been factored out of the proportionality as a
160 > constant.  Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
161 > d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
162 > \begin{equation}
163 > P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
164 > {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
165 > \label{introEq:SM14}
166 > \end{equation}
167 > This result is the standard Boltzmann statistical distribution.
168 > Applying it to Eq.~\ref{introEq:SM10} one can obtain the following relation for ensemble averages:
169 > \begin{equation}
170 > \langle A \rangle =
171 > \frac{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
172 >        A( \boldsymbol{\Gamma} ) e^{-\beta E_{\gamma}}}
173 > {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
174 > \label{introEq:SM15}
175 > \end{equation}
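
As a concrete, purely illustrative example of Eq.~\ref{introEq:SM15}, the following Python sketch evaluates a Boltzmann-weighted average over a small, hypothetical set of discrete state energies; the three-state system and the observable values are assumptions made only for this illustration.

\begin{verbatim}
import numpy as np

def boltzmann_average(A_values, energies, beta):
    # <A> = sum_i A_i exp(-beta E_i) / sum_i exp(-beta E_i)
    weights = np.exp(-beta * (energies - energies.min()))  # shifted for stability
    return np.sum(A_values * weights) / np.sum(weights)

# hypothetical three-state system (arbitrary energy units)
energies = np.array([0.0, 1.0, 2.0])
A_values = np.array([1.0, 2.0, 3.0])
print(boltzmann_average(A_values, energies, beta=1.0))
\end{verbatim}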
176  
177 < metropilis monte carlo
177 > \subsection{\label{introSec:ergodic}The Ergodic Hypothesis}
178  
179 < \subsection{\label{introSec:md}Molecular Dynamics Simulations}
179 > One last important consideration is that of ergodicity. Ergodicity is
180 > the assumption that given an infinite amount of time, a system will
181 > visit every available point in phase space.\cite{Frenkel1996} For most
182 > systems, this is a valid assumption, except in cases where the system
183 > may be trapped in a local minimum of the energy landscape (\emph{e.g.}~glasses). When valid,
184 > ergodicity allows the unification of a time averaged observation and
185 > an ensemble averaged one. If an observation is averaged over a
186 > sufficiently long time, the system is assumed to visit all
187 > appropriately available points in phase space, giving a properly
188 > weighted statistical average. This allows the researcher freedom of
189 > choice when deciding how best to measure a given observable.  When an
190 > ensemble averaged approach seems most logical, the Monte Carlo
191 > techniques described in Sec.~\ref{introSec:monteCarlo} can be utilized.
192 > Conversely, if a problem lends itself to a time averaging approach,
193 > the Molecular Dynamics techniques in Sec.~\ref{introSec:MD} can be
194 > employed.
195  
196 < time averages
196 > \section{\label{introSec:monteCarlo}Monte Carlo Simulations}
197  
198 < time integrating schemes
198 > The Monte Carlo method was developed by Metropolis and Ulam for their
199 > work on fissionable material.\cite{metropolis:1949} The method is so
200 > named because it relies heavily on random numbers in its
201 > solution.\cite{allen87:csl} The Monte Carlo method allows for the
202 > solution of integrals through the stochastic sampling of the values
203 > within the integral. In the simplest case, the evaluation of an
204 > integral would follow a brute force method of
205 > sampling.\cite{Frenkel1996} Consider the following single dimensional
206 > integral:
207 > \begin{equation}
208 > I = \int^b_a f(x) dx
209 > \label{eq:MCex1}
210 > \end{equation}
211 > The equation can be recast as:
212 > \begin{equation}
213 > I = (b-a)\langle f(x) \rangle
214 > \label{eq:MCex2}
215 > \end{equation}
216 > Where $\langle f(x) \rangle$ is the unweighted average over the interval
217 > $[a,b]$. The integral could then be evaluated by
218 > randomly choosing points along the interval $[a,b]$ and calculating
219 > the value of $f(x)$ at each point. The accumulated average would then
220 > approach $I$ in the limit where the number of trials is infinitely
221 > large.
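
The brute force sampling described above can be sketched in a few lines of Python; the integrand and interval below are hypothetical choices used only to illustrate Eq.~\ref{eq:MCex2}.

\begin{verbatim}
import random, math

def mc_integrate(f, a, b, n_trials=100000):
    # brute force estimate: I ~ (b - a) * <f(x)>, x uniform on [a, b]
    total = 0.0
    for _ in range(n_trials):
        total += f(random.uniform(a, b))
    return (b - a) * total / n_trials

# hypothetical test: the integral of sin(x) on [0, pi] is 2
print(mc_integrate(math.sin, 0.0, math.pi))
\end{verbatim}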
222  
223 < time reversible
223 > However, in Statistical Mechanics, one is typically interested in
224 > integrals of the form:
225 > \begin{equation}
226 > \langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
227 >        e^{-\beta V(\mathbf{r}^N)}}%
228 >        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
229 > \label{eq:mcEnsAvg}
230 > \end{equation}
231 > Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
232 > and $A$ is some observable that is only dependent on position. This is
233 > the ensemble average of $A$ as presented in
234 > Sec.~\ref{introSec:statThermo}, except here $A$ is independent of
235 > momentum. Therefore the momentum contribution of the integral can be
236 > factored out, leaving the configurational integral. Application of the
237 > brute force method to this system would yield highly inefficient
238 > results. Due to the Boltzmann weighting of this integral, most random
239 > configurations will have a near zero contribution to the ensemble
240 > average. This is where importance sampling comes into
241 > play.\cite{allen87:csl}
242  
243 < symplectic methods
243 > Importance Sampling is a method where one selects a distribution from
244 > which the random configurations are chosen in order to more
245 > efficiently calculate the integral.\cite{Frenkel1996} Consider again
246 > Eq.~\ref{eq:MCex1} rewritten to be:
247 > \begin{equation}
248 > I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
249 > \label{introEq:Importance1}
250 > \end{equation}
251 > Where $\rho(x)$ is an arbitrary probability distribution in $x$.  If
252 > one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
253 > from the distribution $\rho(x)$ on the interval $[a,b]$, then
254 > Eq.~\ref{introEq:Importance1} becomes
255 > \begin{equation}
256 > I= \biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}}
257 > \label{introEq:Importance2}
258 > \end{equation}
259 > Looking at Eq.~\ref{eq:mcEnsAvg} and realizing that
260 > \begin{equation}
261 > \rho_{kT}(\mathbf{r}^N) =
262 >        \frac{e^{-\beta V(\mathbf{r}^N)}}
263 >        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
264 > \label{introEq:MCboltzman}
265 > \end{equation}
266 > where $\rho_{kT}$ is the Boltzmann distribution, the ensemble average
267 > can be rewritten as
268 > \begin{equation}
269 > \langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
270 >        \rho_{kT}(\mathbf{r}^N)
271 > \label{introEq:Importance3}
272 > \end{equation}
273 > Applying Eq.~\ref{introEq:Importance1} one obtains
274 > \begin{equation}
275 > \langle A \rangle = \biggl \langle
276 >        \frac{ A \rho_{kT}(\mathbf{r}^N) }
277 >        {\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}
278 > \label{introEq:Importance4}
279 > \end{equation}
280 > By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$
281 > Eq.~\ref{introEq:Importance4} becomes
282 > \begin{equation}
283 > \langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{\text{trials}}
284 > \label{introEq:Importance5}
285 > \end{equation}
286 > The difficulty is selecting points $\mathbf{r}^N$ such that they are
287 > sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$.  A solution
288 > was proposed by Metropolis \emph{et al.}\cite{metropolis:1953}, which involved
289 > the use of a Markov chain whose limiting distribution was
290 > $\rho_{kT}(\mathbf{r}^N)$.
291  
292 < Extended ensembles (NVT NPT)
292 > \subsection{\label{introSec:markovChains}Markov Chains}
293  
294 < constrained dynamics
294 > A Markov chain is a chain of states satisfying the following
295 > conditions:\cite{leach01:mm}
296 > \begin{enumerate}
297 > \item The outcome of each trial depends only on the outcome of the previous trial.
298 > \item Each trial belongs to a finite set of outcomes called the state space.
299 > \end{enumerate}
300 > Given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$, let
301 > $\rho_m$ and $\rho_n$ be the probabilities of being in state
302 > $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$, respectively.  Further, the two
303 > states are linked by a transition probability, $\pi_{mn}$, which is the
304 > probability of going from state $m$ to state $n$.
305  
306 < \section{\label{introSec:chapterLayout}Chapter Layout}
306 > \newcommand{\accMe}{\operatorname{acc}}
307  
308 < \subsection{\label{introSec:RSA}Random Sequential Adsorption}
308 > The transition probability is given by the following:
309 > \begin{equation}
310 > \pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n)
311 > \label{introEq:MCpi}
312 > \end{equation}
313 > Where $\alpha_{mn}$ is the probability of attempting the move $m
314 > \rightarrow n$, and $\accMe$ is the probability of accepting the move
315 > $m \rightarrow n$.  Defining a probability vector,
316 > $\boldsymbol{\rho}$, such that
317 > \begin{equation}
318 > \boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
319 >        \ldots \rho_N \}
320 > \label{introEq:MCrhoVector}
321 > \end{equation}
322 > a transition matrix $\boldsymbol{\Pi}$ can be defined,
323 > whose elements are $\pi_{mn}$, for each given transition.  The
324 > limiting distribution of the Markov chain can then be found by
325 > applying the transition matrix an infinite number of times to the
326 > distribution vector.
327 > \begin{equation}
328 > \boldsymbol{\rho}_{\text{limit}} =
329 >        \lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
330 >        \boldsymbol{\Pi}^N
331 > \label{introEq:MCmarkovLimit}
332 > \end{equation}
333 > The limiting distribution of the chain is independent of the starting
334 > distribution, and successive applications of the transition matrix
335 > will only yield the limiting distribution again.
336 > \begin{equation}
337 > \boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{initial}}
338 >        \boldsymbol{\Pi}
339 > \label{introEq:MCmarkovEquil}
340 > \end{equation}
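
The convergence to the limiting distribution in Eq.~\ref{introEq:MCmarkovLimit} and Eq.~\ref{introEq:MCmarkovEquil} can be demonstrated numerically. The following sketch uses a hypothetical three-state, row-stochastic transition matrix (not one drawn from this research) and applies it repeatedly to an arbitrary initial distribution.

\begin{verbatim}
import numpy as np

# hypothetical transition matrix: Pi[m, n] = pi_{mn}, rows sum to one
Pi = np.array([[0.5, 0.3, 0.2],
               [0.2, 0.6, 0.2],
               [0.1, 0.3, 0.6]])

rho = np.array([1.0, 0.0, 0.0])    # arbitrary initial distribution
for _ in range(1000):
    rho_new = rho @ Pi             # one application of the transition matrix
    if np.allclose(rho_new, rho):  # limiting distribution reached
        break
    rho = rho_new

print(rho)        # limiting distribution
print(rho @ Pi)   # unchanged by a further application
\end{verbatim}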
341  
342 < \subsection{\label{introSec:OOPSE}The OOPSE Simulation Package}
342 > \subsection{\label{introSec:metropolisMethod}The Metropolis Method}
343  
344 < \subsection{\label{introSec:bilayers}A Mesoscale Model for Phospholipid Bilayers}
344 > In the Metropolis method\cite{metropolis:1953}
345 > Eq.~\ref{introEq:MCmarkovEquil} is solved such that
346 > $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
347 > of states.  The method accomplishes this by imposing the strong
348 > condition of microscopic reversibility on the equilibrium
349 > distribution.  This means that at equilibrium the probability of going
350 > from $m$ to $n$ is the same as that of going from $n$ to $m$.
351 > \begin{equation}
352 > \rho_m\pi_{mn} = \rho_n\pi_{nm}
353 > \label{introEq:MCmicroReverse}
354 > \end{equation}
355 > Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
356 > the Metropolis method.  Using Eq.~\ref{introEq:MCpi},
357 > Eq.~\ref{introEq:MCmicroReverse} becomes
358 > \begin{equation}
359 > \frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
360 >        \frac{\rho_n}{\rho_m}
361 > \label{introEq:MCmicro2}
362 > \end{equation}
363 > For a Boltzmann limiting distribution,
364 > \begin{equation}
365 > \frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
366 >        = e^{-\beta \Delta \mathcal{U}}
367 > \label{introEq:MCmicro3}
368 > \end{equation}
369 > This allows for the following set of acceptance rules to be defined:
370 > \begin{equation}
371 > \accMe( m \rightarrow n ) =
372 >        \begin{cases}
373 >        1& \text{if $\Delta \mathcal{U} \leq 0$,} \\
374 >        e^{-\beta \Delta \mathcal{U}}& \text{if $\Delta \mathcal{U} > 0$.}
375 >        \end{cases}
376 > \label{introEq:accRules}
377 > \end{equation}
378 >
379 > Using the acceptance criteria from Eq.~\ref{introEq:accRules} the
380 > Metropolis method proceeds as follows
381 > \begin{enumerate}
382 > \item Generate an initial configuration $\mathbf{r}^N$ which has some finite probability in $\rho_{kT}$.
383 > \item Modify $\mathbf{r}^N$ to generate configuration $\mathbf{r^{\prime}}^N$.
384 > \item If the new configuration lowers the energy of the system, accept the move with unit probability ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$).  Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
385 > \item Accumulate the average for the configurational observable of interest.
386 > \item Repeat from step 2 until the average converges.
387 > \end{enumerate}
388 > One important note is that the average is accumulated whether the move
389 > is accepted or not; this ensures proper weighting of the average.
390 > Using Eq.~\ref{introEq:Importance4} it becomes clear that the
391 > accumulated averages are the ensemble averages, as this method ensures
392 > that the limiting distribution is the Boltzmann distribution.
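
A minimal sketch of this procedure is given below for a hypothetical single particle in a one-dimensional harmonic potential; the potential, step size, and observable ($x^2$) are assumptions made only for illustration and are not the systems studied in this research.

\begin{verbatim}
import math, random

def metropolis(potential, n_steps=100000, beta=1.0, max_move=0.5):
    x = 0.0                      # initial configuration with finite probability
    u = potential(x)
    accum = 0.0
    for _ in range(n_steps):
        x_new = x + random.uniform(-max_move, max_move)   # trial move
        du = potential(x_new) - u
        # accept with unit probability if du <= 0, else with exp(-beta du)
        if du <= 0.0 or random.random() < math.exp(-beta * du):
            x, u = x_new, u + du
        accum += x * x           # accumulate whether the move is accepted or not
    return accum / n_steps       # estimate of <x^2>

print(metropolis(lambda x: 0.5 * x * x))   # hypothetical harmonic potential
\end{verbatim}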
393 >
394 > \section{\label{introSec:MD}Molecular Dynamics Simulations}
395 >
396 > The main simulation tool used in this research is Molecular Dynamics.
397 > In Molecular Dynamics, the equations of motion for a system are
398 > integrated in order to obtain information about both the positions and
399 > momenta of a system, allowing the calculation of not only
400 > configurational observables, but momentum-dependent ones as well:
401 > diffusion constants, velocity autocorrelations, folding/unfolding
402 > events, etc.  Due to the principle of ergodicity,
403 > Sec.~\ref{introSec:ergodic}, the averages of these observables over the
404 > time period of the simulation are taken to be the ensemble averages
405 > for the system.
406 >
407 > The choice of when to use molecular dynamics over Monte Carlo
408 > techniques is normally decided by the observables in which the
409 > researcher is interested.  If the observables depend on momenta in
410 > any fashion, then the only choice is molecular dynamics in some form.
411 > However, when the observable is dependent only on the configuration,
412 > then most of the time Monte Carlo techniques will be more efficient.
413 >
414 > The research in the second half of this dissertation is focused
415 > on the dynamic properties of phospholipid bilayers,
416 > making molecular dynamics key in the simulation of those properties.
417 >
418 > \subsection{\label{introSec:mdAlgorithm}The Molecular Dynamics Algorithm}
419 >
420 > To illustrate how the molecular dynamics technique is applied, the
421 > following sections will describe the sequence involved in a
422 > simulation.  Sec.~\ref{introSec:mdInit} deals with the initialization
423 > of a simulation.  Sec.~\ref{introSec:mdForce} discusses issues
424 > involved with the calculation of the forces.
425 > Sec.~\ref{introSec:mdIntegrate} concludes the algorithm discussion
426 > with the integration of the equations of motion.\cite{Frenkel1996}
427 >
428 > \subsection{\label{introSec:mdInit}Initialization}
429 >
430 > When selecting the initial configuration for the simulation it is
431 > important to consider what dynamics one is hoping to observe.
432 > Ch.~\ref{chapt:lipid} deals with the formation and equilibrium dynamics of
433 > phospholipid membranes.  Therefore in these simulations initial
434 > positions were selected that in some cases dispersed the lipids in
435 > water, and in other cases structured the lipids into preformed
436 > bilayers.  Important considerations at this stage of the simulation are:
437 > \begin{itemize}
438 > \item There are no major overlaps of molecular or atomic orbitals.
439 > \item Velocities are chosen so as not to give the system a non-zero total momentum or angular momentum.
440 > \item It is also sometimes desirable to select the velocities to correctly sample the target temperature.
441 > \end{itemize}
442 >
443 > The first point is important due to the amount of potential energy
444 > generated by having two particles too close together.  If overlap
445 > occurs, the first evaluation of forces will return numbers so large as
446 > to render the numerical integration of the motion meaningless.  The
447 > second consideration keeps the system from drifting or rotating as a
448 > whole.  This arises from the fact that most simulations are of systems
449 > in equilibrium in the absence of outside forces.  Therefore any net
450 > movement would be unphysical and an artifact of the simulation method
451 > used.  The final point addresses the selection of the magnitude of the
452 > initial velocities.  For many simulations it is convenient to use
453 > this opportunity to scale the amount of kinetic energy to reflect the
454 > desired thermal distribution of the system.  However, it must be noted
455 > that most systems will require further velocity rescaling after the
456 > first few initial simulation steps due to either loss or gain of
457 > kinetic energy from energy stored in potential degrees of freedom.
458 >
459 > \subsection{\label{introSec:mdForce}Force Evaluation}
460 >
461 > The evaluation of forces is the most computationally expensive portion
462 > of a given molecular dynamics simulation.  This is due entirely to the
463 > evaluation of long-range forces in a simulation, typically pair-wise.
464 > These forces are most commonly the van der Waals force, and sometimes
465 > Coulombic forces as well.  For a pair-wise force, there are $N(N-1)/2$
466 > pairs to be evaluated, where $N$ is the number of particles in the
467 > system.  This leads to the calculations scaling as $N^2$, making large
468 > simulations prohibitive in the absence of any computation saving
469 > techniques.
470 >
471 > Another consideration one must resolve is that in a given simulation
472 > a disproportionate number of the particles will feel the effects of
473 > the surface.\cite{allen87:csl} For a cubic system of 1000 particles
474 > arranged in a $10 \times 10 \times 10$ cube, 488 particles will be
475 > exposed to the surface.  Unless one is simulating an isolated particle
476 > group in a vacuum, the behavior of the system will be far from the
477 > desired bulk characteristics.  To offset this, simulations employ
478 > periodic boundary conditions.\cite{born:1912}
479 >
480 > The technique involves the use of an algorithm that replicates the
481 > simulation box on an infinite lattice in Cartesian space.  Any given
482 > particle leaving the simulation box on one side will have an image of
483 > itself enter on the opposite side (see Fig.~\ref{introFig:pbc}).  In
484 > addition, this ensures that any two particles have an image, real or
485 > periodic, within $\text{box}/2$ of each other.  A discussion of the
486 > method used to calculate the periodic image can be found in
487 > Sec.~\ref{oopseSec:pbc}.
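
A common way to realize this wrapping in code is the minimum image convention. The short sketch below assumes a hypothetical orthorhombic box; it is not the routine described in Sec.~\ref{oopseSec:pbc}.

\begin{verbatim}
import numpy as np

def minimum_image(r_ij, box):
    # wrap a displacement into the primary image (components within half a box length)
    return r_ij - box * np.round(r_ij / box)

box = np.array([10.0, 10.0, 10.0])    # hypothetical box lengths
r_i = np.array([0.5, 9.5, 5.0])
r_j = np.array([9.5, 0.5, 5.0])
print(minimum_image(r_i - r_j, box))  # -> [ 1. -1.  0.]
\end{verbatim}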
488 >
489 > \begin{figure}
490 > \centering
491 > \includegraphics[width=\linewidth]{pbcFig.eps}
492 > \caption[An illustration of periodic boundary conditions]{A 2-D illustration of periodic boundary conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
493 > \label{introFig:pbc}
494 > \end{figure}
495 >
496 > Returning to the topic of the computational scale of the force
497 > evaluation, the use of periodic boundary conditions requires that a
498 > cutoff radius be employed.  Using a cutoff radius improves the
499 > efficiency of the force evaluation, as particles farther than a
500 > predetermined distance, $r_{\text{cut}}$, are not included in the
501 > calculation.\cite{Frenkel1996} In a simulation with periodic images,
502 > $r_{\text{cut}}$ has a maximum value of $\text{box}/2$.
503 > Fig.~\ref{introFig:rMax} illustrates how when using an
504 > $r_{\text{cut}}$ larger than this value, or in the extreme limit of no
505 > $r_{\text{cut}}$ at all, the corners of the simulation box are
506 > unequally weighted due to the lack of particle images in the $x$, $y$,
507 > or $z$ directions past a distance of $\text{box} / 2$.
508 >
509 > \begin{figure}
510 > \centering
511 > \includegraphics[width=\linewidth]{rCutMaxFig.eps}
512 > \caption[An explanation of $r_{\text{cut}}$]{The yellow atom has all other images wrapped to itself as the center. If $r_{\text{cut}}=\text{box}/2$, then the distribution is uniform (blue atoms). However, when $r_{\text{cut}}>\text{box}/2$ the corners are disproportionately weighted (green atoms) vs the axial directions (shaded regions).}
513 > \label{introFig:rMax}
514 > \end{figure}
515 >
516 > With the use of an $r_{\text{cut}}$, however, comes a discontinuity in
517 > the potential energy curve (Fig.~\ref{introFig:shiftPot}). To fix this
518 > discontinuity, one calculates the potential energy at
519 > $r_{\text{cut}}$ and subtracts that value from the potential, causing
520 > the function to go to zero at the cutoff radius.  This
521 > shifted potential improves conservation of energy when integrating the
522 > Newtonian equations of motion.
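
As a hypothetical illustration of this shifting, a cut-and-shifted Lennard-Jones potential can be written as follows; the parameters and cutoff are arbitrary, and this is a sketch rather than the force routine used in this research.

\begin{verbatim}
def lj(r, epsilon=1.0, sigma=1.0):
    # Lennard-Jones pair potential
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lj_shifted(r, r_cut=2.5, epsilon=1.0, sigma=1.0):
    # subtract the value at r_cut so the potential vanishes at the cutoff
    if r >= r_cut:
        return 0.0
    return lj(r, epsilon, sigma) - lj(r_cut, epsilon, sigma)

print(lj_shifted(1.0), lj_shifted(2.5), lj_shifted(3.0))
\end{verbatim}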
523 >
524 > \begin{figure}
525 > \centering
526 > \includegraphics[width=\linewidth]{shiftedPot.eps}
527 > \caption[Shifting the Lennard-Jones Potential]{The Lennard-Jones potential (blue line) is shifted (red line) to remove the discontinuity at $r_{\text{cut}}$.}
528 > \label{introFig:shiftPot}
529 > \end{figure}
530 >
531 > The second main simplification used in this research is the Verlet
532 > neighbor list. \cite{allen87:csl} In the Verlet method, one generates
533 > a list of all neighbor atoms, $j$, surrounding atom $i$ within some
534 > cutoff $r_{\text{list}}$, where $r_{\text{list}}>r_{\text{cut}}$.
535 > This list is created the first time forces are evaluated, then on
536 > subsequent force evaluations, pair interactions are evaluated only
537 > for pairs in the neighbor lists.  The lists are updated if any given particle
538 > in the system moves farther than $r_{\text{list}}-r_{\text{cut}}$,
539 > giving rise to the possibility that a particle has left or joined a
540 > neighbor list.
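
A simple, hypothetical construction of such a list is sketched below. It is an $O(N^2)$ build (production codes typically combine the Verlet list with cell lists, which are omitted here), and the box and particle positions are arbitrary.

\begin{verbatim}
import numpy as np

def build_neighbor_list(positions, box, r_list):
    # for each atom i, store all atoms j > i within r_list (minimum image)
    n = len(positions)
    neighbors = [[] for _ in range(n)]
    for i in range(n - 1):
        for j in range(i + 1, n):
            d = positions[i] - positions[j]
            d -= box * np.round(d / box)        # minimum image convention
            if np.dot(d, d) < r_list * r_list:
                neighbors[i].append(j)
    return neighbors

# hypothetical example: 100 random particles in a 10 x 10 x 10 box
box = np.array([10.0, 10.0, 10.0])
pos = np.random.uniform(0.0, 10.0, size=(100, 3))
nlist = build_neighbor_list(pos, box, r_list=3.0)
\end{verbatim}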
541 >
542 > \subsection{\label{introSec:mdIntegrate}Integration of the Equations of Motion}
543 >
544 > A starting point for the discussion of molecular dynamics integrators
545 > is the Verlet algorithm.\cite{Frenkel1996} It begins with a Taylor
546 > expansion of position in time:
547 > \begin{equation}
548 > q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 +
549 >        \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
550 >        \mathcal{O}(\Delta t^4)
551 > \label{introEq:verletForward}
552 > \end{equation}
553 > As well as,
554 > \begin{equation}
555 > q(t-\Delta t)= q(t) - v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 -
556 >        \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
557 >        \mathcal{O}(\Delta t^4)
558 > \label{introEq:verletBack}
559 > \end{equation}
560 > Where $m$ is the mass of the particle, $q(t)$ is the position at time
561 > $t$, $v(t)$ the velocity, and $F(t)$ the force acting on the
562 > particle. Adding together Eq.~\ref{introEq:verletForward} and
563 > Eq.~\ref{introEq:verletBack} results in,
564 > \begin{equation}
565 > q(t+\Delta t)+q(t-\Delta t) =
566 >        2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
567 > \label{introEq:verletSum}
568 > \end{equation}
569 > Or equivalently,
570 > \begin{equation}
571 > q(t+\Delta t) \approx
572 >        2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2
573 > \label{introEq:verletFinal}
574 > \end{equation}
575 > This estimate of the new positions contains an error on the
576 > order of $\Delta t^4$.
577 >
578 > In practice, however, the simulations in this research were integrated
579 > with a velocity reformulation of the Verlet method.\cite{allen87:csl}
580 > \begin{align}
581 > q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 %
582 > \label{introEq:MDvelVerletPos} \\%
583 > %
584 > v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] %
585 > \label{introEq:MDvelVerletVel}
586 > \end{align}
587 > The original Verlet algorithm can be regained by substituting the
588 > velocity back into Eq.~\ref{introEq:MDvelVerletPos}.  The Verlet
589 > formulations are chosen in this research because the algorithms have
590 > very little long term drift in energy conservation.  Energy
591 > conservation in a molecular dynamics simulation is of extreme
592 > importance, as it is a measure of how closely one is following the
593 > ``true'' trajectory with the finite integration scheme.  An exact
594 > solution to the integration will conserve area in phase space, as well
595 > as be reversible in time, that is, the trajectory integrated forward
596 > or backwards will exactly match itself.  Having a finite algorithm
597 > that both conserves area in phase space and is time reversible
598 > therefore increases, but does not guarantee, the ``correctness'' of the
599 > integrated trajectory.
600 >
601 > It can be shown,\cite{Frenkel1996} that although the Verlet algorithm
602 > does not rigorously preserve the actual Hamiltonian, it does preserve
603 > a pseudo-Hamiltonian which shadows the real one in phase space.  This
604 > pseudo-Hamiltonian is provably area-conserving as well as time
605 > reversible.  The fact that it shadows the true Hamiltonian in phase
606 > space is acceptable in actual simulations as one is interested in the
607 > ensemble average of the observable being measured.  From the ergodic
608 > hypothesis (Sec.~\ref{introSec:ergodic}), it is known that the time
609 > average will match the ensemble average; therefore two similar
610 > trajectories in phase space should give matching statistical averages.
611 >
612 > \subsection{\label{introSec:MDfurther}Further Considerations}
613 >
614 > In the simulations presented in this research, a few additional
615 > parameters are needed to describe the motions.  The simulations
616 > involving water and phospholipids in Ch.~\ref{chapt:lipid} are
617 > required to integrate the equations of motion for dipoles on atoms.
618 > This requires an additional three parameters to be specified for each
619 > dipole atom: $\phi$, $\theta$, and $\psi$.  These three angles are
620 > taken to be the Euler angles, where $\phi$ is a rotation about the
621 > $z$-axis, $\theta$ is a rotation about the new $x$-axis, and
622 > $\psi$ is a final rotation about the new $z$-axis (see
623 > Fig.~\ref{introFig:eulerAngles}).  This sequence of rotations can be
624 > accumulated into a single $3 \times 3$ matrix, $\mathbf{A}$,
625 > defined as follows:
626 > \begin{equation}
627 > \mathbf{A} =
628 > \begin{bmatrix}
629 >        \cos\phi\cos\psi-\sin\phi\cos\theta\sin\psi &%
630 >        \sin\phi\cos\psi+\cos\phi\cos\theta\sin\psi &%
631 >        \sin\theta\sin\psi \\%
632 >        %
633 >        -\cos\phi\sin\psi-\sin\phi\cos\theta\cos\psi &%
634 >        -\sin\phi\sin\psi+\cos\phi\cos\theta\cos\psi &%
635 >        \sin\theta\cos\psi \\%
636 >        %
637 >        \sin\phi\sin\theta &%
638 >        -\cos\phi\sin\theta &%
639 >        \cos\theta
640 > \end{bmatrix}
641 > \label{introEq:EulerRotMat}
642 > \end{equation}
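
A direct transcription of Eq.~\ref{introEq:EulerRotMat} into code is straightforward and serves as a useful check of the matrix elements; the sketch below is only that check (with arbitrary angles), not the propagation scheme used in the simulations.

\begin{verbatim}
import numpy as np

def euler_rotation_matrix(phi, theta, psi):
    # rotation matrix A of Eq. introEq:EulerRotMat (angles in radians)
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [ cph*cps - sph*cth*sps,  sph*cps + cph*cth*sps,  sth*sps],
        [-cph*sps - sph*cth*cps, -sph*sps + cph*cth*cps,  sth*cps],
        [ sph*sth,               -cph*sth,                cth    ]])

A = euler_rotation_matrix(0.1, 0.2, 0.3)    # arbitrary test angles
print(np.allclose(A @ A.T, np.eye(3)))      # orthogonality check: True
\end{verbatim}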
643 >
644 > \begin{figure}
645 > \centering
646 > \includegraphics[width=\linewidth]{eulerRotFig.eps}
647 > \caption[Euler rotation of Cartesian coordinates]{The rotation scheme for Euler angles. First is a rotation of $\phi$ about the $z$ axis (blue rotation). Next is a rotation of $\theta$ about the new $x^{\prime}$ axis (green rotation). Lastly is a final rotation of $\psi$ about the new $z^{\prime}$ axis (red rotation).}
648 > \label{introFig:eulerAngles}
649 > \end{figure}
650 >
651 > The equations of motion for Euler angles can be written down
652 > as\cite{allen87:csl}
653 > \begin{align}
654 > \dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} +
655 >        \omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} +
656 >        \omega^s_z
657 > \label{introEq:MDeulerPhi} \\%
658 > %
659 > \dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi
660 > \label{introEq:MDeulerTheta} \\%
661 > %
662 > \dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} -
663 >        \omega^s_y \frac{\cos\phi}{\sin\theta}
664 > \label{introEq:MDeulerPsi}
665 > \end{align}
666 > Where $\omega^s_i$ is the angular velocity in the lab space frame
667 > along Cartesian coordinate $i$.  However, a difficulty arises when
668 > attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
669 > Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
670 > both equations means there is a non-physical instability present when
671 > $\theta$ is 0 or $\pi$. To correct for this, the simulations integrate
672 > the rotation matrix, $\mathbf{A}$, directly, thus avoiding the
673 > instability.  This method was proposed by Dullweber
674 > \emph{et al.}\cite{Dullweber1997}, and is presented in
675 > Sec.~\ref{introSec:MDsymplecticRot}.
676 >
677 > \subsection{\label{introSec:MDliouville}Liouville Propagator}
678 >
679 > Before discussing the integration of the rotation matrix, it is
680 > necessary to understand the construction of a ``good'' integration
681 > scheme.  It has been previously
682 > discussed (Sec.~\ref{introSec:mdIntegrate}) how it is desirable for an
683 > integrator to be symplectic and time reversible.  The following is an
684 > outline of the Trotter factorization of the Liouville Propagator as a
685 > scheme for generating symplectic integrators.\cite{Tuckerman92}
686 >
687 > For a system with $f$ degrees of freedom the Liouville operator can be
688 > defined as,
689 > \begin{equation}
690 > iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} +
691 >        F_j\frac{\partial}{\partial p_j} \biggr ]
692 > \label{introEq:LiouvilleOperator}
693 > \end{equation}
694 > Here, $q_j$ and $p_j$ are the position and conjugate momenta of a
695 > degree of freedom, and $F_j$ is the force on that degree of freedom.
696 > $\Gamma$ is defined as the set of all positions and conjugate momenta,
697 > $\{q_j,p_j\}$, and the propagator, $U(t)$, is defined
698 > \begin {equation}
699 > U(t) = e^{iLt}
700 > \label{introEq:Lpropagator}
701 > \end{equation}
702 > This allows the specification of $\Gamma$ at any time $t$ as
703 > \begin{equation}
704 > \Gamma(t) = U(t)\Gamma(0)
705 > \label{introEq:Lp2}
706 > \end{equation}
707 > It is important to note that $U(t)$ is a unitary operator, meaning
708 > \begin{equation}
709 > U(-t)=U^{-1}(t)
710 > \label{introEq:Lp3}
711 > \end{equation}
712 >
713 > Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
714 > Trotter theorem to yield
715 > \begin{align}
716 > e^{iLt} &= e^{i(L_1 + L_2)t} \notag \\%
717 > %
718 >        &= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P \notag \\%
719 > %
720 >        &= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
721 >        e^{iL_1\frac{\Delta t}{2}} \biggr ]^P +
722 >        \mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ) \label{introEq:Lp4}
723 > \end{align}
724 > Where $\Delta t = t/P$.
725 > With this, a discrete time operator $G(\Delta t)$ can be defined:
726 > \begin{align}
727 > G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
728 >        e^{iL_1\frac{\Delta t}{2}} \notag \\%
729 > %
730 >        &= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\,
731 >        U_1 \biggl ( \frac{\Delta t}{2} \biggr )
732 > \label{introEq:Lp5}
733 > \end{align}
734 > Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
735 > unitary, meaning an integrator based on this factorization will be
736 > reversible in time.
737 >
738 > As an example, consider the following decomposition of $L$:
739 > \begin{align}
740 > iL_1 &= \dot{q}\frac{\partial}{\partial q}%
741 > \label{introEq:Lp6a} \\%
742 > %
743 > iL_2 &= F(q)\frac{\partial}{\partial p}%
744 > \label{introEq:Lp6b}
745 > \end{align}
746 > This leads to propagator $G( \Delta t )$ as,
747 > \begin{equation}
748 > G(\Delta t) =  e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
749 >        e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \,
750 >        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
751 > \label{introEq:Lp7}
752 > \end{equation}
753 > Operating $G(\Delta t)$ on $\Gamma(0)$ and utilizing the operator property
754 > \begin{equation}
755 > e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c)
756 > \label{introEq:Lp8}
757 > \end{equation}
758 > where $c$ is independent of $x$, one obtains the following:
759 > \begin{align}
760 > \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
761 >        \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9a}\\%
762 > %
763 > q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr )%
764 >        \label{introEq:Lp9b}\\%
765 > %
766 > \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
767 >        \frac{\Delta t}{2m}\, F[q(\Delta t)] \label{introEq:Lp9c}
768 > \end{align}
769 > Or written another way,
770 > \begin{align}
771 > q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
772 >        \frac{F[q(0)]}{m}\frac{\Delta t^2}{2} %
773 > \label{introEq:Lp10a} \\%
774 > %
775 > \dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
776 >        \biggl [F[q(0)] + F[q(\Delta t)] \biggr] %
777 > \label{introEq:Lp10b}
778 > \end{align}
779 > This is the velocity Verlet formulation presented in
780 > Sec.~\ref{introSec:mdIntegrate}.  Because this integration scheme is
781 > comprised of unitary propagators, it is symplectic, and therefore area
782 > preserving in phase space.  From the preceding factorization, one can
783 > see that the integration of the equations of motion would follow:
784 > \begin{enumerate}
785 > \item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
786 >
787 > \item Use the half step velocities to move positions one whole step, $\Delta t$.
788 >
789 > \item Evaluate the forces at the new positions, $\mathbf{r}(\Delta t)$, and use the new forces to complete the velocity move.
790 >
791 > \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
792 > \end{enumerate}
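
The four steps above translate directly into code. The following sketch performs velocity Verlet steps for a hypothetical one-dimensional particle with an arbitrary force function; it is an illustration of the factorized propagator, not the integrator implemented in this research.

\begin{verbatim}
def velocity_verlet_step(q, v, force, mass, dt):
    # half-kick, drift, recompute force, half-kick
    v_half = v + 0.5 * dt * force(q) / mass   # velocities at the half step
    q_new = q + dt * v_half                   # positions one whole step
    v_new = v_half + 0.5 * dt * force(q_new) / mass
    return q_new, v_new

# hypothetical harmonic oscillator, F(q) = -q, unit mass
q, v = 1.0, 0.0
for _ in range(1000):
    q, v = velocity_verlet_step(q, v, lambda x: -x, mass=1.0, dt=0.01)
\end{verbatim}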
793 >
794 > \subsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}
795 >
796 > Based on the factorization from the previous section,
797 > Dullweber \emph{et al.}\cite{Dullweber1997} proposed a scheme for the
798 > symplectic propagation of the rotation matrix, $\mathbf{A}$, as an
799 > alternative method for the integration of orientational degrees of
800 > freedom. The method starts with a straightforward splitting of the
801 > Liouville operator:
802 > \begin{align}
803 > iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} +
804 >        \mathbf{\dot{A}}\frac{\partial}{\partial \mathbf{A}}
805 > \label{introEq:SR1a} \\%
806 > %
807 > iL_F &= F(q)\frac{\partial}{\partial p}
808 > \label{introEq:SR1b} \\%
809 > iL_{\tau} &= \tau(\mathbf{A})\frac{\partial}{\partial \pi}
810 > \label{introEq:SR1c}%
811 > \end{align}
812 > Where $\tau(\mathbf{A})$ is the torque of the system
813 > due to the configuration, and $\pi$ is the conjugate
814 > angular momentum of the system. The propagator, $G(\Delta t)$, becomes
815 > \begin{equation}
816 > G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
817 >        e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
818 >        e^{\Delta t\,iL_{\text{pos}}} \,
819 >        e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
820 >        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
821 > \label{introEq:SR2}
822 > \end{equation}
823 > Propagation of the linear and angular momenta follows as in the Verlet
824 > scheme.  The propagation of positions also follows the Verlet scheme
825 > with the addition of a further symplectic splitting of the rotation
826 > matrix propagation, $\mathcal{U}_{\text{rot}}(\Delta t)$, within
827 > $U_{\text{pos}}(\Delta t)$.
828 > \begin{equation}
829 > \mathcal{U}_{\text{rot}}(\Delta t) =
830 >        \mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
831 >        \mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
832 >        \mathcal{U}_z (\Delta t)\,
833 >        \mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
834 >        \mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
835 > \label{introEq:SR3}
836 > \end{equation}
837 > Where $\mathcal{U}_j$ is a unitary rotation of $\mathbf{A}$ and
838 > $\pi$ about each axis $j$.  As all propagations are now
839 > unitary and symplectic, the entire integration scheme is also
840 > symplectic and time reversible.
841 >
842 > \section{\label{introSec:layout}Dissertation Layout}
843 >
844 > This dissertation is divided as follows: Ch.~\ref{chapt:RSA}
845 > presents the random sequential adsorption simulations of related
846 > phthalocyanines on a gold (111) surface. Ch.~\ref{chapt:OOPSE}
847 > is about the writing of the molecular dynamics simulation package
848 > {\sc oopse}. Ch.~\ref{chapt:lipid} regards the simulations of
849 > phospholipid bilayers using a mesoscale model. Lastly,
850 > Ch.~\ref{chapt:conclusion} concludes this dissertation with a
851 > summary of all results. The chapters are arranged in chronological
852 > order, and reflect the progression of techniques I employed during my
853 > research.  
854 >
855 > The chapter concerning random sequential adsorption simulations is a
856 > study in applying Statistical Mechanics simulation techniques in order
857 > to obtain a simple model capable of explaining the results.  My
858 > advisor, Dr. Gezelter, and I were approached by a colleague,
859 > Dr. Lieberman, about possible explanations for the  partial coverage of a
860 > gold surface by a particular compound of hers. We suggested it might
861 > be due to the statistical packing fraction of disks on a plane, and
862 > set about to simulate this system.  As the events in our model were
863 > not dynamic in nature, a Monte Carlo method was employed.  Here, if a
864 > molecule landed on the surface without overlapping another, then its
865 > landing was accepted.  However, if there was overlap, the landing was
866 > rejected and a new random landing location was chosen.  This defined
867 > our acceptance rules and allowed us to construct a Markov chain whose
868 > limiting distribution was the surface coverage in which we were
869 > interested.
870 >
871 > The following chapter, about the simulation package {\sc oopse},
872 > describes in detail the large body of scientific code that had to be
873 > written in order to study phospholipid bilayers.  Although there are
874 > pre-existing molecular dynamics simulation packages available, none
875 > were capable of implementing the models we were developing. {\sc oopse}
876 > is a unique package capable of not only integrating the equations of
877 > motion in Cartesian space, but is also able to integrate the
878 > rotational motion of rigid bodies and dipoles.  Add to this the
879 > ability to perform calculations across parallel processors and a
880 > flexible script syntax for creating systems, and {\sc oopse} becomes a
881 > very powerful scientific instrument for the exploration of our model.
882 >
883 > This brings us to Ch.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
884 > able to parameterize a mesoscale model for phospholipid simulations.
885 > This model retains information about solvent ordering around the
886 > bilayer, as well as information regarding the interaction of the
887 > phospholipid head groups' dipoles with each other and the surrounding
888 > solvent.  These simulations give us insight into the dynamic events
889 > that lead to the formation of phospholipid bilayers, as well as
890 > provide the foundation for future exploration of bilayer phase
891 > behavior with this model.  
892 >
893 > This leads into the last chapter, where I discuss future directions
894 > for both {\sc oopse} and this mesoscale model.  Additionally, I will
895 > give a summary of results for this dissertation.
896 >
897 >
