
Comparing trunk/mattDisertation/Introduction.tex (file contents):
Revision 914 by mmeineke, Fri Jan 9 02:41:45 2004 UTC vs.
Revision 1001 by mmeineke, Sat Jan 31 22:10:21 2004 UTC

# Line 6 | Line 6
6  
7   \section{\label{introSec:theory}Theoretical Background}
8  
9 < The techniques used in the course of this research fall under the two main classes of
10 < molecular simulation: Molecular Dynamics and Monte Carlo. Molecular Dynamic simulations
11 < integrate the equations of motion for a given system of particles, allowing the researher
12 < to gain insight into the time dependent evolution of a system. Diffusion phenomena are
13 < readily studied with this simulation technique, making Molecular Dynamics the main simulation
14 < technique used in this research. Other aspects of the research fall under the Monte Carlo
15 < class of simulations. In Monte Carlo, the configuration space available to the collection
16 < of particles is sampled stochastichally, or randomly. Each configuration is chosen with
17 < a given probability based on the Maxwell Boltzman distribution. These types of simulations
18 < are best used to probe properties of a system that are only dependent only on the state of
19 < the system. Structural information about a system is most readily obtained through
20 < these types of methods.
9 > The techniques used in the course of this research fall under the two
10 > main classes of molecular simulation: Molecular Dynamics and Monte
11 > Carlo. Molecular Dynamics simulations integrate the equations of motion
12 > for a given system of particles, allowing the researcher to gain
13 > insight into the time-dependent evolution of a system. Diffusion
14 > phenomena are readily studied with this simulation technique, making
15 > Molecular Dynamics the main simulation technique used in this
16 > research. Other aspects of the research fall under the Monte Carlo
17 > class of simulations. In Monte Carlo, the configuration space
18 > available to the collection of particles is sampled stochastically,
19 > or randomly. Each configuration is chosen with a given probability
20 > based on the Maxwell-Boltzmann distribution. These types of simulations
21 > are best used to probe properties of a system that are dependent
22 > only on the state of the system. Structural information about a system
23 > is most readily obtained through these types of methods.
24  
25 < Although the two techniques employed seem dissimilar, they are both linked by the overarching
26 < principles of Statistical Thermodynamics. Statistical Thermodynamics governs the behavior of
27 < both classes of simulations and dictates what each method can and cannot do. When investigating
28 < a system, one most first analyze what thermodynamic properties of the system are being probed,
29 < then chose which method best suits that objective.
25 > Although the two techniques employed seem dissimilar, they are both
26 > linked by the overarching principles of Statistical
27 > Thermodynamics. Statistical Thermodynamics governs the behavior of
28 > both classes of simulations and dictates what each method can and
29 > cannot do. When investigating a system, one must first analyze which
30 > thermodynamic properties of the system are being probed, then choose
31 > which method best suits that objective.
32  
33 < \subsection{\label{introSec:statThermo}Statistical Thermodynamics}
33 > \subsection{\label{introSec:statThermo}Statistical Mechanics}
34  
35 < ergodic hypothesis
35 > The following section serves as a brief introduction to some of the
36 > Statistical Mechanics concepts present in this dissertation.  What
37 > follows is a brief derivation of Boltzmann weighted statistics, and an
38 > explanation of how one can use the information to calculate an
39 > observable for a system.  This section then concludes with a brief
40 > discussion of the ergodic hypothesis and its relevance to this
41 > research.
42  
43 < enesemble averages
43 > \subsection{\label{introSec:boltzman}Boltzmann weighted statistics}
44  
45 + Consider a system, $\gamma$, with some total energy, $E_{\gamma}$.
46 + Let $\Omega(E_{\gamma})$ represent the number of degenerate ways
47 + $\boldsymbol{\Gamma}$, the collection of positions and conjugate
48 + momenta of system $\gamma$, can be configured to give
49 + $E_{\gamma}$. Further, if $\gamma$ is in contact with a bath system
50 + where energy is exchanged between the two systems, $\Omega(E)$, where
51 + $E$ is the total energy of both systems, can be represented as
52 + \begin{equation}
53 + \Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma})
54 + \label{introEq:SM1}
55 + \end{equation}
56 + Or additively as
57 + \begin{equation}
58 + \ln \Omega(E) = \ln \Omega(E_{\gamma}) + \ln \Omega(E - E_{\gamma})
59 + \label{introEq:SM2}
60 + \end{equation}
61 +
62 + The most probable partitioning of energy is the one that maximizes the
63 + number of degenerate configurations in Eq.~\ref{introEq:SM2}. \cite{fix}
64 + This gives
65 + \begin{equation}
66 + \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} + \frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}} \frac{\partial E_{\text{bath}}}{\partial E_{\gamma}} = 0
67 + \label{introEq:SM3}
68 + \end{equation}
69 + Where $E_{\text{bath}}$ is $E-E_{\gamma}$, and
70 + $\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}$ is
71 + $-1$. Eq.~\ref{introEq:SM3} becomes
72 + \begin{equation}
73 + \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} = \frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
74 + \label{introEq:SM4}
75 + \end{equation}
76 +
77 + At this point, one can draw a relationship between the maximization of
78 + degeneracy in Eq.~\ref{introEq:SM3} and the second law of
79 + thermodynamics.  Namely, that for a closed system, entropy will
80 + increase for an irreversible process.\cite{fix} Here the
81 + process is the partitioning of energy among the two systems.  This
82 + allows the following definition of entropy:
83 + \begin{equation}
84 + S = k_B \ln \Omega(E)
85 + \label{introEq:SM5}
86 + \end{equation}
87 + Where $k_B$ is the Boltzmann constant.  Having defined entropy, one can
88 + also define the temperature of the system using the relation
89 + \begin{equation}
90 + \frac{1}{T} = \frac{\partial S}{\partial E}
91 + \label{introEq:SM6}
92 + \end{equation}
93 + The temperature in the system $\gamma$ is then
94 + \begin{equation}
95 + \frac{1}{T_{\gamma}} = k_B \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
96 + \label{introEq:SM7}
97 + \end{equation}
98 + Applying this to Eq.~\ref{introEq:SM4} gives the following
99 + \begin{equation}
100 + \frac{1}{T_{\gamma}} = \frac{1}{T_{\text{bath}}}
101 + \label{introEq:SM8}
102 + \end{equation}
103 + This shows that the partitioning of energy between the two systems is
104 + actually a process of thermal equilibration. \cite{fix}
105 +
106 + An application of these results is to formulate the expectation
107 + value of an observable, $A$, in the canonical ensemble. In
108 + the canonical ensemble, the number of particles, $N$, the volume, $V$,
109 + and the temperature, $T$, are all held constant while the energy, $E$,
110 + is allowed to fluctuate. Returning to the previous example, the bath
111 + system is now an infinitely large thermal bath, whose exchange of
112 + energy with the system $\gamma$ holds the temperature constant.  The
113 + partitioning of energy in the bath system is then related to the total
114 + energy of both systems and the fluctuations in $E_{\gamma}$:
115 + \begin{equation}
116 + \Omega(E_{\text{bath}}) = \Omega(E - E_{\gamma})
117 + \label{introEq:SM9}
118 + \end{equation}
119 + As for the expectation value, it can be defined
120 + \begin{equation}
121 + \langle A \rangle = \int_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}~P_{\gamma}(\boldsymbol{\Gamma}) A(\boldsymbol{\Gamma})
122 + \label{introEq:SM10}
123 + \end{equation}
124 + Where $\int_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes an
125 + integration over all accessible phase space, $P_{\gamma}$ is the
126 + probability of being in a given phase state and
127 + $A(\boldsymbol{\Gamma})$ is some observable that is a function of the
128 + phase state.
129 +
130 + Because entropy seeks to maximize the number of degenerate states at a
131 + given energy, the probability of being in a particular state in
132 + $\gamma$ will be directly proportional to the number of allowable
133 + states the coupled system is able to assume. Namely,
134 + \begin{equation}
135 + P_{\gamma} \propto \Omega(E - E_{\gamma})
136 + \label{introEq:SM11}
137 + \end{equation}
138 + With $E_{\gamma} \ll E$, $\ln \Omega$ can be expanded in a Taylor series:
139 + \begin{equation}
140 + \ln \Omega(E - E_{\gamma}) = \ln \Omega(E) - E_{\gamma} \frac{\partial \ln \Omega(E)}{\partial E} + \ldots
141 + \label{introEq:SM12}
142 + \end{equation}
143 + Higher order terms are omitted as $E$ is an infinite thermal
144 + bath. Further, using Eq.~\ref{introEq:SM7}, Eq.~\ref{introEq:SM11} can
145 + be rewritten:
146 + \begin{equation}
147 + P_{\gamma} \propto e^{-E_{\gamma} / k_B T}
148 + \label{introEq:SM13}
149 + \end{equation}
150 + Where $\ln \Omega(E)$ has been factored out of the proportionality as a
151 + constant.  Normalizing the probability ($\int_{\boldsymbol{\Gamma}}
152 + d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
153 + \begin{equation}
154 + P_{\gamma} = \frac{e^{-E_{\gamma} / k_B T}}{\int_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}~e^{-E_{\gamma} / k_B T}}
155 + \label{introEq:SM14}
156 + \end{equation}
157 + This result is the standard Boltzmann statistical distribution.
158 + Applying it to Eq.~\ref{introEq:SM10} one can obtain the following relation for ensemble averages:
159 + \begin{equation}
160 + \langle A \rangle = \frac{\int_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}~A(\boldsymbol{\Gamma}) e^{-E_{\gamma} / k_B T}}{\int_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}~e^{-E_{\gamma} / k_B T}}
161 + \label{introEq:SM15}
162 + \end{equation}
163 +
164 + \subsection{\label{introSec:ergodic}The Ergodic Hypothesis}
165 +
166 + One last important consideration is that of ergodicity. Ergodicity is
167 + the assumption that given an infinite amount of time, a system will
168 + visit every available point in phase space.\cite{fix} For most
169 + systems, this is a valid assumption, except in cases where the system
170 + may be trapped in a local feature (\emph{i.e.}, glasses). When valid,
171 + ergodicity allows the unification of a time averaged observation and
172 + an ensemble averaged one. If an observation is averaged over a
173 + sufficiently long time, the system is assumed to visit all
174 + appropriately available points in phase space, giving a properly
175 + weighted statistical average. This allows the researcher freedom of
176 + choice when deciding how best to measure a given observable.  When an
177 + ensemble averaged approach seems most logical, the Monte Carlo
178 + techniques described in Sec.~\ref{introSec:MC} can be utilized.
179 + Conversely, if a problem lends itself to a time averaging approach,
180 + the Molecular Dynamics techniques in Sec.~\ref{introSec:MD} can be
181 + employed.
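
Schematically, the ergodic hypothesis equates the ensemble average of
Sec.~\ref{introSec:boltzman} with an average over a single, sufficiently
long trajectory through phase space, $\boldsymbol{\Gamma}(t)$:
\begin{equation}
\langle A \rangle = \lim_{\tau \rightarrow \infty} \frac{1}{\tau}
        \int_0^{\tau} A\left[ \boldsymbol{\Gamma}(t) \right] dt
\end{equation}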
182 +
183   \subsection{\label{introSec:monteCarlo}Monte Carlo Simulations}
184  
185 < Stochastic sampling
185 > The Monte Carlo method was developed by Metropolis and Ulam for their
186 > work on fissionable material.\cite{metropolis:1949} The method is so
187 > named because of the heavy use of random numbers in its
188 > solution.\cite{allen87:csl} The Monte Carlo method allows for the
189 > solution of integrals through the stochastic sampling of the values
190 > within the integral. In the simplest case, the evaluation of an
191 > integral would follow a brute force method of
192 > sampling.\cite{Frenkel1996} Consider the following single dimensional
193 > integral:
194 > \begin{equation}
195 > I = \int^b_a f(x) dx
196 > \label{eq:MCex1}
197 > \end{equation}
198 > The equation can be recast as:
199 > \begin{equation}
200 > I = (b-a)\langle f(x) \rangle
201 > \label{eq:MCex2}
202 > \end{equation}
203 > Where $\langle f(x) \rangle$ is the unweighted average over the interval
204 > $[a,b]$. The calculation of the integral could then be solved by
205 > randomly choosing points along the interval $[a,b]$ and calculating
206 > the value of $f(x)$ at each point. The accumulated average would then
207 > approach $I$ in the limit where the number of trials is infinitely
208 > large.
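
As a simple illustration of Eq.~\ref{eq:MCex2}, a minimal Python sketch
of this brute force estimate is given below; the integrand $f(x) = x^2$
and the trial count are chosen arbitrarily and are not part of any
simulation package used in this work:
\begin{verbatim}
import random

def brute_force_mc(f, a, b, n_trials):
    # Accumulate f(x) at points chosen uniformly on [a, b];
    # the estimate of the integral is (b - a) times the average.
    total = 0.0
    for _ in range(n_trials):
        x = random.uniform(a, b)
        total += f(x)
    return (b - a) * total / n_trials

# Example: integrate f(x) = x^2 over [0, 1]; the exact value is 1/3.
estimate = brute_force_mc(lambda x: x * x, 0.0, 1.0, 100000)
\end{verbatim}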
209  
210 < detatiled balance
210 > However, in Statistical Mechanics, one is typically interested in
211 > integrals of the form:
212 > \begin{equation}
213 > \langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
214 >        e^{-\beta V(\mathbf{r}^N)}}%
215 >        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
216 > \label{eq:mcEnsAvg}
217 > \end{equation}
218 > Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
219 > and $A$ is some observable that is only dependent on
220 > position. $\langle A \rangle$ is the ensemble average of $A$ as
221 > presented in Sec.~\ref{introSec:statThermo}. Because $A$ is
222 > independent of momentum, the momenta contribution of the integral can
223 > be factored out, leaving the configurational integral. Application of
224 > the brute force method to this system would yield highly inefficient
225 > results. Due to the Boltzmann weighting of this integral, most random
226 > configurations will have a near zero contribution to the ensemble
227 > average. This is where importance sampling comes into
228 > play.\cite{allen87:csl}
229  
230 < metropilis monte carlo
230 > Importance Sampling is a method where one selects a distribution from
231 > which the random configurations are chosen in order to more
232 > efficiently calculate the integral.\cite{Frenkel1996} Consider again
233 > Eq.~\ref{eq:MCex1} rewritten to be:
234 > \begin{equation}
235 > I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
236 > \label{introEq:Importance1}
237 > \end{equation}
238 > Where $\rho(x)$ is an arbitrary probability distribution in $x$.  If
239 > one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
240 > from the distribution $\rho(x)$ on the interval $[a,b]$, then
241 > Eq.~\ref{introEq:Importance1} becomes
242 > \begin{equation}
243 > I= \biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}}
244 > \label{introEq:Importance2}
245 > \end{equation}
246 > Looking at Eq.~\ref{eq:mcEnsAvg}, and realizing
247 > \begin {equation}
248 > \rho_{kT}(\mathbf{r}^N) =
249 >        \frac{e^{-\beta V(\mathbf{r}^N)}}
250 >        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
251 > \label{introEq:MCboltzman}
252 > \end{equation}
253 > Where $\rho_{kT}$ is the Boltzmann distribution.  The ensemble average
254 > can be rewritten as
255 > \begin{equation}
256 > \langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
257 >        \rho_{kT}(\mathbf{r}^N)
258 > \label{introEq:Importance3}
259 > \end{equation}
260 > Applying Eq.~\ref{introEq:Importance1} one obtains
261 > \begin{equation}
262 > \langle A \rangle = \biggl \langle
263 >        \frac{ A \rho_{kT}(\mathbf{r}^N) }
264 >        {\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}
265 > \label{introEq:Importance4}
266 > \end{equation}
267 > By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$
268 > Eq.~\ref{introEq:Importance4} becomes
269 > \begin{equation}
270 > \langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{\text{trials}}
271 > \label{introEq:Importance5}
272 > \end{equation}
273 > The difficulty is selecting points $\mathbf{r}^N$ such that they are
274 > sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$.  A solution
275 > was proposed by Metropolis et al.\cite{metropolis:1953} which involved
276 > the use of a Markov chain whose limiting distribution was
277 > $\rho_{kT}(\mathbf{r}^N)$.
278  
279 < \subsection{\label{introSec:md}Molecular Dynamics Simulations}
279 > \subsubsection{\label{introSec:markovChains}Markov Chains}
280  
281 < time averages
281 > A Markov chain is a chain of states satisfying the following
282 > conditions:\cite{leach01:mm}
283 > \begin{enumerate}
284 > \item The outcome of each trial depends only on the outcome of the previous trial.
285 > \item Each trial belongs to a finite set of outcomes called the state space.
286 > \end{enumerate}
287 > If given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
288 > $\rho_m$ and $\rho_n$ are the probabilities of being in state
289 > $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively.  Further, the two
290 > states are linked by a transition probability, $\pi_{mn}$, which is the
291 > probability of going from state $m$ to state $n$.
292  
293 < time integrating schemes
293 > \newcommand{\accMe}{\operatorname{acc}}
294  
295 < time reversible
295 > The transition probability is given by the following:
296 > \begin{equation}
297 > \pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n)
298 > \label{introEq:MCpi}
299 > \end{equation}
300 > Where $\alpha_{mn}$ is the probability of attempting the move $m
301 > \rightarrow n$, and $\accMe$ is the probability of accepting the move
302 > $m \rightarrow n$.  Defining a probability vector,
303 > $\boldsymbol{\rho}$, such that
304 > \begin{equation}
305 > \boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
306 >        \ldots \rho_N \}
307 > \label{introEq:MCrhoVector}
308 > \end{equation}
309 > a transition matrix $\boldsymbol{\Pi}$ can be defined,
310 > whose elements are $\pi_{mn}$, for each given transition.  The
311 > limiting distribution of the Markov chain can then be found by
312 > applying the transition matrix an infinite number of times to the
313 > distribution vector.
314 > \begin{equation}
315 > \boldsymbol{\rho}_{\text{limit}} =
316 >        \lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
317 >        \boldsymbol{\Pi}^N
318 > \label{introEq:MCmarkovLimit}
319 > \end{equation}
320 > The limiting distribution of the chain is independent of the starting
321 > distribution, and successive applications of the transition matrix
322 > will only yield the limiting distribution again.
323 > \begin{equation}
324 > \boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{initial}}
325 >        \boldsymbol{\Pi}
326 > \label{introEq:MCmarkovEquil}
327 > \end{equation}
328  
329 < symplectic methods
329 > \subsubsection{\label{introSec:metropolisMethod}The Metropolis Method}
330  
331 < Extended ensembles (NVT NPT)
331 > In the Metropolis method\cite{metropolis:1953}
332 > Eq.~\ref{introEq:MCmarkovEquil} is solved such that
333 > $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
334 > of states.  The method accomplishes this by imposing the strong
335 > condition of microscopic reversibility on the equilibrium
336 > distribution.  This means that at equilibrium, the probability of going
337 > from $m$ to $n$ is the same as going from $n$ to $m$.
338 > \begin{equation}
339 > \rho_m\pi_{mn} = \rho_n\pi_{nm}
340 > \label{introEq:MCmicroReverse}
341 > \end{equation}
342 > Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
343 > the Metropolis method.  Using Eq.~\ref{introEq:MCpi},
344 > Eq.~\ref{introEq:MCmicroReverse} becomes
345 > \begin{equation}
346 > \frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
347 >        \frac{\rho_n}{\rho_m}
348 > \label{introEq:MCmicro2}
349 > \end{equation}
350 > For a Boltzmann limiting distribution,
351 > \begin{equation}
352 > \frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
353 >        = e^{-\beta \Delta \mathcal{U}}
354 > \label{introEq:MCmicro3}
355 > \end{equation}
356 > This allows the following set of acceptance rules to be defined:
357 > \begin{equation}
358 > \accMe(m \rightarrow n) = \begin{cases} 1 & \Delta \mathcal{U} \leq 0 \\ e^{-\beta \Delta \mathcal{U}} & \Delta \mathcal{U} > 0 \end{cases} \label{introEq:accRules}
359 > \end{equation}
360  
361 < constrained dynamics
361 > Using the acceptance criteria from Eq.~\ref{introEq:accRules}, the Metropolis method
362 > proceeds as follows:
363 > \begin{itemize}
364 > \item Generate an initial configuration $\mathbf{r}^N_m$ which has some finite probability in $\rho_{kT}$.
365 > \item Modify $\mathbf{r}^N_m$ to generate configuration $\mathbf{r}^N_n$.
366 > \item If configuration $n$ lowers the energy of the system, accept the move with unity ($\mathbf{r}^N_n$ becomes $\mathbf{r}^N_m$).  Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
367 > \item Accumulate the average for the configurational observable of interest.
368 > \item Repeat from step 2 until the average converges.
369 > \end{itemize}
370 > One important note is that the average is accumulated whether the move
371 > is accepted or not; this ensures proper weighting of the average.
372 > Using Eq.~\ref{fix} it becomes clear that the accumulated averages are
373 > the ensemble averages, as this method ensures that the limiting
374 > distribution is the Boltzmann distribution.
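
A minimal Python sketch of a single Metropolis move is given below; the
routines \texttt{potential\_energy} and \texttt{trial\_move} are assumed
to be supplied by the user, and the sketch is illustrative rather than
the implementation used in this work:
\begin{verbatim}
import math
import random

def metropolis_step(config, beta, potential_energy, trial_move):
    # Propose a move, then accept it with probability min(1, exp(-beta dU)).
    new_config = trial_move(config)
    delta_U = potential_energy(new_config) - potential_energy(config)
    if delta_U <= 0.0 or random.random() < math.exp(-beta * delta_U):
        return new_config     # move accepted
    return config             # move rejected; the old state is counted again
\end{verbatim}
The returned configuration is accumulated into the running average
whether or not the move was accepted, in keeping with the weighting
discussed above.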
375  
376 < \section{\label{introSec:chapterLayout}Chapter Layout}
376 > \subsection{\label{introSec:MD}Molecular Dynamics Simulations}
377  
378 < \subsection{\label{introSec:RSA}Random Sequential Adsorption}
378 > The main simulation tool used in this research is Molecular Dynamics.
379 > Molecular Dynamics is the technique in which the equations of motion for a
380 > system are integrated in order to obtain information about both the positions
381 > and momenta of a system, allowing the calculation of not only
382 > configurational observables, but momenta dependent ones as well:
383 > diffusion constants, velocity autocorrelations, folding/unfolding
384 > events, etc.  Due to the principle of ergodicity
385 > (Sec.~\ref{introSec:ergodic}), the averages of these observables over
386 > the time period of the simulation are taken to be the ensemble averages for the system.
387  
388 < \subsection{\label{introSec:OOPSE}The OOPSE Simulation Package}
388 > The choice of when to use molecular dynamics over Monte Carlo
389 > techniques is normally decided by the observables in which the
390 > researcher is interested.  If the observables depend on momenta in
391 > any fashion, then the only choice is molecular dynamics in some form.
392 > However, when the observable is dependent only on the configuration,
393 > then most of the time Monte Carlo techniques will be more efficient.
395 < \subsection{\label{introSec:bilayers}A Mesoscale Model for Phospholipid Bilayers}
395 > The research in the second half of this dissertation is centered
396 > on the dynamic properties of phospholipid bilayers,
397 > making molecular dynamics key to the simulation of those properties.
398 >
399 > \subsubsection{Molecular Dynamics Algorithm}
400 >
401 > To illustrate how the molecular dynamics technique is applied, the
402 > following sections will describe the sequence involved in a
403 > simulation.  Sec.~\ref{fix} deals with the initialization of a
404 > simulation.  Sec.~\ref{fix} discusses issues involved with the
405 > calculation of the forces.  Sec.~\ref{fix} concludes the algorithm
406 > discussion with the integration of the equations of motion. \cite{fix}
407 >
408 > \subsubsection{Initialization}
409 >
410 > When selecting the initial configuration for the simulation it is
411 > important to consider what dynamics one is hoping to observe.
412 > Ch.~\ref{fix} deals with the formation and equilibrium dynamics of
413 > phospholipid membranes.  Therefore in these simulations initial
414 > positions were selected that in some cases dispersed the lipids in
415 > water, and in other cases structured the lipids into preformed
416 > bilayers.  Important considerations at this stage of the simulation are:
417 > \begin{itemize}
418 > \item There are no major overlaps of molecular or atomic orbitals
419 > \item Velocities are chosen in such a way as to not give the system a non-zero total momentum or angular momentum.
420 > \item It is also sometimes desirable to select the velocities to correctly sample the target temperature.
421 > \end{itemize}
422 >
423 > The first point is important due to the amount of potential energy
424 > generated by having two particles too close together.  If overlap
425 > occurs, the first evaluation of forces will return numbers so large as
426 > to render the numerical integration of the motion meaningless.  The
427 > second consideration keeps the system from drifting or rotating as a
428 > whole.  This arises from the fact that most simulations are of systems
429 > in equilibrium in the absence of outside forces.  Therefore any net
430 > movement would be unphysical and an artifact of the simulation method
431 > used.  The final point addresses the selection of the magnitude of the
432 > initial velocities.  For many simulations it is convenient to use
433 > this opportunity to scale the amount of kinetic energy to reflect the
434 > desired thermal distribution of the system.  However, it must be noted
435 > that most systems will require further velocity rescaling after the
436 > first few initial simulation steps due to either loss or gain of
437 > kinetic energy from energy stored in potential degrees of freedom.
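
These considerations can be sketched in Python as follows, assuming
equal masses and reduced units with $k_B = 1$; the sketch removes the
net momentum and rescales to a target temperature, but makes no attempt
to remove angular momentum:
\begin{verbatim}
import random

def initialize_velocities(n_particles, target_temp):
    # Assign random velocities, remove the net momentum, then rescale the
    # kinetic energy to the target temperature (equal masses, k_B = 1).
    vel = [[random.uniform(-1.0, 1.0) for _ in range(3)]
           for _ in range(n_particles)]
    for k in range(3):                  # subtract the center of mass drift
        drift = sum(v[k] for v in vel) / n_particles
        for v in vel:
            v[k] -= drift
    kinetic = 0.5 * sum(vx * vx + vy * vy + vz * vz for vx, vy, vz in vel)
    current_temp = 2.0 * kinetic / (3.0 * n_particles)
    scale = (target_temp / current_temp) ** 0.5
    return [[scale * vk for vk in v] for v in vel]
\end{verbatim}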
438 >
439 > \subsubsection{Force Evaluation}
440 >
441 > The evaluation of forces is the most computationally expensive portion
442 > of a given molecular dynamics simulation.  This is due entirely to the
443 > evaluation of long range forces in a simulation, typically pair-wise.
444 > These forces are most commonly the van der Waals force, and sometimes
445 > Coulombic forces as well.  For a pair-wise force, there are $\frac{n(n-1)}{2}$
446 > pairs to be evaluated, where $n$ is the number of particles in the
447 > system.  This leads to the calculations scaling as $\mathcal{O}(n^2)$, making large
448 > simulations prohibitive in the absence of any computation saving
449 > techniques.
450 >
451 > Another consideration one must resolve is that in a given simulation
452 > a disproportionate number of the particles will feel the effects of
453 > the surface.\cite{fix} For a cubic system of 1000 particles arranged
454 > in a $10 \times 10 \times 10$ cube, 488 particles will be exposed to the surface.
455 > Unless one is simulating an isolated particle group in a vacuum, the
456 > behavior of the system will be far from the desired bulk
457 > characteristics.  To offset this, simulations employ the use of
458 > periodic boundary conditions.\cite{fix}
459 >
460 > The technique involves the use of an algorithm that replicates the
461 > simulation box on an infinite lattice in Cartesian space.  Any given
462 > particle leaving the simulation box on one side will have an image of
463 > itself enter on the opposite side (see Fig.~\ref{fix}).
464 > \begin{equation}
465 > EQ Here
466 > \end{equation}
467 > In addition, this ensures that any given particle pair has an image, real
468 > or periodic, within $fix$ of each other.  A discussion of the method
469 > used to calculate the periodic image can be found in Sec.\ref{fix}.
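
A minimal Python sketch of the wrapping operation and the resulting
minimum image separation is given below for illustration, assuming a
cubic box of side \texttt{box\_length} centered at the origin; it is
not the routine referred to above:
\begin{verbatim}
def wrap(x, box_length):
    # Map a coordinate back into the primary box, [-L/2, L/2].
    return x - box_length * round(x / box_length)

def minimum_image(r_i, r_j, box_length):
    # Component-wise separation between particle i and the nearest
    # image of particle j.
    return [wrap(ri - rj, box_length) for ri, rj in zip(r_i, r_j)]
\end{verbatim}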
470 >
471 > Returning to the topic of the computational scale of the force
472 > evaluation, the use of periodic boundary conditions requires that a
473 > cutoff radius be employed.  Using a cutoff radius improves the
474 > efficiency of the force evaluation, as particles farther than a
475 > predetermined distance, $r_{\text{cut}}$, are not included in the
476 > calculation.\cite{fix} In a simulation with periodic images,
477 > $r_{\text{cut}}$ has a maximum value of half the box length.  Fig.~\ref{fix}
478 > illustrates how using an $r_{\text{cut}}$ larger than this value, or in the
479 > extreme limit of no cutoff at all, causes the corners of the simulation box
480 > to be unequally weighted due to the lack of particle images in the $x$, $y$,
481 > or $z$ directions past a distance of half the box length.
482 >
483 > With the use of an $r_{\text{cut}}$, however, comes a discontinuity in the
484 > potential energy curve (Fig.~\ref{fix}). To remove this discontinuity,
485 > one calculates the potential energy at $r_{\text{cut}}$, and subtracts
486 > that value from the potential.  This causes the function to go smoothly
487 > to zero at the cutoff radius.  This ensures conservation of energy
488 > when integrating the Newtonian equations of motion.
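
Written for a generic pair potential $V(r)$, the shifted potential takes
the schematic form
\begin{equation}
V_{\text{shifted}}(r) =
        \begin{cases}
        V(r) - V(r_{\text{cut}}) & r \leq r_{\text{cut}} \\
        0 & r > r_{\text{cut}}
        \end{cases}
\end{equation}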
489 >
490 > The second main simplification used in this research is the Verlet
491 > neighbor list. \cite{allen87:csl} In the Verlet method, one generates
492 > a list of all neighbor atoms, $j$, surrounding atom $i$ within some
493 > cutoff $r_{\text{list}}$, where $r_{\text{list}}>r_{\text{cut}}$.
494 > This list is created the first time forces are evaluated, then on
495 > subsequent force evaluations, pair interactions are only calculated
496 > from the neighbor lists.  The lists are updated if any given particle
497 > in the system moves farther than $r_{\text{list}}-r_{\text{cut}}$,
498 > giving rise to the possibility that a particle has left or joined a
499 > neighbor list.
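
The bookkeeping involved can be sketched in Python as follows; this is
a simplified illustration that ignores periodic images, and
\texttt{distance} is assumed to return the separation of two particles:
\begin{verbatim}
def build_neighbor_list(positions, r_list, distance):
    # Record, for each particle i, every particle j > i within r_list.
    neighbors = {i: [] for i in range(len(positions))}
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if distance(positions[i], positions[j]) < r_list:
                neighbors[i].append(j)
    return neighbors

def needs_update(positions, positions_at_build, r_list, r_cut, distance):
    # Rebuild once any particle has moved farther than r_list - r_cut.
    return any(distance(p, p0) > (r_list - r_cut)
               for p, p0 in zip(positions, positions_at_build))
\end{verbatim}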
500 >
501 > \subsubsection{\label{introSec:MDintegrate} Integration of the equations of motion}
502 >
503 > A starting point for the discussion of molecular dynamics integrators
504 > is the Verlet algorithm. \cite{Frenkel1996} It begins with a Taylor
505 > expansion of position in time:
506 > \begin{equation}
507 > \mathbf{r}(t+\Delta t) = \mathbf{r}(t) + \mathbf{v}(t)\Delta t + \frac{\mathbf{F}(t)}{2m}\Delta t^2 + \frac{\Delta t^3}{3!}\frac{d^3 \mathbf{r}}{dt^3} + \mathcal{O}(\Delta t^4)
508 > \label{introEq:verletForward}
509 > \end{equation}
510 > As well as,
511 > \begin{equation}
512 > \mathbf{r}(t-\Delta t) = \mathbf{r}(t) - \mathbf{v}(t)\Delta t + \frac{\mathbf{F}(t)}{2m}\Delta t^2 - \frac{\Delta t^3}{3!}\frac{d^3 \mathbf{r}}{dt^3} + \mathcal{O}(\Delta t^4)
513 > \label{introEq:verletBack}
514 > \end{equation}
515 > Adding together Eq.~\ref{introEq:verletForward} and
516 > Eq.~\ref{introEq:verletBack} results in,
517 > \begin{equation}
518 > \mathbf{r}(t+\Delta t) + \mathbf{r}(t-\Delta t) = 2\mathbf{r}(t) + \frac{\mathbf{F}(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
519 > \label{introEq:verletSum}
520 > \end{equation}
521 > Or equivalently,
522 > \begin{equation}
523 > \mathbf{r}(t+\Delta t) \approx 2\mathbf{r}(t) - \mathbf{r}(t-\Delta t) + \frac{\mathbf{F}(t)}{m}\Delta t^2
524 > \label{introEq:verletFinal}
525 > \end{equation}
526 > This estimate of the new positions contains an error on the
527 > order of $\Delta t^4$.
528 >
529 > In practice, however, the simulations in this research were integrated
530 > with a velocity reformulation of the Verlet method.\cite{allen87:csl}
531 > \begin{equation}
532 > \mathbf{r}(t+\Delta t) = \mathbf{r}(t) + \mathbf{v}(t)\Delta t + \frac{\mathbf{F}(t)}{2m}\Delta t^2
533 > \label{introEq:MDvelVerletPos}
534 > \end{equation}
535 > \begin{equation}
536 > \mathbf{v}(t+\Delta t) = \mathbf{v}(t) + \frac{\Delta t}{2m}\left[ \mathbf{F}(t) + \mathbf{F}(t+\Delta t) \right]
537 > \label{introEq:MDvelVerletVel}
538 > \end{equation}
539 > The original Verlet algorithm can be regained by substituting the
540 > velocity back into Eq.~\ref{introEq:MDvelVerletPos}.  The Verlet
541 > formulations are chosen in this research because the algorithms have
542 > very little long term drift in energy conservation.  Energy
543 > conservation in a molecular dynamics simulation is of extreme
544 > importance, as it is a measure of how closely one is following the
545 > ``true'' trajectory with the finite integration scheme.  An exact
546 > solution to the integration will conserve area in phase space, as well
547 > as be reversible in time, that is, the trajectory integrated forward
548 > or backwards will exactly match itself.  Having a finite algorithm
549 > that both conserves area in phase space and is time reversible
550 > therefore increases, but does not guarantee, the ``correctness'' of the
551 > integrated trajectory.
552 >
553 > It can be shown,\cite{Frenkel1996} that although the Verlet algorithm
554 > does not rigorously preserve the actual Hamiltonian, it does preserve
555 > a pseudo-Hamiltonian which shadows the real one in phase space.  This
556 > pseudo-Hamiltonian is provably area-conserving as well as time
557 > reversible.  The fact that it shadows the true Hamiltonian in phase
558 > space is acceptable in actual simulations as one is interested in the
559 > ensemble average of the observable being measured.  From the ergodic
560 > hypothesis (Sec.~\ref{introSec:ergodic}), it is known that the time
561 > average will match the ensemble average, therefore two similar
562 > trajectories in phase space should give matching statistical averages.
563 >
564 > \subsection{\label{introSec:MDfurther}Further Considerations}
565 > In the simulations presented in this research, a few additional
566 > parameters are needed to describe the motions.  The simulations
567 > involving water and phospholipids in Chapt.~\ref{chaptLipids} are
568 > required to integrate the equations of motions for dipoles on atoms.
569 > This involves an additional three parameters be specified for each
570 > dipole atom: $\phi$, $\theta$, and $\psi$.  These three angles are
572 > taken to be the Euler angles, where $\phi$ is a rotation about the
573 > $z$-axis, $\theta$ is a rotation about the new $x$-axis, and
573 > $\psi$ is a final rotation about the new $z$-axis (see
574 > Fig.~\ref{introFig:euleerAngles}).  This sequence of rotations can be
575 > accumulated into a single $3 \times 3$ matrix $\mathbf{A}$
576 > defined as follows:
577 > \begin{equation}
578 > eq here
579 > \label{introEq:EulerRotMat}
580 > \end{equation}
581 >
582 > The equations of motion for Euler angles can be written down as
583 > \cite{allen87:csl}
584 > \begin{equation}
585 > eq here
586 > \label{introEq:MDeulerPsi}
587 > \end{equation}
588 > Where $\omega^s_i$ is the angular velocity in the lab space frame
589 > along Cartesian coordinate $i$.  However, a difficulty arises when
590 > attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
591 > Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
592 > both equations means there is a non-physical instability present when
593 > $\theta$ is 0 or $\pi$.
594 >
595 > To correct for this, the simulations integrate the rotation matrix,
596 > $\mathbf{A}$, directly, thus avoiding the instability.
597 > This method was proposed by Dullweber
598 > \emph{et al.}\cite{Dullweber:1997}, and is presented in
599 > Sec.~\ref{introSec:MDsymplecticRot}.
600 >
601 > \subsubsection{\label{introSec:MDliouville}Liouville Propagator}
602 >
603 > Before discussing the integration of the rotation matrix, it is
604 > necessary to understand the construction of a ``good'' integration
605 > scheme.  It has been previously discussed
606 > (Sec.~\ref{introSec:MDintegrate}) how it is desirable for an
607 > integrator to be symplectic, or time reversible.  The following is an
608 > outline of the Trotter factorization of the Liouville Propagator as a
609 > scheme for generating symplectic integrators. \cite{Tuckerman:1992}
610 >
611 > For a system with $f$ degrees of freedom the Liouville operator can be
612 > defined as,
613 > \begin{equation}
614 > iL = \sum_{j=1}^{f} \left[ \dot{r}_j \frac{\partial}{\partial r_j} + f_j \frac{\partial}{\partial p_j} \right]
615 > \label{introEq:LiouvilleOperator}
616 > \end{equation}
617 > Here, $r_j$ and $p_j$ are the position and conjugate momentum of a
618 > degree of freedom, and $f_j$ is the force on that degree of freedom.
619 > $\Gamma$ is defined as the set of all positions and conjugate momenta,
620 > $\{r_j,p_j\}$, and the propagator, $U(t)$, is defined
621 > \begin {equation}
622 > U(t) = e^{iLt}
623 > \label{introEq:Lpropagator}
624 > \end{equation}
625 > This allows the specification of $\Gamma$ at any time $t$ as
626 > \begin{equation}
627 > \Gamma(t) = U(t) \Gamma(0)
628 > \label{introEq:Lp2}
629 > \end{equation}
630 > It is important to note that $U(t)$ is a unitary operator, meaning
631 > \begin{equation}
632 > U(-t)=U^{-1}(t)
633 > \label{introEq:Lp3}
634 > \end{equation}
635 >
636 > Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
637 > Trotter theorem to yield
638 > \begin{equation}
639 > e^{iLt} = \left[ e^{iL_1 \frac{\Delta t}{2}} e^{iL_2 \Delta t} e^{iL_1 \frac{\Delta t}{2}} \right]^P + \mathcal{O}\left(t \Delta t^2\right)
640 > \label{introEq:Lp4}
641 > \end{equation}
642 > Where $\Delta t = \frac{t}{P}$.
643 > With this, a discrete time operator $G(\Delta t)$ can be defined:
644 > \begin{equation}
645 > G(\Delta t) = e^{iL_1 \frac{\Delta t}{2}} e^{iL_2 \Delta t} e^{iL_1 \frac{\Delta t}{2}}
646 > \label{introEq:Lp5}
647 > \end{equation}
648 > Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
649 > unitary.  This means that an integrator based on this factorization
650 > will be reversible in time.
651 >
652 > As an example, consider the following decomposition of $L$:
653 > \begin{equation}
654 > iL_1 = \sum_{j=1}^{f} f_j \frac{\partial}{\partial p_j}, \qquad iL_2 = \sum_{j=1}^{f} \dot{r}_j \frac{\partial}{\partial r_j}
655 > \label{introEq:Lp6}
656 > \end{equation}
657 > Operating $G(\Delta t)$ on $\Gamma(0)$, and utilizing the operator property
658 > \begin{equation}
659 > e^{c \frac{\partial}{\partial q}} f(q) = f(q + c)
660 > \label{introEq:Lp7}
661 > \end{equation}
662 > Where $c$ is independent of $q$.  One obtains the following:  
663 > \begin{equation}
664 > r_j(\Delta t) = r_j(0) + \frac{\Delta t}{m_j} \left[ p_j(0) + \frac{\Delta t}{2} f_j(0) \right], \qquad p_j(\Delta t) = p_j(0) + \frac{\Delta t}{2} \left[ f_j(0) + f_j(\Delta t) \right]
665 > \label{introEq:Lp8}
666 > \end{equation}
667 > Or written another way,
668 > \begin{equation}
669 > r_j(\Delta t) = r_j(0) + \Delta t\, v_j(0) + \frac{\Delta t^2}{2 m_j} f_j(0), \qquad v_j(\Delta t) = v_j(0) + \frac{\Delta t}{2 m_j} \left[ f_j(0) + f_j(\Delta t) \right]
670 > \label{introEq:Lp9}
671 > \end{equation}
672 > This is the velocity Verlet formulation presented in
673 > Sec.~\ref{introSec:MDintegrate}.  Because this integration scheme is
674 > comprised of unitary propagators, it is symplectic, and therefore area
675 > preserving in phase space.  From the preceding factorization, one can
676 > see that the integration of the equations of motion would follow:
677 > \begin{enumerate}
678 > \item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
679 >
680 > \item Use the half step velocities to move positions one whole step, $\Delta t$.
681 >
682 > \item Evaluate the forces at the new positions, $\mathbf{r}(\Delta t)$, and use the new forces to complete the velocity move.
683 >
684 > \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
685 > \end{enumerate}
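
As an illustration of these four steps, a minimal Python sketch of a
single integration step is given below, assuming equal masses and a
user supplied \texttt{forces} routine; it is not the integrator
implemented in this work:
\begin{verbatim}
def velocity_verlet_step(positions, velocities, forces_now, dt, mass, forces):
    # Step 1: velocities at the half step from the current forces.
    v_half = [v + 0.5 * dt * f / mass
              for v, f in zip(velocities, forces_now)]
    # Step 2: move the positions one whole step using the half step velocities.
    new_positions = [r + dt * v for r, v in zip(positions, v_half)]
    # Step 3: evaluate forces at the new positions and finish the velocity move.
    new_forces = forces(new_positions)
    new_velocities = [v + 0.5 * dt * f / mass
                      for v, f in zip(v_half, new_forces)]
    # Step 4: the caller repeats with these as the new initial values.
    return new_positions, new_velocities, new_forces
\end{verbatim}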
686 >
687 > \subsubsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}
688 >
689 > Based on the factorization from the previous section,
690 > Dullweber \emph{et al.}\cite{Dullweber:1997} proposed a scheme for the
691 > symplectic propagation of the rotation matrix, $\mathbf{A}$, as an
692 > alternative method for the integration of orientational degrees of
693 > freedom. The method starts with a straightforward splitting of the
694 > Liouville operator:
695 > \begin{equation}
696 > eq here
697 > \label{introEq:SR1}
698 > \end{equation}
699 > Where $\boldsymbol{\tau}(\mathbf{A})$ are the torques of the system
700 > due to the configuration, and $\boldsymbol{\pi}$ are the conjugate
701 > angular momenta of the system. The propagator, $G(\Delta t)$, becomes
702 > \begin{equation}
703 > eq here
704 > \label{introEq:SR2}
705 > \end{equation}
706 > Propagation of the linear and angular momenta follows as in the Verlet
707 > scheme.  The propagation of positions also follows the Verlet scheme
708 > with the addition of a further symplectic splitting of the rotation
709 > matrix propagation, $\mathcal{G}_{\text{rot}}(\Delta t)$.
710 > \begin{equation}
711 > eq here
712 > \label{introEq:SR3}
713 > \end{equation}
714 > Where $\mathcal{G}_j$ is a unitary rotation of $\mathbf{A}$ and
715 > $\boldsymbol{\pi}$ about each axis $j$.  As all propagations are now
716 > unitary and symplectic, the entire integration scheme is also
717 > symplectic and time reversible.
718 >
719 > \section{\label{introSec:layout}Dissertation Layout}
720 >
721 > This dissertation is divided as follows: Chapt.~\ref{chapt:RSA}
722 > presents the random sequential adsorption simulations of related
723 > phthalocyanines on a gold (111) surface. Chapt.~\ref{chapt:OOPSE}
724 > is about the writing of the molecular dynamics simulation package
725 > {\sc oopse}, Chapt.~\ref{chapt:lipid} covers the simulations of
726 > phospholipid bilayers using a mesoscale model, and lastly,
727 > Chapt.~\ref{chapt:conclusion} concludes this dissertation with a
728 > summary of all results. The chapters are arranged in chronological
729 > order, and reflect the progression of techniques I employed during my
730 > research.  
731 >
732 > The chapter concerning random sequential adsorption
733 > simulations is a study in applying the principles of theoretical
734 > research in order to obtain a simple model capable of explaining the
735 > results.  My advisor, Dr. Gezelter, and I were approached by a
736 > colleague, Dr. Lieberman, about possible explanations for partial
737 > coverage of a gold surface by a particular compound of hers. We
738 > suggested it might be due to the statistical packing fraction of disks
739 > on a plane, and set about to simulate this system.  As the events in
740 > our model were not dynamic in nature, a Monte Carlo method was
741 > employed.  Here, if a molecule landed on the surface without
742 > overlapping another, then its landing was accepted.  However, if there
743 > was overlap, the landing was rejected and a new random landing location
744 > was chosen.  This defined our acceptance rules and allowed us to
745 > construct a Markov chain whose limiting distribution was the surface
746 > coverage in which we were interested.
747 >
748 > The following chapter, about the simulation package {\sc oopse},
749 > describes in detail the large body of scientific code that had to be
750 > written in order to study phospholipid bilayers.  Although there are
751 > pre-existing molecular dynamics simulation packages available, none
752 > were capable of implementing the models we were developing. {\sc oopse}
753 > is a unique package capable not only of integrating the equations of
754 > motion in Cartesian space, but also of integrating the
755 > rotational motion of rigid bodies and dipoles.  Add to this the
756 > ability to perform calculations across parallel processors and a
757 > flexible script syntax for creating systems, and {\sc oopse} becomes a
758 > very powerful scientific instrument for the exploration of our model.
759 >
760 > This brings us to Chapt.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
761 > able to parametrize a mesoscale model for phospholipid simulations.
762 > This model retains information about solvent ordering about the
763 > bilayer, as well as information regarding the interaction of the
764 > phospholipid head groups' dipole with each other and the surrounding
765 > solvent.  These simulations give us insight into the dynamic events
766 > that lead to the formation of phospholipid bilayers, as well as
767 > provide the foundation for future exploration of bilayer phase
768 > behavior with this model.  
769 >
770 > This leads into the last chapter, where I discuss future directions
771 > for both {\sc oopse} and this mesoscale model.  Additionally, I will
772 > give a summary of results for this dissertation.
773 >
774 >
