
Comparing trunk/mattDisertation/Introduction.tex (file contents):
Revision 1001 by mmeineke, Sat Jan 31 22:10:21 2004 UTC vs.
Revision 1037 by mmeineke, Fri Feb 6 21:51:01 2004 UTC

# Line 1 | Line 1
1  
2  
3 < \chapter{\label{chapt:intro}Introduction and Theoretical Background}
3 > \chapter{\label{chapt:intro}INTRODUCTION AND THEORETICAL BACKGROUND}
4  
5  
6 +
7 + \section{\label{introSec:theory}Theoretical Background}
8 +
6   The techniques used in the course of this research fall under the two
7   main classes of molecular simulation: Molecular Dynamics and Monte
8 < Carlo. Molecular Dynamic simulations integrate the equations of motion
9 < for a given system of particles, allowing the researher to gain
10 < insight into the time dependent evolution of a system. Diffusion
8 > Carlo. Molecular Dynamics simulations integrate the equations of
9 > motion for a given system of particles, allowing the researcher to
10 > gain insight into the time dependent evolution of a system. Diffusion
11   phenomena are readily studied with this simulation technique, making
12   Molecular Dynamics the main simulation technique used in this
13   research. Other aspects of the research fall under the Monte Carlo
14   class of simulations. In Monte Carlo, the configuration space
15 < available to the collection of particles is sampled stochastichally,
16 < or randomly. Each configuration is chosen with a given probability
17 < based on the Maxwell Boltzman distribution. These types of simulations
18 < are best used to probe properties of a system that are only dependent
19 < only on the state of the system. Structural information about a system
20 < is most readily obtained through these types of methods.
15 > available to the collection of particles is sampled stochastically, or
16 > randomly. Each configuration is chosen with a given probability based
17 > on the Maxwell-Boltzmann distribution. These types of simulations are
18 > best used to probe properties of a system that are dependent only on
19 > the state of the system. Structural information about a system is most
20 > readily obtained through these types of methods.
21  
22   Although the two techniques employed seem dissimilar, they are both
23   linked by the overarching principles of Statistical
24 < Thermodynamics. Statistical Thermodynamics governs the behavior of
24 > Mechanics. Statistical Mechanics governs the behavior of
25   both classes of simulations and dictates what each method can and
26   cannot do. When investigating a system, one must first analyze what
27   thermodynamic properties of the system are being probed, then choose
28   which method best suits that objective.
29  
30 < \subsection{\label{introSec:statThermo}Statistical Mechanics}
30 > \section{\label{introSec:statThermo}Statistical Mechanics}
31  
32   The following section serves as a brief introduction to some of the
33   Statistical Mechanics concepts present in this dissertation.  What
34 < follows is a brief derivation of Blotzman weighted statistics, and an
34 > follows is a brief derivation of Boltzmann weighted statistics and an
35   explanation of how one can use the information to calculate an
36   observable for a system.  This section then concludes with a brief
37   discussion of the ergodic hypothesis and its relevance to this
38   research.
39  
40 < \subsection{\label{introSec:boltzman}Boltzman weighted statistics}
40 > \subsection{\label{introSec:boltzman}Boltzmann weighted statistics}
41  
42 < Consider a system, $\gamma$, with some total energy,, $E_{\gamma}$.
43 < Let $\Omega(E_{gamma})$ represent the number of degenerate ways
44 < $\boldsymbol{\Gamma}$, the collection of positions and conjugate
45 < momenta of system $\gamma$, can be configured to give
46 < $E_{\gamma}$. Further, if $\gamma$ is in contact with a bath system
47 < where energy is exchanged between the two systems, $\Omega(E)$, where
48 < $E$ is the total energy of both systems, can be represented as
42 > Consider a system, $\gamma$, with total energy $E_{\gamma}$.  Let
43 > $\Omega(E_{\gamma})$ represent the number of degenerate ways
44 > $\boldsymbol{\Gamma}\{r_1,r_2,\ldots r_n,p_1,p_2,\ldots p_n\}$, the
45 > collection of positions and conjugate momenta of system $\gamma$, can
46 > be configured to give $E_{\gamma}$. Further, if $\gamma$ is a subset
47 > of a larger system, $\boldsymbol{\Lambda}\{E_1,E_2,\ldots
48 > E_{\gamma},\ldots E_n\}$, the total degeneracy of the system can be
49 > expressed as,
50   \begin{equation}
51 < eq here
51 > \Omega(\boldsymbol{\Lambda}) = \Omega(E_1) \times \Omega(E_2) \times \ldots
52 >        \Omega(E_{\gamma}) \times \ldots \Omega(E_n)
53 > \label{introEq:SM0.1}
54 > \end{equation}
55 > This multiplicative combination of degeneracies is illustrated in
56 > Fig.~\ref{introFig:degenProd}.
57 >
58 > \begin{figure}
59 > \centering
60 > \includegraphics[width=\linewidth]{omegaFig.eps}
61 > \caption[An explanation of the combination of degeneracies]{Systems A and B both have three energy levels and two indistinguishable particles. When the total energy is 2, there are two ways for each system to disperse the energy. However, for system C, the superset of A and B, the total degeneracy is the product of the degeneracy of each system. In this case $\Omega(\text{C})$ is 4.}
62 > \label{introFig:degenProd}
63 > \end{figure}
64 >
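As an aside, the multiplicative counting in Fig.~\ref{introFig:degenProd}
can be checked directly.  The following minimal Python sketch enumerates
the states of the two-particle, three-level systems described in the
figure caption (the level energies $\{0,1,2\}$ are those of the figure):
\begin{verbatim}
# Count configurations of two indistinguishable particles in three
# energy levels (0, 1, 2) that disperse a given total energy.
from itertools import combinations_with_replacement

def omega(total_energy, levels=(0, 1, 2), n_particles=2):
    return sum(1 for config in
               combinations_with_replacement(levels, n_particles)
               if sum(config) == total_energy)

omega_A = omega(2)        # states (0,2) and (1,1) -> 2
omega_B = omega(2)        # an identical system    -> 2
print(omega_A * omega_B)  # Omega(C) = Omega(A) x Omega(B) = 4
\end{verbatim}
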
65 > Next, consider the specific case of $\gamma$ in contact with a
66 > bath. Exchange of energy is allowed between the bath and the system,
67 > subject to the constraint that the total energy
68 > ($E_{\text{bath}}+E_{\gamma}$) remain constant. $\Omega(E)$, where $E$
69 > is the total energy of both systems, can be represented as
70 > \begin{equation}
71 > \Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma})
72   \label{introEq:SM1}
73   \end{equation}
74   Or additively as
75   \begin{equation}
76 < eq here
76 > \ln \Omega(E) = \ln \Omega(E_{\gamma}) + \ln \Omega(E - E_{\gamma})
77   \label{introEq:SM2}
78   \end{equation}
79  
80   The solution to Eq.~\ref{introEq:SM2} maximizes the number of
81 < degenerative configurations in $E$. \cite{fix}
81 > degenerate configurations in $E$.\cite{Frenkel1996}
82   This gives
83   \begin{equation}
84 < eq here
84 > \frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 =
85 >        \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
86 >         +
87 >        \frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
88 >        \frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}
89   \label{introEq:SM3}
90   \end{equation}
91   Where $E_{\text{bath}}$ is $E-E_{\gamma}$, and
92 < $\frac{partialE_{\text{bath}}}{\partial E_{\gamma}}$ is
92 > $\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}$ is
93   $-1$. Eq.~\ref{introEq:SM3} becomes
94   \begin{equation}
95 < eq here
95 > \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} =
96 > \frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
97   \label{introEq:SM4}
98   \end{equation}
99  
100   At this point, one can draw a relationship between the maximization of
101   degeneracy in Eq.~\ref{introEq:SM3} and the second law of
102 < thermodynamics.  Namely, that for a closed system, entropy wil
103 < increase for an irreversible process.\cite{fix} Here the
102 > thermodynamics.  Namely, that for a closed system, entropy will
103 > increase for an irreversible process.\cite{chandler:1987} Here the
104   process is the partitioning of energy among the two systems.  This
105   allows the following definition of entropy:
106   \begin{equation}
107 < eq here
107 > S = k_B \ln \Omega(E)
108   \label{introEq:SM5}
109   \end{equation}
110 < Where $k_B$ is the Boltzman constant.  Having defined entropy, one can
110 > Where $k_B$ is the Boltzmann constant.  Having defined entropy, one can
111   also define the temperature of the system using the relation
112   \begin{equation}
113 < eq here
113 > \frac{1}{T} = \biggl ( \frac{\partial S}{\partial E} \biggr )_{N,V}
114   \label{introEq:SM6}
115   \end{equation}
116   The temperature in the system $\gamma$ is then
117   \begin{equation}
118 < eq here
118 > \beta( E_{\gamma} ) = \frac{1}{k_B T} =
119 >        \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
120   \label{introEq:SM7}
121   \end{equation}
122   Applying this to Eq.~\ref{introEq:SM4} gives the following
123   \begin{equation}
124 < eq here
124 > \beta( E_{\gamma} ) = \beta( E_{\text{bath}} )
125   \label{introEq:SM8}
126   \end{equation}
127   Showing that the partitioning of energy between the two systems is
128 < actually a process of thermal equilibration. \cite{fix}
128 > actually a process of thermal equilibration.\cite{Frenkel1996}
129  
130   An application of these results is to formulate the form of an
131   expectation value of an observable, $A$, in the canonical ensemble. In
132   the canonical ensemble, the number of particles, $N$, the volume, $V$,
133   and the temperature, $T$, are all held constant while the energy, $E$,
134   is allowed to fluctuate. Returning to the previous example, the bath
135 < system is now an infinitly large thermal bath, whose exchange of
136 < energy with the system $\gamma$ holds teh temperature constant.  The
135 > system is now an infinitely large thermal bath, whose exchange of
136 > energy with the system $\gamma$ holds the temperature constant.  The
137   partitioning of energy in the bath system is then related to the total
138 < energy of both systems and the fluctuations in $E_{\gamma}}$:
138 > energy of both systems and the fluctuations in $E_{\gamma}$:
139   \begin{equation}
140 < eq here
140 > \Omega( E_{\text{bath}} ) = \Omega( E - E_{\gamma} )
141   \label{introEq:SM9}
142   \end{equation}
143   As for the expectation value, it can be defined
144   \begin{equation}
145 < eq here
145 > \langle A \rangle =
146 >        \int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
147 >        P_{\gamma} A(\boldsymbol{\Gamma})
148   \label{introEq:SM10}
149 < \end{eequation}
150 < Where $\int_{\boldsymbol{\Gamma}} d\Boldsymbol{\Gamma}$ denotes an
151 < integration over all accessable phase space, $P_{\gamma}$ is the
149 > \end{equation}
150 > Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes
151 > an integration over all accessible phase space, $P_{\gamma}$ is the
152   probability of being in a given phase state and
153   $A(\boldsymbol{\Gamma})$ is some observable that is a function of the
154   phase state.
# Line 132 | Line 158 | states the coupled system is able to assume. Namely,
158   $\gamma$ will be directly proportional to the number of allowable
159   states the coupled system is able to assume. Namely,
160   \begin{equation}
161 < eq here
161 > P_{\gamma} \propto \Omega( E_{\text{bath}} ) =
162 >        e^{\ln \Omega( E - E_{\gamma})}
163   \label{introEq:SM11}
164   \end{equation}
165 < With $E_{\gamma} \lE$, $\ln \Omega$ can be expanded in a Taylor series:
165 > With $E_{\gamma} \ll E$, $\ln \Omega$ can be expanded in a Taylor series:
166   \begin{equation}
167 < eq here
167 > \ln \Omega ( E - E_{\gamma}) =
168 >        \ln \Omega (E) -
169 >        E_{\gamma}  \frac{\partial \ln \Omega }{\partial E}
170 >        + \ldots
171   \label{introEq:SM12}
172   \end{equation}
173   Higher order terms are omitted as $E$ is an infinite thermal
174   bath. Further, using Eq.~\ref{introEq:SM7}, Eq.~\ref{introEq:SM11} can
175   be rewritten:
176   \begin{equation}
177 < eq here
177 > P_{\gamma} \propto e^{-\beta E_{\gamma}}
178   \label{introEq:SM13}
179   \end{equation}
180 < Where $\ln \Omega(E)$ has been factored out of the porpotionality as a
181 < constant.  Normalizing the probability ($\int_{\boldsymbol{\Gamma}}
182 < d\boldsymbol{\Gamma} P_{\gamma} =1$ gives
180 > Where $\ln \Omega(E)$ has been factored out of the proportionality as a
181 > constant.  Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
182 > d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
183   \begin{equation}
184 < eq here
184 > P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
185 > {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
186   \label{introEq:SM14}
187   \end{equation}
188 < This result is the standard Boltzman statistical distribution.
188 > This result is the standard Boltzmann statistical distribution.
189   Applying it to Eq.~\ref{introEq:SM10} one can obtain the following relation for ensemble averages:
190   \begin{equation}
191 < eq here
191 > \langle A \rangle =
192 > \frac{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
193 >        A( \boldsymbol{\Gamma} ) e^{-\beta E_{\gamma}}}
194 > {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
195   \label{introEq:SM15}
196   \end{equation}
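
As an illustrative aside, Eq.~\ref{introEq:SM15} reduces to a
Boltzmann-weighted sum for a system with discrete states.  A minimal
sketch, using invented energies and observable values:
\begin{verbatim}
# Boltzmann-weighted ensemble average over discrete states.
import math

def ensemble_average(energies, observables, beta):
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)  # the partition function
    return sum(w * a for w, a in zip(weights, observables)) / z

energies    = [0.0, 1.0, 2.0]  # hypothetical state energies
observables = [1.0, 2.0, 3.0]  # hypothetical values of A per state
print(ensemble_average(energies, observables, beta=1.0))
\end{verbatim}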
197  
# Line 165 | Line 199 | the assumption that given an infinite amount of time,
199  
200   One last important consideration is that of ergodicity. Ergodicity is
201   the assumption that given an infinite amount of time, a system will
202 < visit every available point in phase space.\cite{fix} For most
202 > visit every available point in phase space.\cite{Frenkel1996} For most
203   systems, this is a valid assumption, except in cases where the system
204 < may be trapped in a local feature (\emph{i.~e.~glasses}). When valid,
204 > may be trapped in a local feature (\emph{e.g.}~glasses). When valid,
205   ergodicity allows the unification of a time averaged observation and
206 < an ensemble averged one. If an observation is averaged over a
206 > an ensemble averaged one. If an observation is averaged over a
207   sufficiently long time, the system is assumed to visit all
208   appropriately available points in phase space, giving a properly
209   weighted statistical average. This allows the researcher freedom of
210   choice when deciding how best to measure a given observable.  When an
211   ensemble averaged approach seems most logical, the Monte Carlo
212 < techniques described in Sec.~\ref{introSec:MC} can be utilized.
212 > techniques described in Sec.~\ref{introSec:monteCarlo} can be utilized.
213   Conversely, if a problem lends itself to a time averaging approach,
214   the Molecular Dynamics techniques in Sec.~\ref{introSec:MD} can be
215   employed.
216  
217 < \subsection{\label{introSec:monteCarlo}Monte Carlo Simulations}
217 > \section{\label{introSec:monteCarlo}Monte Carlo Simulations}
218  
219   The Monte Carlo method was developed by Metropolis and Ulam for their
220   work in fissionable material.\cite{metropolis:1949} The method is so
# Line 204 | Line 238 | the value of $f(x)$ at each point. The accumulated ave
238   $[a,b]$. The calculation of the integral could then be solved by
239   randomly choosing points along the interval $[a,b]$ and calculating
240   the value of $f(x)$ at each point. The accumulated average would then
241 < approach $I$ in the limit where the number of trials is infintely
241 > approach $I$ in the limit where the number of trials is infinitely
242   large.
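
A minimal sketch of this brute force procedure, applied here to
$\int_0^{\pi} \sin(x)\,dx = 2$ purely for illustration:
\begin{verbatim}
# Monte Carlo estimate of an integral: average f at uniform random
# points in [a, b], then multiply by the interval length.
import math
import random

def mc_integrate(f, a, b, n_trials=100000):
    total = sum(f(random.uniform(a, b)) for _ in range(n_trials))
    return (b - a) * total / n_trials

print(mc_integrate(math.sin, 0.0, math.pi))  # approaches 2
\end{verbatim}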
243  
244   However, in Statistical Mechanics, one is typically interested in
# Line 216 | Line 250 | Where $\mathbf{r}^N$ stands for the coordinates of all
250   \label{eq:mcEnsAvg}
251   \end{equation}
252   Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
253 < and $A$ is some observable that is only dependent on
254 < position. $\langle A \rangle$ is the ensemble average of $A$ as
255 < presented in Sec.~\ref{introSec:statThermo}. Because $A$ is
256 < independent of momentum, the momenta contribution of the integral can
257 < be factored out, leaving the configurational integral. Application of
258 < the brute force method to this system would yield highly inefficient
259 < results. Due to the Boltzman weighting of this integral, most random
253 > and $A$ is some observable that is only dependent on position. This is
254 > the ensemble average of $A$ as presented in
255 > Sec.~\ref{introSec:statThermo}, except here $A$ is independent of
256 > momentum. Therefore the momenta contribution of the integral can be
257 > factored out, leaving the configurational integral. Application of the
258 > brute force method to this system would yield highly inefficient
259 > results. Due to the Boltzmann weighting of this integral, most random
260   configurations will have a near zero contribution to the ensemble
261 < average. This is where a importance sampling comes into
261 > average. This is where importance sampling comes into
262   play.\cite{allen87:csl}
263  
264   Importance Sampling is a method where one selects a distribution from
# Line 250 | Line 284 | Looking at Eq.~\ref{eq:mcEnsAvg}, and realizing
284          {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
285   \label{introEq:MCboltzman}
286   \end{equation}
287 < Where $\rho_{kT}$ is the boltzman distribution.  The ensemble average
287 > Where $\rho_{kT}$ is the Boltzmann distribution.  The ensemble average
288   can be rewritten as
289   \begin{equation}
290   \langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
# Line 272 | Line 306 | sampled from the distribution $\rho_{kT}(\mathbf{r}^N)
306   \end{equation}
307   The difficulty is selecting points $\mathbf{r}^N$ such that they are
308   sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$.  A solution
309 < was proposed by Metropolis et al.\cite{metropolis:1953} which involved
309 > was proposed by Metropolis \emph{et al}.\cite{metropolis:1953} which involved
310   the use of a Markov chain whose limiting distribution was
311   $\rho_{kT}(\mathbf{r}^N)$.
312  
313 < \subsubsection{\label{introSec:markovChains}Markov Chains}
313 > \subsection{\label{introSec:markovChains}Markov Chains}
314  
315   A Markov chain is a chain of states satisfying the following
316   conditions:\cite{leach01:mm}
# Line 284 | Line 318 | conditions:\cite{leach01:mm}
318   \item The outcome of each trial depends only on the outcome of the previous trial.
319   \item Each trial belongs to a finite set of outcomes called the state space.
320   \end{enumerate}
321 < If given two configuartions, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
322 < $\rho_m$ and $\rho_n$ are the probablilities of being in state
321 > If given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
322 > $\rho_m$ and $\rho_n$ are the probabilities of being in state
323   $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively.  Further, the two
324   states are linked by a transition probability, $\pi_{mn}$, which is the
325   probability of going from state $m$ to state $n$.
# Line 326 | Line 360 | will only yield the limiting distribution again.
360   \label{introEq:MCmarkovEquil}
361   \end{equation}
362  
363 < \subsubsection{\label{introSec:metropolisMethod}The Metropolis Method}
363 > \subsection{\label{introSec:metropolisMethod}The Metropolis Method}
364  
365   In the Metropolis method\cite{metropolis:1953}
366   Eq.~\ref{introEq:MCmarkovEquil} is solved such that
367 < $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzman distribution
367 > $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
368   of states.  The method accomplishes this by imposing the strong
369   condition of microscopic reversibility on the equilibrium
370   distribution, meaning that at equilibrium the probability of going
# Line 339 | Line 373 | from $m$ to $n$ is the same as going from $n$ to $m$.
373   \rho_m\pi_{mn} = \rho_n\pi_{nm}
374   \label{introEq:MCmicroReverse}
375   \end{equation}
376 < Further, $\boldsymbol{\alpha}$ is chosen to be a symetric matrix in
376 > Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
377   the Metropolis method.  Using Eq.~\ref{introEq:MCpi},
378   Eq.~\ref{introEq:MCmicroReverse} becomes
379   \begin{equation}
# Line 347 | Line 381 | Eq.~\ref{introEq:MCmicroReverse} becomes
381          \frac{\rho_n}{\rho_m}
382   \label{introEq:MCmicro2}
383   \end{equation}
384 < For a Boltxman limiting distribution,
384 > For a Boltzmann limiting distribution,
385   \begin{equation}
386   \frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
387          = e^{-\beta \Delta \mathcal{U}}
# Line 355 | Line 389 | This allows for the following set of acceptance rules
389   \end{equation}
390   This allows for the following set of acceptance rules to be defined:
391   \begin{equation}
392 < EQ Here
392 > \accMe( m \rightarrow n ) =
393 >        \begin{cases}
394 >        1& \text{if $\Delta \mathcal{U} \leq 0$,} \\
395 >        e^{-\beta \Delta \mathcal{U}}& \text{if $\Delta \mathcal{U} > 0$.}
396 >        \end{cases}
397 > \label{introEq:accRules}
398   \end{equation}
399  
400 < Using the acceptance criteria from Eq.~\ref{fix} the Metropolis method
401 < proceeds as follows
402 < \begin{itemize}
403 < \item Generate an initial configuration $fix$ which has some finite probability in $fix$.
404 < \item Modify $fix$, to generate configuratioon $fix$.
405 < \item If configuration $n$ lowers the energy of the system, accept the move with unity ($fix$ becomes $fix$).  Otherwise accept with probability $fix$.
406 < \item Accumulate the average for the configurational observable of intereest.
407 < \item Repeat from step 2 until average converges.
408 < \end{itemize}
400 > Using the acceptance criteria from Eq.~\ref{introEq:accRules} the
401 > Metropolis method proceeds as follows:
402 > \begin{enumerate}
403 > \item Generate an initial configuration $\mathbf{r}^N$ which has some finite probability in $\rho_{kT}$.
404 > \item Modify $\mathbf{r}^N$ to generate configuration $\mathbf{r^{\prime}}^N$.
405 > \item If the new configuration lowers the energy of the system, accept the move with unity ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$).  Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
406 > \item Accumulate the average for the configurational observable of interest.
407 > \item Repeat from step 2 until the average converges.
408 > \end{enumerate}
409   One important note is that the average is accumulated whether the move
410   is accepted or not; this ensures proper weighting of the average.
411 < Using Eq.~\ref{fix} it becomes clear that the accumulated averages are
412 < the ensemble averages, as this method ensures that the limiting
413 < distribution is the Boltzman distribution.
411 > Using Eq.~\ref{introEq:Importance4} it becomes clear that the
412 > accumulated averages are the ensemble averages, as this method ensures
413 > that the limiting distribution is the Boltzmann distribution.
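
A minimal sketch of this procedure for a single coordinate with a
stand-in potential $\mathcal{U}(x) = x^2$ (the potential, step size,
and $\beta$ are illustrative assumptions, not a system from this work):
\begin{verbatim}
# Metropolis sampling of x with U(x) = x^2, accumulating <x^2>.
import math
import random

def metropolis_average(n_steps=100000, beta=1.0, step_size=0.5):
    u = lambda x: x * x      # stand-in configurational energy
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + random.uniform(-step_size, step_size)
        delta_u = u(x_new) - u(x)
        # accept with unity if the energy drops, else with exp(-beta dU)
        if delta_u <= 0.0 or random.random() < math.exp(-beta * delta_u):
            x = x_new
        total += x * x       # accumulate whether accepted or not
    return total / n_steps

print(metropolis_average())  # ~0.5, the analytic <x^2> at beta = 1
\end{verbatim}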
414  
415 < \subsection{\label{introSec:MD}Molecular Dynamics Simulations}
415 > \section{\label{introSec:MD}Molecular Dynamics Simulations}
416  
417   The main simulation tool used in this research is Molecular Dynamics.
418   Molecular Dynamics is the technique whereby the equations of motion for a system are
# Line 381 | Line 420 | diffusion constants, velocity auto correlations, foldi
420   momentum of a system, allowing the calculation of not only
421   configurational observables, but momenta dependent ones as well:
422   diffusion constants, velocity auto correlations, folding/unfolding
423 < events, etc.  Due to the principle of ergodicity, Eq.~\ref{fix}, the
424 < average of these observables over the time period of the simulation
425 < are taken to be the ensemble averages for the system.
423 > events, etc.  Due to the principle of ergodicity,
424 > Sec.~\ref{introSec:ergodic}, the averages of these observables over the
425 > time period of the simulation are taken to be the ensemble averages
426 > for the system.
427  
428   The choice of when to use molecular dynamics over Monte Carlo
429   techniques, is normally decided by the observables in which the
430   researcher is interested.  If the observables depend on momenta in
431   any fashion, then the only choice is molecular dynamics in some form.
432   However, when the observable is dependent only on the configuration,
433 < then most of the time Monte Carlo techniques will be more efficent.
433 > then most of the time Monte Carlo techniques will be more efficient.
434  
435   The focus of research in the second half of this dissertation is
436   centered around the dynamic properties of phospholipid bilayers,
437   making molecular dynamics key in the simulation of those properties.
438  
439 < \subsubsection{Molecular dynamics Algorithm}
439 > \subsection{\label{introSec:mdAlgorithm}The Molecular Dynamics Algorithm}
440  
441   To illustrate how the molecular dynamics technique is applied, the
442   following sections will describe the sequence involved in a
443 < simulation.  Sec.~\ref{fix} deals with the initialization of a
444 < simulation.  Sec.~\ref{fix} discusses issues involved with the
445 < calculation of the forces.  Sec.~\ref{fix} concludes the algorithm
446 < discussion with the integration of the equations of motion. \cite{fix}
443 > simulation.  Sec.~\ref{introSec:mdInit} deals with the initialization
444 > of a simulation.  Sec.~\ref{introSec:mdForce} discusses issues
445 > involved with the calculation of the forces.
446 > Sec.~\ref{introSec:mdIntegrate} concludes the algorithm discussion
447 > with the integration of the equations of motion.\cite{Frenkel1996}
448  
449 < \subsubsection{initialization}
449 > \subsection{\label{introSec:mdInit}Initialization}
450  
451   When selecting the initial configuration for the simulation it is
452   important to consider what dynamics one is hoping to observe.
453 < Ch.~\ref{fix} deals with the formation and equilibrium dynamics of
453 > Ch.~\ref{chapt:lipid} deals with the formation and equilibrium dynamics of
454   phospholipid membranes.  Therefore in these simulations initial
455   positions were selected that in some cases dispersed the lipids in
456   water, and in other cases structured the lipids into preformed
457   bilayers.  Important considerations at this stage of the simulation are:
458   \begin{itemize}
459   \item There are no major overlaps of molecular or atomic orbitals.
460 < \item Velocities are chosen in such a way as to not gie the system a non=zero total momentum or angular momentum.
461 < \item It is also sometimes desireable to select the velocities to correctly sample the target temperature.
460 > \item Velocities are chosen in such a way as to not give the system a non-zero total momentum or angular momentum.
461 > \item It is also sometimes desirable to select the velocities to correctly sample the target temperature.
462   \end{itemize}
463  
464   The first point is important due to the amount of potential energy
465   generated by having two particles too close together.  If overlap
466   occurs, the first evaluation of forces will return numbers so large as
467 < to render the numerical integration of teh motion meaningless.  The
467 > to render the numerical integration of the motion meaningless.  The
468   second consideration keeps the system from drifting or rotating as a
469   whole.  This arises from the fact that most simulations are of systems
470   in equilibrium in the absence of outside forces.  Therefore any net
471   movement would be unphysical and an artifact of the simulation method
472 < used.  The final point addresses teh selection of the magnitude of the
473 < initial velocities.  For many simulations it is convienient to use
472 > used.  The final point addresses the selection of the magnitude of the
473 > initial velocities.  For many simulations it is convenient to use
474   this opportunity to scale the amount of kinetic energy to reflect the
475   desired thermal distribution of the system.  However, it must be noted
476   that most systems will require further velocity rescaling after the
477   first few initial simulation steps due to either loss or gain of
478   kinetic energy from energy stored in potential degrees of freedom.
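
A sketch of the momentum and temperature points above for linear
momentum only (reduced units with $k_B = 1$; the particle number, mass,
and target temperature are placeholders, and the corresponding removal
of angular momentum is omitted):
\begin{verbatim}
# Draw random velocities, remove center-of-mass drift, and rescale
# the kinetic energy to (3/2) N kT.
import random

def initialize_velocities(n, mass=1.0, target_temp=1.0):
    v = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(n)]
    for k in range(3):                       # zero total momentum
        mean_vk = sum(vi[k] for vi in v) / n
        for vi in v:
            vi[k] -= mean_vk
    kinetic = 0.5 * mass * sum(vi[0]**2 + vi[1]**2 + vi[2]**2
                               for vi in v)
    scale = (1.5 * n * target_temp / kinetic) ** 0.5
    return [[scale * c for c in vi] for vi in v]

velocities = initialize_velocities(100)
\end{verbatim}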
479  
480 < \subsubsection{Force Evaluation}
480 > \subsection{\label{introSec:mdForce}Force Evaluation}
481  
482   The evaluation of forces is the most computationally expensive portion
483   of a given molecular dynamics simulation.  This is due entirely to the
484   evaluation of long range forces in a simulation, typically pair-wise.
485   These forces are most commonly the van der Waals force, and sometimes
486 < Coulombic forces as well.  For a pair-wise force, there are $fix$
487 < pairs to be evaluated, where $n$ is the number of particles in the
488 < system.  This leads to the calculations scaling as $fix$, making large
486 > Coulombic forces as well.  For a pair-wise force, there are $N(N-1)/2$
487 > pairs to be evaluated, where $N$ is the number of particles in the
488 > system.  This leads to the calculations scaling as $N^2$, making large
489   simulations prohibitive in the absence of any computation saving
490   techniques.
491  
492   Another consideration one must resolve is that in a given simulation
493   a disproportionate number of the particles will feel the effects of
494 < the surface.\cite{fix} For a cubic system of 1000 particles arranged
495 < in a $10x10x10$ cube, 488 particles will be exposed to the surface.
496 < Unless one is simulating an isolated particle group in a vacuum, the
497 < behavior of the system will be far from the desired bulk
498 < charecteristics.  To offset this, simulations employ the use of
499 < periodic boundary images.\cite{fix}
494 > the surface.\cite{allen87:csl} For a cubic system of 1000 particles
495 > arranged in a $10 \times 10 \times 10$ cube, 488 particles will be
496 > exposed to the surface.  Unless one is simulating an isolated particle
497 > group in a vacuum, the behavior of the system will be far from the
498 > desired bulk characteristics.  To offset this, simulations employ the
499 > use of periodic boundary images.\cite{born:1912}
500  
501   The technique involves the use of an algorithm that replicates the
502 < simulation box on an infinite lattice in cartesian space.  Any given
502 > simulation box on an infinite lattice in Cartesian space.  Any given
503   particle leaving the simulation box on one side will have an image of
504 < itself enter on the opposite side (see Fig.~\ref{fix}).
505 < \begin{equation}
506 < EQ Here
507 < \end{equation}
508 < In addition, this sets that any given particle pair has an image, real
468 < or periodic, within $fix$ of each other.  A discussion of the method
469 < used to calculate the periodic image can be found in Sec.\ref{fix}.
504 > itself enter on the opposite side (see Fig.~\ref{introFig:pbc}).  In
505 > addition, this ensures that any two particles have an image, real or
506 > periodic, within $\text{box}/2$ of each other.  A discussion of the
507 > method used to calculate the periodic image can be found in
508 > Sec.~\ref{oopseSec:pbc}.
509  
510 + \begin{figure}
511 + \centering
512 + \includegraphics[width=\linewidth]{pbcFig.eps}
513 + \caption[An illustration of periodic boundary conditions]{A 2-D illustration of periodic boundary conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
514 + \label{introFig:pbc}
515 + \end{figure}
516 +
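For a cubic box, the wrapping and the resulting minimum image
separation can be sketched as follows (the box length is arbitrary):
\begin{verbatim}
# Map a coordinate back into the primary box, and compute the
# separation to the nearest periodic image (never beyond box/2).
def wrap(x, box):
    return x - box * round(x / box)

def minimum_image(xi, xj, box):
    return wrap(xi - xj, box)

box = 10.0
print(wrap(12.3, box))               # maps back inside the box
print(minimum_image(9.5, 0.5, box))  # -1.0: the image, not 9.0
\end{verbatim}
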
517   Returning to the topic of the computational scale of the force
518   evaluation, the use of periodic boundary conditions requires that a
519   cutoff radius be employed.  Using a cutoff radius improves the
520   efficiency of the force evaluation, as particles farther than a
521 < predetermined distance, $fix$, are not included in the
522 < calculation.\cite{fix} In a simultation with periodic images, $fix$
523 < has a maximum value of $fix$.  Fig.~\ref{fix} illustrates how using an
524 < $fix$ larger than this value, or in the extreme limit of no $fix$ at
525 < all, the corners of the simulation box are unequally weighted due to
526 < the lack of particle images in the $x$, $y$, or $z$ directions past a
527 < disance of $fix$.
521 > predetermined distance, $r_{\text{cut}}$, are not included in the
522 > calculation.\cite{Frenkel1996} In a simulation with periodic images,
523 > $r_{\text{cut}}$ has a maximum value of $\text{box}/2$.
524 > Fig.~\ref{introFig:rMax} illustrates how, when using an
525 > $r_{\text{cut}}$ larger than this value, or in the extreme limit of no
526 > $r_{\text{cut}}$ at all, the corners of the simulation box are
527 > unequally weighted due to the lack of particle images in the $x$, $y$,
528 > or $z$ directions past a distance of $\text{box} / 2$.
529  
530 < With the use of an $fix$, however, comes a discontinuity in the
531 < potential energy curve (Fig.~\ref{fix}). To fix this discontinuity,
532 < one calculates the potential energy at the $r_{\text{cut}}$, and add
533 < that value to the potential.  This causes the function to go smoothly
534 < to zero at the cutoff radius.  This ensures conservation of energy
535 < when integrating the Newtonian equations of motion.
530 > \begin{figure}
531 > \centering
532 > \includegraphics[width=\linewidth]{rCutMaxFig.eps}
533 > \caption[An explanation of $r_{\text{cut}}$]{The yellow atom has all other images wrapped to itself as the center. If $r_{\text{cut}}=\text{box}/2$, then the distribution is uniform (blue atoms). However, when $r_{\text{cut}}>\text{box}/2$ the corners are disproportionately weighted (green atoms) vs the axial directions (shaded regions).}
534 > \label{introFig:rMax}
535 > \end{figure}
536 >
537 > With the use of an $r_{\text{cut}}$, however, comes a discontinuity in
538 > the potential energy curve (Fig.~\ref{introFig:shiftPot}). To fix this
539 > discontinuity, one calculates the potential energy at the
540 > $r_{\text{cut}}$, and subtracts that value from the potential, causing
541 > the function to go smoothly to zero at the cutoff radius.  This
542 > shifted potential ensures conservation of energy when integrating the
543 > Newtonian equations of motion.
544 >
545 > \begin{figure}
546 > \centering
547 > \includegraphics[width=\linewidth]{shiftedPot.eps}
548 > \caption[Shifting the Lennard-Jones Potential]{The Lennard-Jones potential (blue line) is shifted (red line) to remove the discontinuity at $r_{\text{cut}}$.}
549 > \label{introFig:shiftPot}
550 > \end{figure}
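
A sketch of this shift applied to the Lennard-Jones potential (reduced
units; the cutoff $r_{\text{cut}} = 2.5\sigma$ is a common illustrative
choice, not necessarily the value used in this work):
\begin{verbatim}
# Shifted Lennard-Jones potential: subtract V(r_cut) so the curve
# goes smoothly to zero at the cutoff.
def lj(r, epsilon=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lj_shifted(r, r_cut=2.5, epsilon=1.0, sigma=1.0):
    if r >= r_cut:
        return 0.0
    return lj(r, epsilon, sigma) - lj(r_cut, epsilon, sigma)

print(lj_shifted(2.5))  # exactly 0.0 at the cutoff
print(lj_shifted(1.0))  # the plain LJ value offset by -lj(2.5)
\end{verbatim}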
551  
552   The second main simplification used in this research is the Verlet
553   neighbor list.\cite{allen87:csl} In the Verlet method, one generates
# Line 498 | Line 560 | neighbor list.
560   giving rise to the possibility that a particle has left or joined a
561   neighbor list.
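
A schematic sketch of such a list for a cubic periodic box (the skin
width and the rebuild criterion noted below are illustrative
conventions, not necessarily those used in this work):
\begin{verbatim}
# Verlet neighbor list: store all pairs within r_cut + skin; force
# loops between rebuilds visit only these pairs.
def build_neighbor_list(positions, box, r_cut, skin=0.3):
    wrap = lambda x: x - box * round(x / box)  # minimum image
    r_list = r_cut + skin
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d2 = sum(wrap(positions[i][k] - positions[j][k]) ** 2
                     for k in range(3))
            if d2 < r_list * r_list:
                pairs.append((i, j))
    return pairs

# Rebuild whenever any particle has moved more than skin/2 since the
# last build, since such a particle may have left or joined a list.
positions = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [9.8, 0.0, 0.0]]
print(build_neighbor_list(positions, box=10.0, r_cut=1.5))
\end{verbatim}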
562  
563 < \subsection{\label{introSec:MDintegrate} Integration of the equations of motion}
563 > \subsection{\label{introSec:mdIntegrate} Integration of the equations of motion}
564  
565   A starting point for the discussion of molecular dynamics integrators
566 < is the Verlet algorithm. \cite{Frenkel1996} It begins with a Taylor
566 > is the Verlet algorithm.\cite{Frenkel1996} It begins with a Taylor
567   expansion of position in time:
568   \begin{equation}
569 < eq here
569 > q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 +
570 >        \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
571 >        \mathcal{O}(\Delta t^4)
572   \label{introEq:verletForward}
573   \end{equation}
574   As well as,
575   \begin{equation}
576 < eq here
576 > q(t-\Delta t)= q(t) - v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 -
577 >        \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
578 >        \mathcal{O}(\Delta t^4)
579   \label{introEq:verletBack}
580   \end{equation}
581 < Adding together Eq.~\ref{introEq:verletForward} and
581 > Where $m$ is the mass of the particle, $q(t)$ is the position at time
582 > $t$, $v(t)$ the velocity, and $F(t)$ the force acting on the
583 > particle. Adding together Eq.~\ref{introEq:verletForward} and
584   Eq.~\ref{introEq:verletBack} results in,
585   \begin{equation}
586 < eq here
586 > q(t+\Delta t)+q(t-\Delta t) =
587 >        2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
588   \label{introEq:verletSum}
589   \end{equation}
590   Or equivalently,
591   \begin{equation}
592 < eq here
592 > q(t+\Delta t) \approx
593 >        2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2
594   \label{introEq:verletFinal}
595   \end{equation}
596   Which contains an error in the estimate of the new positions on the
597   order of $\Delta t^4$.
598  
599   In practice, however, the simulations in this research were integrated
600 < with a velocity reformulation of teh Verlet method.\cite{allen87:csl}
601 < \begin{equation}
602 < eq here
603 < \label{introEq:MDvelVerletPos}
604 < \end{equation}
605 < \begin{equation}
606 < eq here
600 > with a velocity reformulation of the Verlet method.\cite{allen87:csl}
601 > \begin{align}
602 > q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 %
603 > \label{introEq:MDvelVerletPos} \\%
604 > %
605 > v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] %
606   \label{introEq:MDvelVerletVel}
607 < \end{equation}
607 > \end{align}
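
Written as an update rule, a minimal sketch of this scheme (the
harmonic force law below is a stand-in chosen only for illustration):
\begin{verbatim}
# One velocity Verlet step for a single degree of freedom.
def velocity_verlet_step(q, v, force, mass, dt):
    f_old = force(q)
    q_new = q + v * dt + f_old * dt * dt / (2.0 * mass)
    v_new = v + (f_old + force(q_new)) * dt / (2.0 * mass)
    return q_new, v_new

force = lambda q: -q  # hypothetical F = -kq with k = 1
q, v = 1.0, 0.0
for _ in range(1000):
    q, v = velocity_verlet_step(q, v, force, 1.0, 0.01)
print(q, v)  # the energy (q**2 + v**2)/2 remains close to 0.5
\end{verbatim}
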
608   The original Verlet algorithm can be regained by substituting the
609   velocity back into Eq.~\ref{introEq:MDvelVerletPos}.  The Verlet
610   formulations are chosen in this research because the algorithms have
611   very little long term drift in energy conservation.  Energy
612   conservation in a molecular dynamics simulation is of extreme
613   importance, as it is a measure of how closely one is following the
614 < ``true'' trajectory wtih the finite integration scheme.  An exact
614 > ``true'' trajectory with the finite integration scheme.  An exact
615   solution to the integration will conserve area in phase space, as well
616   as be reversible in time, that is, the trajectory integrated forward
617   or backwards will exactly match itself.  Having a finite algorithm
# Line 553 | Line 622 | a pseudo-Hamiltonian which shadows the real one in pha
622   It can be shown,\cite{Frenkel1996} that although the Verlet algorithm
623   does not rigorously preserve the actual Hamiltonian, it does preserve
624   a pseudo-Hamiltonian which shadows the real one in phase space.  This
625 < pseudo-Hamiltonian is proveably area-conserving as well as time
625 > pseudo-Hamiltonian is provably area-conserving as well as time
626   reversible.  The fact that it shadows the true Hamiltonian in phase
627   space is acceptable in actual simulations as one is interested in the
628   ensemble average of the observable being measured.  From the ergodic
629 < hypothesis (Sec.~\ref{introSec:StatThermo}), it is known that the time
629 > hypothesis (Sec.~\ref{introSec:statThermo}), it is known that the time
630   average will match the ensemble average, therefore two similar
631   trajectories in phase space should give matching statistical averages.
632  
633   \subsection{\label{introSec:MDfurther}Further Considerations}
634 +
635   In the simulations presented in this research, a few additional
636   parameters are needed to describe the motions.  The simulations
637 < involving water and phospholipids in Chapt.~\ref{chaptLipids} are
637 > involving water and phospholipids in Ch.~\ref{chapt:lipid} are
638   required to integrate the equations of motions for dipoles on atoms.
639   This involves an additional three parameters be specified for each
640   dipole atom: $\phi$, $\theta$, and $\psi$.  These three angles are
641   taken to be the Euler angles, where $\phi$ is a rotation about the
642   $z$-axis, and $\theta$ is a rotation about the new $x$-axis, and
643   $\psi$ is a final rotation about the new $z$-axis (see
644 < Fig.~\ref{introFig:euleerAngles}).  This sequence of rotations can be
645 < accumulated into a single $3 \times 3$ matrix $\mathbf{A}$
644 > Fig.~\ref{introFig:eulerAngles}).  This sequence of rotations can be
645 > accumulated into a single $3 \times 3$ matrix, $\mathbf{A}$,
646   defined as follows:
647   \begin{equation}
648 < eq here
648 > \mathbf{A} =
649 > \begin{bmatrix}
650 >        \cos\phi\cos\psi-\sin\phi\cos\theta\sin\psi &%
651 >        \sin\phi\cos\psi+\cos\phi\cos\theta\sin\psi &%
652 >        \sin\theta\sin\psi \\%
653 >        %
654 >        -\cos\phi\sin\psi-\sin\phi\cos\theta\cos\psi &%
655 >        -\sin\phi\sin\psi+\cos\phi\cos\theta\cos\psi &%
656 >        \sin\theta\cos\psi \\%
657 >        %
658 >        \sin\phi\sin\theta &%
659 >        -\cos\phi\sin\theta &%
660 >        \cos\theta
661 > \end{bmatrix}
662   \label{introEq:EulerRotMat}
663   \end{equation}
664  
665 < The equations of motion for Euler angles can be written down as
666 < \cite{allen87:csl}
667 < \begin{equation}
668 < eq here
669 < \label{introEq:MDeuleeerPsi}
670 < \end{equation}
665 > \begin{figure}
666 > \centering
667 > \includegraphics[width=\linewidth]{eulerRotFig.eps}
668 > \caption[Euler rotation of Cartesian coordinates]{The rotation scheme for Euler angles. First is a rotation of $\phi$ about the $z$ axis (blue rotation). Next is a rotation of $\theta$ about the new $x$ axis (green rotation). Lastly is a final rotation of $\psi$ about the new $z$ axis (red rotation).}
669 > \label{introFig:eulerAngles}
670 > \end{figure}
671 >
672 > The equations of motion for Euler angles can be written down
673 > as\cite{allen87:csl}
674 > \begin{align}
675 > \dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} +
676 >        \omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} +
677 >        \omega^s_z
678 > \label{introEq:MDeulerPhi} \\%
679 > %
680 > \dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi
681 > \label{introEq:MDeulerTheta} \\%
682 > %
683 > \dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} -
684 >        \omega^s_y \frac{\cos\phi}{\sin\theta}
685 > \label{introEq:MDeulerPsi}
686 > \end{align}
687   Where $\omega^s_i$ is the angular velocity in the lab space frame
688 < along cartesian coordinate $i$.  However, a difficulty arises when
688 > along Cartesian coordinate $i$.  However, a difficulty arises when
689   attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
690   Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
691   both equations means there is a non-physical instability present when
692 < $\theta$ is 0 or $\pi$.
693 <
694 < To correct for this, the simulations integrate the rotation matrix,
695 < $\mathbf{A}$, directly, thus avoiding the instability.
597 < This method was proposed by Dullwebber
598 < \emph{et. al.}\cite{Dullwebber:1997}, and is presented in
692 > $\theta$ is 0 or $\pi$. To correct for this, the simulations integrate
693 > the rotation matrix, $\mathbf{A}$, directly, thus avoiding the
694 > instability.  This method was proposed by Dullweber
695 > \emph{et. al.}\cite{Dullweber1997}, and is presented in
696   Sec.~\ref{introSec:MDsymplecticRot}.
697  
698 < \subsubsection{\label{introSec:MDliouville}Liouville Propagator}
698 > \subsection{\label{introSec:MDliouville}Liouville Propagator}
699  
700   Before discussing the integration of the rotation matrix, it is
701   necessary to understand the construction of a ``good'' integration
702   scheme.  It has been previously
703 < discussed(Sec.~\ref{introSec:MDintegrate}) how it is desirable for an
703 > discussed(Sec.~\ref{introSec:mdIntegrate}) how it is desirable for an
704   integrator to be symplectic, or time reversible.  The following is an
705   outline of the Trotter factorization of the Liouville Propagator as a
706 < scheme for generating symplectic integrators. \cite{Tuckerman:1992}
706 > scheme for generating symplectic integrators.\cite{Tuckerman92}
707  
708   For a system with $f$ degrees of freedom the Liouville operator can be
709   defined as,
710   \begin{equation}
711 < eq here
711 > iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} +
712 >        F_j\frac{\partial}{\partial p_j} \biggr ]
713   \label{introEq:LiouvilleOperator}
714   \end{equation}
715 < Here, $r_j$ and $p_j$ are the position and conjugate momenta of a
716 < degree of freedom, and $f_j$ is the force on that degree of freedom.
717 < $\Gamma$ is defined as the set of all positions nad conjugate momenta,
718 < $\{r_j,p_j\}$, and the propagator, $U(t)$, is defined
715 > Here, $q_j$ and $p_j$ are the position and conjugate momenta of a
716 > degree of freedom, and $F_j$ is the force on that degree of freedom.
717 > $\Gamma$ is defined as the set of all positions and conjugate momenta,
718 > $\{q_j,p_j\}$, and the propagator, $U(t)$, is defined
719   \begin {equation}
720 < eq here
720 > U(t) = e^{iLt}
721   \label{introEq:Lpropagator}
722   \end{equation}
723   This allows the specification of $\Gamma$ at any time $t$ as
724   \begin{equation}
725 < eq here
725 > \Gamma(t) = U(t)\Gamma(0)
726   \label{introEq:Lp2}
727   \end{equation}
728   It is important to note, $U(t)$ is a unitary operator meaning
# Line 635 | Line 733 | Trotter theorem to yield
733  
734   Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
735   Trotter theorem to yield
736 < \begin{equation}
737 < eq here
738 < \label{introEq:Lp4}
739 < \end{equation}
740 < Where $\Delta t = \frac{t}{P}$.
736 > \begin{align}
737 > e^{iLt} &= e^{i(L_1 + L_2)t} \notag \\%
738 > %
739 >        &= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P \notag \\%
740 > %
741 >        &= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
742 >        e^{iL_1\frac{\Delta t}{2}} \biggr ]^P +
743 >        \mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ) \label{introEq:Lp4}
744 > \end{align}
745 > Where $\Delta t = t/P$.
746   With this, a discrete time operator $G(\Delta t)$ can be defined:
747 < \begin{equation}
748 < eq here
747 > \begin{align}
748 > G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
749 >        e^{iL_1\frac{\Delta t}{2}} \notag \\%
750 > %
751 >        &= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\,
752 >        U_1 \biggl ( \frac{\Delta t}{2} \biggr )
753   \label{introEq:Lp5}
754 < \end{equation}
755 < Because $U_1(t)$ and $U_2(t)$ are unitary, $G|\Delta t)$ is also
754 > \end{align}
755 > Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
756   unitary.  Meaning an integrator based on this factorization will be
757   reversible in time.
758  
759   As an example, consider the following decomposition of $L$:
760 < \begin{equation}
761 < eq here
762 < \label{introEq:Lp6}
763 < \end{equation}
764 < Operating $G(\Delta t)$ on $\Gamma)0)$, and utilizing the operator property
760 > \begin{align}
761 > iL_1 &= \dot{q}\frac{\partial}{\partial q}%
762 > \label{introEq:Lp6a} \\%
763 > %
764 > iL_2 &= F(q)\frac{\partial}{\partial p}%
765 > \label{introEq:Lp6b}
766 > \end{align}
767 > This leads to propagator $G( \Delta t )$ as,
768   \begin{equation}
769 < eq here
770 < \label{introEq:Lp8}
769 > G(\Delta t) =  e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
770 >        e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \,
771 >        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
772 > \label{introEq:Lp7}
773   \end{equation}
774 < Where $c$ is independent of $q$.  One obtains the following:  
774 > Operating $G(\Delta t)$ on $\Gamma(0)$, and utilizing the operator property
775   \begin{equation}
776 < eq here
776 > e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c)
777   \label{introEq:Lp8}
778   \end{equation}
779 + Where $c$ is independent of $x$.  One obtains the following:  
780 + \begin{align}
781 + \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
782 +        \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9a}\\%
783 + %
784 + q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr )%
785 +        \label{introEq:Lp9b}\\%
786 + %
787 + \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
788 +        \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9c}
789 + \end{align}
790   Or written another way,
791 < \begin{equation}
792 < eq here
793 < \label{intorEq:Lp9}
794 < \end{equation}
791 > \begin{align}
792 > q(t+\Delta t) &= q(0) + \dot{q}(0)\Delta t +
793 >        \frac{F[q(0)]}{m}\frac{\Delta t^2}{2} %
794 > \label{introEq:Lp10a} \\%
795 > %
796 > \dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
797 >        \biggl [F[q(0)] + F[q(\Delta t)] \biggr] %
798 > \label{introEq:Lp10b}
799 > \end{align}
800   This is the velocity Verlet formulation presented in
801 < Sec.~\ref{introSec:MDintegrate}.  Because this integration scheme is
801 > Sec.~\ref{introSec:mdIntegrate}.  Because this integration scheme is
802   comprised of unitary propagators, it is symplectic, and therefore area
803 < preserving in phase space.  From the preceeding fatorization, one can
803 > preserving in phase space.  From the preceding factorization, one can
804   see that the integration of the equations of motion would follow:
805   \begin{enumerate}
806   \item calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
# Line 684 | Line 812 | see that the integration of the equations of motion wo
812   \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
813   \end{enumerate}
814  
815 < \subsubsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}
815 > \subsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}
816  
817   Based on the factorization from the previous section,
818 < Dullweber\emph{et al.}\cite{Dullweber:1997}~ proposed a scheme for the
818 > Dullweber \emph{et al}.\cite{Dullweber1997} proposed a scheme for the
819   symplectic propagation of the rotation matrix, $\mathbf{A}$, as an
820   alternative method for the integration of orientational degrees of
821   freedom. The method starts with a straightforward splitting of the
822   Liouville operator:
823 < \begin{equation}
824 < eq here
825 < \label{introEq:SR1}
826 < \end{equation}
827 < Where $\boldsymbol{\tau}(\mathbf{A})$ are the tourques of the system
828 < due to the configuration, and $\boldsymbol{/pi}$ are the conjugate
823 > \begin{align}
824 > iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} +
825 >        \mathbf{\dot{A}}\frac{\partial}{\partial \mathbf{A}}
826 > \label{introEq:SR1a} \\%
827 > %
828 > iL_F &= F(q)\frac{\partial}{\partial p}
829 > \label{introEq:SR1b} \\%
830 > iL_{\tau} &= \tau(\mathbf{A})\frac{\partial}{\partial \pi}
831 > \label{introEq:SR1c}%
832 > \end{align}
833 > Where $\tau(\mathbf{A})$ is the torque of the system
834 > due to the configuration, and $\pi$ is the conjugate
835   angular momenta of the system. The propagator, $G(\Delta t)$, becomes
836   \begin{equation}
837 < eq here
837 > G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
838 >        e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
839 >        e^{\Delta t\,iL_{\text{pos}}} \,
840 >        e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
841 >        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
842   \label{introEq:SR2}
843   \end{equation}
844 < Propagation fo the linear and angular momenta follows as in the Verlet
845 < scheme.  The propagation of positions also follows the verlet scheme
844 > Propagation of the linear and angular momenta follows as in the Verlet
845 > scheme.  The propagation of positions also follows the Verlet scheme
846   with the addition of a further symplectic splitting of the rotation
847 < matrix propagation, $\mathcal{G}_{\text{rot}}(\Delta t)$.
847 > matrix propagation, $\mathcal{U}_{\text{rot}}(\Delta t)$, within
848 > $U_{\text{pos}}(\Delta t)$.
849   \begin{equation}
850 < eq here
850 > \mathcal{U}_{\text{rot}}(\Delta t) =
851 >        \mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
852 >        \mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
853 >        \mathcal{U}_z (\Delta t)\,
854 >        \mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
855 >        \mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
856   \label{introEq:SR3}
857   \end{equation}
858 < Where $\mathcal{G}_j$ is a unitary rotation of $\mathbf{A}$ and
859 < $\boldsymbol{\pi}$ about each axis $j$.  As all propagations are now
858 > Where $\mathcal{U}_j$ is a unitary rotation of $\mathbf{A}$ and
859 > $\pi$ about each axis $j$.  As all propagations are now
860   unitary and symplectic, the entire integration scheme is also
861   symplectic and time reversible.
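
A schematic sketch of this splitting acting on $\mathbf{A}$ and the
body-frame angular momentum (the rotation sign conventions and the
principal moments of inertia below are assumptions made for
illustration, not the conventions of any particular implementation):
\begin{verbatim}
# Symmetric x-y-z-y-x free-rotor splitting of the rotation propagation.
import math

def axis_rotation(axis, angle):
    c, s = math.cos(angle), math.sin(angle)
    R = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    a, b = [(1, 2), (0, 2), (0, 1)][axis]
    R[a][a], R[a][b], R[b][a], R[b][b] = c, s, -s, c
    return R

def rotate(A, pi_b, inertia, axis, dt):
    R = axis_rotation(axis, dt * pi_b[axis] / inertia[axis])
    pi_new = [sum(R[i][j] * pi_b[j] for j in range(3)) for i in range(3)]
    A_new = [[sum(A[i][k] * R[k][j] for k in range(3)) for j in range(3)]
             for i in range(3)]
    return A_new, pi_new

def rotation_propagate(A, pi_b, inertia, dt):
    for axis, h in ((0, dt/2), (1, dt/2), (2, dt), (1, dt/2), (0, dt/2)):
        A, pi_b = rotate(A, pi_b, inertia, axis, h)
    return A, pi_b

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
A, pi_b = rotation_propagate(identity, [0.1, 0.2, 0.3],
                             [1.0, 1.0, 1.0], 0.01)
\end{verbatim}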
862  
863   \section{\label{introSec:layout}Dissertation Layout}
864  
865 < This dissertation is divided as follows:Chapt.~\ref{chapt:RSA}
865 > This dissertation is divided as follows: Ch.~\ref{chapt:RSA}
866   presents the random sequential adsorption simulations of related
867 < pthalocyanines on a gold (111) surface. Chapt.~\ref{chapt:OOPSE}
867 > phthalocyanines on a gold (111) surface. Ch.~\ref{chapt:OOPSE}
868   is about the writing of the molecular dynamics simulation package
869 < {\sc oopse}, Chapt.~\ref{chapt:lipid} regards the simulations of
870 < phospholipid bilayers using a mesoscale model, and lastly,
871 < Chapt.~\ref{chapt:conclusion} concludes this dissertation with a
869 > {\sc oopse}. Ch.~\ref{chapt:lipid} regards the simulations of
870 > phospholipid bilayers using a mesoscale model. Lastly,
871 > Ch.~\ref{chapt:conclusion} concludes this dissertation with a
872   summary of all results. The chapters are arranged in chronological
873   order, and reflect the progression of techniques I employed during my
874   research.  
875  
876 < The chapter concerning random sequential adsorption
877 < simulations is a study in applying the principles of theoretical
878 < research in order to obtain a simple model capaable of explaining the
879 < results.  My advisor, Dr. Gezelter, and I were approached by a
880 < colleague, Dr. Lieberman, about possible explanations for partial
881 < coverge of a gold surface by a particular compound of hers. We
882 < suggested it might be due to the statistical packing fraction of disks
883 < on a plane, and set about to simulate this system.  As the events in
884 < our model were not dynamic in nature, a Monte Carlo method was
885 < emplyed.  Here, if a molecule landed on the surface without
886 < overlapping another, then its landing was accepted.  However, if there
887 < was overlap, the landing we rejected and a new random landing location
888 < was chosen.  This defined our acceptance rules and allowed us to
889 < construct a Markov chain whose limiting distribution was the surface
890 < coverage in which we were interested.
876 > The chapter concerning random sequential adsorption simulations is a
877 > study in applying Statistical Mechanics simulation techniques in order
878 > to obtain a simple model capable of explaining the results.  My
879 > advisor, Dr. Gezelter, and I were approached by a colleague,
880 > Dr. Lieberman, about possible explanations for the partial coverage of a
881 > gold surface by a particular compound of hers. We suggested it might
882 > be due to the statistical packing fraction of disks on a plane, and
883 > set about to simulate this system.  As the events in our model were
884 > not dynamic in nature, a Monte Carlo method was employed.  Here, if a
885 > molecule landed on the surface without overlapping another, then its
886 > landing was accepted.  However, if there was overlap, the landing was
887 > rejected and a new random landing location was chosen.  This defined
888 > our acceptance rules and allowed us to construct a Markov chain whose
889 > limiting distribution was the surface coverage in which we were
890 > interested.
891  
892   The following chapter, about the simulation package {\sc oopse},
893   describes in detail the large body of scientific code that had to be
894 < written in order to study phospholipid bilayer.  Although there are
894 > written in order to study phospholipid bilayers.  Although there are
895   pre-existing molecular dynamics simulation packages available, none
896   were capable of implementing the models we were developing. {\sc oopse}
897   is a unique package capable of not only integrating the equations of
898 < motion in cartesian space, but is also able to integrate the
898 > motion in Cartesian space, but is also able to integrate the
899   rotational motion of rigid bodies and dipoles.  Add to this the
900   ability to perform calculations across parallel processors and a
901   flexible script syntax for creating systems, and {\sc oopse} becomes a
902   very powerful scientific instrument for the exploration of our model.
903  
904 < Bringing us to Chapt.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
905 < able to parametrize a mesoscale model for phospholipid simulations.
906 < This model retains information about solvent ordering about the
904 > This brings us to Ch.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
905 > able to parameterize a mesoscale model for phospholipid simulations.
906 > This model retains information about solvent ordering around the
907   bilayer, as well as information regarding the interaction of the
908 < phospholipid head groups' dipole with each other and the surrounding
908 > phospholipid head groups' dipoles with each other and the surrounding
909   solvent.  These simulations give us insight into the dynamic events
910   that lead to the formation of phospholipid bilayers, as well as
911   provide the foundation for future exploration of bilayer phase

Diff Legend

- Removed lines
+ Added lines
< Changed lines (old revision)
> Changed lines (new revision)