root/group/trunk/mattDisertation/Introduction.tex

Comparing trunk/mattDisertation/Introduction.tex (file contents):
Revision 1003 by mmeineke, Mon Feb 2 21:56:16 2004 UTC vs.
Revision 1092 by mmeineke, Tue Mar 16 21:35:16 2004 UTC

# Line 1 | Line 1
1  
2  
3 < \chapter{\label{chapt:intro}Introduction and Theoretical Background}
3 > \chapter{\label{chapt:intro}INTRODUCTION AND THEORETICAL BACKGROUND}
4  
5  
6   The techniques used in the course of this research fall under the two
7   main classes of molecular simulation: Molecular Dynamics and Monte
8 < Carlo. Molecular Dynamic simulations integrate the equations of motion
9 < for a given system of particles, allowing the researher to gain
10 < insight into the time dependent evolution of a system. Diffusion
11 < phenomena are readily studied with this simulation technique, making
12 < Molecular Dynamics the main simulation technique used in this
13 < research. Other aspects of the research fall under the Monte Carlo
14 < class of simulations. In Monte Carlo, the configuration space
15 < available to the collection of particles is sampled stochastichally,
16 < or randomly. Each configuration is chosen with a given probability
17 < based on the Maxwell Boltzman distribution. These types of simulations
18 < are best used to probe properties of a system that are only dependent
19 < only on the state of the system. Structural information about a system
20 < is most readily obtained through these types of methods.
8 > Carlo. Molecular Dynamics simulations are carried out by integrating
9 > the equations of motion for a given system of particles, allowing the
10 > researcher to gain insight into the time dependent evolution of a
11 > system. Transport phenomena are readily studied with this simulation
12 > technique, making Molecular Dynamics the main simulation technique
13 > used in this research. Other aspects of the research fall in the
14 > Monte Carlo class of simulations. In Monte Carlo, the configuration
15 > space available to the collection of particles is sampled
16 > stochastically, or randomly. Each configuration is chosen with a given
17 > probability based on the Boltzmann distribution. These types
18 > of simulations are best used to probe properties of a system that are
19 > dependent only on the state of the system. Structural information
20 > about a system is most readily obtained through these types of
21 > methods.
22  
23   Although the two techniques employed seem dissimilar, they are both
24 < linked by the overarching principles of Statistical
25 < Thermodynamics. Statistical Thermodynamics governs the behavior of
24 > linked by the over-arching principles of Statistical
25 > Mechanics. Statistical Mechanics governs the behavior of
26   both classes of simulations and dictates what each method can and
27 < cannot do. When investigating a system, one most first analyze what
28 < thermodynamic properties of the system are being probed, then chose
27 > cannot do. When investigating a system, one must first analyze what
28 > thermodynamic properties of the system are being probed, then choose
29   which method best suits that objective.
30  
31   \section{\label{introSec:statThermo}Statistical Mechanics}
32  
33   The following section serves as a brief introduction to some of the
34 < Statistical Mechanics concepts present in this dissertation.  What
35 < follows is a brief derivation of Blotzman weighted statistics, and an
34 > Statistical Mechanics concepts presented in this dissertation.  What
35 > follows is a brief derivation of Boltzmann weighted statistics and an
36   explanation of how one can use the information to calculate an
37   observable for a system.  This section then concludes with a brief
38   discussion of the ergodic hypothesis and its relevance to this
39   research.
40  
41 < \subsection{\label{introSec:boltzman}Boltzman weighted statistics}
41 > \subsection{\label{introSec:boltzman}Boltzmann weighted statistics}
42  
43 < Consider a system, $\gamma$, with some total energy,, $E_{\gamma}$.
44 < Let $\Omega(E_{\gamma})$ represent the number of degenerate ways
45 < $\boldsymbol{\Gamma}$, the collection of positions and conjugate
46 < momenta of system $\gamma$, can be configured to give
47 < $E_{\gamma}$. Further, if $\gamma$ is in contact with a bath system
48 < where energy is exchanged between the two systems, $\Omega(E)$, where
49 < $E$ is the total energy of both systems, can be represented as
43 > Consider a system, $\gamma$, with total energy $E_{\gamma}$.  Let
44 > $\Omega(E_{\gamma})$ represent the number of degenerate ways
45 > $\boldsymbol{\Gamma}\{r_1,r_2,\ldots r_n,p_1,p_2,\ldots p_n\}$, the
46 > collection of positions and conjugate momenta of system $\gamma$, can
47 > be configured to give $E_{\gamma}$. Further, if $\gamma$ is a subset
48 > of a larger system, $\boldsymbol{\Lambda}\{E_1,E_2,\ldots
49 > E_{\gamma},\ldots E_n\}$, the total degeneracy of the system can be
50 > expressed as
51   \begin{equation}
52 + \Omega(\boldsymbol{\Lambda}) = \Omega(E_1) \times \Omega(E_2) \times \ldots
53 +        \Omega(E_{\gamma}) \times \ldots \Omega(E_n).
54 + \label{introEq:SM0.1}
55 + \end{equation}
56 + This multiplicative combination of degeneracies is illustrated in
57 + Fig.~\ref{introFig:degenProd}.
58 +
59 + \begin{figure}
60 + \centering
61 + \includegraphics[width=\linewidth]{omegaFig.eps}
62 + \caption[An explanation of the combination of degeneracies]{Systems A and B both have three energy levels and two indistinguishable particles. When the total energy is 2, there are two ways for each system to disperse the energy. However, for system C, the superset of A and B, the total degeneracy is the product of the degeneracy of each system. In this case $\Omega(\text{C})$ is 4.}
63 + \label{introFig:degenProd}
64 + \end{figure}
65 +
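The multiplicative counting of Eq.~\ref{introEq:SM0.1} can be checked by direct enumeration for the toy systems of Fig.~\ref{introFig:degenProd}. A minimal Python sketch (the level energies 0, 1, 2 and the particle count are taken from the figure caption; the function name is illustrative):
\begin{verbatim}
from itertools import combinations_with_replacement

def degeneracy(levels, n_particles, e_total):
    # Count the ways n indistinguishable particles on the given
    # levels can disperse the total energy e_total.
    return sum(1 for occ in
               combinations_with_replacement(levels, n_particles)
               if sum(occ) == e_total)

omega_A = degeneracy([0, 1, 2], 2, 2)   # -> 2: states (0,2) and (1,1)
omega_B = degeneracy([0, 1, 2], 2, 2)   # -> 2
print(omega_A * omega_B)                # -> 4, i.e. Omega(C)
\end{verbatim}
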
66 + Next, consider the specific case of $\gamma$ in contact with a
67 + bath. Exchange of energy is allowed between the bath and the system,
68 + subject to the constraint that the total energy
69 + ($E_{\text{bath}}+E_{\gamma}$) remain constant. $\Omega(E)$, where $E$
70 + is the total energy of both systems, can be represented as
71 + \begin{equation}
72   \Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma})
73   \label{introEq:SM1}
74   \end{equation}
# Line 57 | Line 79 | The solution to Eq.~\ref{introEq:SM2} maximizes the nu
79   \end{equation}
80  
81   The solution to Eq.~\ref{introEq:SM2} maximizes the number of
82 < degenerative configurations in $E$. \cite{Frenkel1996}
82 > degenerate configurations in $E$. \cite{Frenkel1996}
83   This gives
84   \begin{equation}
85   \frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 =
# Line 86 | Line 108 | S = k_B \ln \Omega(E)
108   S = k_B \ln \Omega(E)
109   \label{introEq:SM5}
110   \end{equation}
111 < Where $k_B$ is the Boltzman constant.  Having defined entropy, one can
112 < also define the temperature of the system using the relation
111 > Where $k_B$ is the Boltzmann constant.  Having defined entropy, one can
112 > also define the temperature of the system using the thermodynamic relation
113   \begin{equation}
114   \frac{1}{T} = \biggl ( \frac{\partial S}{\partial E} \biggr )_{N,V}
115   \label{introEq:SM6}
# Line 103 | Line 125 | Applying this to Eq.~\ref{introEq:SM4} gives the follo
125   \beta( E_{\gamma} ) = \beta( E_{\text{bath}} )
126   \label{introEq:SM8}
127   \end{equation}
128 < Showing that the partitioning of energy between the two systems is
129 < actually a process of thermal equilibration.\cite{Frenkel1996}
128 > Eq.~\ref{introEq:SM8} shows that the partitioning of energy between
129 > the two systems is actually a process of thermal
130 > equilibration.\cite{Frenkel1996}
131  
132 < An application of these results is to formulate the form of an
133 < expectation value of an observable, $A$, in the canonical ensemble. In
134 < the canonical ensemble, the number of particles, $N$, the volume, $V$,
135 < and the temperature, $T$, are all held constant while the energy, $E$,
136 < is allowed to fluctuate. Returning to the previous example, the bath
137 < system is now an infinitly large thermal bath, whose exchange of
132 > An application of these results is to formulate the expectation value
133 > of an observable, $A$, in the canonical ensemble. In the canonical
134 > ensemble, the number of particles, $N$, the volume, $V$, and the
135 > temperature, $T$, are all held constant while the energy, $E$, is
136 > allowed to fluctuate. Returning to the previous example, the bath
137 > system is now an infinitely large thermal bath, whose exchange of
138   energy with the system $\gamma$ holds the temperature constant.  The
139 < partitioning of energy in the bath system is then related to the total
140 < energy of both systems and the fluctuations in $E_{\gamma}$:
139 > partitioning of energy between the bath and the system is then related
140 > to the total energy of both systems and the fluctuations in
141 > $E_{\gamma}$:
142   \begin{equation}
143   \Omega( E_{\gamma} ) = \Omega( E - E_{\gamma} )
144   \label{introEq:SM9}
# Line 127 | Line 151 | Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{
151   \label{introEq:SM10}
152   \end{equation}
153   Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes
154 < an integration over all accessable phase space, $P_{\gamma}$ is the
155 < probability of being in a given phase state and
156 < $A(\boldsymbol{\Gamma})$ is some observable that is a function of the
154 > an integration over all accessible points in phase space, $P_{\gamma}$
155 > is the probability of being in a given phase state and
156 > $A(\boldsymbol{\Gamma})$ is an observable that is a function of the
157   phase state.
158  
159   Because entropy seeks to maximize the number of degenerate states at a
# Line 156 | Line 180 | P_{\gamma} \propto e^{-\beta E_{\gamma}}
180   P_{\gamma} \propto e^{-\beta E_{\gamma}}
181   \label{introEq:SM13}
182   \end{equation}
183 < Where $\ln \Omega(E)$ has been factored out of the porpotionality as a
183 > Where $\ln \Omega(E)$ has been factored out of the proportionality as a
184   constant.  Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
185   d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
186   \begin{equation}
# Line 164 | Line 188 | P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
188   {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
189   \label{introEq:SM14}
190   \end{equation}
191 < This result is the standard Boltzman statistical distribution.
191 > This result is the standard Boltzmann statistical distribution.
192   Applying it to Eq.~\ref{introEq:SM10} one can obtain the following relation for ensemble averages:
193   \begin{equation}
194   \langle A \rangle =
# Line 176 | Line 200 | Applying it to Eq.~\ref{introEq:SM10} one can obtain t
200  
201   \subsection{\label{introSec:ergodic}The Ergodic Hypothesis}
202  
203 < One last important consideration is that of ergodicity. Ergodicity is
204 < the assumption that given an infinite amount of time, a system will
205 < visit every available point in phase space.\cite{Frenkel1996} For most
206 < systems, this is a valid assumption, except in cases where the system
207 < may be trapped in a local feature (\emph{e.g.}~glasses). When valid,
208 < ergodicity allows the unification of a time averaged observation and
209 < an ensemble averged one. If an observation is averaged over a
210 < sufficiently long time, the system is assumed to visit all
211 < appropriately available points in phase space, giving a properly
212 < weighted statistical average. This allows the researcher freedom of
213 < choice when deciding how best to measure a given observable.  When an
214 < ensemble averaged approach seems most logical, the Monte Carlo
215 < techniques described in Sec.~\ref{introSec:monteCarlo} can be utilized.
216 < Conversely, if a problem lends itself to a time averaging approach,
217 < the Molecular Dynamics techniques in Sec.~\ref{introSec:MD} can be
218 < employed.
203 > In the case of a Molecular Dynamics simulation, rather than
204 > calculating an ensemble average integral over phase space as in
205 > Eq.~\ref{introEq:SM15}, it becomes easier to calculate the time
206 > average of an observable. Namely,
207 > \begin{equation}
208 > \langle A \rangle_t = \frac{1}{\tau}
209 >        \int_0^{\tau} A[\boldsymbol{\Gamma}(t)]\,dt
210 > \label{introEq:SM16}
211 > \end{equation}
212 > Where the value of an observable is averaged over the length of time,
213 > $\tau$, that the simulation is run. This type of measurement mirrors
214 > the experimental measurement of an observable. In an experiment, the
215 > instrument analyzing the system must average its observation over the
216 > finite time of the measurement. What is required then, is a principle
217 > to relate the time average to the ensemble average. This is the
218 > ergodic hypothesis.
219  
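For evenly sampled trajectories, the discrete analogue of Eq.~\ref{introEq:SM16} is simply the mean of the sampled values. A minimal Python sketch, with a randomly generated stand-in for $A[\boldsymbol{\Gamma}(t)]$:
\begin{verbatim}
import numpy as np

dt  = 0.01                                      # sampling interval
A_t = np.random.normal(1.0, 0.1, size=10000)    # stand-in observable

# (1/tau) * integral of A dt, with tau = len(A_t) * dt, reduces to
# the simple mean for evenly spaced samples.
time_avg = A_t.mean()
\end{verbatim}
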
220 + Ergodicity is the assumption that given an infinite amount of time, a
221 + system will visit every available point in phase
222 + space.\cite{Frenkel1996} For most systems, this is a valid assumption,
223 + except in cases where the system may be trapped in a local feature
224 + (\emph{e.g.}~glasses). When valid, ergodicity allows the unification
225 + of a time averaged observation and an ensemble averaged one. If an
226 + observation is averaged over a sufficiently long time, the system is
227 + assumed to visit all energetically accessible points in phase space,
228 + giving a properly weighted statistical average. This allows the
229 + researcher freedom of choice when deciding how best to measure a given
230 + observable.  When an ensemble averaged approach seems most logical,
231 + the Monte Carlo techniques described in Sec.~\ref{introSec:monteCarlo}
232 + can be utilized.  Conversely, if a problem lends itself to a time
233 + averaging approach, the Molecular Dynamics techniques in
234 + Sec.~\ref{introSec:MD} can be employed.
235 +
236   \section{\label{introSec:monteCarlo}Monte Carlo Simulations}
237  
238   The Monte Carlo method was developed by Metropolis and Ulam for their
239   work in fissionable material.\cite{metropolis:1949} The method is so
240 < named, because it heavily uses random numbers in its
241 < solution.\cite{allen87:csl} The Monte Carlo method allows for the
242 < solution of integrals through the stochastic sampling of the values
243 < within the integral. In the simplest case, the evaluation of an
244 < integral would follow a brute force method of
245 < sampling.\cite{Frenkel1996} Consider the following single dimensional
206 < integral:
240 > named, because it uses random numbers extensively.\cite{allen87:csl}
241 > The Monte Carlo method allows for the solution of integrals through
242 > the stochastic sampling of the values within the integral. In the
243 > simplest case, the evaluation of an integral would follow a brute
244 > force method of sampling.\cite{Frenkel1996} Consider the following
245 > single dimensional integral:
246   \begin{equation}
247 < I = f(x)dx
247 > I = \int_a^b f(x)dx
248   \label{eq:MCex1}
249   \end{equation}
250   The equation can be recast as:
251   \begin{equation}
252 < I = (b-a)\langle f(x) \rangle
252 > I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
253 > \label{introEq:Importance1}
254 > \end{equation}
255 > Where $\rho(x)$ is an arbitrary probability distribution in $x$.  If
256 > one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
257 > from the distribution $\rho(x)$ on the interval $[a,b]$, then
258 > Eq.~\ref{introEq:Importance1} becomes
259 > \begin{equation}
260 > I= \lim_{\tau \rightarrow \infty}\biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}[a,b]}
261 > \label{introEq:Importance2}
262 > \end{equation}
263 > If $\rho(x)$ is uniformly distributed over the interval $[a,b]$,
264 > \begin{equation}
265 > \rho(x) = \frac{1}{b-a}
266 > \label{introEq:importance2b}
267 > \end{equation}
268 > then the integral can be rewritten as
269 > \begin{equation}
270 > I = (b-a)\lim_{\tau \rightarrow \infty}
271 >        \langle f(x) \rangle_{\text{trials}[a,b]}
272   \label{eq:MCex2}
273   \end{equation}
216 Where $\langle f(x) \rangle$ is the unweighted average over the interval
217 $[a,b]$. The calculation of the integral could then be solved by
218 randomly choosing points along the interval $[a,b]$ and calculating
219 the value of $f(x)$ at each point. The accumulated average would then
220 approach $I$ in the limit where the number of trials is infintely
221 large.
274  
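A minimal numerical sketch of both estimators, assuming a toy integrand $f(x) = x^2 e^{-x}$ on $[0,4]$ chosen purely for illustration:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
f = lambda x: x**2 * np.exp(-x)
a, b, n = 0.0, 4.0, 100000

# Uniform (brute force) sampling, Eq. (eq:MCex2).
x = rng.uniform(a, b, n)
I_uniform = (b - a) * f(x).mean()

# Importance sampling, Eq. (introEq:Importance2), with
# rho(x) proportional to exp(-x), truncated to [a,b] by rejection.
y = rng.exponential(1.0, 5 * n)
y = y[(y >= a) & (y <= b)]
rho = np.exp(-y) / (np.exp(-a) - np.exp(-b))
I_importance = (f(y) / rho).mean()

# Both converge on the exact value, 2 - 26*exp(-4) ~ 1.524.
\end{verbatim}
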
275   However, in Statistical Mechanics, one is typically interested in
276   integrals of the form:
# Line 235 | Line 287 | brute force method to this system would yield highly i
287   momentum. Therefore the momenta contribution of the integral can be
288   factored out, leaving the configurational integral. Application of the
289   brute force method to this system would yield highly inefficient
290 < results. Due to the Boltzman weighting of this integral, most random
290 > results. Due to the Boltzmann weighting of this integral, most random
291   configurations will have a near zero contribution to the ensemble
292   average. This is where importance sampling comes into
293   play.\cite{allen87:csl}
294  
295 < Importance Sampling is a method where one selects a distribution from
296 < which the random configurations are chosen in order to more
297 < efficiently calculate the integral.\cite{Frenkel1996} Consider again
298 < Eq.~\ref{eq:MCex1} rewritten to be:
247 < \begin{equation}
248 < I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
249 < \label{introEq:Importance1}
250 < \end{equation}
251 < Where $\rho(x)$ is an arbitrary probability distribution in $x$.  If
252 < one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
253 < from the distribution $\rho(x)$ on the interval $[a,b]$, then
254 < Eq.~\ref{introEq:Importance1} becomes
255 < \begin{equation}
256 < I= \biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}}
257 < \label{introEq:Importance2}
258 < \end{equation}
259 < Looking at Eq.~\ref{eq:mcEnsAvg}, and realizing
295 > Importance sampling is a method where the distribution, from which the
296 > random configurations are chosen, is selected in such a way as to
297 > efficiently sample the integral in question.  Looking at
298 > Eq.~\ref{eq:mcEnsAvg}, and realizing
299   \begin {equation}
300   \rho_{kT}(\mathbf{r}^N) =
301          \frac{e^{-\beta V(\mathbf{r}^N)}}
302          {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
303   \label{introEq:MCboltzman}
304   \end{equation}
305 < Where $\rho_{kT}$ is the boltzman distribution.  The ensemble average
305 > Where $\rho_{kT}$ is the Boltzmann distribution.  The ensemble average
306   can be rewritten as
307   \begin{equation}
308   \langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
# Line 280 | Line 319 | Eq.~\ref{introEq:Importance4} becomes
319   By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$
320   Eq.~\ref{introEq:Importance4} becomes
321   \begin{equation}
322 < \langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{\text{trials}}
322 > \langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{kT}
323   \label{introEq:Importance5}
324   \end{equation}
325   The difficulty is selecting points $\mathbf{r}^N$ such that they are
326   sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$.  A solution
327 < was proposed by Metropolis et al.\cite{metropolis:1953} which involved
327 > was proposed by Metropolis \emph{et al}.\cite{metropolis:1953} which involved
328   the use of a Markov chain whose limiting distribution was
329   $\rho_{kT}(\mathbf{r}^N)$.
330  
# Line 297 | Line 336 | conditions:\cite{leach01:mm}
336   \item The outcome of each trial depends only on the outcome of the previous trial.
337   \item Each trial belongs to a finite set of outcomes called the state space.
338   \end{enumerate}
339 < If given two configuartions, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
340 < $\rho_m$ and $\rho_n$ are the probablilities of being in state
339 > If given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
340 > $\rho_m$ and $\rho_n$ are the probabilities of being in state
341   $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively.  Further, the two
342   states are linked by a transition probability, $\pi_{mn}$, which is the
343   probability of going from state $m$ to state $n$.
# Line 334 | Line 373 | will only yield the limiting distribution again.
373   distribution, and successive applications of the transition matrix
374   will only yield the limiting distribution again.
375   \begin{equation}
376 < \boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{initial}}
376 > \boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{limit}}
377          \boldsymbol{\Pi}
378   \label{introEq:MCmarkovEquil}
379   \end{equation}
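This fixed-point behavior is easy to verify numerically. A sketch with a hypothetical three-state transition matrix (any row-stochastic matrix with a unique limiting distribution would do):
\begin{verbatim}
import numpy as np

Pi = np.array([[0.6, 0.3, 0.1],   # rows sum to one
               [0.2, 0.5, 0.3],
               [0.1, 0.4, 0.5]])

rho = np.array([1.0, 0.0, 0.0])   # arbitrary initial distribution
for _ in range(200):              # repeated application of Pi
    rho = rho @ Pi

# One more application returns rho unchanged, as in
# Eq. (introEq:MCmarkovEquil).
assert np.allclose(rho, rho @ Pi)
\end{verbatim}
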
# Line 343 | Line 382 | Eq.~\ref{introEq:MCmarkovEquil} is solved such that
382  
383   In the Metropolis method\cite{metropolis:1953}
384   Eq.~\ref{introEq:MCmarkovEquil} is solved such that
385 < $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzman distribution
385 > $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
386   of states.  The method accomplishes this by imposing the strong
387   condition of microscopic reversibility on the equilibrium
388 < distribution.  Meaning, that at equilibrium the probability of going
388 > distribution.  This means that at equilibrium, the probability of going
389   from $m$ to $n$ is the same as going from $n$ to $m$.
390   \begin{equation}
391   \rho_m\pi_{mn} = \rho_n\pi_{nm}
392   \label{introEq:MCmicroReverse}
393   \end{equation}
394 < Further, $\boldsymbol{\alpha}$ is chosen to be a symetric matrix in
394 > Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
395   the Metropolis method.  Using Eq.~\ref{introEq:MCpi},
396   Eq.~\ref{introEq:MCmicroReverse} becomes
397   \begin{equation}
# Line 360 | Line 399 | Eq.~\ref{introEq:MCmicroReverse} becomes
399          \frac{\rho_n}{\rho_m}
400   \label{introEq:MCmicro2}
401   \end{equation}
402 < For a Boltxman limiting distribution,
402 > For a Boltzmann limiting distribution,
403   \begin{equation}
404   \frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
405          = e^{-\beta \Delta \mathcal{U}}
# Line 380 | Line 419 | Metropolis method proceeds as follows
419   Metropolis method proceeds as follows
420   \begin{enumerate}
421   \item Generate an initial configuration $\mathbf{r}^N$ which has some finite probability in $\rho_{kT}$.
422 < \item Modify $\mathbf{r}^N$, to generate configuratioon $\mathbf{r^{\prime}}^N$.
423 < \item If the new configuration lowers the energy of the system, accept the move with unity ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$).  Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
424 < \item Accumulate the average for the configurational observable of intereest.
422 > \item Modify $\mathbf{r}^N$, to generate configuration $\mathbf{r^{\prime}}^N$.
423 > \item If the new configuration lowers the energy of the system, accept the move ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$).  Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
424 > \item Accumulate the average for the configurational observable of interest.
425   \item Repeat from step 2 until the average converges.
426   \end{enumerate}
427   One important note is that the average is accumulated whether the move
428   is accepted or not; this ensures proper weighting of the average.
429   Using Eq.~\ref{introEq:Importance4} it becomes clear that the
430   accumulated averages are the ensemble averages, as this method ensures
431 < that the limiting distribution is the Boltzman distribution.
431 > that the limiting distribution is the Boltzmann distribution.
432  
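The listed steps translate almost directly into code. A sketch for a single degree of freedom, assuming a toy double-well potential and the observable $A = x^2$ (both chosen only for illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
potential = lambda x: (x**2 - 1.0)**2        # stand-in for U
beta, n_steps, delta = 2.0, 50000, 0.5

x, acc = 0.0, 0.0                            # step 1: initial state
for _ in range(n_steps):
    x_new = x + rng.uniform(-delta, delta)   # step 2: trial move
    dU = potential(x_new) - potential(x)
    # step 3: accept downhill moves outright, uphill moves with
    # probability exp(-beta * dU)
    if dU <= 0.0 or rng.random() < np.exp(-beta * dU):
        x = x_new
    acc += x**2         # step 4: accumulate, accepted or not

print(acc / n_steps)    # Boltzmann-weighted average of x**2
\end{verbatim}
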
433   \section{\label{introSec:MD}Molecular Dynamics Simulations}
434  
435   The main simulation tool used in this research is Molecular Dynamics.
436 < Molecular Dynamics is when the equations of motion for a system are
437 < integrated in order to obtain information about both the positions and
438 < momentum of a system, allowing the calculation of not only
439 < configurational observables, but momenta dependent ones as well:
440 < diffusion constants, velocity auto correlations, folding/unfolding
441 < events, etc.  Due to the principle of ergodicity,
442 < Sec.~\ref{introSec:ergodic}, the average of these observables over the
443 < time period of the simulation are taken to be the ensemble averages
444 < for the system.
436 > Molecular Dynamics is the numerical integration of the equations of
437 > motion for a system in order to obtain information about the dynamic
438 > changes in the positions and momentum of the constituent particles.
439 > This allows the calculation of not only configurational observables,
440 > but momentum dependent ones as well: diffusion constants, relaxation
441 > events, folding/unfolding events, etc. With the time dependent
442 > information gained from a Molecular Dynamics simulation, one can also
443 > calculate time correlation functions of the form\cite{Hansen86}
444 > \begin{equation}
445 > \langle A(t)\,A(0)\rangle = \lim_{\tau\rightarrow\infty} \frac{1}{\tau}
446 >        \int_0^{\tau} A(t+t^{\prime})\,A(t^{\prime})\,dt^{\prime}
447 > \label{introEq:timeCorr}
448 > \end{equation}
449 > These correlations can be used to measure fundamental time constants
450 > of a system, such as diffusion constants from the velocity
451 > autocorrelation or dipole relaxation times from the dipole
452 > autocorrelation.  Due to the principle of ergodicity,
453 > Sec.~\ref{introSec:ergodic}, the averages of static observables over
454 > the length of the simulation are taken to be the ensemble averages for
455 > the system.
456  
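A sketch of the discrete estimator for Eq.~\ref{introEq:timeCorr}, averaged over time origins and applied here to a stand-in velocity trace (a real simulation would supply the time series):
\begin{verbatim}
import numpy as np

def autocorrelation(A):
    # <A(t)A(0)> estimated by averaging over time origins.
    n = len(A)
    return np.array([np.mean(A[:n - k] * A[k:])
                     for k in range(n // 2)])

rng  = np.random.default_rng(1)
v    = rng.normal(0.0, 1.0, 4096)   # stand-in velocity component
C_vv = autocorrelation(v)
# In 3-D the diffusion constant follows from the Green-Kubo relation
# D = (1/3) * integral of <v(t).v(0)> dt, approximated by
# np.trapz(C_vv, dx=dt) for a timestep dt.
\end{verbatim}
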
457   The choice of when to use molecular dynamics over Monte Carlo
458   techniques is normally decided by the observables in which the
459 < researcher is interested.  If the observables depend on momenta in
460 < any fashion, then the only choice is molecular dynamics in some form.
459 > researcher is interested.  If the observables depend on time in any
460 > fashion, then the only choice is molecular dynamics in some form.
461   However, when the observable is dependent only on the configuration,
462 < then most of the time Monte Carlo techniques will be more efficent.
462 > then for most small systems, Monte Carlo techniques will be more
463 > efficient.  The focus of research in the second half of this
464 > dissertation is centered on the dynamic properties of phospholipid
465 > bilayers, making molecular dynamics key in the simulation of those
466 > properties.
467  
468 < The focus of research in the second half of this dissertation is
415 < centered around the dynamic properties of phospholipid bilayers,
416 < making molecular dynamics key in the simulation of those properties.
468 > \subsection{\label{introSec:mdAlgorithm}Molecular Dynamics Algorithms}
469  
418 \subsection{\label{introSec:mdAlgorithm}The Molecular Dynamics Algorithm}
419
470   To illustrate how the molecular dynamics technique is applied, the
471   following sections will describe the sequence involved in a
472   simulation.  Sec.~\ref{introSec:mdInit} deals with the initialization
# Line 435 | Line 485 | bilayers.  Important considerations at this stage of t
485   water, and in other cases structured the lipids into preformed
486   bilayers.  Important considerations at this stage of the simulation are:
487   \begin{itemize}
488 < \item There are no major overlaps of molecular or atomic orbitals
489 < \item Velocities are chosen in such a way as to not gie the system a non=zero total momentum or angular momentum.
490 < \item It is also sometimes desireable to select the velocities to correctly sample the target temperature.
488 > \item There are no overlapping molecules or atoms.
489 > \item Velocities are chosen in such a way as to give the system zero total momentum and angular momentum.
490 > \item It is also sometimes desirable to select the velocities to correctly sample the ensemble at a particular target temperature.
491   \end{itemize}
492  
493 < The first point is important due to the amount of potential energy
494 < generated by having two particles too close together.  If overlap
495 < occurs, the first evaluation of forces will return numbers so large as
496 < to render the numerical integration of teh motion meaningless.  The
497 < second consideration keeps the system from drifting or rotating as a
498 < whole.  This arises from the fact that most simulations are of systems
499 < in equilibrium in the absence of outside forces.  Therefore any net
493 > The first point is important due to the forces generated when two
494 > particles are too close together.  If overlap occurs, the first
495 > evaluation of forces will return numbers so large as to render the
496 > numerical integration of the equations of motion meaningless.  The second
497 > consideration keeps the system from drifting or rotating as a whole.
498 > This arises from the fact that most simulations are of systems in
499 > equilibrium in the absence of outside forces.  Therefore any net
500   movement would be unphysical and an artifact of the simulation method
501   used.  The final point addresses the selection of the magnitude of the
502 < initial velocities.  For many simulations it is convienient to use
503 < this opportunity to scale the amount of kinetic energy to reflect the
502 > initial velocities.  For many simulations it is convenient to use this
503 > opportunity to scale the amount of kinetic energy to reflect the
504   desired thermal distribution of the system.  However, it must be noted
505 < that most systems will require further velocity rescaling after the
506 < first few initial simulation steps due to either loss or gain of
507 < kinetic energy from energy stored in potential degrees of freedom.
505 > that due to equipartition, most systems will require further velocity
506 > rescaling after the first few simulation steps, as kinetic energy is
507 > either lost to or gained from energy stored as potential energy.
508  
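The momentum and temperature conditions above can be imposed directly, as in the following sketch (reduced units with $k_B = 1$ are assumed, and removal of net angular momentum is omitted for brevity):
\begin{verbatim}
import numpy as np

def initial_velocities(masses, T_target, rng):
    n = len(masses)
    v = rng.normal(size=(n, 3)) / np.sqrt(masses)[:, None]
    # Zero the net momentum so the system does not drift as a whole.
    v -= (masses[:, None] * v).sum(axis=0) / masses.sum()
    # Scale so that (1/2) sum m v^2 = (3N/2) T_target.
    ke = 0.5 * np.sum(masses[:, None] * v**2)
    return v * np.sqrt(1.5 * n * T_target / ke)

v = initial_velocities(np.ones(1000), 1.2, np.random.default_rng(2))
\end{verbatim}
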
509   \subsection{\label{introSec:mdForce}Force Evaluation}
510  
511   The evaluation of forces is the most computationally expensive portion
512 < of a given molecular dynamics simulation.  This is due entirely to the
513 < evaluation of long range forces in a simulation, typically pair-wise.
514 < These forces are most commonly the Van der Waals force, and sometimes
515 < Coulombic forces as well.  For a pair-wise force, there are $N(N-1)/ 2$
516 < pairs to be evaluated, where $N$ is the number of particles in the
517 < system.  This leads to the calculations scaling as $N^2$, making large
518 < simulations prohibitive in the absence of any computation saving
519 < techniques.
512 > of any molecular dynamics simulation.  This is due entirely to the
513 > evaluation of long range forces in a simulation, which are typically
514 > pair potentials.  These forces are most commonly the van der Waals
515 > force, and sometimes Coulombic forces as well.  For a pair-wise
516 > interaction, there are $N(N-1)/ 2$ pairs to be evaluated, where $N$ is
517 > the number of particles in the system.  This leads to calculations which
518 > scale as $N^2$, making large simulations prohibitive in the absence
519 > of any computation saving techniques.
520  
521 < Another consideration one must resolve, is that in a given simulation
521 > Another consideration one must resolve, is that in a given simulation,
522   a disproportionate number of the particles will feel the effects of
523   the surface.\cite{allen87:csl} For a cubic system of 1000 particles
524   arranged in a $10 \times 10 \times 10$ cube, 488 particles will be
525 < exposed to the surface.  Unless one is simulating an isolated particle
526 < group in a vacuum, the behavior of the system will be far from the
527 < desired bulk charecteristics.  To offset this, simulations employ the
528 < use of periodic boundary images.\cite{born:1912}
525 > exposed to the surface.  Unless one is simulating an isolated cluster
526 > in a vacuum, the behavior of the system will be far from the desired
527 > bulk characteristics.  To offset this, simulations employ the use of
528 > periodic images.\cite{born:1912}
529  
530   The technique involves the use of an algorithm that replicates the
531 < simulation box on an infinite lattice in cartesian space.  Any given
531 > simulation box on an infinite lattice in Cartesian space.  Any
532   particle leaving the simulation box on one side will have an image of
533 < itself enter on the opposite side (see Fig.~\ref{introFig:pbc}).  In
534 < addition, this sets that any given particle pair has an image, real or
535 < periodic, within $fix$ of each other.  A discussion of the method used
536 < to calculate the periodic image can be found in
537 < Sec.\ref{oopseSec:pbc}.
533 > itself enter on the opposite side (see
534 > Fig.~\ref{introFig:pbc}). Therefore, a pair of particles have an
535 > image of each other, real or periodic, within $\text{box}/2$.  A
536 > discussion of the method used to calculate the periodic image can be
537 > found in Sec.\ref{oopseSec:pbc}.
538  
539   \begin{figure}
540   \centering
541   \includegraphics[width=\linewidth]{pbcFig.eps}
542 < \caption[An illustration of periodic boundry conditions]{A 2-D illustration of periodic boundry conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
542 > \caption[An illustration of periodic boundary conditions]{A 2-D illustration of periodic boundary conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
543   \label{introFig:pbc}
544   \end{figure}
545  
546 < Returning to the topic of the computational scale of the force
546 > Returning to the topic of the computational scaling of the force
547   evaluation, the use of periodic boundary conditions requires that a
548   cutoff radius be employed.  Using a cutoff radius improves the
549   efficiency of the force evaluation, as particles farther than a
550   predetermined distance, $r_{\text{cut}}$, are not included in the
551 < calculation.\cite{Frenkel1996} In a simultation with periodic images,
552 < $r_{\text{cut}}$ has a maximum value of $\text{box}/2$.
553 < Fig.~\ref{introFig:rMax} illustrates how when using an
554 < $r_{\text{cut}}$ larger than this value, or in the extreme limit of no
555 < $r_{\text{cut}}$ at all, the corners of the simulation box are
556 < unequally weighted due to the lack of particle images in the $x$, $y$,
557 < or $z$ directions past a disance of $\text{box} / 2$.
551 > calculation.\cite{Frenkel1996} In a simulation with periodic images,
552 > there are two methods to choose from, both with their own cutoff
553 > limits. In the minimum image convention, $r_{\text{cut}}$ has a
554 > maximum value of $\text{box}/2$. This is because each atom has only
555 > one image that is seen by other atoms. The image used is the one that
556 > minimizes the distance between the two atoms. A system of wrapped
557 > images about a central atom therefore has a maximum extent of one
558 > box length on a side (Fig.~\ref{introFig:rMaxMin}). The second, the
559 > multiple image convention, has a maximum $r_{\text{cut}}$ of the box
560 > length itself. Here multiple images of each atom are replicated in the
561 > periodic cells surrounding the central atom, causing the atom to
562 > see multiple copies of several atoms. If the cutoff radius is larger
563 > than box, however, then the atom will see an image of itself, and
564 > attempt to calculate an unphysical self-self force interaction
565 > (Fig.~\ref{introFig:rMaxMult}). Due to the increased complexity and
566 > computational inefficiency of the multiple image method, the minimum
567 > image method is used throughout this research.
568  
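In practice the minimum image convention amounts to wrapping each component of a displacement vector back into the central box, as in this sketch for a cubic box:
\begin{verbatim}
import numpy as np

def minimum_image(r_ij, box):
    # Wrap the displacement r_i - r_j so that each component lies
    # within [-box/2, box/2], i.e. to the nearest periodic image.
    return r_ij - box * np.round(r_ij / box)

box = 10.0
r_i = np.array([9.5, 0.2, 5.0])
r_j = np.array([0.5, 9.8, 5.0])
print(minimum_image(r_i - r_j, box))   # [-1.   0.4  0. ]
\end{verbatim}
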
569   \begin{figure}
570   \centering
571   \includegraphics[width=\linewidth]{rCutMaxFig.eps}
572 < \caption
573 < \label{introFig:rMax}
572 > \caption[An explanation of minimum image convention]{The yellow atom has all other images wrapped to itself as the center. If $r_{\text{cut}}=\text{box}/2$, then the distribution is uniform (blue atoms). However, when $r_{\text{cut}}>\text{box}/2$ the corners are disproportionately weighted (green atoms) vs the axial directions (shaded regions).}
573 > \label{introFig:rMaxMin}
574   \end{figure}
575  
576 < With the use of an $fix$, however, comes a discontinuity in the
577 < potential energy curve (Fig.~\ref{fix}). To fix this discontinuity,
578 < one calculates the potential energy at the $r_{\text{cut}}$, and add
579 < that value to the potential.  This causes the function to go smoothly
580 < to zero at the cutoff radius.  This ensures conservation of energy
581 < when integrating the Newtonian equations of motion.
576 > \begin{figure}
577 > \centering
578 > \includegraphics[width=\linewidth]{rCutMaxMultFig.eps}
579 > \caption[An explanation of multiple image convention]{The yellow atom is the central wrapping point. The blue atoms are the minimum images of the system about the central atom. The boxes with the green atoms are multiple images of the central box. If $r_{\text{cut}} \geq \text{box}$ then the central atom sees multiple images of itself (red atom), creating a self-self force evaluation.}
580 > \label{introFig:rMaxMult}
581 > \end{figure}
582  
583 + With the use of a cutoff radius, however, comes a discontinuity in
584 + the potential energy curve (Fig.~\ref{introFig:shiftPot}). To fix this
585 + discontinuity, one calculates the potential energy at the
586 + $r_{\text{cut}}$, and subtracts that value from the potential, causing
587 + the function to go smoothly to zero at the cutoff radius.  This
588 + shifted potential ensures conservation of energy when integrating the
589 + Newtonian equations of motion.
590 +
591 + \begin{figure}
592 + \centering
593 + \includegraphics[width=\linewidth]{shiftedPot.eps}
594 + \caption[Shifting the Lennard-Jones Potential]{The Lennard-Jones potential (blue line) is shifted (red line) to remove the discontinuity at $r_{\text{cut}}$.}
595 + \label{introFig:shiftPot}
596 + \end{figure}
597 +
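Written explicitly for a generic pair potential $V(r)$, the shifted form is
\begin{equation}
V_{\text{shifted}}(r) =
        \begin{cases}
        V(r) - V(r_{\text{cut}}) & r \le r_{\text{cut}} \\
        0 & r > r_{\text{cut}},
        \end{cases}
\end{equation}
which removes the discontinuity in the potential at $r_{\text{cut}}$.
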
598   The second main simplification used in this research is the Verlet
599 < neighbor list. \cite{allen87:csl} In the Verlet method, one generates
599 > neighbor list.\cite{allen87:csl} In the Verlet method, one generates
600   a list of all neighbor atoms, $j$, surrounding atom $i$ within some
601   cutoff $r_{\text{list}}$, where $r_{\text{list}}>r_{\text{cut}}$.
602   This list is created the first time forces are evaluated, then on
603   subsequent force evaluations, pair calculations are only calculated
604 < from the neighbor lists.  The lists are updated if any given particle
604 > from the neighbor lists.  The lists are updated if any particle
605   in the system moves farther than $r_{\text{list}}-r_{\text{cut}}$,
606 < giving rise to the possibility that a particle has left or joined a
606 > which indicates the possibility that a particle has left or joined the
607   neighbor list.
608  
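A sketch of the list construction, using minimum-image distances in a cubic box (the function name is illustrative):
\begin{verbatim}
import numpy as np

def build_neighbor_list(pos, box, r_list):
    # For each atom i, store the indices j > i within r_list.
    n = len(pos)
    neighbors = [[] for _ in range(n)]
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)        # minimum image
        close = np.flatnonzero((d**2).sum(axis=1) < r_list**2)
        neighbors[i] = (close + i + 1).tolist()
    return neighbors

# The lists are rebuilt only when some atom has moved farther than
# (r_list - r_cut) since the last build.
\end{verbatim}
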
609 < \subsection{\label{introSec:mdIntegrate} Integration of the equations of motion}
609 > \subsection{\label{introSec:mdIntegrate} Integration of the Equations of Motion}
610  
611   A starting point for the discussion of molecular dynamics integrators
612 < is the Verlet algorithm. \cite{Frenkel1996} It begins with a Taylor
612 > is the Verlet algorithm.\cite{Frenkel1996} It begins with a Taylor
613   expansion of position in time:
614   \begin{equation}
615 < eq here
615 > q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 +
616 >        \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
617 >        \mathcal{O}(\Delta t^4)
618   \label{introEq:verletForward}
619   \end{equation}
620   As well as,
621   \begin{equation}
622 < eq here
622 > q(t-\Delta t)= q(t) - v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 -
623 >        \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
624 >        \mathcal{O}(\Delta t^4)
625   \label{introEq:verletBack}
626   \end{equation}
627 < Adding together Eq.~\ref{introEq:verletForward} and
627 > Where $m$ is the mass of the particle, $q(t)$ is the position at time
628 > $t$, $v(t)$ the velocity, and $F(t)$ the force acting on the
629 > particle. Adding together Eq.~\ref{introEq:verletForward} and
630   Eq.~\ref{introEq:verletBack} results in,
631   \begin{equation}
632 < eq here
632 > q(t+\Delta t)+q(t-\Delta t) =
633 >        2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
634   \label{introEq:verletSum}
635   \end{equation}
636   Or equivalently,
637   \begin{equation}
638 < eq here
638 > q(t+\Delta t) \approx
639 >        2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2
640   \label{introEq:verletFinal}
641   \end{equation}
642   Which contains an error in the estimate of the new positions on the
643   order of $\Delta t^4$.
644  
645   In practice, however, the simulations in this research were integrated
646 < with a velocity reformulation of teh Verlet method.\cite{allen87:csl}
647 < \begin{equation}
648 < eq here
649 < \label{introEq:MDvelVerletPos}
650 < \end{equation}
651 < \begin{equation}
569 < eq here
646 > with the velocity reformulation of the Verlet method.\cite{allen87:csl}
647 > \begin{align}
648 > q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 %
649 > \label{introEq:MDvelVerletPos} \\%
650 > %
651 > v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] %
652   \label{introEq:MDvelVerletVel}
653 < \end{equation}
653 > \end{align}
654   The original Verlet algorithm can be regained by substituting the
655   velocity back into Eq.~\ref{introEq:MDvelVerletPos}.  The Verlet
656   formulations are chosen in this research because the algorithms have
657 < very little long term drift in energy conservation.  Energy
658 < conservation in a molecular dynamics simulation is of extreme
659 < importance, as it is a measure of how closely one is following the
660 < ``true'' trajectory wtih the finite integration scheme.  An exact
661 < solution to the integration will conserve area in phase space, as well
657 > very little long time drift in energy.  Energy conservation in a
658 > molecular dynamics simulation is of extreme importance, as it is a
659 > measure of how closely one is following the ``true'' trajectory with
660 > the finite integration scheme.  An exact solution to the integration
661 > will conserve area in phase space (i.e.~will be symplectic), as well
662   as be reversible in time, that is, the trajectory integrated forward
663   or backwards will exactly match itself.  Having a finite algorithm
664   that both conserves area in phase space and is time reversible,
665 < therefore increases, but does not guarantee the ``correctness'' or the
666 < integrated trajectory.
665 > therefore increases the reliability, but does not guarantee the
666 > ``correctness'' of the integrated trajectory.
667  
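A direct transcription of Eq.~\ref{introEq:MDvelVerletPos} and Eq.~\ref{introEq:MDvelVerletVel}, exercised on a harmonic oscillator (the force law and unit choices are purely illustrative):
\begin{verbatim}
import numpy as np

def velocity_verlet(q, v, force, m, dt, n_steps):
    F = force(q)
    for _ in range(n_steps):
        q = q + v * dt + F / (2 * m) * dt**2    # position update
        F_new = force(q)
        v = v + dt / (2 * m) * (F + F_new)      # velocity update
        F = F_new
    return q, v

# One particle with F = -q (k = m = 1): after ~one period
# (2*pi/dt ~ 628 steps) the trajectory returns to its start with
# very little drift in the total energy.
q, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       lambda q: -q, m=1.0, dt=0.01, n_steps=628)
\end{verbatim}
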
668   It can be shown,\cite{Frenkel1996} that although the Verlet algorithm
669   does not rigorously preserve the actual Hamiltonian, it does preserve
670   a pseudo-Hamiltonian which shadows the real one in phase space.  This
671 < pseudo-Hamiltonian is proveably area-conserving as well as time
671 > pseudo-Hamiltonian is provably area-conserving as well as time
672   reversible.  The fact that it shadows the true Hamiltonian in phase
673   space is acceptable in actual simulations as one is interested in the
674   ensemble average of the observable being measured.  From the ergodic
675 < hypothesis (Sec.~\ref{introSec:StatThermo}), it is known that the time
675 > hypothesis (Sec.~\ref{introSec:statThermo}), it is known that the time
676   average will match the ensemble average, therefore two similar
677   trajectories in phase space should give matching statistical averages.
678  
679   \subsection{\label{introSec:MDfurther}Further Considerations}
680 +
681   In the simulations presented in this research, a few additional
682 < parameters are needed to describe the motions.  The simulations
683 < involving water and phospholipids in Chapt.~\ref{chaptLipids} are
682 > parameters are needed to describe the motions.  In the simulations
683 > involving water and phospholipids in Ch.~\ref{chapt:lipid}, we are
684   required to integrate the equations of motion for dipoles on atoms.
685   This requires that an additional three parameters be specified for each
686   dipole atom: $\phi$, $\theta$, and $\psi$.  These three angles are
687   taken to be the Euler angles, where $\phi$ is a rotation about the
688   $z$-axis, and $\theta$ is a rotation about the new $x$-axis, and
689   $\psi$ is a final rotation about the new $z$-axis (see
690 < Fig.~\ref{introFig:euleerAngles}).  This sequence of rotations can be
691 < accumulated into a single $3 \times 3$ matrix $\mathbf{A}$
690 > Fig.~\ref{introFig:eulerAngles}).  This sequence of rotations can be
691 > accumulated into a single $3 \times 3$ matrix, $\mathsf{A}$,
692   defined as follows:
693   \begin{equation}
694 < eq here
694 > \mathsf{A} =
695 > \begin{bmatrix}
696 >        \cos\phi\cos\psi-\sin\phi\cos\theta\sin\psi &%
697 >        \sin\phi\cos\psi+\cos\phi\cos\theta\sin\psi &%
698 >        \sin\theta\sin\psi \\%
699 >        %
700 >        -\cos\phi\sin\psi-\sin\phi\cos\theta\cos\psi &%
701 >        -\sin\phi\sin\psi+\cos\phi\cos\theta\cos\psi &%
702 >        \sin\theta\cos\psi \\%
703 >        %
704 >        \sin\phi\sin\theta &%
705 >        -\cos\phi\sin\theta &%
706 >        \cos\theta
707 > \end{bmatrix}
708   \label{introEq:EulerRotMat}
709   \end{equation}
710  
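The matrix is straightforward to construct and check numerically. A sketch (the angle values are arbitrary):
\begin{verbatim}
import numpy as np

def euler_to_matrix(phi, theta, psi):
    # Builds A of Eq. (introEq:EulerRotMat) from z-x-z Euler angles.
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([
        [ cf*cp - sf*ct*sp,  sf*cp + cf*ct*sp,  st*sp],
        [-cf*sp - sf*ct*cp, -sf*sp + cf*ct*cp,  st*cp],
        [ sf*st,            -cf*st,             ct   ]])

A = euler_to_matrix(0.3, 1.1, -0.4)
assert np.allclose(A @ A.T, np.eye(3))   # A is orthogonal
\end{verbatim}
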
711 < The equations of motion for Euler angles can be written down as
712 < \cite{allen87:csl}
713 < \begin{equation}
714 < eq here
715 < \label{introEq:MDeuleeerPsi}
716 < \end{equation}
717 < Where $\omega^s_i$ is the angular velocity in the lab space frame
718 < along cartesian coordinate $i$.  However, a difficulty arises when
711 > \begin{figure}
712 > \centering
713 > \includegraphics[width=\linewidth]{eulerRotFig.eps}
714 > \caption[Euler rotation of Cartesian coordinates]{The rotation scheme for Euler angles. First is a rotation of $\phi$ about the $z$ axis (blue rotation). Next is a rotation of $\theta$ about the new $x$ axis (green rotation). Lastly is a final rotation of $\psi$ about the new $z$ axis (red rotation).}
715 > \label{introFig:eulerAngles}
716 > \end{figure}
717 >
718 > The equations of motion for Euler angles can be written down
719 > as\cite{allen87:csl}
720 > \begin{align}
721 > \dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} +
722 >        \omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} +
723 >        \omega^s_z
724 > \label{introEq:MDeulerPhi} \\%
725 > %
726 > \dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi
727 > \label{introEq:MDeulerTheta} \\%
728 > %
729 > \dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} -
730 >        \omega^s_y \frac{\cos\phi}{\sin\theta}
731 > \label{introEq:MDeulerPsi}
732 > \end{align}
733 > Where $\omega^s_{\alpha}$ is the angular velocity in the lab space frame
734 > along Cartesian coordinate $\alpha$.  However, a difficulty arises when
735   attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
736   Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
737   both equations means there is a non-physical instability present when
738 < $\theta$ is 0 or $\pi$.
739 <
740 < To correct for this, the simulations integrate the rotation matrix,
741 < $\mathbf{A}$, directly, thus avoiding the instability.
630 < This method was proposed by Dullwebber
631 < \emph{et. al.}\cite{Dullwebber:1997}, and is presented in
738 > $\theta$ is 0 or $\pi$. To correct for this, the simulations integrate
739 > the rotation matrix, $\mathsf{A}$, directly, thus avoiding the
740 > instability.  This method was proposed by Dullweber
741 > \emph{et al.}\cite{Dullweber1997}, and is presented in
742   Sec.~\ref{introSec:MDsymplecticRot}.
743  
744 < \subsubsection{\label{introSec:MDliouville}Liouville Propagator}
744 > \subsection{\label{introSec:MDliouville}Liouville Propagator}
745  
746   Before discussing the integration of the rotation matrix, it is
747   necessary to understand the construction of a ``good'' integration
748 < scheme.  It has been previously
749 < discussed(Sec.~\ref{introSec:MDintegrate}) how it is desirable for an
750 < integrator to be symplectic, or time reversible.  The following is an
748 > scheme.  It has been previously discussed
749 > (Sec.~\ref{introSec:mdIntegrate}) how it is desirable for an
750 > integrator to be symplectic and time reversible.  The following is an
751   outline of the Trotter factorization of the Liouville Propagator as a
752 < scheme for generating symplectic integrators. \cite{Tuckerman:1992}
752 > scheme for generating symplectic, time-reversible
753 > integrators.\cite{Tuckerman92}
754  
755   For a system with $f$ degrees of freedom the Liouville operator can be
756   defined as,
757   \begin{equation}
758 < eq here
758 > iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} +
759 >        F_j\frac{\partial}{\partial p_j} \biggr ]
760   \label{introEq:LiouvilleOperator}
761   \end{equation}
762 < Here, $r_j$ and $p_j$ are the position and conjugate momenta of a
763 < degree of freedom, and $f_j$ is the force on that degree of freedom.
764 < $\Gamma$ is defined as the set of all positions nad conjugate momenta,
765 < $\{r_j,p_j\}$, and the propagator, $U(t)$, is defined
762 > Here, $q_j$ and $p_j$ are the position and conjugate momenta of a
763 > degree of freedom, and $F_j$ is the force on that degree of freedom.
764 > $\Gamma$ is defined as the set of all positions and conjugate momenta,
765 > $\{q_j,p_j\}$, and the propagator, $U(t)$, is defined
766   \begin {equation}
767 < eq here
767 > U(t) = e^{iLt}
768   \label{introEq:Lpropagator}
769   \end{equation}
770   This allows the specification of $\Gamma$ at any time $t$ as
771   \begin{equation}
772 < eq here
772 > \Gamma(t) = U(t)\Gamma(0)
773   \label{introEq:Lp2}
774   \end{equation}
775   It is important to note that $U(t)$ is a unitary operator, meaning
# Line 668 | Line 780 | Trotter theorem to yield
780  
781   Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
782   Trotter theorem to yield
783 < \begin{equation}
784 < eq here
785 < \label{introEq:Lp4}
786 < \end{equation}
787 < Where $\Delta t = \frac{t}{P}$.
783 > \begin{align}
784 > e^{iLt} &= e^{i(L_1 + L_2)t} \notag \\%
785 > %
786 >        &= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P \notag \\%
787 > %
788 >        &= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
789 >        e^{iL_1\frac{\Delta t}{2}} \biggr ]^P +
790 >        \mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ) \label{introEq:Lp4}
791 > \end{align}
792 > Where $\Delta t = t/P$.
793   With this, a discrete time operator $G(\Delta t)$ can be defined:
794 < \begin{equation}
795 < eq here
794 > \begin{align}
795 > G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
796 >        e^{iL_1\frac{\Delta t}{2}} \notag \\%
797 > %
798 >        &= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\,
799 >        U_1 \biggl ( \frac{\Delta t}{2} \biggr )
800   \label{introEq:Lp5}
801 < \end{equation}
802 < Because $U_1(t)$ and $U_2(t)$ are unitary, $G|\Delta t)$ is also
803 < unitary.  Meaning an integrator based on this factorization will be
801 > \end{align}
802 > Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
803 > unitary.  This means that an integrator based on this factorization will be
804   reversible in time.
805  
806   As an example, consider the following decomposition of $L$:
807 + \begin{align}
808 + iL_1 &= \dot{q}\frac{\partial}{\partial q}%
809 + \label{introEq:Lp6a} \\%
810 + %
811 + iL_2 &= F(q)\frac{\partial}{\partial p}%
812 + \label{introEq:Lp6b}
813 + \end{align}
814 + This leads to the propagator $G( \Delta t )$,
815   \begin{equation}
816 < eq here
817 < \label{introEq:Lp6}
816 > G(\Delta t) =  e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
817 >        e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \,
818 >        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
819 > \label{introEq:Lp7}
820   \end{equation}
821 < Operating $G(\Delta t)$ on $\Gamma)0)$, and utilizing the operator property
821 > Operating $G(\Delta t)$ on $\Gamma(0)$, and utilizing the operator property
822   \begin{equation}
823 < eq here
823 > e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c)
824   \label{introEq:Lp8}
825   \end{equation}
826 < Where $c$ is independent of $q$.  One obtains the following:  
827 < \begin{equation}
828 < eq here
829 < \label{introEq:Lp8}
830 < \end{equation}
826 > Where $c$ is independent of $x$.  One obtains the following:  
827 > \begin{align}
828 > \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
829 >        \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9a}\\%
830 > %
831 > q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr )%
832 >        \label{introEq:Lp9b}\\%
833 > %
834 > \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
835 >        \frac{\Delta t}{2m}\, F[q(\Delta t)] \label{introEq:Lp9c}
836 > \end{align}
837   Or written another way,
838 < \begin{equation}
839 < eq here
840 < \label{intorEq:Lp9}
841 < \end{equation}
838 > \begin{align}
839 > q(t+\Delta t) &= q(0) + \dot{q}(0)\Delta t +
840 >        \frac{F[q(0)]}{m}\frac{\Delta t^2}{2} %
841 > \label{introEq:Lp10a} \\%
842 > %
843 > \dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
844 >        \biggl [F[q(0)] + F[q(\Delta t)] \biggr] %
845 > \label{introEq:Lp10b}
846 > \end{align}
847   This is the velocity Verlet formulation presented in
848 < Sec.~\ref{introSec:MDintegrate}.  Because this integration scheme is
848 > Sec.~\ref{introSec:mdIntegrate}.  Because this integration scheme is
849   comprised of unitary propagators, it is symplectic, and therefore area
850 < preserving in phase space.  From the preceeding fatorization, one can
850 > preserving in phase space.  From the preceding factorization, one can
851   see that the integration of the equations of motion would follow:
852   \begin{enumerate}
853   \item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
# Line 717 | Line 859 | see that the integration of the equations of motion wo
859   \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
860   \end{enumerate}
861  
862 < \subsubsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}
862 > \subsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}
863  
864   Based on the factorization from the previous section,
865 < Dullweber\emph{et al.}\cite{Dullweber:1997}~ proposed a scheme for the
866 < symplectic propagation of the rotation matrix, $\mathbf{A}$, as an
865 > Dullweber\emph{et al}.\cite{Dullweber1997}~ proposed a scheme for the
866 > symplectic propagation of the rotation matrix, $\mathsf{A}$, as an
867   alternative method for the integration of orientational degrees of
868   freedom. The method starts with a straightforward splitting of the
869   Liouville operator:
870 < \begin{equation}
871 < eq here
872 < \label{introEq:SR1}
873 < \end{equation}
874 < Where $\boldsymbol{\tau}(\mathbf{A})$ are the tourques of the system
875 < due to the configuration, and $\boldsymbol{/pi}$ are the conjugate
870 > \begin{align}
871 > iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} +
872 >        \mathsf{\dot{A}}\frac{\partial}{\partial \mathsf{A}}
873 > \label{introEq:SR1a} \\%
874 > %
875 > iL_F &= F(q)\frac{\partial}{\partial p}
876 > \label{introEq:SR1b} \\%
877 > iL_{\tau} &= \tau(\mathsf{A})\frac{\partial}{\partial j}
878 > \label{introEq:SR1c}
879 > \end{align}
880 > Where $\tau(\mathsf{A})$ is the torque of the system
881 > due to the configuration, and $j$ is the conjugate
882 > angular momentum of the system. The propagator, $G(\Delta t)$, becomes
883   \begin{equation}
884 < eq here
884 > G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
885 >        e^{\frac{\Delta t}{2} \tau(\mathsf{A})\frac{\partial}{\partial j}} \,
886 >        e^{\Delta t\,iL_{\text{pos}}} \,
887 >        e^{\frac{\Delta t}{2} \tau(\mathsf{A})\frac{\partial}{\partial j}} \,
888 >        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
889   \label{introEq:SR2}
890   \end{equation}
891 < Propagation fo the linear and angular momenta follows as in the Verlet
892 < scheme.  The propagation of positions also follows the verlet scheme
891 > Propagation of the linear and angular momenta follows as in the Verlet
892 > scheme.  The propagation of positions also follows the Verlet scheme
893   with the addition of a further symplectic splitting of the rotation
894 < matrix propagation, $\mathcal{G}_{\text{rot}}(\Delta t)$.
894 > matrix propagation, $\mathcal{U}_{\text{rot}}(\Delta t)$, within
895 > $U_{\text{pos}}(\Delta t)$.
896   \begin{equation}
897 < eq here
897 > \mathcal{U}_{\text{rot}}(\Delta t) =
898 >        \mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
899 >        \mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
900 >        \mathcal{U}_z (\Delta t)\,
901 >        \mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
902 >        \mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
903   \label{introEq:SR3}
904   \end{equation}
905 < Where $\mathcal{G}_j$ is a unitary rotation of $\mathbf{A}$ and
906 < $\boldsymbol{\pi}$ about each axis $j$.  As all propagations are now
905 > Where $\mathcal{U}_{\alpha}$ is a unitary rotation of $\mathsf{A}$ and
906 > $j$ about each axis $\alpha$.  As all propagations are now
907   unitary and symplectic, the entire integration scheme is also
908   symplectic and time reversible.
909  
910   \section{\label{introSec:layout}Dissertation Layout}
911  
912 < This dissertation is divided as follows:Chapt.~\ref{chapt:RSA}
912 > This dissertation is divided as follows: Ch.~\ref{chapt:RSA}
913   presents the random sequential adsorption simulations of related
914 < pthalocyanines on a gold (111) surface. Chapt.~\ref{chapt:OOPSE}
915 < is about the writing of the molecular dynamics simulation package
916 < {\sc oopse}, Chapt.~\ref{chapt:lipid} regards the simulations of
917 < phospholipid bilayers using a mesoscale model, and lastly,
918 < Chapt.~\ref{chapt:conclusion} concludes this dissertation with a
914 > pthalocyanines on a gold (111) surface. Ch.~\ref{chapt:oopse}
915 > is about our molecular dynamics simulation package
916 > {\sc oopse}. Ch.~\ref{chapt:lipid} concerns the simulation of
917 > phospholipid bilayers using a mesoscale model. Lastly,
918 > Ch.~\ref{chapt:conclusion} concludes this dissertation with a
919   summary of all results. The chapters are arranged in chronological
920   order, and reflect the progression of techniques I employed during my
921   research.  
922  
923 < The chapter concerning random sequential adsorption
924 < simulations is a study in applying the principles of theoretical
925 < research in order to obtain a simple model capaable of explaining the
926 < results.  My advisor, Dr. Gezelter, and I were approached by a
927 < colleague, Dr. Lieberman, about possible explanations for partial
928 < coverge of a gold surface by a particular compound of hers. We
923 > The chapter concerning random sequential adsorption simulations is a
924 > study in applying Statistical Mechanics simulation techniques in order
925 > to obtain a simple model capable of explaining the results.  My
926 > advisor, Dr. Gezelter, and I were approached by a colleague,
927 > Dr. Lieberman, about possible explanations for the partial coverage of
928 > a gold surface by a particular compound synthesized in her group. We
929   suggested it might be due to the statistical packing fraction of disks
930 < on a plane, and set about to simulate this system.  As the events in
930 > on a plane, and set about simulating this system.  As the events in
931   our model were not dynamic in nature, a Monte Carlo method was
932 < emplyed.  Here, if a molecule landed on the surface without
933 < overlapping another, then its landing was accepted.  However, if there
932 > employed.  Here, if a molecule landed on the surface without
933 > overlapping with another, then its landing was accepted.  However, if there
934   was overlap, the landing was rejected and a new random landing location
935   was chosen.  This defined our acceptance rules and allowed us to
936   construct a Markov chain whose limiting distribution was the surface
# Line 780 | Line 938 | describes in detail the large body of scientific code
938  
939   The following chapter, about the simulation package {\sc oopse},
940   describes in detail the large body of scientific code that had to be
941 < written in order to study phospholipid bilayer.  Although there are
941 > written in order to study phospholipid bilayers.  Although there are
942   pre-existing molecular dynamic simulation packages available, none
943 < were capable of implementing the models we were developing.{\sc oopse}
944 < is a unique package capable of not only integrating the equations of
945 < motion in cartesian space, but is also able to integrate the
946 < rotational motion of rigid bodies and dipoles.  Add to this the
947 < ability to perform calculations across parallel processors and a
948 < flexible script syntax for creating systems, and {\sc oopse} becomes a
949 < very powerful scientific instrument for the exploration of our model.
943 > were capable of implementing the models we were developing. {\sc
944 > oopse} is a unique package capable of not only integrating the
945 > equations of motion in Cartesian space, but also of integrating
946 > the rotational motion of rigid bodies and dipoles.  It also has the
947 > ability to perform calculations across distributed parallel processors
948 > and contains a flexible script syntax for creating systems.  {\sc
949 > oopse} has become a very powerful scientific instrument for the
950 > exploration of our model.
951  
952 < Bringing us to Chapt.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
953 < able to parametrize a mesoscale model for phospholipid simulations.
954 < This model retains information about solvent ordering about the
952 > In Ch.~\ref{chapt:lipid}, utilizing {\sc oopse}, I have been
953 > able to parameterize a mesoscale model for phospholipid simulations.
954 > This model retains information about solvent ordering around the
955   bilayer, as well as information regarding the interaction of the
956 < phospholipid head groups' dipole with each other and the surrounding
956 > phospholipid head groups' dipoles with each other and the surrounding
957   solvent.  These simulations give us insight into the dynamic events
958   that lead to the formation of phospholipid bilayers, as well as
959   provide the foundation for future exploration of bilayer phase
960   behavior with this model.  
961  
962 < Which leads into the last chapter, where I discuss future directions
963 < for both{\sc oopse} and this mesoscale model.  Additionally, I will
964 < give a summary of results for this dissertation.
962 > In the last chapter, I discuss future directions
963 > for both {\sc oopse} and this mesoscale model.  Additionally, I will
964 > give a summary of the results found in this dissertation.
965  
966  

Diff Legend

Removed lines (shown with old line numbers, no marker)
+ Added lines
< Changed lines (old revision)
> Changed lines (new revision)