root/group/trunk/mattDisertation/Introduction.tex

Comparing trunk/mattDisertation/Introduction.tex (file contents):
Revision 1003 by mmeineke, Mon Feb 2 21:56:16 2004 UTC vs.
Revision 1063 by mmeineke, Mon Feb 23 19:16:22 2004 UTC

# Line 1 | Line 1
1  
2  
3 < \chapter{\label{chapt:intro}Introduction and Theoretical Background}
3 > \chapter{\label{chapt:intro}INTRODUCTION AND THEORETICAL BACKGROUND}
4  
5  
6   The techniques used in the course of this research fall under the two
7   main classes of molecular simulation: Molecular Dynamics and Monte
8 < Carlo. Molecular Dynamic simulations integrate the equations of motion
9 < for a given system of particles, allowing the researher to gain
10 < insight into the time dependent evolution of a system. Diffusion
8 > Carlo. Molecular Dynamics simulations integrate the equations of
9 > motion for a given system of particles, allowing the researcher to
10 > gain insight into the time dependent evolution of a system. Diffusion
11   phenomena are readily studied with this simulation technique, making
12   Molecular Dynamics the main simulation technique used in this
13   research. Other aspects of the research fall under the Monte Carlo
14   class of simulations. In Monte Carlo, the configuration space
15 < available to the collection of particles is sampled stochastichally,
16 < or randomly. Each configuration is chosen with a given probability
17 < based on the Maxwell Boltzman distribution. These types of simulations
18 < are best used to probe properties of a system that are only dependent
19 < only on the state of the system. Structural information about a system
20 < is most readily obtained through these types of methods.
15 > available to the collection of particles is sampled stochastically, or
16 > randomly. Each configuration is chosen with a given probability based
17 > on the Maxwell Boltzmann distribution. These types of simulations are
18 > best used to probe properties of a system that are dependent only on
19 > the state of the system. Structural information about a system is most
20 > readily obtained through these types of methods.
21  
22   Although the two techniques employed seem dissimilar, they are both
23   linked by the overarching principles of Statistical
24 < Thermodynamics. Statistical Thermodynamics governs the behavior of
24 > Mechanics. Statistical Mechanics governs the behavior of
25   both classes of simulations and dictates what each method can and
26   cannot do. When investigating a system, one must first analyze what
27   thermodynamic properties of the system are being probed, then choose
# Line 31 | Line 31 | Statistical Mechanics concepts present in this dissert
31  
32   The following section serves as a brief introduction to some of the
33   Statistical Mechanics concepts present in this dissertation.  What
34 < follows is a brief derivation of Blotzman weighted statistics, and an
34 > follows is a brief derivation of Boltzmann weighted statistics and an
35   explanation of how one can use the information to calculate an
36   observable for a system.  This section then concludes with a brief
37   discussion of the ergodic hypothesis and its relevance to this
38   research.
39  
40 < \subsection{\label{introSec:boltzman}Boltzman weighted statistics}
40 > \subsection{\label{introSec:boltzman}Boltzmann weighted statistics}
41  
42 < Consider a system, $\gamma$, with some total energy,, $E_{\gamma}$.
43 < Let $\Omega(E_{\gamma})$ represent the number of degenerate ways
44 < $\boldsymbol{\Gamma}$, the collection of positions and conjugate
45 < momenta of system $\gamma$, can be configured to give
46 < $E_{\gamma}$. Further, if $\gamma$ is in contact with a bath system
47 < where energy is exchanged between the two systems, $\Omega(E)$, where
48 < $E$ is the total energy of both systems, can be represented as
42 > Consider a system, $\gamma$, with total energy $E_{\gamma}$.  Let
43 > $\Omega(E_{\gamma})$ represent the number of degenerate ways
44 > $\boldsymbol{\Gamma}\{r_1,r_2,\ldots r_n,p_1,p_2,\ldots p_n\}$, the
45 > collection of positions and conjugate momenta of system $\gamma$, can
46 > be configured to give $E_{\gamma}$. Further, if $\gamma$ is a subset
47 > of a larger system, $\boldsymbol{\Lambda}\{E_1,E_2,\ldots
48 > E_{\gamma},\ldots E_n\}$, the total degeneracy of the system can be
49 > expressed as,
50   \begin{equation}
51 + \Omega(\boldsymbol{\Lambda}) = \Omega(E_1) \times \Omega(E_2) \times \ldots
52 +        \Omega(E_{\gamma}) \times \ldots \Omega(E_n)
53 + \label{introEq:SM0.1}
54 + \end{equation}
55 + This multiplicative combination of degeneracies is illustrated in
56 + Fig.~\ref{introFig:degenProd}.
57 +
58 + \begin{figure}
59 + \centering
60 + \includegraphics[width=\linewidth]{omegaFig.eps}
61 + \caption[An explanation of the combination of degeneracies]{Systems A and B both have three energy levels and two indistinguishable particles. When the total energy is 2, there are two ways for each system to disperse the energy. However, for system C, the superset of A and B, the total degeneracy is the product of the degeneracy of each system. In this case $\Omega(\text{C})$ is 4.}
62 + \label{introFig:degenProd}
63 + \end{figure}
64 +
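As a minimal illustration of this product rule, the degeneracies in Fig.~\ref{introFig:degenProd} can be counted by brute-force enumeration; a sketch in Python, assuming the toy systems of the figure (two indistinguishable particles on three equally spaced levels):

    from itertools import combinations_with_replacement

    # Two indistinguishable particles on levels 0, 1, 2; count the
    # microstates of each subsystem that carry a total energy of 2.
    states = list(combinations_with_replacement([0, 1, 2], 2))
    omega_A = sum(1 for s in states if sum(s) == 2)  # 2 ways for system A
    omega_B = omega_A                                # system B is identical
    print(omega_A * omega_B)                         # 4 = Omega(C), the product
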
65 + Next, consider the specific case of $\gamma$ in contact with a
66 + bath. Exchange of energy is allowed between the bath and the system,
67 + subject to the constraint that the total energy
68 + ($E_{\text{bath}}+E_{\gamma}$) remain constant. $\Omega(E)$, where $E$
69 + is the total energy of both systems, can be represented as
70 + \begin{equation}
71   \Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma})
72   \label{introEq:SM1}
73   \end{equation}
# Line 57 | Line 78 | The solution to Eq.~\ref{introEq:SM2} maximizes the nu
78   \end{equation}
79  
80   The solution to Eq.~\ref{introEq:SM2} maximizes the number of
81 < degenerative configurations in $E$. \cite{Frenkel1996}
81 > degenerate configurations in $E$. \cite{Frenkel1996}
82   This gives
83   \begin{equation}
84   \frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 =
# Line 86 | Line 107 | S = k_B \ln \Omega(E)
107   S = k_B \ln \Omega(E)
108   \label{introEq:SM5}
109   \end{equation}
110 < Where $k_B$ is the Boltzman constant.  Having defined entropy, one can
110 > Where $k_B$ is the Boltzmann constant.  Having defined entropy, one can
111   also define the temperature of the system using the relation
112   \begin{equation}
113   \frac{1}{T} = \biggl ( \frac{\partial S}{\partial E} \biggr )_{N,V}
# Line 111 | Line 132 | is allowed to fluctuate. Returning to the previous exa
132   the canonical ensemble, the number of particles, $N$, the volume, $V$,
133   and the temperature, $T$, are all held constant while the energy, $E$,
134   is allowed to fluctuate. Returning to the previous example, the bath
135 < system is now an infinitly large thermal bath, whose exchange of
135 > system is now an infinitely large thermal bath, whose exchange of
136   energy with the system $\gamma$ holds the temperature constant.  The
137   partitioning of energy in the bath system is then related to the total
138   energy of both systems and the fluctuations in $E_{\gamma}$:
# Line 127 | Line 148 | Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{
148   \label{introEq:SM10}
149   \end{equation}
150   Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes
151 < an integration over all accessable phase space, $P_{\gamma}$ is the
151 > an integration over all accessible phase space, $P_{\gamma}$ is the
152   probability of being in a given phase state and
153   $A(\boldsymbol{\Gamma})$ is some observable that is a function of the
154   phase state.
# Line 156 | Line 177 | P_{\gamma} \propto e^{-\beta E_{\gamma}}
177   P_{\gamma} \propto e^{-\beta E_{\gamma}}
178   \label{introEq:SM13}
179   \end{equation}
180 < Where $\ln \Omega(E)$ has been factored out of the porpotionality as a
180 > Where $\ln \Omega(E)$ has been factored out of the proportionality as a
181   constant.  Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
182   d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
183   \begin{equation}
# Line 164 | Line 185 | P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
185   {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
186   \label{introEq:SM14}
187   \end{equation}
188 < This result is the standard Boltzman statistical distribution.
188 > This result is the standard Boltzmann statistical distribution.
189   Applying it to Eq.~\ref{introEq:SM10} one can obtain the following relation for ensemble averages:
190   \begin{equation}
191   \langle A \rangle =
# Line 176 | Line 197 | Applying it to Eq.~\ref{introEq:SM10} one can obtain t
197  
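For a discrete set of states, the integrals above reduce to sums, and the Boltzmann-weighted average can be sketched in a few lines; the energies and observable values below are arbitrary placeholders, not data from this research:

    import numpy as np

    beta = 1.0                          # 1/(k_B T) in reduced units
    E = np.array([0.0, 0.5, 1.0, 2.0])  # placeholder state energies
    A = np.array([1.0, 2.0, 3.0, 4.0])  # placeholder observable values

    P = np.exp(-beta * E)
    P /= P.sum()                        # normalized Boltzmann weights, Eq. SM14
    print(P @ A)                        # ensemble average <A>, Eq. SM15
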
198   \subsection{\label{introSec:ergodic}The Ergodic Hypothesis}
199  
200 < One last important consideration is that of ergodicity. Ergodicity is
201 < the assumption that given an infinite amount of time, a system will
202 < visit every available point in phase space.\cite{Frenkel1996} For most
203 < systems, this is a valid assumption, except in cases where the system
204 < may be trapped in a local feature (\emph{e.g.}~glasses). When valid,
205 < ergodicity allows the unification of a time averaged observation and
206 < an ensemble averged one. If an observation is averaged over a
207 < sufficiently long time, the system is assumed to visit all
208 < appropriately available points in phase space, giving a properly
209 < weighted statistical average. This allows the researcher freedom of
210 < choice when deciding how best to measure a given observable.  When an
211 < ensemble averaged approach seems most logical, the Monte Carlo
212 < techniques described in Sec.~\ref{introSec:monteCarlo} can be utilized.
213 < Conversely, if a problem lends itself to a time averaging approach,
214 < the Molecular Dynamics techniques in Sec.~\ref{introSec:MD} can be
215 < employed.
200 > In the case of a Molecular Dynamics simulation, rather than
201 > calculating an ensemble average integral over phase space as in
202 > Eq.~\ref{introEq:SM15}, it becomes easier to calculate the time
203 > average of an observable. Namely,
204 > \begin{equation}
205 > \langle A \rangle_t = \frac{1}{\tau}
206 >        \int_0^{\tau} A[\boldsymbol{\Gamma}(t)]\,dt
207 > \label{introEq:SM16}
208 > \end{equation}
209 > Where the value of an observable is averaged over the length of time
210 > that the simulation is run. This type of measurement mirrors the
211 > experimental measurement of an observable. In an experiment, the
212 > instrument analyzing the system must average its observation over the
213 > finite time of the measurement. What is required, then, is a principle
214 > to relate the time average to the ensemble average. This is the
215 > ergodic hypothesis.
216  
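In discrete form, the time average of Eq.~\ref{introEq:SM16} is simply the mean of the observable sampled at uniform intervals along the run; a sketch, with a synthetic signal standing in for $A[\boldsymbol{\Gamma}(t)]$:

    import numpy as np

    # Synthetic samples standing in for A[Gamma(t)] recorded every dt;
    # the time average is the running mean over the trajectory.
    rng = np.random.default_rng(0)
    A_of_t = 1.0 + 0.1 * rng.standard_normal(10_000)
    print(A_of_t.mean())   # <A>_t, approaching the true average as tau grows
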
217 + Ergodicity is the assumption that given an infinite amount of time, a
218 + system will visit every available point in phase
219 + space.\cite{Frenkel1996} For most systems, this is a valid assumption,
220 + except in cases where the system may be trapped in a local feature
221 + (\emph{e.g.}~glasses). When valid, ergodicity allows the unification
222 + of a time averaged observation and an ensemble averaged one. If an
223 + observation is averaged over a sufficiently long time, the system is
224 + assumed to visit all appropriately available points in phase space,
225 + giving a properly weighted statistical average. This allows the
226 + researcher freedom of choice when deciding how best to measure a given
227 + observable.  When an ensemble averaged approach seems most logical,
228 + the Monte Carlo techniques described in Sec.~\ref{introSec:monteCarlo}
229 + can be utilized.  Conversely, if a problem lends itself to a time
230 + averaging approach, the Molecular Dynamics techniques in
231 + Sec.~\ref{introSec:MD} can be employed.
232 +
233   \section{\label{introSec:monteCarlo}Monte Carlo Simulations}
234  
235   The Monte Carlo method was developed by Metropolis and Ulam for their
# Line 210 | Line 247 | The equation can be recast as:
247   \end{equation}
248   The equation can be recast as:
249   \begin{equation}
250 < I = (b-a)\langle f(x) \rangle
250 > I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
251 > \label{introEq:Importance1}
252 > \end{equation}
253 > Where $\rho(x)$ is an arbitrary probability distribution in $x$.  If
254 > one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
255 > from the distribution $\rho(x)$ on the interval $[a,b]$, then
256 > Eq.~\ref{introEq:Importance1} becomes
257 > \begin{equation}
258 > I= \lim_{\tau \rightarrow \infty}\biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}[a,b]}
259 > \label{introEq:Importance2}
260 > \end{equation}
261 > If $\rho(x)$ is uniformly distributed over the interval $[a,b]$,
262 > \begin{equation}
263 > \rho(x) = \frac{1}{b-a}
264 > \label{introEq:importance2b}
265 > \end{equation}
266 > then the integral can be rewritten as
267 > \begin{equation}
268 > I = (b-a)\lim_{\tau \rightarrow \infty}
269 >        \langle f(x) \rangle_{\text{trials}[a,b]}
270   \label{eq:MCex2}
271   \end{equation}
216 Where $\langle f(x) \rangle$ is the unweighted average over the interval
217 $[a,b]$. The calculation of the integral could then be solved by
218 randomly choosing points along the interval $[a,b]$ and calculating
219 the value of $f(x)$ at each point. The accumulated average would then
220 approach $I$ in the limit where the number of trials is infintely
221 large.
272  
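A short sketch makes Eq.~\ref{introEq:Importance2} and Eq.~\ref{eq:MCex2} concrete; the integrand, interval, and sampling density below are illustrative choices only, not taken from the text:

    import numpy as np

    f = lambda x: np.exp(-x * x)
    a, b, n = 0.0, 2.0, 100_000
    rng = np.random.default_rng(1)

    # Uniform rho(x) = 1/(b-a): Eq. MCex2.
    x = rng.uniform(a, b, n)
    I_uniform = (b - a) * f(x).mean()

    # Importance sampling with rho(y) = exp(-(y-a))/Z on [a,b], sampled by
    # inverting its cumulative distribution: Eq. Importance2.
    Z = 1.0 - np.exp(-(b - a))
    y = a - np.log(1.0 - Z * rng.uniform(0.0, 1.0, n))
    I_importance = (f(y) * Z * np.exp(y - a)).mean()   # mean of f(y)/rho(y)
    print(I_uniform, I_importance)   # both approach 0.8821 as n grows
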
273   However, in Statistical Mechanics, one is typically interested in
274   integrals of the form:
# Line 235 | Line 285 | brute force method to this system would yield highly i
285   momentum. Therefore the momenta contribution of the integral can be
286   factored out, leaving the configurational integral. Application of the
287   brute force method to this system would yield highly inefficient
288 < results. Due to the Boltzman weighting of this integral, most random
288 > results. Due to the Boltzmann weighting of this integral, most random
289   configurations will have a near zero contribution to the ensemble
290   average. This is where importance sampling comes into
291   play.\cite{allen87:csl}
292  
293 < Importance Sampling is a method where one selects a distribution from
294 < which the random configurations are chosen in order to more
295 < efficiently calculate the integral.\cite{Frenkel1996} Consider again
296 < Eq.~\ref{eq:MCex1} rewritten to be:
247 < \begin{equation}
248 < I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
249 < \label{introEq:Importance1}
250 < \end{equation}
251 < Where $\rho(x)$ is an arbitrary probability distribution in $x$.  If
252 < one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
253 < from the distribution $\rho(x)$ on the interval $[a,b]$, then
254 < Eq.~\ref{introEq:Importance1} becomes
255 < \begin{equation}
256 < I= \biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}}
257 < \label{introEq:Importance2}
258 < \end{equation}
259 < Looking at Eq.~\ref{eq:mcEnsAvg}, and realizing
293 > Importance sampling is a method where the distribution from which the
294 > random configurations are chosen is selected in such a way as to
295 > efficiently sample the integral in question.  Looking at
296 > Eq.~\ref{eq:mcEnsAvg}, and realizing
297   \begin {equation}
298   \rho_{kT}(\mathbf{r}^N) =
299          \frac{e^{-\beta V(\mathbf{r}^N)}}
300          {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
301   \label{introEq:MCboltzman}
302   \end{equation}
303 < Where $\rho_{kT}$ is the boltzman distribution.  The ensemble average
303 > Where $\rho_{kT}$ is the Boltzmann distribution.  The ensemble average
304   can be rewritten as
305   \begin{equation}
306   \langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
# Line 280 | Line 317 | Eq.~\ref{introEq:Importance4} becomes
317   By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$
318   Eq.~\ref{introEq:Importance4} becomes
319   \begin{equation}
320 < \langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{\text{trials}}
320 > \langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{kT}
321   \label{introEq:Importance5}
322   \end{equation}
323   The difficulty is selecting points $\mathbf{r}^N$ such that they are
324   sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$.  A solution
325 < was proposed by Metropolis et al.\cite{metropolis:1953} which involved
325 > was proposed by Metropolis \emph{et al}.\cite{metropolis:1953} which involved
326   the use of a Markov chain whose limiting distribution was
327   $\rho_{kT}(\mathbf{r}^N)$.
328  
# Line 297 | Line 334 | conditions:\cite{leach01:mm}
334   \item The outcome of each trial depends only on the outcome of the previous trial.
335   \item Each trial belongs to a finite set of outcomes called the state space.
336   \end{enumerate}
337 < If given two configuartions, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
338 < $\rho_m$ and $\rho_n$ are the probablilities of being in state
337 > Given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
338 > $\rho_m$ and $\rho_n$ are the probabilities of being in state
339   $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively.  Further, the two
340   states are linked by a transition probability, $\pi_{mn}$, which is the
341   probability of going from state $m$ to state $n$.
# Line 343 | Line 380 | Eq.~\ref{introEq:MCmarkovEquil} is solved such that
380  
381   In the Metropolis method\cite{metropolis:1953}
382   Eq.~\ref{introEq:MCmarkovEquil} is solved such that
383 < $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzman distribution
383 > $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
384   of states.  The method accomplishes this by imposing the strong
385   condition of microscopic reversibility on the equilibrium
386   distribution.  This means that at equilibrium the probability of going
# Line 352 | Line 389 | from $m$ to $n$ is the same as going from $n$ to $m$.
389   \rho_m\pi_{mn} = \rho_n\pi_{nm}
390   \label{introEq:MCmicroReverse}
391   \end{equation}
392 < Further, $\boldsymbol{\alpha}$ is chosen to be a symetric matrix in
392 > Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
393   the Metropolis method.  Using Eq.~\ref{introEq:MCpi},
394   Eq.~\ref{introEq:MCmicroReverse} becomes
395   \begin{equation}
# Line 360 | Line 397 | Eq.~\ref{introEq:MCmicroReverse} becomes
397          \frac{\rho_n}{\rho_m}
398   \label{introEq:MCmicro2}
399   \end{equation}
400 < For a Boltxman limiting distribution,
400 > For a Boltzmann limiting distribution,
401   \begin{equation}
402   \frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
403          = e^{-\beta \Delta \mathcal{U}}
# Line 380 | Line 417 | Metropolis method proceeds as follows
417   Metropolis method proceeds as follows
418   \begin{enumerate}
419   \item Generate an initial configuration $\mathbf{r}^N$ which has some finite probability in $\rho_{kT}$.
420 < \item Modify $\mathbf{r}^N$, to generate configuratioon $\mathbf{r^{\prime}}^N$.
420 > \item Modify $\mathbf{r}^N$, to generate configuration $\mathbf{r^{\prime}}^N$.
421   \item If the new configuration lowers the energy of the system, accept the move with unity ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$).  Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
422 < \item Accumulate the average for the configurational observable of intereest.
422 > \item Accumulate the average for the configurational observable of interest.
423   \item Repeat from step 2 until the average converges.
424   \end{enumerate}
425   One important note is that the average is accumulated whether the move
426   is accepted or not; this ensures proper weighting of the average.
427   Using Eq.~\ref{introEq:Importance4} it becomes clear that the
428   accumulated averages are the ensemble averages, as this method ensures
429 < that the limiting distribution is the Boltzman distribution.
429 > that the limiting distribution is the Boltzmann distribution.
430  
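The five steps above can be realized in a few lines for a toy system (one coordinate in a harmonic well); the potential, step size, and temperature are illustrative assumptions, not parameters from this research:

    import numpy as np

    rng = np.random.default_rng(2)
    beta, delta, n_steps = 1.0, 0.5, 200_000
    V = lambda q: 0.5 * q * q

    q, acc = 0.0, 0.0                        # step 1: initial configuration
    for _ in range(n_steps):
        q_trial = q + delta * rng.uniform(-1.0, 1.0)   # step 2: trial move
        dU = V(q_trial) - V(q)
        if dU <= 0.0 or rng.uniform() < np.exp(-beta * dU):
            q = q_trial                      # step 3: accept downhill moves,
                                             # uphill with prob exp(-beta dU)
        acc += q * q                         # step 4: accumulate either way
    print(acc / n_steps)                     # step 5: <q^2> converges to 1/beta
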
431   \section{\label{introSec:MD}Molecular Dynamics Simulations}
432  
# Line 398 | Line 435 | configurational observables, but momenta dependent one
435   integrated in order to obtain information about both the positions and
436   momentum of a system, allowing the calculation of not only
437   configurational observables, but momenta dependent ones as well:
438 < diffusion constants, velocity auto correlations, folding/unfolding
439 < events, etc.  Due to the principle of ergodicity,
438 > diffusion constants, relaxation events, folding/unfolding
439 > events, etc. With the time dependent information gained from a
440 > Molecular Dynamics simulation, one can also calculate time correlation
441 > functions of the form\cite{Hansen86}
442 > \begin{equation}
443 > \langle A(t)\,A(0)\rangle = \lim_{\tau\rightarrow\infty} \frac{1}{\tau}
444 >        \int_0^{\tau} A(t+t^{\prime})\,A(t^{\prime})\,dt^{\prime}
445 > \label{introEq:timeCorr}
446 > \end{equation}
447 > These correlations can be used to measure fundamental time constants
448 > of a system, such as diffusion constants from the velocity
449 > autocorrelation or dipole relaxation times from the dipole
450 > autocorrelation.  Due to the principle of ergodicity,
451   Sec.~\ref{introSec:ergodic}, the averages of these observables over the
452   time period of the simulation are taken to be the ensemble averages
453   for the system.
454  
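A discrete estimate of Eq.~\ref{introEq:timeCorr} is straightforward to sketch; the synthetic signal below stands in for a sampled observable such as one velocity component:

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal(4096)   # placeholder samples of A(t)

    n = len(A)
    C = np.array([A[: n - k] @ A[k:] / (n - k) for k in range(n // 2)])
    # C[k] estimates <A(t + k*dt) A(t)>; C[0] is <A^2>.
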
455   The choice of when to use molecular dynamics over Monte Carlo
456   techniques is normally decided by the observables in which the
457 < researcher is interested.  If the observables depend on momenta in
457 > researcher is interested.  If the observables depend on time in
458   any fashion, then the only choice is molecular dynamics in some form.
459   However, when the observable is dependent only on the configuration,
460 < then most of the time Monte Carlo techniques will be more efficent.
460 > then for most small systems, Monte Carlo techniques will be more efficient.
461  
462   The focus of research in the second half of this dissertation is
463   centered around the dynamic properties of phospholipid bilayers,
# Line 432 | Line 480 | positions were selected that in some cases dispersed t
480   Ch.~\ref{chapt:lipid} deals with the formation and equilibrium dynamics of
481   phospholipid membranes.  Therefore in these simulations initial
482   positions were selected that in some cases dispersed the lipids in
483 < water, and in other cases structured the lipids into preformed
483 > water, and in other cases structured the lipids into preformed
484   bilayers.  Important considerations at this stage of the simulation are:
485   \begin{itemize}
486   \item There are no major overlaps of molecular or atomic orbitals
487 < \item Velocities are chosen in such a way as to not gie the system a non=zero total momentum or angular momentum.
488 < \item It is also sometimes desireable to select the velocities to correctly sample the target temperature.
487 > \item Velocities are chosen so as not to give the system a non-zero total momentum or angular momentum.
488 > \item It is also sometimes desirable to select the velocities to correctly sample the target temperature.
489   \end{itemize}
490  
491   The first point is important due to the amount of potential energy
492   generated by having two particles too close together.  If overlap
493   occurs, the first evaluation of forces will return numbers so large as
494 < to render the numerical integration of teh motion meaningless.  The
494 > to render the numerical integration of the motion meaningless.  The
495   second consideration keeps the system from drifting or rotating as a
496   whole.  This arises from the fact that most simulations are of systems
497   in equilibrium in the absence of outside forces.  Therefore any net
498   movement would be unphysical and an artifact of the simulation method
499   used.  The final point addresses the selection of the magnitude of the
500 < initial velocities.  For many simulations it is convienient to use
500 > initial velocities.  For many simulations it is convenient to use
501   this opportunity to scale the amount of kinetic energy to reflect the
502   desired thermal distribution of the system.  However, it must be noted
503   that most systems will require further velocity rescaling after the
# Line 474 | Line 522 | group in a vacuum, the behavior of the system will be
522   arranged in a $10 \times 10 \times 10$ cube, 488 particles will be
523   exposed to the surface.  Unless one is simulating an isolated particle
524   group in a vacuum, the behavior of the system will be far from the
525 < desired bulk charecteristics.  To offset this, simulations employ the
525 > desired bulk characteristics.  To offset this, simulations employ the
526   use of periodic boundary images.\cite{born:1912}
527  
528   The technique involves the use of an algorithm that replicates the
529 < simulation box on an infinite lattice in cartesian space.  Any given
529 > simulation box on an infinite lattice in Cartesian space.  Any given
530   particle leaving the simulation box on one side will have an image of
531   itself enter on the opposite side (see Fig.~\ref{introFig:pbc}).  In
532 < addition, this sets that any given particle pair has an image, real or
533 < periodic, within $fix$ of each other.  A discussion of the method used
534 < to calculate the periodic image can be found in
532 > addition, this ensures that any two particles have an image, real or
533 > periodic, within $\text{box}/2$ of each other.  A discussion of the
534 > method used to calculate the periodic image can be found in
535   Sec.\ref{oopseSec:pbc}.
536  
537   \begin{figure}
538   \centering
539   \includegraphics[width=\linewidth]{pbcFig.eps}
540 < \caption[An illustration of periodic boundry conditions]{A 2-D illustration of periodic boundry conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
540 > \caption[An illustration of periodic boundary conditions]{A 2-D illustration of periodic boundary conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
541   \label{introFig:pbc}
542   \end{figure}
543  
# Line 498 | Line 546 | predetermined distance, $r_{\text{cut}}$, are not incl
546   cutoff radius be employed.  Using a cutoff radius improves the
547   efficiency of the force evaluation, as particles farther than a
548   predetermined distance, $r_{\text{cut}}$, are not included in the
549 < calculation.\cite{Frenkel1996} In a simultation with periodic images,
550 < $r_{\text{cut}}$ has a maximum value of $\text{box}/2$.
551 < Fig.~\ref{introFig:rMax} illustrates how when using an
552 < $r_{\text{cut}}$ larger than this value, or in the extreme limit of no
553 < $r_{\text{cut}}$ at all, the corners of the simulation box are
554 < unequally weighted due to the lack of particle images in the $x$, $y$,
555 < or $z$ directions past a disance of $\text{box} / 2$.
549 > calculation.\cite{Frenkel1996} In a simulation with periodic images,
550 > there are two methods to choose from, both with their own cutoff
551 > limits. In the minimum image convention, $r_{\text{cut}}$ has a
552 > maximum value of $\text{box}/2$. This is because each atom has only
553 > one image that is seen by another atom, and further the image used is
554 > the one that minimizes the distance between the two atoms. A system of
555 > wrapped images about a central atom therefore has a maximum length
556 > scale of box on a side (Fig.~\ref{introFig:rMaxMin}). The second
557 > convention, multiple image convention, has a maximum $r_{\text{cut}}$
558 > of box. Here multiple images of each atom are replicated in the
559 > periodic cells surrounding the central atom; this causes the atom to
560 > see multiple copies of several atoms. If the cutoff radius is larger
561 > than box, however, then the atom will see an image of itself, and
562 > attempt to calculate an unphysical self-self force interaction
563 > (Fig.~\ref{introFig:rMaxMult}). Due to the increased complexity and
564 > computational inefficiency of the multiple image method, the minimum
565 > image method is the periodic method used throughout this research.
566  
567   \begin{figure}
568   \centering
569   \includegraphics[width=\linewidth]{rCutMaxFig.eps}
570 < \caption
571 < \label{introFig:rMax}
570 > \caption[An explanation of minimum image convention]{The yellow atom has all other images wrapped to itself as the center. If $r_{\text{cut}}=\text{box}/2$, then the distribution is uniform (blue atoms). However, when $r_{\text{cut}}>\text{box}/2$ the corners are disproportionately weighted (green atoms) vs the axial directions (shaded regions).}
571 > \label{introFig:rMaxMin}
572   \end{figure}
573  
574 < With the use of an $fix$, however, comes a discontinuity in the
575 < potential energy curve (Fig.~\ref{fix}). To fix this discontinuity,
576 < one calculates the potential energy at the $r_{\text{cut}}$, and add
577 < that value to the potential.  This causes the function to go smoothly
578 < to zero at the cutoff radius.  This ensures conservation of energy
579 < when integrating the Newtonian equations of motion.
574 > \begin{figure}
575 > \centering
576 > \includegraphics[width=\linewidth]{rCutMaxMultFig.eps}
577 + \caption[An explanation of multiple image convention]{The yellow atom is the central wrapping point. The blue atoms are the minimum images of the system about the central atom. The boxes with the green atoms are multiple images of the central box. If $r_{\text{cut}} \geq \text{box}$ then the central atom sees multiple images of itself (red atom), creating a self-self force evaluation.}
578 > \label{introFig:rMaxMult}
579 > \end{figure}
580  
581 + With the use of an $r_{\text{cut}}$, however, comes a discontinuity in
582 + the potential energy curve (Fig.~\ref{introFig:shiftPot}). To fix this
583 + discontinuity, one calculates the potential energy at the
584 + $r_{\text{cut}}$, and subtracts that value from the potential, causing
585 + the function to go smoothly to zero at the cutoff radius.  This
586 + shifted potential ensures conservation of energy when integrating the
587 + Newtonian equations of motion.
588 +
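Written out explicitly, the shift described above takes the standard form, stated here for a generic pair potential $V$ (not reproduced from the original text):

\begin{equation}
V_{\text{shifted}}(r) =
        \begin{cases}
        V(r) - V(r_{\text{cut}}) & r \leq r_{\text{cut}} \\
        0 & r > r_{\text{cut}}
        \end{cases}
\end{equation}
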
589 + \begin{figure}
590 + \centering
591 + \includegraphics[width=\linewidth]{shiftedPot.eps}
592 + \caption[Shifting the Lennard-Jones Potential]{The Lennard-Jones potential (blue line) is shifted (red line) to remove the discontinuity at $r_{\text{cut}}$.}
593 + \label{introFig:shiftPot}
594 + \end{figure}
595 +
596   The second main simplification used in this research is the Verlet
597   neighbor list. \cite{allen87:csl} In the Verlet method, one generates
598   a list of all neighbor atoms, $j$, surrounding atom $i$ within some
# Line 534 | Line 607 | A starting point for the discussion of molecular dynam
607   \subsection{\label{introSec:mdIntegrate} Integration of the equations of motion}
608  
609   A starting point for the discussion of molecular dynamics integrators
610 < is the Verlet algorithm. \cite{Frenkel1996} It begins with a Taylor
610 > is the Verlet algorithm.\cite{Frenkel1996} It begins with a Taylor
611   expansion of position in time:
612   \begin{equation}
613 < eq here
613 > q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 +
614 >        \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
615 >        \mathcal{O}(\Delta t^4)
616   \label{introEq:verletForward}
617   \end{equation}
618   As well as,
619   \begin{equation}
620 < eq here
620 > q(t-\Delta t)= q(t) - v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 -
621 >        \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
622 >        \mathcal{O}(\Delta t^4)
623   \label{introEq:verletBack}
624   \end{equation}
625 < Adding together Eq.~\ref{introEq:verletForward} and
625 > Where $m$ is the mass of the particle, $q(t)$ is the position at time
626 > $t$, $v(t)$ the velocity, and $F(t)$ the force acting on the
627 > particle. Adding together Eq.~\ref{introEq:verletForward} and
628   Eq.~\ref{introEq:verletBack} results in,
629   \begin{equation}
630 < eq here
630 > q(t+\Delta t)+q(t-\Delta t) =
631 >        2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
632   \label{introEq:verletSum}
633   \end{equation}
634   Or equivalently,
635   \begin{equation}
636 < eq here
636 > q(t+\Delta t) \approx
637 >        2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2
638   \label{introEq:verletFinal}
639   \end{equation}
640   This expression contains an error in the estimate of the new positions on the
641   order of $\Delta t^4$.
642  
643   In practice, however, the simulations in this research were integrated
644 < with a velocity reformulation of teh Verlet method.\cite{allen87:csl}
645 < \begin{equation}
646 < eq here
647 < \label{introEq:MDvelVerletPos}
648 < \end{equation}
649 < \begin{equation}
569 < eq here
644 > with a velocity reformulation of the Verlet method.\cite{allen87:csl}
645 > \begin{align}
646 > q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 %
647 > \label{introEq:MDvelVerletPos} \\%
648 > %
649 > v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] %
650   \label{introEq:MDvelVerletVel}
651 < \end{equation}
651 > \end{align}
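As a concrete illustration of Eq.~\ref{introEq:MDvelVerletPos} and Eq.~\ref{introEq:MDvelVerletVel}, one integration step can be sketched as follows; the harmonic force and parameters are placeholders for a real force evaluation:

    import numpy as np

    def velocity_verlet(q, v, m, dt, force):
        F0 = force(q)
        q_new = q + v * dt + F0 / (2.0 * m) * dt * dt      # position update
        v_new = v + dt / (2.0 * m) * (F0 + force(q_new))   # velocity update
        return q_new, v_new

    q, v = 1.0, 0.0
    for _ in range(1000):                 # integrate to t = 10
        q, v = velocity_verlet(q, v, 1.0, 0.01, force=lambda x: -x)
    print(q, v)                           # near cos(10), -sin(10)
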
652   The original Verlet algorithm can be regained by substituting the
653   velocity back into Eq.~\ref{introEq:MDvelVerletPos}.  The Verlet
654   formulations are chosen in this research because the algorithms have
655   very little long term drift in energy conservation.  Energy
656   conservation in a molecular dynamics simulation is of extreme
657   importance, as it is a measure of how closely one is following the
658 < ``true'' trajectory wtih the finite integration scheme.  An exact
658 > ``true'' trajectory with the finite integration scheme.  An exact
659   solution to the integration will conserve area in phase space, as well
660   as be reversible in time, that is, the trajectory integrated forward
661   or backwards will exactly match itself.  Having a finite algorithm
# Line 586 | Line 666 | a pseudo-Hamiltonian which shadows the real one in pha
666   It can be shown,\cite{Frenkel1996} that although the Verlet algorithm
667   does not rigorously preserve the actual Hamiltonian, it does preserve
668   a pseudo-Hamiltonian which shadows the real one in phase space.  This
669 < pseudo-Hamiltonian is proveably area-conserving as well as time
669 > pseudo-Hamiltonian is provably area-conserving as well as time
670   reversible.  The fact that it shadows the true Hamiltonian in phase
671   space is acceptable in actual simulations as one is interested in the
672   ensemble average of the observable being measured.  From the ergodic
673 < hypothesis (Sec.~\ref{introSec:StatThermo}), it is known that the time
673 > hypothesis (Sec.~\ref{introSec:statThermo}), it is known that the time
674   average will match the ensemble average; therefore two similar
675   trajectories in phase space should give matching statistical averages.
676  
677   \subsection{\label{introSec:MDfurther}Further Considerations}
678 +
679   In the simulations presented in this research, a few additional
680   parameters are needed to describe the motions.  The simulations
681 < involving water and phospholipids in Chapt.~\ref{chaptLipids} are
681 > involving water and phospholipids in Ch.~\ref{chapt:lipid} are
682   required to integrate the equations of motion for dipoles on atoms.
683   This requires that an additional three parameters be specified for each
684   dipole atom: $\phi$, $\theta$, and $\psi$.  These three angles are
685   taken to be the Euler angles, where $\phi$ is a rotation about the
686   $z$-axis, and $\theta$ is a rotation about the new $x$-axis, and
687   $\psi$ is a final rotation about the new $z$-axis (see
688 < Fig.~\ref{introFig:euleerAngles}).  This sequence of rotations can be
689 < accumulated into a single $3 \times 3$ matrix $\mathbf{A}$
688 > Fig.~\ref{introFig:eulerAngles}).  This sequence of rotations can be
689 > accumulated into a single $3 \times 3$ matrix, $\mathbf{A}$,
690   defined as follows:
691   \begin{equation}
692 < eq here
692 > \mathbf{A} =
693 > \begin{bmatrix}
694 >        \cos\phi\cos\psi-\sin\phi\cos\theta\sin\psi &%
695 >        \sin\phi\cos\psi+\cos\phi\cos\theta\sin\psi &%
696 >        \sin\theta\sin\psi \\%
697 >        %
698 >        -\cos\phi\sin\psi-\sin\phi\cos\theta\cos\psi &%
699 >        -\sin\phi\sin\psi+\cos\phi\cos\theta\cos\psi &%
700 >        \sin\theta\cos\psi \\%
701 >        %
702 >        \sin\phi\sin\theta &%
703 >        -\cos\phi\sin\theta &%
704 >        \cos\theta
705 > \end{bmatrix}
706   \label{introEq:EulerRotMat}
707   \end{equation}
708  
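Eq.~\ref{introEq:EulerRotMat} can be assembled numerically and checked for orthogonality; a sketch in which the test angles are arbitrary:

    import numpy as np

    def euler_matrix(phi, theta, psi):
        cp, sp = np.cos(phi), np.sin(phi)
        ct, st = np.cos(theta), np.sin(theta)
        cs, ss = np.cos(psi), np.sin(psi)
        return np.array([
            [ cp * cs - sp * ct * ss,  sp * cs + cp * ct * ss, st * ss],
            [-cp * ss - sp * ct * cs, -sp * ss + cp * ct * cs, st * cs],
            [ sp * st,                -cp * st,                ct     ]])

    A = euler_matrix(0.3, 1.1, -0.7)
    print(np.allclose(A @ A.T, np.eye(3)))   # True: A is orthogonal
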
709 < The equations of motion for Euler angles can be written down as
710 < \cite{allen87:csl}
711 < \begin{equation}
712 < eq here
713 < \label{introEq:MDeuleeerPsi}
714 < \end{equation}
709 > \begin{figure}
710 > \centering
711 > \includegraphics[width=\linewidth]{eulerRotFig.eps}
712 > \caption[Euler rotation of Cartesian coordinates]{The rotation scheme for Euler angles. First is a rotation of $\phi$ about the $z$ axis (blue rotation). Next is a rotation of $\theta$ about the new $x$ axis (green rotation). Lastly is a final rotation of $\psi$ about the new $z$ axis (red rotation).}
713 > \label{introFig:eulerAngles}
714 > \end{figure}
715 >
716 > The equations of motion for Euler angles can be written down
717 > as\cite{allen87:csl}
718 > \begin{align}
719 > \dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} +
720 >        \omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} +
721 >        \omega^s_z
722 > \label{introEq:MDeulerPhi} \\%
723 > %
724 > \dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi
725 > \label{introEq:MDeulerTheta} \\%
726 > %
727 > \dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} -
728 >        \omega^s_y \frac{\cos\phi}{\sin\theta}
729 > \label{introEq:MDeulerPsi}
730 > \end{align}
731   Where $\omega^s_i$ is the angular velocity in the lab space frame
732 < along cartesian coordinate $i$.  However, a difficulty arises when
732 > along Cartesian coordinate $i$.  However, a difficulty arises when
733   attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
734   Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
735   both equations means there is a non-physical instability present when
736 < $\theta$ is 0 or $\pi$.
737 <
738 < To correct for this, the simulations integrate the rotation matrix,
739 < $\mathbf{A}$, directly, thus avoiding the instability.
630 < This method was proposed by Dullwebber
631 < \emph{et. al.}\cite{Dullwebber:1997}, and is presented in
736 > $\theta$ is 0 or $\pi$. To correct for this, the simulations integrate
737 > the rotation matrix, $\mathbf{A}$, directly, thus avoiding the
738 > instability.  This method was proposed by Dullweber
739 > \emph{et al}.\cite{Dullweber1997}, and is presented in
740   Sec.~\ref{introSec:MDsymplecticRot}.
741  
742 < \subsubsection{\label{introSec:MDliouville}Liouville Propagator}
742 > \subsection{\label{introSec:MDliouville}Liouville Propagator}
743  
744   Before discussing the integration of the rotation matrix, it is
745   necessary to understand the construction of a ``good'' integration
746   scheme.  It has been previously
747 < discussed(Sec.~\ref{introSec:MDintegrate}) how it is desirable for an
747 > discussed (Sec.~\ref{introSec:mdIntegrate}) how it is desirable for an
748   integrator to be symplectic, or time reversible.  The following is an
749   outline of the Trotter factorization of the Liouville Propagator as a
750 < scheme for generating symplectic integrators. \cite{Tuckerman:1992}
750 > scheme for generating symplectic integrators.\cite{Tuckerman92}
751  
752   For a system with $f$ degrees of freedom the Liouville operator can be
753   defined as,
754   \begin{equation}
755 < eq here
755 > iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} +
756 >        F_j\frac{\partial}{\partial p_j} \biggr ]
757   \label{introEq:LiouvilleOperator}
758   \end{equation}
759 < Here, $r_j$ and $p_j$ are the position and conjugate momenta of a
760 < degree of freedom, and $f_j$ is the force on that degree of freedom.
761 < $\Gamma$ is defined as the set of all positions nad conjugate momenta,
762 < $\{r_j,p_j\}$, and the propagator, $U(t)$, is defined
759 > Here, $q_j$ and $p_j$ are the position and conjugate momenta of a
760 > degree of freedom, and $F_j$ is the force on that degree of freedom.
761 > $\Gamma$ is defined as the set of all positions and conjugate momenta,
762 > $\{q_j,p_j\}$, and the propagator, $U(t)$, is defined
763   \begin {equation}
764 < eq here
764 > U(t) = e^{iLt}
765   \label{introEq:Lpropagator}
766   \end{equation}
767   This allows the specification of $\Gamma$ at any time $t$ as
768   \begin{equation}
769 < eq here
769 > \Gamma(t) = U(t)\Gamma(0)
770   \label{introEq:Lp2}
771   \end{equation}
772   It is important to note that $U(t)$ is a unitary operator, meaning
# Line 668 | Line 777 | Trotter theorem to yield
777  
778   Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
779   Trotter theorem to yield
780 < \begin{equation}
781 < eq here
782 < \label{introEq:Lp4}
783 < \end{equation}
784 < Where $\Delta t = \frac{t}{P}$.
780 > \begin{align}
781 > e^{iLt} &= e^{i(L_1 + L_2)t} \notag \\%
782 > %
783 >        &= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P \notag \\%
784 > %
785 >        &= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
786 >        e^{iL_1\frac{\Delta t}{2}} \biggr ]^P +
787 >        \mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ) \label{introEq:Lp4}
788 > \end{align}
789 > Where $\Delta t = t/P$.
790   With this, a discrete time operator $G(\Delta t)$ can be defined:
791 < \begin{equation}
792 < eq here
791 > \begin{align}
792 > G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
793 >        e^{iL_1\frac{\Delta t}{2}} \notag \\%
794 > %
795 >        &= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\,
796 >        U_1 \biggl ( \frac{\Delta t}{2} \biggr )
797   \label{introEq:Lp5}
798 < \end{equation}
799 < Because $U_1(t)$ and $U_2(t)$ are unitary, $G|\Delta t)$ is also
798 > \end{align}
799 > Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
800   unitary.  This means an integrator based on this factorization will be
801   reversible in time.
802  
803   As an example, consider the following decomposition of $L$:
804 + \begin{align}
805 + iL_1 &= \dot{q}\frac{\partial}{\partial q}%
806 + \label{introEq:Lp6a} \\%
807 + %
808 + iL_2 &= F(q)\frac{\partial}{\partial p}%
809 + \label{introEq:Lp6b}
810 + \end{align}
811 + This leads to the propagator $G( \Delta t )$ as,
812   \begin{equation}
813 < eq here
814 < \label{introEq:Lp6}
813 > G(\Delta t) =  e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
814 >        e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \,
815 >        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
816 > \label{introEq:Lp7}
817   \end{equation}
818 < Operating $G(\Delta t)$ on $\Gamma)0)$, and utilizing the operator property
818 > Operating $G(\Delta t)$ on $\Gamma(0)$, and utilizing the operator property
819   \begin{equation}
820 < eq here
820 > e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c)
821   \label{introEq:Lp8}
822   \end{equation}
823 < Where $c$ is independent of $q$.  One obtains the following:  
824 < \begin{equation}
825 < eq here
826 < \label{introEq:Lp8}
827 < \end{equation}
823 > Where $c$ is independent of $x$.  One obtains the following:  
824 > \begin{align}
825 > \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
826 >        \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9a}\\%
827 > %
828 > q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr )%
829 >        \label{introEq:Lp9b}\\%
830 > %
831 > \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
832 >        \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9c}
833 > \end{align}
834   Or written another way,
835 < \begin{equation}
836 < eq here
837 < \label{intorEq:Lp9}
838 < \end{equation}
835 > \begin{align}
836 > q(t+\Delta t) &= q(0) + \dot{q}(0)\Delta t +
837 >        \frac{F[q(0)]}{m}\frac{\Delta t^2}{2} %
838 > \label{introEq:Lp10a} \\%
839 > %
840 > \dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
841 >        \biggl [F[q(0)] + F[q(\Delta t)] \biggr] %
842 > \label{introEq:Lp10b}
843 > \end{align}
844   This is the velocity Verlet formulation presented in
845 < Sec.~\ref{introSec:MDintegrate}.  Because this integration scheme is
845 > Sec.~\ref{introSec:mdIntegrate}.  Because this integration scheme is
846   comprised of unitary propagators, it is symplectic, and therefore area
847 < preserving in phase space.  From the preceeding fatorization, one can
847 > preserving in phase space.  From the preceding factorization, one can
848   see that the integration of the equations of motion would follow:
849   \begin{enumerate}
850   \item calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
# Line 717 | Line 856 | see that the integration of the equations of motion wo
856   \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
857   \end{enumerate}
858  
859 < \subsubsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}
859 > \subsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}
860  
861   Based on the factorization from the previous section,
862 < Dullweber\emph{et al.}\cite{Dullweber:1997}~ proposed a scheme for the
862 > Dullweber \emph{et al}.\cite{Dullweber1997} proposed a scheme for the
863   symplectic propagation of the rotation matrix, $\mathbf{A}$, as an
864   alternative method for the integration of orientational degrees of
865   freedom. The method starts with a straightforward splitting of the
866   Liouville operator:
867 < \begin{equation}
868 < eq here
869 < \label{introEq:SR1}
870 < \end{equation}
871 < Where $\boldsymbol{\tau}(\mathbf{A})$ are the tourques of the system
872 < due to the configuration, and $\boldsymbol{/pi}$ are the conjugate
867 > \begin{align}
868 > iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} +
869 >        \mathbf{\dot{A}}\frac{\partial}{\partial \mathbf{A}}
870 > \label{introEq:SR1a} \\%
871 > %
872 > iL_F &= F(q)\frac{\partial}{\partial p}
873 > \label{introEq:SR1b} \\%
874 > iL_{\tau} &= \tau(\mathbf{A})\frac{\partial}{\partial \pi}
875 > \label{introEq:SR1c}
876 > \end{align}
877 > Where $\tau(\mathbf{A})$ is the torque of the system
878 > due to the configuration, and $\pi$ is the conjugate
879   angular momenta of the system. The propagator, $G(\Delta t)$, becomes
880   \begin{equation}
881 < eq here
881 > G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
882 >        e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
883 >        e^{\Delta t\,iL_{\text{pos}}} \,
884 >        e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
885 >        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
886   \label{introEq:SR2}
887   \end{equation}
888 < Propagation fo the linear and angular momenta follows as in the Verlet
889 < scheme.  The propagation of positions also follows the verlet scheme
888 > Propagation of the linear and angular momenta follows as in the Verlet
889 > scheme.  The propagation of positions also follows the Verlet scheme
890   with the addition of a further symplectic splitting of the rotation
891 < matrix propagation, $\mathcal{G}_{\text{rot}}(\Delta t)$.
891 > matrix propagation, $\mathcal{U}_{\text{rot}}(\Delta t)$, within
892 > $U_{\text{pos}}(\Delta t)$.
893   \begin{equation}
894 < eq here
894 > \mathcal{U}_{\text{rot}}(\Delta t) =
895 >        \mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
896 >        \mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
897 >        \mathcal{U}_z (\Delta t)\,
898 >        \mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
899 >        \mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)
900   \label{introEq:SR3}
901   \end{equation}
902 < Where $\mathcal{G}_j$ is a unitary rotation of $\mathbf{A}$ and
903 < $\boldsymbol{\pi}$ about each axis $j$.  As all propagations are now
902 > Where $\mathcal{U}_j$ is a unitary rotation of $\mathbf{A}$ and
903 > $\pi$ about each axis $j$.  As all propagations are now
904   unitary and symplectic, the entire integration scheme is also
905   symplectic and time reversible.
906  
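The structure of Eq.~\ref{introEq:SR3} can be sketched schematically as successive axis rotations acting on both $\mathbf{A}$ and $\pi$. This is a hedged reconstruction of the splitting, not code from this research; sign and transpose conventions depend on whether $\mathbf{A}$ maps lab to body coordinates.

    import numpy as np

    def axis_rotation(angle, j):
        c, s = np.cos(angle), np.sin(angle)
        R = np.eye(3)
        k, l = [(1, 2), (2, 0), (0, 1)][j]    # plane mixed by axis j
        R[k, k], R[k, l], R[l, k], R[l, l] = c, -s, s, c
        return R

    def U_rot(A, pi, I, dt):
        # x(dt/2) y(dt/2) z(dt) y(dt/2) x(dt/2), mirroring Eq. SR3;
        # each substep rotates both A and pi about body axis j.
        for j, h in [(0, 0.5 * dt), (1, 0.5 * dt), (2, dt),
                     (1, 0.5 * dt), (0, 0.5 * dt)]:
            R = axis_rotation(h * pi[j] / I[j], j)
            A, pi = R @ A, R @ pi
        return A, pi

    # Illustrative use with placeholder inertia and momenta:
    A1, pi1 = U_rot(np.eye(3), np.array([0.1, 0.2, 0.3]),
                    np.array([1.0, 2.0, 3.0]), 0.01)
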
907   \section{\label{introSec:layout}Dissertation Layout}
908  
909 < This dissertation is divided as follows:Chapt.~\ref{chapt:RSA}
909 > This dissertation is divided as follows: Ch.~\ref{chapt:RSA}
910   presents the random sequential adsorption simulations of related
911 < pthalocyanines on a gold (111) surface. Chapt.~\ref{chapt:OOPSE}
911 > phthalocyanines on a gold (111) surface. Ch.~\ref{chapt:OOPSE}
912   is about the writing of the molecular dynamics simulation package
913 < {\sc oopse}, Chapt.~\ref{chapt:lipid} regards the simulations of
914 < phospholipid bilayers using a mesoscale model, and lastly,
915 < Chapt.~\ref{chapt:conclusion} concludes this dissertation with a
913 > {\sc oopse}. Ch.~\ref{chapt:lipid} regards the simulations of
914 > phospholipid bilayers using a mesoscale model. And lastly,
915 > Ch.~\ref{chapt:conclusion} concludes this dissertation with a
916   summary of all results. The chapters are arranged in chronological
917   order, and reflect the progression of techniques I employed during my
918   research.  
919  
920 < The chapter concerning random sequential adsorption
921 < simulations is a study in applying the principles of theoretical
922 < research in order to obtain a simple model capaable of explaining the
923 < results.  My advisor, Dr. Gezelter, and I were approached by a
924 < colleague, Dr. Lieberman, about possible explanations for partial
925 < coverge of a gold surface by a particular compound of hers. We
926 < suggested it might be due to the statistical packing fraction of disks
927 < on a plane, and set about to simulate this system.  As the events in
928 < our model were not dynamic in nature, a Monte Carlo method was
929 < emplyed.  Here, if a molecule landed on the surface without
930 < overlapping another, then its landing was accepted.  However, if there
931 < was overlap, the landing we rejected and a new random landing location
932 < was chosen.  This defined our acceptance rules and allowed us to
933 < construct a Markov chain whose limiting distribution was the surface
934 < coverage in which we were interested.
920 > The chapter concerning random sequential adsorption simulations is a
921 > study in applying Statistical Mechanics simulation techniques in order
922 > to obtain a simple model capable of explaining the results.  My
923 > advisor, Dr. Gezelter, and I were approached by a colleague,
924 > Dr. Lieberman, about possible explanations for the partial coverage of a
925 > gold surface by a particular compound of hers. We suggested it might
926 > be due to the statistical packing fraction of disks on a plane, and
927 > set about to simulate this system.  As the events in our model were
928 > not dynamic in nature, a Monte Carlo method was employed.  Here, if a
929 > molecule landed on the surface without overlapping another, then its
930 > landing was accepted.  However, if there was overlap, the landing was
931 > rejected and a new random landing location was chosen.  This defined
932 > our acceptance rules and allowed us to construct a Markov chain whose
933 > limiting distribution was the surface coverage in which we were
934 > interested.
935  
936   The following chapter, about the simulation package {\sc oopse},
937   describes in detail the large body of scientific code that had to be
938 < written in order to study phospholipid bilayer.  Although there are
938 > written in order to study phospholipid bilayers.  Although there are
939   pre-existing molecular dynamics simulation packages available, none
940   were capable of implementing the models we were developing. {\sc oopse}
941   is a unique package capable of not only integrating the equations of
942 < motion in cartesian space, but is also able to integrate the
942 > motion in Cartesian space, but is also able to integrate the
943   rotational motion of rigid bodies and dipoles.  Add to this the
944   ability to perform calculations across parallel processors and a
945   flexible script syntax for creating systems, and {\sc oopse} becomes a
946   very powerful scientific instrument for the exploration of our model.
947  
948 < Bringing us to Chapt.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
949 < able to parametrize a mesoscale model for phospholipid simulations.
950 < This model retains information about solvent ordering about the
948 > Bringing us to Ch.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
949 > able to parameterize a mesoscale model for phospholipid simulations.
950 > This model retains information about solvent ordering around the
951   bilayer, as well as information regarding the interaction of the
952 < phospholipid head groups' dipole with each other and the surrounding
952 > phospholipid head groups' dipoles with each other and the surrounding
953   solvent.  These simulations give us insight into the dynamic events
954   that lead to the formation of phospholipid bilayers, as well as
955   provide the foundation for future exploration of bilayer phase

Diff Legend

Removed lines
+ Added lines
< Changed lines
> Changed lines