
Comparing trunk/mattDisertation/Introduction.tex (file contents):
Revision 1003 by mmeineke, Mon Feb 2 21:56:16 2004 UTC vs.
Revision 1012 by mmeineke, Tue Feb 3 21:14:39 2004 UTC

# Line 6 | Line 6 | Carlo. Molecular Dynamic simulations integrate the equ
6   The techniques used in the course of this research fall under the two
7   main classes of molecular simulation: Molecular Dynamics and Monte
8   Carlo. Molecular Dynamic simulations integrate the equations of motion
9 < for a given system of particles, allowing the researher to gain
9 > for a given system of particles, allowing the researcher to gain
10   insight into the time dependent evolution of a system. Diffusion
11   phenomena are readily studied with this simulation technique, making
12   Molecular Dynamics the main simulation technique used in this
13   research. Other aspects of the research fall under the Monte Carlo
14   class of simulations. In Monte Carlo, the configuration space
15 < available to the collection of particles is sampled stochastichally,
15 > available to the collection of particles is sampled stochastically,
16   or randomly. Each configuration is chosen with a given probability
17 < based on the Maxwell Boltzman distribution. These types of simulations
17 > based on the Maxwell Boltzmann distribution. These types of simulations
18   are best used to probe properties of a system that are dependent
19   only on the state of the system. Structural information about a system
20   is most readily obtained through these types of methods.
# Line 31 | Line 31 | Statistical Mechanics concepts present in this dissert
31  
32   The following section serves as a brief introduction to some of the
33   Statistical Mechanics concepts present in this dissertation.  What
34 < follows is a brief derivation of Blotzman weighted statistics, and an
34 > follows is a brief derivation of Boltzmann weighted statistics, and an
35   explanation of how one can use the information to calculate an
36   observable for a system.  This section then concludes with a brief
37   discussion of the ergodic hypothesis and its relevance to this
38   research.
39  
40 < \subsection{\label{introSec:boltzman}Boltzman weighted statistics}
40 > \subsection{\label{introSec:boltzman}Boltzmann weighted statistics}
41  
42   Consider a system, $\gamma$, with some total energy, $E_{\gamma}$.
43   Let $\Omega(E_{\gamma})$ represent the number of degenerate ways
# Line 86 | Line 86 | S = k_B \ln \Omega(E)
86   S = k_B \ln \Omega(E)
87   \label{introEq:SM5}
88   \end{equation}
89 < Where $k_B$ is the Boltzman constant.  Having defined entropy, one can
89 > Where $k_B$ is the Boltzmann constant.  Having defined entropy, one can
90   also define the temperature of the system using the relation
91   \begin{equation}
92   \frac{1}{T} = \biggl ( \frac{\partial S}{\partial E} \biggr )_{N,V}
# Line 111 | Line 111 | is allowed to fluctuate. Returning to the previous exa
111   the canonical ensemble, the number of particles, $N$, the volume, $V$,
112   and the temperature, $T$, are all held constant while the energy, $E$,
113   is allowed to fluctuate. Returning to the previous example, the bath
114 < system is now an infinitly large thermal bath, whose exchange of
114 > system is now an infinitely large thermal bath, whose exchange of
115   energy with the system $\gamma$ holds the temperature constant.  The
116   partitioning of energy in the bath system is then related to the total
117   energy of both systems and the fluctuations in $E_{\gamma}$:
# Line 127 | Line 127 | Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{
127   \label{introEq:SM10}
128   \end{equation}
129   Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes
130 < an integration over all accessable phase space, $P_{\gamma}$ is the
130 > an integration over all accessible phase space, $P_{\gamma}$ is the
131   probability of being in a given phase state and
132   $A(\boldsymbol{\Gamma})$ is some observable that is a function of the
133   phase state.
# Line 156 | Line 156 | P_{\gamma} \propto e^{-\beta E_{\gamma}}
156   P_{\gamma} \propto e^{-\beta E_{\gamma}}
157   \label{introEq:SM13}
158   \end{equation}
159 < Where $\ln \Omega(E)$ has been factored out of the porpotionality as a
159 > Where $\ln \Omega(E)$ has been factored out of the proportionality as a
160   constant.  Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
161   d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
162   \begin{equation}
# Line 164 | Line 164 | P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
164   {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
165   \label{introEq:SM14}
166   \end{equation}
167 < This result is the standard Boltzman statistical distribution.
167 > This result is the standard Boltzmann statistical distribution.
168   Applying it to Eq.~\ref{introEq:SM10} one can obtain the following relation for ensemble averages:
169   \begin{equation}
170   \langle A \rangle =
# Line 182 | Line 182 | ergodicity allows the unification of a time averaged o
182   systems, this is a valid assumption, except in cases where the system
183   may be trapped in a local feature (\emph{e.g.}~glasses). When valid,
184   ergodicity allows the unification of a time averaged observation and
185 < an ensemble averged one. If an observation is averaged over a
185 > an ensemble averaged one. If an observation is averaged over a
186   sufficiently long time, the system is assumed to visit all
187   appropriately available points in phase space, giving a properly
188   weighted statistical average. This allows the researcher freedom of
# Line 217 | Line 217 | the value of $f(x)$ at each point. The accumulated ave
217   $[a,b]$. The integral could then be estimated by
218   randomly choosing points along the interval $[a,b]$ and calculating
219   the value of $f(x)$ at each point. The accumulated average would then
220 < approach $I$ in the limit where the number of trials is infintely
220 > approach $I$ in the limit where the number of trials is infinitely
221   large.
222  
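The brute-force scheme above can be sketched in a few lines of Python (the integrand, interval, and trial count below are illustrative choices, not from the text):

```python
import random

def mc_integrate(f, a, b, n_trials=100_000, seed=42):
    """Brute-force Monte Carlo estimate of the integral of f over [a, b]."""
    rng = random.Random(seed)
    # Accumulate f(x) at uniformly chosen points; the average times the
    # interval width approaches the integral as n_trials grows.
    total = 0.0
    for _ in range(n_trials):
        x = rng.uniform(a, b)
        total += f(x)
    return (b - a) * total / n_trials

# Example: the integral of x^2 on [0, 1] is exactly 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
```

The estimate converges as $1/\sqrt{N}$, which is why the text speaks of the limit of an infinitely large number of trials.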
223   However, in Statistical Mechanics, one is typically interested in
# Line 235 | Line 235 | brute force method to this system would yield highly i
235   momentum. Therefore the momentum contribution of the integral can be
236   factored out, leaving the configurational integral. Application of the
237   brute force method to this system would yield highly inefficient
238 < results. Due to the Boltzman weighting of this integral, most random
238 > results. Due to the Boltzmann weighting of this integral, most random
239   configurations will have a near zero contribution to the ensemble
240   average. This is where importance sampling comes into
241   play.\cite{allen87:csl}
# Line 263 | Line 263 | Looking at Eq.~\ref{eq:mcEnsAvg}, and realizing
263          {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
264   \label{introEq:MCboltzman}
265   \end{equation}
266 < Where $\rho_{kT}$ is the boltzman distribution.  The ensemble average
266 > Where $\rho_{kT}$ is the Boltzmann distribution.  The ensemble average
267   can be rewritten as
268   \begin{equation}
269   \langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
# Line 285 | Line 285 | sampled from the distribution $\rho_{kT}(\mathbf{r}^N)
285   \end{equation}
286   The difficulty is selecting points $\mathbf{r}^N$ such that they are
287   sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$.  A solution
288 < was proposed by Metropolis et al.\cite{metropolis:1953} which involved
288 > was proposed by Metropolis \emph{et al.}\cite{metropolis:1953}, which involved
289   the use of a Markov chain whose limiting distribution was
290   $\rho_{kT}(\mathbf{r}^N)$.
291  
# Line 297 | Line 297 | conditions:\cite{leach01:mm}
297   \item The outcome of each trial depends only on the outcome of the previous trial.
298   \item Each trial belongs to a finite set of outcomes called the state space.
299   \end{enumerate}
300 < If given two configuartions, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
301 < $\rho_m$ and $\rho_n$ are the probablilities of being in state
300 > Given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
301 > $\rho_m$ and $\rho_n$ are the probabilities of being in state
302   $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively.  Further, the two
303   states are linked by a transition probability, $\pi_{mn}$, which is the
304   probability of going from state $m$ to state $n$.
# Line 343 | Line 343 | Eq.~\ref{introEq:MCmarkovEquil} is solved such that
343  
344   In the Metropolis method\cite{metropolis:1953}
345   Eq.~\ref{introEq:MCmarkovEquil} is solved such that
346 < $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzman distribution
346 > $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
347   of states.  The method accomplishes this by imposing the strong
348   condition of microscopic reversibility on the equilibrium
349   distribution.  That is, at equilibrium the probability of going
# Line 352 | Line 352 | from $m$ to $n$ is the same as going from $n$ to $m$.
352   \rho_m\pi_{mn} = \rho_n\pi_{nm}
353   \label{introEq:MCmicroReverse}
354   \end{equation}
355 < Further, $\boldsymbol{\alpha}$ is chosen to be a symetric matrix in
355 > Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
356   the Metropolis method.  Using Eq.~\ref{introEq:MCpi},
357   Eq.~\ref{introEq:MCmicroReverse} becomes
358   \begin{equation}
# Line 360 | Line 360 | Eq.~\ref{introEq:MCmicroReverse} becomes
360          \frac{\rho_n}{\rho_m}
361   \label{introEq:MCmicro2}
362   \end{equation}
363 < For a Boltxman limiting distribution,
363 > For a Boltzmann limiting distribution,
364   \begin{equation}
365   \frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
366          = e^{-\beta \Delta \mathcal{U}}
# Line 380 | Line 380 | Metropolis method proceeds as follows
380   Metropolis method proceeds as follows
381   \begin{enumerate}
382   \item Generate an initial configuration $\mathbf{r}^N$ which has some finite probability in $\rho_{kT}$.
383 < \item Modify $\mathbf{r}^N$, to generate configuratioon $\mathbf{r^{\prime}}^N$.
383 > \item Modify $\mathbf{r}^N$, to generate configuration $\mathbf{r^{\prime}}^N$.
384   \item If the new configuration lowers the energy of the system, accept the move with unit probability ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$).  Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
385 < \item Accumulate the average for the configurational observable of intereest.
385 > \item Accumulate the average for the configurational observable of interest.
386   \item Repeat from step 2 until the average converges.
387   \end{enumerate}
388   One important note is that the average is accumulated whether the move
389   is accepted or not; this ensures proper weighting of the average.
390   Using Eq.~\ref{introEq:Importance4} it becomes clear that the
391   accumulated averages are the ensemble averages, as this method ensures
392 < that the limiting distribution is the Boltzman distribution.
392 > that the limiting distribution is the Boltzmann distribution.
393  
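As a sketch only, the enumerated Metropolis steps can be written out for a single coordinate in a harmonic potential (the potential, step size, and $\beta$ below are illustrative assumptions, not from the text):

```python
import math
import random

def metropolis_average(potential, observable, beta=1.0, step=0.5,
                       n_steps=50_000, seed=7):
    """Metropolis sampling: average an observable over a Markov chain
    whose limiting distribution is the Boltzmann distribution."""
    rng = random.Random(seed)
    x = 0.0                                   # 1. initial configuration
    total = 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)  # 2. trial configuration
        dU = potential(x_new) - potential(x)
        # 3. accept downhill moves outright, uphill with prob exp(-beta*dU)
        if dU <= 0.0 or rng.random() < math.exp(-beta * dU):
            x = x_new
        total += observable(x)                # 4. accumulate either way
    return total / n_steps                    # 5. converges for long runs

# For U(x) = x^2/2 at beta = 1, the exact ensemble average <x^2> is 1.
mean_x2 = metropolis_average(lambda x: 0.5 * x * x, lambda x: x * x)
```

Note that, as the text stresses, the observable is accumulated on every step whether or not the trial move is accepted.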
394   \section{\label{introSec:MD}Molecular Dynamics Simulations}
395  
# Line 409 | Line 409 | However, when the observable is dependent only on the
409   researcher is interested.  If the observables depend on momenta in
410   any fashion, then the only choice is molecular dynamics in some form.
411   However, when the observable is dependent only on the configuration,
412 < then most of the time Monte Carlo techniques will be more efficent.
412 > then most of the time Monte Carlo techniques will be more efficient.
413  
414   The focus of research in the second half of this dissertation is
415   centered around the dynamic properties of phospholipid bilayers,
# Line 432 | Line 432 | positions were selected that in some cases dispersed t
432   Ch.~\ref{chapt:lipid} deals with the formation and equilibrium dynamics of
433   phospholipid membranes.  Therefore in these simulations initial
434   positions were selected that in some cases dispersed the lipids in
435 < water, and in other cases structured the lipids into preformed
435 > water, and in other cases structured the lipids into preformed
436   bilayers.  Important considerations at this stage of the simulation are:
437   \begin{itemize}
438   \item There are no major overlaps of molecular or atomic orbitals
439 < \item Velocities are chosen in such a way as to not gie the system a non=zero total momentum or angular momentum.
440 < \item It is also sometimes desireable to select the velocities to correctly sample the target temperature.
439 > \item Velocities are chosen so as not to give the system a non-zero total momentum or angular momentum.
440 > \item It is also sometimes desirable to select the velocities to correctly sample the target temperature.
441   \end{itemize}
442  
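The second and third points can be sketched as follows (the particle count, mass, and reduced temperature units here are illustrative assumptions, not values from the text):

```python
import math
import random

def initial_velocities(n, mass=1.0, kB_T=1.0, seed=3):
    """Draw velocities from a Maxwell-Boltzmann distribution, remove the
    net momentum, then rescale so the kinetic energy matches the target
    temperature (in 3-D, (3/2) kB*T per particle)."""
    rng = random.Random(seed)
    sigma = math.sqrt(kB_T / mass)
    v = [[rng.gauss(0.0, sigma) for _ in range(3)] for _ in range(n)]
    # Subtract the center-of-mass velocity so the total momentum is zero.
    for k in range(3):
        v_cm = sum(vi[k] for vi in v) / n
        for vi in v:
            vi[k] -= v_cm
    # Rescale the kinetic energy to the desired thermal distribution.
    ke = 0.5 * mass * sum(vi[0]**2 + vi[1]**2 + vi[2]**2 for vi in v)
    scale = math.sqrt(1.5 * n * kB_T / ke)
    return [[scale * c for c in vi] for vi in v]

vels = initial_velocities(100)
```
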
443   The first point is important due to the amount of potential energy
444   generated by having two particles too close together.  If overlap
445   occurs, the first evaluation of forces will return numbers so large as
446 < to render the numerical integration of teh motion meaningless.  The
446 > to render the numerical integration of the motion meaningless.  The
447   second consideration keeps the system from drifting or rotating as a
448   whole.  This arises from the fact that most simulations are of systems
449   in equilibrium in the absence of outside forces.  Therefore any net
450   movement would be unphysical and an artifact of the simulation method
451   used.  The final point addresses the selection of the magnitude of the
452 < initial velocities.  For many simulations it is convienient to use
452 > initial velocities.  For many simulations it is convenient to use
453   this opportunity to scale the amount of kinetic energy to reflect the
454   desired thermal distribution of the system.  However, it must be noted
455   that most systems will require further velocity rescaling after the
# Line 474 | Line 474 | group in a vacuum, the behavior of the system will be
474   arranged in a $10 \times 10 \times 10$ cube, 488 particles will be
475   exposed to the surface.  Unless one is simulating an isolated particle
476   group in a vacuum, the behavior of the system will be far from the
477 < desired bulk charecteristics.  To offset this, simulations employ the
477 > desired bulk characteristics.  To offset this, simulations employ the
478   use of periodic boundary images.\cite{born:1912}
479  
480   The technique involves the use of an algorithm that replicates the
481 < simulation box on an infinite lattice in cartesian space.  Any given
481 > simulation box on an infinite lattice in Cartesian space.  Any given
482   particle leaving the simulation box on one side will have an image of
483   itself enter on the opposite side (see Fig.~\ref{introFig:pbc}).  In
484 < addition, this sets that any given particle pair has an image, real or
485 < periodic, within $fix$ of each other.  A discussion of the method used
486 < to calculate the periodic image can be found in
484 > addition, this ensures that any two particles have an image, real or
485 > periodic, within $\text{box}/2$ of each other.  A discussion of the
486 > method used to calculate the periodic image can be found in
487   Sec.\ref{oopseSec:pbc}.
488  
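A minimal sketch of the wrap for a cubic box follows (this simple version is an assumption for illustration; the method actually used is the one described in Sec.\ref{oopseSec:pbc}):

```python
def minimum_image(dx, box):
    """Wrap a coordinate difference onto the nearest image, real or
    periodic, so that |result| <= box/2."""
    return dx - box * round(dx / box)

# A particle leaving one side re-enters on the other: a separation of
# 9.0 in a box of length 10.0 is really a separation of -1.0.
wrapped = minimum_image(9.0, 10.0)
```
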
489   \begin{figure}
490   \centering
491   \includegraphics[width=\linewidth]{pbcFig.eps}
492 < \caption[An illustration of periodic boundry conditions]{A 2-D illustration of periodic boundry conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
492 > \caption[An illustration of periodic boundary conditions]{A 2-D illustration of periodic boundary conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
493   \label{introFig:pbc}
494   \end{figure}
495  
# Line 498 | Line 498 | predetermined distance, $r_{\text{cut}}$, are not incl
498   cutoff radius be employed.  Using a cutoff radius improves the
499   efficiency of the force evaluation, as particles farther than a
500   predetermined distance, $r_{\text{cut}}$, are not included in the
501 < calculation.\cite{Frenkel1996} In a simultation with periodic images,
501 > calculation.\cite{Frenkel1996} In a simulation with periodic images,
502   $r_{\text{cut}}$ has a maximum value of $\text{box}/2$.
503   Fig.~\ref{introFig:rMax} illustrates how, when using an
504   $r_{\text{cut}}$ larger than this value, or in the extreme limit of no
505   $r_{\text{cut}}$ at all, the corners of the simulation box are
506   unequally weighted due to the lack of particle images in the $x$, $y$,
507 < or $z$ directions past a disance of $\text{box} / 2$.
507 > or $z$ directions past a distance of $\text{box} / 2$.
508  
509   \begin{figure}
510   \centering
511   \includegraphics[width=\linewidth]{rCutMaxFig.eps}
512 < \caption
512 > \caption[An explanation of $r_{\text{cut}}$]{The yellow atom has all other images wrapped to itself as the center. If $r_{\text{cut}}=\text{box}/2$, then the distribution is uniform (blue atoms). However, when $r_{\text{cut}}>\text{box}/2$ the corners are disproportionately weighted (green atoms) vs the axial directions (shaded regions).}
513   \label{introFig:rMax}
514   \end{figure}
515  
516 < With the use of an $fix$, however, comes a discontinuity in the
517 < potential energy curve (Fig.~\ref{fix}). To fix this discontinuity,
518 < one calculates the potential energy at the $r_{\text{cut}}$, and add
519 < that value to the potential.  This causes the function to go smoothly
520 < to zero at the cutoff radius.  This ensures conservation of energy
521 < when integrating the Newtonian equations of motion.
516 > With the use of an $r_{\text{cut}}$, however, comes a discontinuity in
517 > the potential energy curve (Fig.~\ref{introFig:shiftPot}). To fix this
518 > discontinuity, one calculates the potential energy at the
519 > $r_{\text{cut}}$, and adds that value to the potential, causing
520 > the function to go smoothly to zero at the cutoff radius.  This
521 > shifted potential ensures conservation of energy when integrating the
522 > Newtonian equations of motion.
523  
524 + \begin{figure}
525 + \centering
526 + \includegraphics[width=\linewidth]{shiftedPot.eps}
527 + \caption[Shifting the Lennard-Jones Potential]{The Lennard-Jones potential (blue line) is shifted (red line) to remove the discontinuity at $r_{\text{cut}}$.}
528 + \label{introFig:shiftPot}
529 + \end{figure}
530 +
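The shift can be sketched for the Lennard-Jones potential of the figure above (with $\epsilon = \sigma = 1$ purely for illustration):

```python
def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_shifted(r, r_cut, eps=1.0, sigma=1.0):
    """Truncate at r_cut and shift by the potential's value at the
    cutoff, so the shifted potential goes to zero there and no energy
    jump occurs when a pair crosses r_cut."""
    if r >= r_cut:
        return 0.0
    return lj(r, eps, sigma) - lj(r_cut, eps, sigma)

# At the cutoff radius the shifted potential vanishes by construction.
v_at_cut = lj_shifted(2.5, 2.5)
```
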
531   The second main simplification used in this research is the Verlet
532   neighbor list.\cite{allen87:csl} In the Verlet method, one generates
533   a list of all neighbor atoms, $j$, surrounding atom $i$ within some
# Line 534 | Line 542 | A starting point for the discussion of molecular dynam
542   \subsection{\label{introSec:mdIntegrate} Integration of the equations of motion}
543  
544   A starting point for the discussion of molecular dynamics integrators
545 < is the Verlet algorithm. \cite{Frenkel1996} It begins with a Taylor
545 > is the Verlet algorithm.\cite{Frenkel1996} It begins with a Taylor
546   expansion of position in time:
547   \begin{equation}
548 < eq here
548 > q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 +
549 >        \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
550 >        \mathcal{O}(\Delta t^4)
551   \label{introEq:verletForward}
552   \end{equation}
553   As well as,
554   \begin{equation}
555 < eq here
555 > q(t-\Delta t)= q(t) - v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 -
556 >        \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
557 >        \mathcal{O}(\Delta t^4)
558   \label{introEq:verletBack}
559   \end{equation}
560 < Adding together Eq.~\ref{introEq:verletForward} and
560 > Where $m$ is the mass of the particle, $q(t)$ is the position at time
561 > $t$, $v(t)$ the velocity, and $F(t)$ the force acting on the
562 > particle. Adding together Eq.~\ref{introEq:verletForward} and
563   Eq.~\ref{introEq:verletBack} results in,
564   \begin{equation}
565 < eq here
565 > q(t+\Delta t)+q(t-\Delta t) =
566 >        2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
567   \label{introEq:verletSum}
568   \end{equation}
569   Or equivalently,
570   \begin{equation}
571 < eq here
571 > q(t+\Delta t) \approx
572 >        2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2
573   \label{introEq:verletFinal}
574   \end{equation}
575   Which contains an error in the estimate of the new positions on the
576   order of $\Delta t^4$.
577  
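Eq.~\ref{introEq:verletFinal} maps directly onto an update rule; a minimal sketch (the harmonic force law, mass, and step size are illustrative choices):

```python
def verlet_step(q, q_prev, force, m, dt):
    """Advance the position via q(t+dt) = 2q(t) - q(t-dt) + F(t)/m dt^2."""
    return 2.0 * q - q_prev + (force(q) / m) * dt * dt

# Harmonic oscillator F = -q with m = 1, started near rest at q = 1:
# the discrete trajectory should remain bounded by the amplitude.
dt = 0.01
q_prev, q = 1.0, 1.0
for _ in range(1000):
    q, q_prev = verlet_step(q, q_prev, lambda x: -x, 1.0, dt), q
```

Only positions appear in the update; velocities, if needed, must be estimated from finite differences of the stored positions.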
578   In practice, however, the simulations in this research were integrated
579 < with a velocity reformulation of teh Verlet method.\cite{allen87:csl}
580 < \begin{equation}
581 < eq here
582 < \label{introEq:MDvelVerletPos}
583 < \end{equation}
584 < \begin{equation}
569 < eq here
579 > with a velocity reformulation of the Verlet method.\cite{allen87:csl}
580 > \begin{align}
581 > q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 %
582 > \label{introEq:MDvelVerletPos} \\%
583 > %
584 > v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] %
585   \label{introEq:MDvelVerletVel}
586 < \end{equation}
586 > \end{align}
587   The original Verlet algorithm can be regained by substituting the
588   velocity back into Eq.~\ref{introEq:MDvelVerletPos}.  The Verlet
589   formulations are chosen in this research because the algorithms have
590   very little long term drift in energy conservation.  Energy
591   conservation in a molecular dynamics simulation is of extreme
592   importance, as it is a measure of how closely one is following the
593 < ``true'' trajectory wtih the finite integration scheme.  An exact
593 > ``true'' trajectory with the finite integration scheme.  An exact
594   solution to the integration will conserve area in phase space, as well
595   as be reversible in time, that is, the trajectory integrated forward
596   or backwards will exactly match itself.  Having a finite algorithm
# Line 586 | Line 601 | a pseudo-Hamiltonian which shadows the real one in pha
601   It can be shown,\cite{Frenkel1996} that although the Verlet algorithm
602   does not rigorously preserve the actual Hamiltonian, it does preserve
603   a pseudo-Hamiltonian which shadows the real one in phase space.  This
604 < pseudo-Hamiltonian is proveably area-conserving as well as time
604 > pseudo-Hamiltonian is provably area-conserving as well as time
605   reversible.  The fact that it shadows the true Hamiltonian in phase
606   space is acceptable in actual simulations as one is interested in the
607   ensemble average of the observable being measured.  From the ergodic
608 < hypothesis (Sec.~\ref{introSec:StatThermo}), it is known that the time
608 > hypothesis (Sec.~\ref{introSec:statThermo}), it is known that the time
609   average will match the ensemble average, therefore two similar
610   trajectories in phase space should give matching statistical averages.
611  
612   \subsection{\label{introSec:MDfurther}Further Considerations}
613 +
614   In the simulations presented in this research, a few additional
615   parameters are needed to describe the motions.  The simulations
616 < involving water and phospholipids in Chapt.~\ref{chaptLipids} are
616 > involving water and phospholipids in Ch.~\ref{chapt:lipid} are
617   required to integrate the equations of motion for dipoles on atoms.
618   This requires that an additional three parameters be specified for each
619   dipole atom: $\phi$, $\theta$, and $\psi$.  These three angles are
620   taken to be the Euler angles, where $\phi$ is a rotation about the
621   $z$-axis, and $\theta$ is a rotation about the new $x$-axis, and
622   $\psi$ is a final rotation about the new $z$-axis (see
623 < Fig.~\ref{introFig:euleerAngles}).  This sequence of rotations can be
624 < accumulated into a single $3 \times 3$ matrix $\mathbf{A}$
623 > Fig.~\ref{introFig:eulerAngles}).  This sequence of rotations can be
624 > accumulated into a single $3 \times 3$ matrix, $\mathbf{A}$,
625   defined as follows:
626   \begin{equation}
627 < eq here
627 > \mathbf{A} =
628 > \begin{bmatrix}
629 >        \cos\phi\cos\psi-\sin\phi\cos\theta\sin\psi &%
630 >        \sin\phi\cos\psi+\cos\phi\cos\theta\sin\psi &%
631 >        \sin\theta\sin\psi \\%
632 >        %
633 >        -\cos\phi\sin\psi-\sin\phi\cos\theta\cos\psi &%
634 >        -\sin\phi\sin\psi+\cos\phi\cos\theta\cos\psi &%
635 >        \sin\theta\cos\psi \\%
636 >        %
637 >        \sin\phi\sin\theta &%
638 >        -\cos\phi\sin\theta &%
639 >        \cos\theta
640 > \end{bmatrix}
641   \label{introEq:EulerRotMat}
642   \end{equation}
643  
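As a quick numerical check, the matrix of Eq.~\ref{introEq:EulerRotMat} can be built and verified to be a proper rotation, orthogonal with unit determinant (the angles below are arbitrary):

```python
import math

def euler_matrix(phi, theta, psi):
    """Rotation matrix A for the z, new-x, new-z Euler angle sequence,
    following the entries of Eq. introEq:EulerRotMat."""
    cph, sph = math.cos(phi), math.sin(phi)
    cth, sth = math.cos(theta), math.sin(theta)
    cps, sps = math.cos(psi), math.sin(psi)
    return [
        [cph * cps - sph * cth * sps,  sph * cps + cph * cth * sps,  sth * sps],
        [-cph * sps - sph * cth * cps, -sph * sps + cph * cth * cps, sth * cps],
        [sph * sth,                    -cph * sth,                   cth],
    ]

A = euler_matrix(0.3, 1.1, -0.7)
```
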
644 < The equations of motion for Euler angles can be written down as
645 < \cite{allen87:csl}
646 < \begin{equation}
647 < eq here
648 < \label{introEq:MDeuleeerPsi}
649 < \end{equation}
644 > The equations of motion for Euler angles can be written down
645 > as\cite{allen87:csl}
646 > \begin{align}
647 > \dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} +
648 >        \omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} +
649 >        \omega^s_z
650 > \label{introEq:MDeulerPhi} \\%
651 > %
652 > \dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi
653 > \label{introEq:MDeulerTheta} \\%
654 > %
655 > \dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} -
656 >        \omega^s_y \frac{\cos\phi}{\sin\theta}
657 > \label{introEq:MDeulerPsi}
658 > \end{align}
659   Where $\omega^s_i$ is the angular velocity in the lab space frame
660 < along cartesian coordinate $i$.  However, a difficulty arises when
660 > along Cartesian coordinate $i$.  However, a difficulty arises when
661   attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
662   Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
663   both equations means there is a non-physical instability when
664 < $\theta$ is 0 or $\pi$.
665 <
666 < To correct for this, the simulations integrate the rotation matrix,
667 < $\mathbf{A}$, directly, thus avoiding the instability.
630 < This method was proposed by Dullwebber
631 < \emph{et. al.}\cite{Dullwebber:1997}, and is presented in
664 > $\theta$ is 0 or $\pi$. To correct for this, the simulations integrate
665 > the rotation matrix, $\mathbf{A}$, directly, thus avoiding the
666 > instability.  This method was proposed by Dullweber
667 > \emph{et al.}\cite{Dullweber1997}, and is presented in
668   Sec.~\ref{introSec:MDsymplecticRot}.
669  
670 < \subsubsection{\label{introSec:MDliouville}Liouville Propagator}
670 > \subsection{\label{introSec:MDliouville}Liouville Propagator}
671  
672   Before discussing the integration of the rotation matrix, it is
673   necessary to understand the construction of a ``good'' integration
674   scheme.  It has been previously
675 < discussed(Sec.~\ref{introSec:MDintegrate}) how it is desirable for an
675 > discussed (Sec.~\ref{introSec:mdIntegrate}) how it is desirable for an
676   integrator to be symplectic, or time reversible.  The following is an
677   outline of the Trotter factorization of the Liouville Propagator as a
678 < scheme for generating symplectic integrators. \cite{Tuckerman:1992}
678 > scheme for generating symplectic integrators.\cite{Tuckerman92}
679  
680   For a system with $f$ degrees of freedom the Liouville operator can be
681   defined as,
682   \begin{equation}
683 < eq here
683 > iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} +
684 >        F_j\frac{\partial}{\partial p_j} \biggr ]
685   \label{introEq:LiouvilleOperator}
686   \end{equation}
687 < Here, $r_j$ and $p_j$ are the position and conjugate momenta of a
688 < degree of freedom, and $f_j$ is the force on that degree of freedom.
689 < $\Gamma$ is defined as the set of all positions nad conjugate momenta,
690 < $\{r_j,p_j\}$, and the propagator, $U(t)$, is defined
687 > Here, $q_j$ and $p_j$ are the position and conjugate momenta of a
688 > degree of freedom, and $F_j$ is the force on that degree of freedom.
689 > $\Gamma$ is defined as the set of all positions and conjugate momenta,
690 > $\{q_j,p_j\}$, and the propagator, $U(t)$, is defined
691   \begin {equation}
692 < eq here
692 > U(t) = e^{iLt}
693   \label{introEq:Lpropagator}
694   \end{equation}
695   This allows the specification of $\Gamma$ at any time $t$ as
696   \begin{equation}
697 < eq here
697 > \Gamma(t) = U(t)\Gamma(0)
698   \label{introEq:Lp2}
699   \end{equation}
700   It is important to note that $U(t)$ is a unitary operator, meaning
# Line 668 | Line 705 | Trotter theorem to yield
705  
706   Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
707   Trotter theorem to yield
708 < \begin{equation}
709 < eq here
710 < \label{introEq:Lp4}
711 < \end{equation}
712 < Where $\Delta t = \frac{t}{P}$.
708 > \begin{align}
709 > e^{iLt} &= e^{i(L_1 + L_2)t} \notag \\%
710 > %
711 >        &= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P \notag \\%
712 > %
713 >        &= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
714 >        e^{iL_1\frac{\Delta t}{2}} \biggr ]^P +
715 >        \mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ) \label{introEq:Lp4}
716 > \end{align}
717 > Where $\Delta t = t/P$.
718   With this, a discrete time operator $G(\Delta t)$ can be defined:
719 < \begin{equation}
720 < eq here
719 > \begin{align}
720 > G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
721 >        e^{iL_1\frac{\Delta t}{2}} \notag \\%
722 > %
723 >        &= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\,
724 >        U_1 \biggl ( \frac{\Delta t}{2} \biggr )
725   \label{introEq:Lp5}
726 < \end{equation}
727 < Because $U_1(t)$ and $U_2(t)$ are unitary, $G|\Delta t)$ is also
726 > \end{align}
727 > Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
728   unitary.  This means an integrator based on this factorization will be
729   reversible in time.
730  
731   As an example, consider the following decomposition of $L$:
732 + \begin{align}
733 + iL_1 &= \dot{q}\frac{\partial}{\partial q}%
734 + \label{introEq:Lp6a} \\%
735 + %
736 + iL_2 &= F(q)\frac{\partial}{\partial p}%
737 + \label{introEq:Lp6b}
738 + \end{align}
739 + This leads to the propagator $G(\Delta t)$,
740   \begin{equation}
741 < eq here
742 < \label{introEq:Lp6}
741 > G(\Delta t) =  e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
742 >        e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \,
743 >        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
744 > \label{introEq:Lp7}
745   \end{equation}
746 < Operating $G(\Delta t)$ on $\Gamma)0)$, and utilizing the operator property
746 > Operating $G(\Delta t)$ on $\Gamma(0)$, and utilizing the operator property
747   \begin{equation}
748 < eq here
748 > e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c)
749   \label{introEq:Lp8}
750   \end{equation}
751 < Where $c$ is independent of $q$.  One obtains the following:  
752 < \begin{equation}
753 < eq here
754 < \label{introEq:Lp8}
755 < \end{equation}
751 > Where $c$ is independent of $x$.  One obtains the following:  
752 > \begin{align}
753 > \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
754 >        \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9a}\\%
755 > %
756 > q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr )%
757 >        \label{introEq:Lp9b}\\%
758 > %
759 > \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
760 >        \frac{\Delta t}{2m}\, F[q(\Delta t)] \label{introEq:Lp9c}
761 > \end{align}
762   Or written another way,
763 < \begin{equation}
764 < eq here
765 < \label{intorEq:Lp9}
766 < \end{equation}
763 > \begin{align}
764 > q(t+\Delta t) &= q(0) + \dot{q}(0)\Delta t +
765 >        \frac{F[q(0)]}{m}\frac{\Delta t^2}{2} %
766 > \label{introEq:Lp10a} \\%
767 > %
768 > \dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
769 >        \biggl [F[q(0)] + F[q(\Delta t)] \biggr] %
770 > \label{introEq:Lp10b}
771 > \end{align}
772   This is the velocity Verlet formulation presented in
773 < Sec.~\ref{introSec:MDintegrate}.  Because this integration scheme is
773 > Sec.~\ref{introSec:mdIntegrate}.  Because this integration scheme is
774   comprised of unitary propagators, it is symplectic, and therefore area
775 < preserving in phase space.  From the preceeding fatorization, one can
775 > preserving in phase space.  From the preceding factorization, one can
776   see that the integration of the equations of motion would follow:
777   \begin{enumerate}
778   \item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
# Line 717 | Line 784 | see that the integration of the equations of motion wo
784   \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
785   \end{enumerate}
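The enumerated scheme can be sketched as a short function. This is a hedged illustration of mine, not code from the dissertation or from {\sc oopse}; the harmonic force $F(q) = -kq$ is a made-up test force used only to exercise the integrator.

```python
# One velocity Verlet step following the enumerated scheme above.

def velocity_verlet_step(q, v, f, m, dt, force):
    v_half = v + (dt / (2 * m)) * f          # half-step velocity from old force
    q_new = q + dt * v_half                  # full-step position update
    f_new = force(q_new)                     # force at the new position
    v_new = v_half + (dt / (2 * m)) * f_new  # finish velocity with new force
    return q_new, v_new, f_new

# Hypothetical harmonic oscillator, F(q) = -k q, for illustration only
k, m, dt = 1.0, 1.0, 0.01
force = lambda q: -k * q
q, v = 1.0, 0.0
f = force(q)
for _ in range(1000):
    q, v, f = velocity_verlet_step(q, v, f, m, dt, force)

# Because the scheme is symplectic, the oscillator energy stays very
# close to its initial value of 0.5 over many steps.
energy = 0.5 * m * v * v + 0.5 * k * q * q
```

The near-conservation of `energy` over long runs is the practical signature of the area-preserving property discussed above.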
786  
787 < \subsubsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}
787 > \subsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}
788  
789   Based on the factorization from the previous section,
790 < Dullweber\emph{et al.}\cite{Dullweber:1997}~ proposed a scheme for the
790 > Dullweber \emph{et al.}\cite{Dullweber1997} proposed a scheme for the
791   symplectic propagation of the rotation matrix, $\mathbf{A}$, as an
792   alternative method for the integration of orientational degrees of
793   freedom. The method starts with a straightforward splitting of the
794   Liouville operator:
795 < \begin{equation}
796 < eq here
797 < \label{introEq:SR1}
798 < \end{equation}
799 < Where $\boldsymbol{\tau}(\mathbf{A})$ are the tourques of the system
800 < due to the configuration, and $\boldsymbol{/pi}$ are the conjugate
795 > \begin{align}
796 > iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} +
797 >        \mathbf{\dot{A}}\frac{\partial}{\partial \mathbf{A}}
798 > \label{introEq:SR1a} \\%
799 > %
800 > iL_F &= F(q)\frac{\partial}{\partial p}
801 > \label{introEq:SR1b} \\%
802 > iL_{\tau} &= \tau(\mathbf{A})\frac{\partial}{\partial \pi}
803 > \label{introEq:SR1c}
804 > \end{align}
805 > Where $\tau(\mathbf{A})$ is the torque of the system
806 > due to the configuration, and $\pi$ is the conjugate
807   angular momentum of the system. The propagator, $G(\Delta t)$, becomes
808   \begin{equation}
809 < eq here
809 > G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
810 >        e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
811 >        e^{\Delta t\,iL_{\text{pos}}} \,
812 >        e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
813 >        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
814   \label{introEq:SR2}
815   \end{equation}
816 < Propagation fo the linear and angular momenta follows as in the Verlet
817 < scheme.  The propagation of positions also follows the verlet scheme
816 > Propagation of the linear and angular momenta follows as in the Verlet
817 > scheme.  The propagation of positions also follows the Verlet scheme
818   with the addition of a further symplectic splitting of the rotation
819 < matrix propagation, $\mathcal{G}_{\text{rot}}(\Delta t)$.
819 > matrix propagation, $\mathcal{U}_{\text{rot}}(\Delta t)$, within
820 > $U_{\text{pos}}(\Delta t)$.
821   \begin{equation}
822 < eq here
822 > \mathcal{U}_{\text{rot}}(\Delta t) =
823 >        \mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
824 >        \mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
825 >        \mathcal{U}_z (\Delta t)\,
826 >        \mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
827 >        \mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
828   \label{introEq:SR3}
829   \end{equation}
830 < Where $\mathcal{G}_j$ is a unitary rotation of $\mathbf{A}$ and
831 < $\boldsymbol{\pi}$ about each axis $j$.  As all propagations are now
830 > Where $\mathcal{U}_j$ is a unitary rotation of $\mathbf{A}$ and
831 > $\pi$ about each axis $j$.  As all propagations are now
832   unitary and symplectic, the entire integration scheme is also
833   symplectic and time reversible.
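The axis-by-axis splitting can be sketched for a torque-free rigid rotor. The code below is a simplification of mine, not the Dullweber et al. implementation: each factor of Eq.~\ref{introEq:SR3} is an exact rotation of both $\mathbf{A}$ and the body-frame angular momentum $\pi$ about one body axis, so $\mathbf{A}$ stays orthogonal at every step; the inertia values and time step are arbitrary illustration numbers.

```python
from math import cos, sin

# Free-rotor propagation by the symmetric splitting
# U_x(dt/2) U_y(dt/2) U_z(dt) U_y(dt/2) U_x(dt/2).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def axis_rotation(axis, angle):
    c, s = cos(angle), sin(angle)
    if axis == 0:
        return [[1, 0, 0], [0, c, s], [0, -s, c]]
    if axis == 1:
        return [[c, 0, -s], [0, 1, 0], [s, 0, c]]
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def free_rotor_step(A, pi, I, dt):
    # The rotation angle about axis j is generated by pi_j / I_j
    for axis, h in ((0, dt/2), (1, dt/2), (2, dt), (1, dt/2), (0, dt/2)):
        R = axis_rotation(axis, h * pi[axis] / I[axis])
        A, pi = matmul(R, A), matvec(R, pi)
    return A, pi

A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
pi = [0.3, -0.2, 0.5]
I = [1.0, 2.0, 3.0]       # made-up principal moments of inertia
for _ in range(2000):
    A, pi = free_rotor_step(A, pi, I, 0.01)

# Orthogonality drift of A after 10000 elementary rotations stays at
# machine-precision level, since every factor is a pure rotation.
At = [[A[j][i] for j in range(3)] for i in range(3)]
AAt = matmul(A, At)
drift = max(abs(AAt[i][j] - (1.0 if i == j else 0.0))
            for i in range(3) for j in range(3))
```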
834  
835   \section{\label{introSec:layout}Dissertation Layout}
836  
837 < This dissertation is divided as follows:Chapt.~\ref{chapt:RSA}
837 > This dissertation is divided as follows: Ch.~\ref{chapt:RSA}
838   presents the random sequential adsorption simulations of related
839 < pthalocyanines on a gold (111) surface. Chapt.~\ref{chapt:OOPSE}
839 > phthalocyanines on a gold (111) surface. Ch.~\ref{chapt:OOPSE}
840   is about the writing of the molecular dynamics simulation package
841 < {\sc oopse}, Chapt.~\ref{chapt:lipid} regards the simulations of
842 < phospholipid bilayers using a mesoscale model, and lastly,
843 < Chapt.~\ref{chapt:conclusion} concludes this dissertation with a
841 > {\sc oopse}. Ch.~\ref{chapt:lipid} regards the simulations of
842 > phospholipid bilayers using a mesoscale model. Lastly,
843 > Ch.~\ref{chapt:conclusion} concludes this dissertation with a
844   summary of all results. The chapters are arranged in chronological
845   order, and reflect the progression of techniques I employed during my
846   research.  
847  
848 < The chapter concerning random sequential adsorption
849 < simulations is a study in applying the principles of theoretical
850 < research in order to obtain a simple model capaable of explaining the
851 < results.  My advisor, Dr. Gezelter, and I were approached by a
852 < colleague, Dr. Lieberman, about possible explanations for partial
853 < coverge of a gold surface by a particular compound of hers. We
854 < suggested it might be due to the statistical packing fraction of disks
855 < on a plane, and set about to simulate this system.  As the events in
856 < our model were not dynamic in nature, a Monte Carlo method was
857 < emplyed.  Here, if a molecule landed on the surface without
858 < overlapping another, then its landing was accepted.  However, if there
859 < was overlap, the landing we rejected and a new random landing location
860 < was chosen.  This defined our acceptance rules and allowed us to
861 < construct a Markov chain whose limiting distribution was the surface
862 < coverage in which we were interested.
848 > The chapter concerning random sequential adsorption simulations is a
849 > study in applying Statistical Mechanics simulation techniques in order
850 > to obtain a simple model capable of explaining the results.  My
851 > advisor, Dr. Gezelter, and I were approached by a colleague,
852 > Dr. Lieberman, about possible explanations for the partial coverage of a
853 > gold surface by a particular compound of hers. We suggested it might
854 > be due to the statistical packing fraction of disks on a plane, and
855 > set about to simulate this system.  As the events in our model were
856 > not dynamic in nature, a Monte Carlo method was employed.  Here, if a
857 > molecule landed on the surface without overlapping another, then its
858 > landing was accepted.  However, if there was overlap, the landing was
859 > rejected and a new random landing location was chosen.  This defined
860 > our acceptance rules and allowed us to construct a Markov chain whose
861 > limiting distribution was the surface coverage in which we were
862 > interested.
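The acceptance rule just described can be sketched as a short Monte Carlo loop. This is a toy of mine, not the code used in the study: the box size, disk radius, attempt count, and seed are made-up illustration values, and periodic boundaries are omitted for brevity.

```python
import math
import random

# Toy random sequential adsorption of hard disks on a square surface.

def rsa_disks(n_attempts, box=10.0, radius=0.5, seed=42):
    rng = random.Random(seed)
    placed = []
    min_sq = (2 * radius) ** 2
    for _ in range(n_attempts):
        x, y = rng.uniform(0, box), rng.uniform(0, box)
        # Accept the landing only if the disk overlaps no earlier disk;
        # otherwise the landing is rejected and a new random location
        # is drawn on the next attempt.
        if all((x - px) ** 2 + (y - py) ** 2 >= min_sq for px, py in placed):
            placed.append((x, y))
    return placed

disks = rsa_disks(20000)
coverage = len(disks) * math.pi * 0.5 ** 2 / 10.0 ** 2
# Coverage saturates well below close packing: the jamming limit for
# random sequential adsorption of disks is roughly 0.547, so this
# estimate should sit at or below that value.
```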
863  
864   The following chapter, about the simulation package {\sc oopse},
865   describes in detail the large body of scientific code that had to be
866 < written in order to study phospholipid bilayer.  Although there are
866 > written in order to study phospholipid bilayers.  Although there are
867   pre-existing molecular dynamics simulation packages available, none
868   were capable of implementing the models we were developing. {\sc oopse}
869   is a unique package capable of not only integrating the equations of
870 < motion in cartesian space, but is also able to integrate the
870 > motion in Cartesian space, but also of integrating the
871   rotational motion of rigid bodies and dipoles.  Add to this the
872   ability to perform calculations across parallel processors and a
873   flexible script syntax for creating systems, and {\sc oopse} becomes a
874   very powerful scientific instrument for the exploration of our model.
875  
876 < Bringing us to Chapt.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
877 < able to parametrize a mesoscale model for phospholipid simulations.
878 < This model retains information about solvent ordering about the
876 > This brings us to Ch.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
877 > able to parameterize a mesoscale model for phospholipid simulations.
878 > This model retains information about solvent ordering around the
879   bilayer, as well as information regarding the interaction of the
880 < phospholipid head groups' dipole with each other and the surrounding
880 > phospholipid head groups' dipoles with each other and the surrounding
881   solvent.  These simulations give us insight into the dynamic events
882   that lead to the formation of phospholipid bilayers, as well as
883   provide the foundation for future exploration of bilayer phase

Diff Legend

Removed lines
+ Added lines
< Changed lines
> Changed lines