
Comparing trunk/mattDisertation/Introduction.tex (file contents):
Revision 956 by mmeineke, Sun Jan 18 19:10:32 2004 UTC vs.
Revision 980 by mmeineke, Sat Jan 24 03:09:15 2004 UTC

# Line 53 | Line 53 | The equation can be recast as:
53   \end{equation}
54   The equation can be recast as:
55   \begin{equation}
56 < I = (b-a)<f(x)>
56 > I = (b-a)\langle f(x) \rangle
57   \label{eq:MCex2}
58   \end{equation}
59 < Where $<f(x)>$ is the unweighted average over the interval
59 > Where $\langle f(x) \rangle$ is the unweighted average over the interval
60   $[a,b]$. The calculation of the integral could then be solved by
61   randomly choosing points along the interval $[a,b]$ and calculating
62   the value of $f(x)$ at each point. The accumulated average would then
# Line 66 | Line 66 | integrals of the form:
66   However, in Statistical Mechanics, one is typically interested in
67   integrals of the form:
68   \begin{equation}
69 < <A> = \frac{A}{exp^{-\beta}}
69 > \langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
70 >        e^{-\beta V(\mathbf{r}^N)}}%
71 >        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
72   \label{eq:mcEnsAvg}
73   \end{equation}
74 < Where $r^N$ stands for the coordinates of all $N$ particles and $A$ is
75 < some observable that is only dependent on position. $<A>$ is the
76 < ensemble average of $A$ as presented in
77 < Sec.~\ref{introSec:statThermo}. Because $A$ is independent of
78 < momentum, the momenta contribution of the integral can be factored
79 < out, leaving the configurational integral. Application of the brute
80 < force method to this system would yield highly inefficient
74 > Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
75 > and $A$ is some observable that is only dependent on
76 > position. $\langle A \rangle$ is the ensemble average of $A$ as
77 > presented in Sec.~\ref{introSec:statThermo}. Because $A$ is
78 > independent of momentum, the momenta contribution of the integral can
79 > be factored out, leaving the configurational integral. Application of
80 > the brute force method to this system would yield highly inefficient
81   results. Due to the Boltzmann weighting of this integral, most random
82   configurations will have a near zero contribution to the ensemble
83   average. This is where importance sampling comes into
# Line 86 | Line 88 | Eq.~\ref{eq:MCex1} rewritten to be:
88   efficiently calculate the integral.\cite{Frenkel1996} Consider again
89   Eq.~\ref{eq:MCex1} rewritten to be:
90   \begin{equation}
91 < EQ Here
91 > I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
92 > \label{introEq:Importance1}
93   \end{equation}
94 < Where $fix$ is an arbitrary probability distribution in $x$.  If one
95 < conducts $fix$ trials selecting a random number, $fix$, from the
96 < distribution $fix$ on the interval $[a,b]$, then Eq.~\ref{fix} becomes
94 > Where $\rho(x)$ is an arbitrary probability distribution in $x$.  If
95 > one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
96 > from the distribution $\rho(x)$ on the interval $[a,b]$, then
97 > Eq.~\ref{introEq:Importance1} becomes
98   \begin{equation}
99 < EQ Here
99 > I= \biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}}
100 > \label{introEq:Importance2}
101   \end{equation}
102 < Looking at Eq.~ref{fix}, and realizing
102 > Looking at Eq.~\ref{eq:mcEnsAvg}, and realizing
103   \begin{equation}
104 < EQ Here
104 > \rho_{kT}(\mathbf{r}^N) =
105 >        \frac{e^{-\beta V(\mathbf{r}^N)}}
106 >        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
107 > \label{introEq:MCboltzman}
108   \end{equation}
109 < The ensemble average can be rewritten as
109 > Where $\rho_{kT}$ is the Boltzmann distribution.  The ensemble average
110 > can be rewritten as
111   \begin{equation}
112 < EQ Here
112 > \langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
113 >        \rho_{kT}(\mathbf{r}^N)
114 > \label{introEq:Importance3}
115   \end{equation}
116 < Appllying Eq.~ref{fix} one obtains
116 > Applying Eq.~\ref{introEq:Importance1} one obtains
117   \begin{equation}
118 < EQ Here
118 > \langle A \rangle = \biggl \langle
119 >        \frac{ A \rho_{kT}(\mathbf{r}^N) }
120 >        {\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}
121 > \label{introEq:Importance4}
122   \end{equation}
123 < By selecting $fix$ to be $fix$ Eq.~ref{fix} becomes
123 > By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$
124 > Eq.~\ref{introEq:Importance4} becomes
125   \begin{equation}
126 < EQ Here
126 > \langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{\text{trials}}
127 > \label{introEq:Importance5}
128   \end{equation}
129 < The difficulty is selecting points $fix$ such that they are sampled
130 < from the distribution $fix$.  A solution was proposed by Metropolis et
131 < al.\cite{fix} which involved the use of a Markov chain whose limiting
132 < distribution was $fix$.
129 > The difficulty is selecting points $\mathbf{r}^N$ such that they are
130 > sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$.  A solution
131 > was proposed by Metropolis et al.\cite{metropolis:1953} which involved
132 > the use of a Markov chain whose limiting distribution was
133 > $\rho_{kT}(\mathbf{r}^N)$.
134  
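As a concrete illustration of Eq.~\ref{introEq:Importance2}, the following minimal Python sketch (not part of the original text; the integrand $f(x) = e^x$ and the weight $\rho(x) = (1+x)/1.5$ are invented for illustration) estimates $\int^1_0 e^x dx$ both by brute force (Eq.~\ref{eq:MCex2}) and by importance sampling:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

def f(x):                  # integrand; exact integral on [0, 1] is e - 1
    return np.exp(x)

# Brute force: I ~ (b - a) <f(x)>, with x drawn uniformly on [a, b] = [0, 1]
x_uni = rng.uniform(0.0, 1.0, n)
I_uni = np.mean(f(x_uni))

# Importance sampling: I ~ < f(x)/rho(x) >_trials, with rho(x) = (1 + x)/1.5,
# normalized on [0, 1] and shaped roughly like e^x.
# Inverse-CDF sampling: u = (x + x^2/2)/1.5  =>  x = sqrt(1 + 3u) - 1
u = rng.uniform(0.0, 1.0, n)
x_imp = np.sqrt(1.0 + 3.0 * u) - 1.0
I_imp = np.mean(f(x_imp) / ((1.0 + x_imp) / 1.5))

print(I_uni, I_imp, np.e - 1.0)   # both estimates approach e - 1
\end{verbatim}

Because $\rho(x)$ roughly follows $f(x)$, the importance-sampled estimate shows a noticeably smaller variance for the same number of trials.
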
135 < \subsection{Markov Chains}
135 > \subsubsection{\label{introSec:markovChains}Markov Chains}
136  
137   A Markov chain is a chain of states satisfying the following
138 < conditions:\cite{fix}
139 < \begin{itemize}
138 > conditions:\cite{leach01:mm}
139 > \begin{enumerate}
140   \item The outcome of each trial depends only on the outcome of the previous trial.
141   \item Each trial belongs to a finite set of outcomes called the state space.
142 < \end{itemize}
143 < If given two configuartions, $fix$ and $fix$, $fix$ and $fix$ are the
144 < probablilities of being in state $fix$ and $fix$ respectively.
145 < Further, the two states are linked by a transition probability, $fix$,
146 < which is the probability of going from state $m$ to state $n$.
142 > \end{enumerate}
143 > Given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
144 > $\rho_m$ and $\rho_n$ are the probabilities of being in states
145 > $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively.  Further, the two
146 > states are linked by a transition probability, $\pi_{mn}$, which is the
147 > probability of going from state $m$ to state $n$.
148  
149 + \newcommand{\accMe}{\operatorname{acc}}
150 +
151   The transition probability is given by the following:
152   \begin{equation}
153 < EQ Here
153 > \pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n)
154 > \label{introEq:MCpi}
155   \end{equation}
156 < Where $fix$ is the probability of attempting the move $fix$, and $fix$
157 < is the probability of accepting the move $fix$.  Defining a
158 < probability vector, $fix$, such that
156 > Where $\alpha_{mn}$ is the probability of attempting the move $m
157 > \rightarrow n$, and $\accMe$ is the probability of accepting the move
158 > $m \rightarrow n$.  Defining a probability vector,
159 > $\boldsymbol{\rho}$, such that
160   \begin{equation}
161 < EQ Here
161 > \boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
162 >        \ldots \rho_N \}
163 > \label{introEq:MCrhoVector}
164   \end{equation}
165 < a transition matrix $fix$ can be defined, whose elements are $fix$,
166 < for each given transition.  The limiting distribution of the Markov
167 < chain can then be found by applying the transition matrix an infinite
168 < number of times to the distribution vector.
165 > a transition matrix $\boldsymbol{\Pi}$ can be defined,
166 > whose elements are $\pi_{mn}$, for each given transition.  The
167 > limiting distribution of the Markov chain can then be found by
168 > applying the transition matrix an infinite number of times to the
169 > distribution vector.
170   \begin{equation}
171 < EQ Here
171 > \boldsymbol{\rho}_{\text{limit}} =
172 >        \lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
173 >        \boldsymbol{\Pi}^N
174 > \label{introEq:MCmarkovLimit}
175   \end{equation}
148
176   The limiting distribution of the chain is independent of the starting
177   distribution, and successive applications of the transition matrix
178   will only yield the limiting distribution again.
179   \begin{equation}
180 < EQ Here
180 > \boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{initial}}
181 >        \boldsymbol{\Pi}
182 > \label{introEq:MCmarkovEquil}
183   \end{equation}
184  
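This limiting behavior is easy to verify numerically. A minimal sketch, assuming an invented three-state transition matrix (any row-stochastic matrix will do):

\begin{verbatim}
import numpy as np

# An arbitrary 3-state transition matrix Pi; each row sums to 1
Pi = np.array([[0.90, 0.075, 0.025],
               [0.15, 0.80,  0.05],
               [0.25, 0.25,  0.50]])

rho = np.array([1.0, 0.0, 0.0])   # initial distribution vector

# Repeated application of Pi drives rho to the limiting distribution
for _ in range(1000):
    rho = rho @ Pi

print(rho)        # the limiting distribution
print(rho @ Pi)   # applying Pi again returns the same vector
\end{verbatim}
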
185 < \subsection{fix}
185 > \subsubsection{\label{introSec:metropolisMethod}The Metropolis Method}
186  
187 < In the Metropolis method \cite{fix} Eq.~ref{fix} is solved such that
188 < $fix$ matches the Boltzman distribution of states.  The method
189 < accomplishes this by imposing the strong condition of microscopic
190 < reversibility on the equilibrium distribution.  Meaning, that at
191 < equilibrium the probability of going from $m$ to $n$ is the same as
192 < going from $n$ to $m$.
187 > In the Metropolis method\cite{metropolis:1953}
188 > Eq.~\ref{introEq:MCmarkovEquil} is solved such that
189 > $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
190 > of states.  The method accomplishes this by imposing the strong
191 > condition of microscopic reversibility on the equilibrium
192 > distribution.  This means that at equilibrium the probability of going
193 > from $m$ to $n$ is the same as going from $n$ to $m$.
194   \begin{equation}
195 < EQ Here
195 > \rho_m\pi_{mn} = \rho_n\pi_{nm}
196 > \label{introEq:MCmicroReverse}
197   \end{equation}
198 < Further, $fix$ is chosen to be a symetric matrix in the Metropolis
199 < method.  Using Eq.~\ref{fix}, Eq.~\ref{fix} becomes
198 > Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
199 > the Metropolis method.  Using Eq.~\ref{introEq:MCpi},
200 > Eq.~\ref{introEq:MCmicroReverse} becomes
201   \begin{equation}
202 < EQ Here
202 > \frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
203 >        \frac{\rho_n}{\rho_m}
204 > \label{introEq:MCmicro2}
205   \end{equation}
206 < For a Boltxman limiting distribution
206 > For a Boltzmann limiting distribution,
207   \begin{equation}
208 < EQ Here
208 > \frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
209 >        = e^{-\beta \Delta \mathcal{U}}
210 > \label{introEq:MCmicro3}
211   \end{equation}
212   This allows the following set of acceptance rules to be defined:
213   \begin{equation}
# Line 193 | Line 229 | distribution is the Boltzmann distribution.
229   the ensemble averages, as this method ensures that the limiting
230   distribution is the Boltzmann distribution.
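
A minimal sketch of the Metropolis acceptance rule (illustrative only; the one-dimensional potential $\mathcal{U}(x) = x^2/2$, $\beta$, and the step size are invented for this example):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
beta, delta = 1.0, 0.5        # inverse temperature, max trial displacement

def U(x):                     # potential energy of the configuration
    return 0.5 * x * x

x, samples = 0.0, []
for step in range(50_000):
    x_new = x + rng.uniform(-delta, delta)   # symmetric trial move
    dU = U(x_new) - U(x)
    # Metropolis: accept with probability min(1, exp(-beta * dU))
    if dU <= 0.0 or rng.random() < np.exp(-beta * dU):
        x = x_new
    samples.append(x)

print(np.mean(np.square(samples)))   # <x^2> -> 1/beta for this potential
\end{verbatim}

Because the trial displacement is symmetric, $\alpha_{mn} = \alpha_{nm}$, and the acceptance probability alone enforces Eq.~\ref{introEq:MCmicro2}.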
231  
232 < \subsection{\label{introSec:md}Molecular Dynamics Simulations}
232 > \subsection{\label{introSec:MD}Molecular Dynamics Simulations}
233  
234   The main simulation tool used in this research is Molecular Dynamics.
235   In Molecular Dynamics, the equations of motion for a system are
# Line 216 | Line 252 | making molecular dynamics key in the simulation of tho
252   centered around the dynamic properties of phospholipid bilayers,
253   making molecular dynamics key in the simulation of those properties.
254  
255 < \subsection{Molecular dynamics Algorithm}
255 > \subsubsection{Molecular Dynamics Algorithm}
256  
257   To illustrate how the molecular dynamics technique is applied, the
258   following sections will describe the sequence involved in a
# Line 225 | Line 261 | discussion with the integration of the equations of mo
261   calculation of the forces.  Sec.~\ref{introSec:MDintegrate} concludes the algorithm
262   discussion with the integration of the equations of motion. \cite{fix}
263  
264 < \subsection{initialization}
264 > \subsubsection{Initialization}
265  
266   When selecting the initial configuration for the simulation it is
267   important to consider what dynamics one is hoping to observe.
# Line 256 | Line 292 | kinetic energy from energy stored in potential degrees
292   first few initial simulation steps due to either loss or gain of
293   kinetic energy from energy stored in potential degrees of freedom.
294  
295 < \subsection{Force Evaluation}
295 > \subsubsection{Force Evaluation}
296  
297   The evaluation of forces is the most computationally expensive portion
298   of a given molecular dynamics simulation.  This is due entirely to the
# Line 300 | Line 336 | distance of $r_{\text{cut}}$.
336   the lack of particle images in the $x$, $y$, or $z$ directions past a
337   distance of $r_{\text{cut}}$.
338  
339 < With the use of an $fix$, however, comes a discontinuity in the potential energy curve (Fig.~\ref{fix}).
339 > With the use of an $r_{\text{cut}}$, however, comes a discontinuity in the
340 > potential energy curve (Fig.~\ref{fix}). To remove this discontinuity,
341 > one calculates the potential energy at $r_{\text{cut}}$, and subtracts
342 > that value from the potential.  This causes the function to go smoothly
343 > to zero at the cutoff radius.  This ensures conservation of energy
344 > when integrating the Newtonian equations of motion.
345 >
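As an illustration of this shifted-potential correction, a sketch assuming the common Lennard-Jones pair potential in reduced units (the potential form and cutoff value are stand-ins, not taken from the original text):

\begin{verbatim}
import numpy as np

def lj(r):
    """Lennard-Jones potential in reduced units (sigma = epsilon = 1)."""
    return 4.0 * (r**-12 - r**-6)

r_cut = 2.5

def lj_shifted(r):
    """Truncated, shifted LJ: continuous, and zero at and beyond r_cut."""
    return np.where(r < r_cut, lj(r) - lj(r_cut), 0.0)

r = np.array([1.0, 2.0, 2.5, 3.0])
print(lj_shifted(r))   # reaches exactly zero at r_cut
\end{verbatim}
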
346 > The second main simplification used in this research is the Verlet
347 > neighbor list. \cite{allen87:csl} In the Verlet method, one generates
348 > a list of all neighbor atoms, $j$, surrounding atom $i$ within some
349 > cutoff $r_{\text{list}}$, where $r_{\text{list}}>r_{\text{cut}}$.
350 > This list is created the first time forces are evaluated, then on
351 > subsequent force evaluations, pair interactions are only evaluated
352 > for atoms in the neighbor lists.  The lists are updated if any given particle
353 > in the system moves farther than $r_{\text{list}}-r_{\text{cut}}$,
354 > giving rise to the possibility that a particle has left or joined a
355 > neighbor list.
356 >
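A sketch of this bookkeeping (illustrative only: a naive $O(N^2)$ list build with no periodic images):

\begin{verbatim}
import numpy as np

def build_neighbor_list(pos, r_list):
    """For each atom i, collect the indices j > i within r_list."""
    n = len(pos)
    neighbors = [[] for _ in range(n)]
    for i in range(n - 1):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        neighbors[i] = list(np.nonzero(d < r_list)[0] + i + 1)
    return neighbors

def needs_rebuild(pos, pos_at_build, r_list, r_cut):
    """Rebuild once any particle has moved farther than r_list - r_cut."""
    moved = np.linalg.norm(pos - pos_at_build, axis=1)
    return np.any(moved > r_list - r_cut)
\end{verbatim}
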
357 > \subsection{\label{introSec:MDintegrate}Integration of the Equations of Motion}
358 >
359 > A starting point for the discussion of molecular dynamics integrators
360 > is the Verlet algorithm. \cite{Frenkel1996} It begins with a Taylor
361 > expansion of position in time:
362 > \begin{equation}
363 > \mathbf{r}(t+\Delta t) = \mathbf{r}(t) + \mathbf{v}(t)\Delta t + \frac{\mathbf{f}(t)}{2m}\Delta t^2 + \frac{\Delta t^3}{3!}\dddot{\mathbf{r}}(t) + \mathcal{O}(\Delta t^4)
364 > \label{introEq:verletForward}
365 > \end{equation}
366 > as well as the corresponding expansion backward in time:
367 > \begin{equation}
368 > \mathbf{r}(t-\Delta t) = \mathbf{r}(t) - \mathbf{v}(t)\Delta t + \frac{\mathbf{f}(t)}{2m}\Delta t^2 - \frac{\Delta t^3}{3!}\dddot{\mathbf{r}}(t) + \mathcal{O}(\Delta t^4)
369 > \label{introEq:verletBack}
370 > \end{equation}
371 > Adding together Eq.~\ref{introEq:verletForward} and
372 > Eq.~\ref{introEq:verletBack} results in,
373 > \begin{equation}
374 > \mathbf{r}(t+\Delta t) + \mathbf{r}(t-\Delta t) = 2\mathbf{r}(t) + \frac{\mathbf{f}(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
375 > \label{introEq:verletSum}
376 > \end{equation}
377 > Or equivalently,
378 > \begin{equation}
379 > \mathbf{r}(t+\Delta t) \approx 2\mathbf{r}(t) - \mathbf{r}(t-\Delta t) + \frac{\mathbf{f}(t)}{m}\Delta t^2
380 > \label{introEq:verletFinal}
381 > \end{equation}
382 > This result contains an error in the estimate of the new positions on the
383 > order of $\Delta t^4$.
384  
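Eq.~\ref{introEq:verletFinal} maps directly onto code. A sketch for a one-dimensional harmonic oscillator (the oscillator and all parameter values are invented for illustration):

\begin{verbatim}
import numpy as np

dt, m, k = 0.01, 1.0, 1.0

def force(x):              # harmonic force f(x) = -kx
    return -k * x

# Bootstrap: the Verlet recurrence needs the two previous positions
x_prev, v0 = 1.0, 0.0
x = x_prev + v0 * dt + 0.5 * force(x_prev) / m * dt**2

for step in range(10_000):
    # r(t + dt) = 2 r(t) - r(t - dt) + f(t)/m dt^2
    x_next = 2.0 * x - x_prev + force(x) / m * dt**2
    x_prev, x = x, x_next

print(x)   # stays bounded; the trajectory closely tracks cos(t)
\end{verbatim}
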
385 + In practice, however, the simulations in this research were integrated
386 + with a velocity reformulation of the Verlet method. \cite{allen87:csl}
387 + \begin{equation}
388 + \mathbf{r}(t+\Delta t) = \mathbf{r}(t) + \mathbf{v}(t)\Delta t + \frac{\mathbf{f}(t)}{2m}\Delta t^2
389 + \label{introEq:MDvelVerletPos}
390 + \end{equation}
391 + \begin{equation}
392 + \mathbf{v}(t+\Delta t) = \mathbf{v}(t) + \frac{\mathbf{f}(t) + \mathbf{f}(t+\Delta t)}{2m}\Delta t
393 + \label{introEq:MDvelVerletVel}
394 + \end{equation}
395 + The original Verlet algorithm can be regained by substituting the
396 + velocity back into Eq.~\ref{introEq:MDvelVerletPos}.  The Verlet
397 + formulations are chosen in this research because the algorithms have
398 + very little long term drift in energy conservation.  Energy
399 + conservation in a molecular dynamics simulation is of extreme
400 + importance, as it is a measure of how closely one is following the
401 + ``true'' trajectory with the finite integration scheme.  An exact
402 + solution to the integration will conserve area in phase space, as well
403 + as be reversible in time, that is, the trajectory integrated forward
404 + or backwards will exactly match itself.  Having a finite algorithm
405 + that both conserves area in phase space and is time reversible
406 + therefore increases, but does not guarantee, the ``correctness'' of the
407 + integrated trajectory.
408  
409 + It can be shown\cite{Frenkel1996} that although the Verlet algorithm
410 + does not rigorously preserve the actual Hamiltonian, it does preserve
411 + a pseudo-Hamiltonian which shadows the real one in phase space.  This
412 + pseudo-Hamiltonian is provably area-conserving as well as time
413 + reversible.  The fact that it shadows the true Hamiltonian in phase
414 + space is acceptable in actual simulations as one is interested in the
415 + ensemble average of the observable being measured.  From the ergodic
416 + hypothesis (Sec.~\ref{introSec:statThermo}), it is known that the time
417 + average will match the ensemble average, therefore two similar
418 + trajectories in phase space should give matching statistical averages.
419 +
420 + \subsection{\label{introSec:MDfurther}Further Considerations}
421 + In the simulations presented in this research, a few additional
422 + parameters are needed to describe the motions.  The simulations
423 + involving water and phospholipids in Chapt.~\ref{chaptLipids} are
424 + required to integrate the equations of motion for dipoles on atoms.
425 + This requires three additional parameters to be specified for each
426 + dipole atom: $\phi$, $\theta$, and $\psi$.  These three angles are
427 + taken to be the Euler angles, where $\phi$ is a rotation about the
428 + $z$-axis, $\theta$ is a rotation about the new $x$-axis, and
429 + $\psi$ is a final rotation about the new $z$-axis (see
430 + Fig.~\ref{introFig:eulerAngles}).  This sequence of rotations can be
431 + accumulated into a single $3 \times 3$ matrix $\mathbf{A}$
432 + defined as follows:
433 + \begin{equation}
434 + \mathbf{A} = \begin{pmatrix} \cos\phi\cos\psi - \sin\phi\cos\theta\sin\psi & \sin\phi\cos\psi + \cos\phi\cos\theta\sin\psi & \sin\theta\sin\psi \\ -\cos\phi\sin\psi - \sin\phi\cos\theta\cos\psi & -\sin\phi\sin\psi + \cos\phi\cos\theta\cos\psi & \sin\theta\cos\psi \\ \sin\phi\sin\theta & -\cos\phi\sin\theta & \cos\theta \end{pmatrix}
435 + \label{introEq:EulerRotMat}
436 + \end{equation}
437 +
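Equivalently, $\mathbf{A}$ can be generated by composing the three elementary rotations. A sketch, assuming the $z$-$x$-$z$ convention stated above (body frame = $\mathbf{A}\,\cdot$ space frame):

\begin{verbatim}
import numpy as np

def euler_to_A(phi, theta, psi):
    """Rotation matrix A for the z-x-z Euler convention."""
    Rz_phi = np.array([[ np.cos(phi),  np.sin(phi), 0.0],
                       [-np.sin(phi),  np.cos(phi), 0.0],
                       [ 0.0,          0.0,         1.0]])
    Rx_theta = np.array([[1.0,  0.0,           0.0],
                         [0.0,  np.cos(theta), np.sin(theta)],
                         [0.0, -np.sin(theta), np.cos(theta)]])
    Rz_psi = np.array([[ np.cos(psi),  np.sin(psi), 0.0],
                       [-np.sin(psi),  np.cos(psi), 0.0],
                       [ 0.0,          0.0,         1.0]])
    return Rz_psi @ Rx_theta @ Rz_phi   # apply phi, then theta, then psi

A = euler_to_A(0.3, 0.8, 1.1)
print(np.allclose(A @ A.T, np.eye(3)))   # A is orthogonal: True
\end{verbatim}
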
438 + The equations of motion for Euler angles can be written down as
439 + \cite{allen87:csl}
440 + \begin{align}
441 + \dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} + \omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} + \omega^s_z \label{introEq:MDeulerPhi} \\
442 + \dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi \label{introEq:MDeulerTheta} \\
443 + \dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} - \omega^s_y \frac{\cos\phi}{\sin\theta} \label{introEq:MDeulerPsi} \end{align}
444 + Where $\omega^s_i$ is the angular velocity in the lab space frame
445 + along Cartesian coordinate $i$.  However, a difficulty arises when
446 + attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
447 + Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
448 + both equations means there is a non-physical instability present when
449 + $\theta$ is 0 or $\pi$.
450 +
451 + To correct for this, the simulations integrate the rotation matrix,
452 + $\mathbf{A}$, directly, thus avoiding the instability.
453 + This method was proposed by Dullweber
454 + \emph{et al.}\cite{Dullweber:1997}, and is presented in
455 + Sec.~\ref{introSec:MDsymplecticRot}.
456 +
457 + \subsubsection{\label{introSec:MDliouville}Liouville Propagator}
458 +
459 + Before discussing the integration of the rotation matrix, it is
460 + necessary to understand the construction of a ``good'' integration
461 + scheme.  It has been previously
462 + discussed(Sec.~\ref{introSec:MDintegrate}) how it is desirable for an
463 + integrator to be symplectic, or time reversible.  The following is an
464 + outline of the Trotter factorization of the Liouville Propagator as a
465 + scheme for generating symplectic integrators. \cite{Tuckerman:1992}
466 +
467 + For a system with $f$ degrees of freedom the Liouville operator can be
468 + defined as,
469 + \begin{equation}
470 + iL = \sum^f_{j=1} \biggl[ \dot{r}_j \frac{\partial}{\partial r_j} + f_j \frac{\partial}{\partial p_j} \biggr]
471 + \label{introEq:LiouvilleOperator}
472 + \end{equation}
473 + Here, $r_j$ and $p_j$ are the position and conjugate momentum of a
474 + degree of freedom, and $f_j$ is the force on that degree of freedom.
475 + $\Gamma$ is defined as the set of all positions and conjugate momenta,
476 + $\{r_j,p_j\}$, and the propagator, $U(t)$, is defined as
477 + \begin{equation}
478 + U(t) = e^{iLt}
479 + \label{introEq:Lpropagator}
480 + \end{equation}
481 + This allows the specification of $\Gamma$ at any time $t$ as
482 + \begin{equation}
483 + \Gamma(t) = U(t)~\Gamma(0)
484 + \label{introEq:Lp2}
485 + \end{equation}
486 + It is important to note that $U(t)$ is a unitary operator, meaning
487 + \begin{equation}
488 + U(-t)=U^{-1}(t)
489 + \label{introEq:Lp3}
490 + \end{equation}
491 +
492 + Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
493 + Trotter theorem to yield
494 + \begin{equation}
495 + e^{iLt} = \left( e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\, e^{iL_1\frac{\Delta t}{2}} \right)^P + \mathcal{O}\left(\frac{t^3}{P^2}\right)
496 + \label{introEq:Lp4}
497 + \end{equation}
498 + Where $\Delta t = \frac{t}{P}$.
499 + With this, a discrete time operator $G(\Delta t)$ can be defined:
500 + \begin{equation}
501 + G(\Delta t) = e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\, e^{iL_1\frac{\Delta t}{2}} = U_1\left(\frac{\Delta t}{2}\right) U_2(\Delta t)\, U_1\left(\frac{\Delta t}{2}\right)
502 + \label{introEq:Lp5}
503 + \end{equation}
504 + Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
505 + unitary.  This means an integrator based on this factorization will be
506 + reversible in time.
507 +
508 + As an example, consider the following decomposition of $L$:
509 + \begin{equation}
510 + iL_1 = f\frac{\partial}{\partial p}, \qquad iL_2 = \frac{p}{m}\frac{\partial}{\partial r}
511 + \label{introEq:Lp6}
512 + \end{equation}
513 + Operating $G(\Delta t)$ on $\Gamma(0)$, and utilizing the operator property
514 + \begin{equation}
515 + e^{c\frac{\partial}{\partial q}}~q = q + c
516 + \label{introEq:Lp7}
517 + \end{equation}
518 + Where $c$ is independent of $q$, one obtains the following:
519 + \begin{equation}
520 + p\left(\frac{\Delta t}{2}\right) = p(0) + \frac{\Delta t}{2}f(0), \qquad r(\Delta t) = r(0) + \frac{\Delta t}{m}p\left(\frac{\Delta t}{2}\right), \qquad p(\Delta t) = p\left(\frac{\Delta t}{2}\right) + \frac{\Delta t}{2}f(\Delta t)
521 + \label{introEq:Lp8}
522 + \end{equation}
523 + Or written another way,
524 + \begin{equation}
525 + r(\Delta t) = r(0) + v(0)\Delta t + \frac{f(0)}{2m}\Delta t^2, \qquad v(\Delta t) = v(0) + \frac{f(0) + f(\Delta t)}{2m}\Delta t
526 + \label{introEq:Lp9}
527 + \end{equation}
528 + This is the velocity Verlet formulation presented in
529 + Sec.~\ref{introSec:MDintegrate}.  Because this integration scheme is
530 + composed of unitary propagators, it is symplectic, and therefore area
531 + preserving in phase space.  From the preceding factorization, one can
532 + see that the integration of the equations of motion follows these steps (sketched in code after the list):
533 + \begin{enumerate}
534 + \item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
535 +
536 + \item Use the half step velocities to move positions one whole step, $\Delta t$.
537 +
538 + \item Evaluate the forces at the new positions, $\mathbf{r}(\Delta t)$, and use the new forces to complete the velocity move.
539 +
540 + \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
541 + \end{enumerate}
542 +
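The four enumerated steps map line for line onto code. A minimal sketch, reusing the invented one-dimensional harmonic oscillator from the earlier Verlet example:

\begin{verbatim}
import numpy as np

dt, m, k = 0.01, 1.0, 1.0

def force(x):                        # harmonic force f(x) = -kx
    return -k * x

x, v = 1.0, 0.0
f = force(x)                         # forces at the initial position
for step in range(10_000):
    v += 0.5 * dt * f / m            # 1. half-step velocity move
    x += dt * v                      # 2. whole-step position move
    f = force(x)                     # 3. forces at the new position
    v += 0.5 * dt * f / m            #    ... complete the velocity move
                                     # 4. repeat with the new x, v, f
print(x, v)
\end{verbatim}
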
543 + \subsubsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}
544 +
545 + Based on the factorization from the previous section,
546 + Dullweber \emph{et al.}\cite{Dullweber:1997} proposed a scheme for the
547 + symplectic propagation of the rotation matrix, $\mathbf{A}$, as an
548 + alternative method for the integration of orientational degrees of
549 + freedom. The method starts with a straightforward splitting of the
550 + Liouville operator:
551 + \begin{equation}
552 + eq here
553 + \label{introEq:SR1}
554 + \end{equation}
555 + Where $\boldsymbol{\tau}(\mathbf{A})$ are the torques of the system
556 + due to the configuration, and $\boldsymbol{\pi}$ are the conjugate
557 + angular momenta of the system. The propagator, $G(\Delta t)$, becomes
558 + \begin{equation}
559 + eq here
560 + \label{introEq:SR2}
561 + \end{equation}
562 + Propagation of the linear and angular momenta follows as in the Verlet
563 + scheme.  The propagation of positions also follows the Verlet scheme
564 + with the addition of a further symplectic splitting of the rotation
565 + matrix propagation, $\mathcal{G}_{\text{rot}}(\Delta t)$.
566 + \begin{equation}
567 + eq here
568 + \label{introEq:SR3}
569 + \end{equation}
570 + Where $\mathcal{G}_j$ is a unitary rotation of $\mathbf{A}$ and
571 + $\boldsymbol{\pi}$ about each axis $j$.  As all propagations are now
572 + unitary and symplectic, the entire integration scheme is also
573 + symplectic and time reversible.
574 +
575   \section{\label{introSec:chapterLayout}Chapter Layout}
576  
577   \subsection{\label{introSec:RSA}Random Sequential Adsorption}
578  
579   \subsection{\label{introSec:OOPSE}The OOPSE Simulation Package}
580  
581 < \subsection{\label{introSec:bilayers}A Mesoscale Model for Phospholipid Bilayers}
581 > \subsection{\label{introSec:bilayers}A Mesoscale Model for
582 > Phospholipid Bilayers}
