root/group/trunk/mattDisertation/Introduction.tex

Comparing trunk/mattDisertation/Introduction.tex (file contents):
Revision 914 by mmeineke, Fri Jan 9 02:41:45 2004 UTC vs.
Revision 980 by mmeineke, Sat Jan 24 03:09:15 2004 UTC

# Line 6 | Line 6
6  
7   \section{\label{introSec:theory}Theoretical Background}
8  
9 < The techniques used in the course of this research fall under the two main classes of
10 < molecular simulation: Molecular Dynamics and Monte Carlo. Molecular Dynamics simulations
11 < integrate the equations of motion for a given system of particles, allowing the researcher
12 < to gain insight into the time-dependent evolution of a system. Diffusion phenomena are
13 < readily studied with this simulation technique, making Molecular Dynamics the main simulation
14 < technique used in this research. Other aspects of the research fall under the Monte Carlo
15 < class of simulations. In Monte Carlo, the configuration space available to the collection
16 < of particles is sampled stochastically, or randomly. Each configuration is chosen with
17 < a given probability based on the Maxwell-Boltzmann distribution. These types of simulations
18 < are best used to probe properties of a system that depend only on the state of
19 < the system. Structural information about a system is most readily obtained through
20 < these types of methods.
9 > The techniques used in the course of this research fall under the two
10 > main classes of molecular simulation: Molecular Dynamics and Monte
11 > Carlo. Molecular Dynamics simulations integrate the equations of motion
12 > for a given system of particles, allowing the researcher to gain
13 > insight into the time-dependent evolution of a system. Diffusion
14 > phenomena are readily studied with this simulation technique, making
15 > Molecular Dynamics the main simulation technique used in this
16 > research. Other aspects of the research fall under the Monte Carlo
17 > class of simulations. In Monte Carlo, the configuration space
18 > available to the collection of particles is sampled stochastically,
19 > or randomly. Each configuration is chosen with a given probability
20 > based on the Maxwell-Boltzmann distribution. These types of simulations
21 > are best used to probe properties of a system that depend
22 > only on the state of the system. Structural information about a system
23 > is most readily obtained through these types of methods.
24  
25 < Although the two techniques employed seem dissimilar, they are both linked by the overarching
26 < principles of Statistical Thermodynamics. Statistical Thermodynamics governs the behavior of
27 < both classes of simulations and dictates what each method can and cannot do. When investigating
28 < a system, one must first analyze what thermodynamic properties of the system are being probed,
29 < then choose which method best suits that objective.
25 > Although the two techniques employed seem dissimilar, they are both
26 > linked by the overarching principles of Statistical
27 > Thermodynamics. Statistical Thermodynamics governs the behavior of
28 > both classes of simulations and dictates what each method can and
29 > cannot do. When investigating a system, one must first analyze what
30 > thermodynamic properties of the system are being probed, then choose
31 > which method best suits that objective.
32  
33   \subsection{\label{introSec:statThermo}Statistical Thermodynamics}
34  
# Line 33 | Line 38 | ensemble averages
38  
39   \subsection{\label{introSec:monteCarlo}Monte Carlo Simulations}
40  
41 < Stochastic sampling
41 > The Monte Carlo method was developed by Metropolis and Ulam for their
42 > work on fissionable material.\cite{metropolis:1949} The method is so
43 > named because it makes heavy use of random numbers in its
44 > solution.\cite{allen87:csl} The Monte Carlo method allows for the
45 > solution of integrals through the stochastic sampling of the values
46 > within the integral. In the simplest case, the evaluation of an
47 > integral would follow a brute force method of
48 > sampling.\cite{Frenkel1996} Consider the following one-dimensional
49 > integral:
50 > \begin{equation}
51 > I = \int^b_a f(x) dx
52 > \label{eq:MCex1}
53 > \end{equation}
54 > The equation can be recast as:
55 > \begin{equation}
56 > I = (b-a)\langle f(x) \rangle
57 > \label{eq:MCex2}
58 > \end{equation}
59 > Where $\langle f(x) \rangle$ is the unweighted average over the interval
60 > $[a,b]$. The integral could then be evaluated by
61 > randomly choosing points along the interval $[a,b]$ and calculating
62 > the value of $f(x)$ at each point. The accumulated average would then
63 > approach $I$ in the limit where the number of trials is infinitely
64 > large.
65  
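> As a concrete illustration, the following is a minimal sketch (in
> Python, with an arbitrary integrand and trial count chosen purely for
> illustration) of this brute-force estimate:
> \begin{verbatim}
> import random
>
> def mc_integrate(f, a, b, n_trials=100000):
>     # Accumulate f(x) at uniformly chosen points in [a,b].
>     total = 0.0
>     for _ in range(n_trials):
>         total += f(random.uniform(a, b))
>     # I is approximated by (b-a) times the unweighted average.
>     return (b - a) * total / n_trials
>
> # Example: integrate x^2 over [0,1]; the exact value is 1/3.
> print(mc_integrate(lambda x: x * x, 0.0, 1.0))
> \end{verbatim}
>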
66 < detailed balance
66 > However, in Statistical Mechanics, one is typically interested in
67 > integrals of the form:
68 > \begin{equation}
69 > \langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
70 >        e^{-\beta V(\mathbf{r}^N)}}%
71 >        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
72 > \label{eq:mcEnsAvg}
73 > \end{equation}
74 > Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
75 > and $A$ is some observable that is only dependent on
76 > position. $\langle A \rangle$ is the ensemble average of $A$ as
77 > presented in Sec.~\ref{introSec:statThermo}. Because $A$ is
78 > independent of momentum, the momenta contribution of the integral can
79 > be factored out, leaving the configurational integral. Application of
80 > the brute force method to this system would yield highly inefficient
81 > results. Due to the Boltzmann weighting of this integral, most random
82 > configurations will have a near-zero contribution to the ensemble
83 > average. This is where importance sampling comes into
84 > play.\cite{allen87:csl}
85  
86 < Metropolis Monte Carlo
86 > Importance Sampling is a method where one selects a distribution from
87 > which the random configurations are chosen in order to more
88 > efficiently calculate the integral.\cite{Frenkel1996} Consider again
89 > Eq.~\ref{eq:MCex1} rewritten to be:
90 > \begin{equation}
91 > I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
92 > \label{introEq:Importance1}
93 > \end{equation}
94 > Where $\rho(x)$ is an arbitrary probability distribution in $x$.  If
95 > one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
96 > from the distribution $\rho(x)$ on the interval $[a,b]$, then
97 > Eq.~\ref{introEq:Importance1} becomes
98 > \begin{equation}
99 > I= \biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}}
100 > \label{introEq:Importance2}
101 > \end{equation}
102 > Looking at Eq.~\ref{eq:mcEnsAvg}, and realizing
103 > \begin{equation}
104 > \rho_{kT}(\mathbf{r}^N) =
105 >        \frac{e^{-\beta V(\mathbf{r}^N)}}
106 >        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
107 > \label{introEq:MCboltzman}
108 > \end{equation}
109 > Where $\rho_{kT}$ is the Boltzmann distribution.  The ensemble average
110 > can be rewritten as
111 > \begin{equation}
112 > \langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
113 >        \rho_{kT}(\mathbf{r}^N)
114 > \label{introEq:Importance3}
115 > \end{equation}
116 > Applying Eq.~\ref{introEq:Importance1} one obtains
117 > \begin{equation}
118 > \langle A \rangle = \biggl \langle
119 >        \frac{ A \rho_{kT}(\mathbf{r}^N) }
120 >        {\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}
121 > \label{introEq:Importance4}
122 > \end{equation}
123 > By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$
124 > Eq.~\ref{introEq:Importance4} becomes
125 > \begin{equation}
126 > \langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{\text{trials}}
127 > \label{introEq:Importance5}
128 > \end{equation}
129 > The difficulty is selecting points $\mathbf{r}^N$ such that they are
130 > sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$.  A solution
131 > was proposed by Metropolis et al.\cite{metropolis:1953}, which involved
132 > the use of a Markov chain whose limiting distribution was
133 > $\rho_{kT}(\mathbf{r}^N)$.
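>
> As a small illustration, the following Python sketch implements this
> weighted estimate; the weight $\rho(x) = 2(1-x)$ and the integrand are
> arbitrary choices for demonstration only:
> \begin{verbatim}
> import math, random
>
> def importance_integrate(f, sample_rho, rho, n_trials=100000):
>     # Average f(x)/rho(x) over points x drawn from rho.
>     total = 0.0
>     for _ in range(n_trials):
>         x = sample_rho()
>         total += f(x) / rho(x)
>     return total / n_trials
>
> # Integrate exp(-x) on [0,1] (exact: 1 - 1/e), drawing samples from
> # rho(x) = 2(1-x) by inverting its cumulative distribution.
> sample = lambda: 1.0 - math.sqrt(1.0 - random.random())
> print(importance_integrate(lambda x: math.exp(-x),
>                            sample, lambda x: 2.0 * (1.0 - x)))
> \end{verbatim}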
134  
135 < \subsection{\label{introSec:md}Molecular Dynamics Simulations}
135 > \subsubsection{\label{introSec:markovChains}Markov Chains}
136  
137 < time averages
137 > A Markov chain is a chain of states satisfying the following
138 > conditions:\cite{leach01:mm}
139 > \begin{enumerate}
140 > \item The outcome of each trial depends only on the outcome of the previous trial.
141 > \item Each trial belongs to a finite set of outcomes called the state space.
142 > \end{enumerate}
143 > If given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
144 > $\rho_m$ and $\rho_n$ are the probabilities of being in state
145 > $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively.  Further, the two
146 > states are linked by a transition probability, $\pi_{mn}$, which is the
147 > probability of going from state $m$ to state $n$.
148  
149 < time integrating schemes
149 > \newcommand{\accMe}{\operatorname{acc}}
150  
151 < time reversible
151 > The transition probability is given by the following:
152 > \begin{equation}
153 > \pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n)
154 > \label{introEq:MCpi}
155 > \end{equation}
156 > Where $\alpha_{mn}$ is the probability of attempting the move $m
157 > \rightarrow n$, and $\accMe$ is the probability of accepting the move
158 > $m \rightarrow n$.  Defining a probability vector,
159 > $\boldsymbol{\rho}$, such that
160 > \begin{equation}
161 > \boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
162 >        \ldots \rho_N \}
163 > \label{introEq:MCrhoVector}
164 > \end{equation}
165 > a transition matrix $\boldsymbol{\Pi}$ can be defined,
166 > whose elements are $\pi_{mn}$, for each given transition.  The
167 > limiting distribution of the Markov chain can then be found by
168 > applying the transition matrix an infinite number of times to the
169 > distribution vector.
170 > \begin{equation}
171 > \boldsymbol{\rho}_{\text{limit}} =
172 >        \lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
173 >        \boldsymbol{\Pi}^N
174 > \label{introEq:MCmarkovLimit}
175 > \end{equation}
176 > The limiting distribution of the chain is independent of the starting
177 > distribution, and successive applications of the transition matrix
178 > will only yield the limiting distribution again.
179 > \begin{equation}
180 > \boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{initial}}
181 >        \boldsymbol{\Pi}
182 > \label{introEq:MCmarkovEquil}
183 > \end{equation}
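>
> This limiting behavior is easy to demonstrate numerically.  The
> following Python sketch uses a hypothetical three-state transition
> matrix (the entries are illustrative only, with each row summing to
> one):
> \begin{verbatim}
> import numpy as np
>
> Pi = np.array([[0.9, 0.05, 0.05],
>                [0.1, 0.8,  0.1 ],
>                [0.2, 0.2,  0.6 ]])
>
> rho = np.array([1.0, 0.0, 0.0])  # an arbitrary starting distribution
> for _ in range(1000):            # repeated application of Pi
>     rho = rho @ Pi
> print(rho)        # the limiting distribution
> print(rho @ Pi)   # applying Pi again yields the same distribution
> \end{verbatim}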
184  
185 < symplectic methods
185 > \subsubsection{\label{introSec:metropolisMethod}The Metropolis Method}
186  
187 < Extended ensembles (NVT NPT)
187 > In the Metropolis method\cite{metropolis:1953}
188 > Eq.~\ref{introEq:MCmarkovEquil} is solved such that
189 > $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
190 > of states.  The method accomplishes this by imposing the strong
191 > condition of microscopic reversibility on the equilibrium
192 > distribution.  This means that at equilibrium, the probability of going
193 > from $m$ to $n$ is the same as going from $n$ to $m$.
194 > \begin{equation}
195 > \rho_m\pi_{mn} = \rho_n\pi_{nm}
196 > \label{introEq:MCmicroReverse}
197 > \end{equation}
198 > Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
199 > the Metropolis method.  Using Eq.~\ref{introEq:MCpi},
200 > Eq.~\ref{introEq:MCmicroReverse} becomes
201 > \begin{equation}
202 > \frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
203 >        \frac{\rho_n}{\rho_m}
204 > \label{introEq:MCmicro2}
205 > \end{equation}
206 > For a Boltzmann limiting distribution,
207 > \begin{equation}
208 > \frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
209 >        = e^{-\beta \Delta \mathcal{U}}
210 > \label{introEq:MCmicro3}
211 > \end{equation}
212 > This allows the following set of acceptance rules to be defined:
213 > \begin{equation}
214 > \accMe(m \rightarrow n) = \begin{cases} 1 & \Delta \mathcal{U} \leq 0 \\ e^{-\beta \Delta \mathcal{U}} & \Delta \mathcal{U} > 0 \end{cases} \label{introEq:MCaccept}
215 > \end{equation}
216  
217 < constrained dynamics
217 > Using the acceptance criteria from Eq.~\ref{introEq:MCaccept}, the Metropolis method
218 > proceeds as follows:
219 > \begin{itemize}
220 > \item Generate an initial configuration $\mathbf{r}^N_m$ which has some finite probability in $\rho_{kT}(\mathbf{r}^N)$.
221 > \item Modify $\mathbf{r}^N_m$ to generate configuration $\mathbf{r}^N_n$.
222 > \item If configuration $n$ lowers the energy of the system, accept the move with unity ($\mathbf{r}^N_n$ becomes the new $\mathbf{r}^N_m$).  Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
223 > \item Accumulate the average for the configurational observable of interest.
224 > \item Repeat from step 2 until the average converges (a minimal code sketch follows below).
225 > \end{itemize}
226 > One important note is that the average is accumulated whether the move
227 > is accepted or not; this ensures proper weighting of the average.
228 > Using Eq.~\ref{introEq:Importance5} it becomes clear that the accumulated averages are
229 > the ensemble averages, as this method ensures that the limiting
230 > distribution is the Boltzmann distribution.
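>
> The sketch below (Python; the harmonic potential, step size, and
> reduced units are illustrative choices, not those used in this
> research) shows the loop described above, including the accumulation
> of the observable on every step:
> \begin{verbatim}
> import math, random
>
> def metropolis_average(U, observable, beta=1.0, n_steps=100000, dx=0.5):
>     x = 0.0                                    # initial configuration
>     total = 0.0
>     for _ in range(n_steps):
>         x_trial = x + random.uniform(-dx, dx)  # symmetric trial move
>         dU = U(x_trial) - U(x)
>         # Accept with unity if the energy drops, else with exp(-beta*dU).
>         if dU <= 0.0 or random.random() < math.exp(-beta * dU):
>             x = x_trial
>         total += observable(x)   # accumulate whether accepted or not
>     return total / n_steps
>
> # Example: <x^2> for U(x) = x^2/2 at beta = 1 should approach 1.
> print(metropolis_average(lambda x: 0.5 * x * x, lambda x: x * x))
> \end{verbatim}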
231 >
232 > \subsection{\label{introSec:MD}Molecular Dynamics Simulations}
233 >
234 > The main simulation tool used in this research is Molecular Dynamics.
235 > In Molecular Dynamics, the equations of motion for a system are
236 > integrated in order to obtain information about both the positions and
237 > momenta of a system, allowing the calculation of not only
238 > configurational observables, but momentum-dependent ones as well:
239 > diffusion constants, velocity autocorrelations, folding/unfolding
240 > events, etc.  Due to the principle of ergodicity, Eq.~\ref{fix}, the
241 > averages of these observables over the time period of the simulation
242 > are taken to be the ensemble averages for the system.
243 >
244 > The choice of when to use Molecular Dynamics over Monte Carlo
245 > techniques is normally decided by the observables in which the
246 > researcher is interested.  If the observables depend on momenta in
247 > any fashion, then the only choice is Molecular Dynamics in some form.
248 > However, when the observable is dependent only on the configuration,
249 > then most of the time Monte Carlo techniques will be more efficient.
250 >
251 > The focus of research in the second half of this dissertation is
252 > centered on the dynamic properties of phospholipid bilayers,
253 > making molecular dynamics key in the simulation of those properties.
254 >
255 > \subsubsection{Molecular Dynamics Algorithm}
256 >
257 > To illustrate how the molecular dynamics technique is applied, the
258 > following sections will describe the sequence involved in a
259 > simulation.  Sec.~\ref{fix} deals with the initialization of a
260 > simulation.  Sec.~\ref{fix} discusses issues involved with the
261 > calculation of the forces.  Sec.~\ref{fix} concludes the algorithm
262 > discussion with the integration of the equations of motion. \cite{fix}
263 >
264 > \subsubsection{Initialization}
265 >
266 > When selecting the initial configuration for the simulation, it is
267 > important to consider what dynamics one is hoping to observe.
268 > Ch.~\ref{fix} deals with the formation and equilibrium dynamics of
269 > phospholipid membranes.  Therefore, in these simulations initial
270 > positions were selected that in some cases dispersed the lipids in
271 > water, and in other cases structured the lipids into preformed
272 > bilayers.  Important considerations at this stage of the simulation are:
273 > \begin{itemize}
274 > \item There are no major overlaps of molecular or atomic orbitals.
275 > \item Velocities are chosen so as not to give the system a non-zero total momentum or angular momentum.
276 > \item It is also sometimes desirable to select the velocities to correctly sample the target temperature (a velocity-initialization sketch follows the next paragraph).
277 > \end{itemize}
278 >
279 > The first point is important due to the amount of potential energy
280 > generated by having two particles too close together.  If overlap
281 > occurs, the first evaluation of forces will return numbers so large as
282 > to render the numerical integration of the motion meaningless.  The
283 > second consideration keeps the system from drifting or rotating as a
284 > whole.  This arises from the fact that most simulations are of systems
285 > in equilibrium in the absence of outside forces.  Therefore any net
286 > movement would be unphysical and an artifact of the simulation method
287 > used.  The final point addresses the selection of the magnitude of the
288 > initial velocities.  For many simulations it is convenient to use
289 > this opportunity to scale the amount of kinetic energy to reflect the
290 > desired thermal distribution of the system.  However, it must be noted
291 > that most systems will require further velocity rescaling after the
292 > first few initial simulation steps due to either loss or gain of
293 > kinetic energy from energy stored in potential degrees of freedom.
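>
> A minimal sketch of such an initialization (Python, reduced units with
> $k_B = 1$; angular momentum removal is omitted to keep the example
> short) follows:
> \begin{verbatim}
> import random
>
> def init_velocities(n, mass, temperature, kb=1.0):
>     # Draw each component from the Maxwell-Boltzmann (Gaussian) form.
>     sigma = (kb * temperature / mass) ** 0.5
>     v = [[random.gauss(0.0, sigma) for _ in range(3)] for _ in range(n)]
>     # Subtract the mean velocity so the total momentum is zero.
>     for k in range(3):
>         mean_k = sum(vi[k] for vi in v) / n
>         for vi in v:
>             vi[k] -= mean_k
>     # Rescale so the kinetic energy matches the target temperature.
>     ke = 0.5 * mass * sum(vi[0]**2 + vi[1]**2 + vi[2]**2 for vi in v)
>     scale = (1.5 * n * kb * temperature / ke) ** 0.5
>     return [[scale * c for c in vi] for vi in v]
> \end{verbatim}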
294  
295 + \subsubsection{Force Evaluation}
296 +
297 + The evaluation of forces is the most computationally expensive portion
298 + of a given molecular dynamics simulation.  This is due entirely to the
299 + evaluation of long range forces in a simulation, typically pair-wise.
300 + These forces are most commonly the van der Waals force, and sometimes
301 + Coulombic forces as well.  For a pair-wise force, there are
302 + $\frac{n(n-1)}{2}$ pairs to be evaluated, where $n$ is the number of
303 + particles in the system.  This leads to calculations scaling as
304 + $\mathcal{O}(n^2)$, making large simulations prohibitive in the
305 + absence of any computation-saving techniques.
306 +
307 + Another consideration one must resolve is that in a given simulation
308 + a disproportionate number of the particles will feel the effects of
309 + the surface. \cite{fix} For a cubic system of 1000 particles arranged
310 + in a $10 \times 10 \times 10$ cube, 488 particles will be exposed to
311 + the surface.  Unless one is simulating an isolated particle group in
312 + a vacuum, the behavior of the system will be far from the desired bulk
313 + characteristics.  To offset this, simulations employ periodic
314 + boundary conditions. \cite{fix}
315 +
316 + The technique involves the use of an algorithm that replicates the
317 + simulation box on an infinite lattice in Cartesian space.  Any given
318 + particle leaving the simulation box on one side will have an image of
319 + itself enter on the opposite side (see Fig.~\ref{fix}).
320 + \begin{equation}
321 + \mathbf{r}_{\text{image}} = \mathbf{r} + L\,\mathbf{n}
322 + \end{equation}
+ Where $L$ is the edge length of the cubic simulation box and
+ $\mathbf{n}$ is any vector of integers.
323 + In addition, this ensures that any given particle pair has an image,
324 + real or periodic, within $\frac{L}{2}$ of each other along any given
325 + axis.  A discussion of the method used to calculate the periodic image
+ can be found in Sec.~\ref{fix}.
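+
+ In practice the periodic image is most often obtained through the
+ minimum image convention; a short Python sketch (cubic box assumed)
+ is:
+ \begin{verbatim}
+ def minimum_image(dr, box_length):
+     # Wrap one component of a pair separation into [-L/2, L/2].
+     return dr - box_length * round(dr / box_length)
+ \end{verbatim}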
326 +
327 + Returning to the topic of the computational scale of the force
328 + evaluation, the use of periodic boundary conditions requires that a
329 + cutoff radius be employed.  Using a cutoff radius improves the
330 + efficiency of the force evaluation, as particles farther apart than a
331 + predetermined distance, $r_{\text{cut}}$, are not included in the
332 + calculation. \cite{fix} In a simulation with periodic images,
333 + $r_{\text{cut}}$ has a maximum value of $\frac{L}{2}$.  Fig.~\ref{fix}
334 + illustrates how using an $r_{\text{cut}}$ larger than this value, or
335 + in the extreme limit of no cutoff at all, causes the corners of the
336 + simulation box to be unequally weighted due to the lack of particle
337 + images in the $x$, $y$, or $z$ directions past a distance of $\frac{L}{2}$.
338 +
339 + With the use of an $r_{\text{cut}}$, however, comes a discontinuity in
340 + the potential energy curve (Fig.~\ref{fix}). To remove this
341 + discontinuity, one calculates the potential energy at $r_{\text{cut}}$
342 + and subtracts that value from the potential.  This causes the
343 + potential to go to zero at the cutoff radius, aiding conservation of
344 + energy when integrating the Newtonian equations of motion.
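+
+ A sketch of this truncation and shifting, using the Lennard-Jones form
+ of the van der Waals interaction as an illustrative pair potential,
+ follows:
+ \begin{verbatim}
+ def lj(r, epsilon=1.0, sigma=1.0):
+     # Lennard-Jones pair potential.
+     sr6 = (sigma / r) ** 6
+     return 4.0 * epsilon * (sr6 * sr6 - sr6)
+
+ def lj_shifted(r, r_cut, epsilon=1.0, sigma=1.0):
+     # Truncated and shifted: zero at and beyond the cutoff radius.
+     if r >= r_cut:
+         return 0.0
+     return lj(r, epsilon, sigma) - lj(r_cut, epsilon, sigma)
+ \end{verbatim}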
345 +
346 + The second main simplification used in this research is the Verlet
347 + neighbor list. \cite{allen87:csl} In the Verlet method, one generates
348 + a list of all neighbor atoms, $j$, surrounding atom $i$ within some
349 + cutoff $r_{\text{list}}$, where $r_{\text{list}}>r_{\text{cut}}$.
350 + This list is created the first time forces are evaluated; on
351 + subsequent force evaluations, pair contributions are calculated only
352 + over the neighbor lists.  The lists are updated whenever any given
353 + particle in the system moves farther than half of
354 + $r_{\text{list}}-r_{\text{cut}}$, accounting for the possibility that
355 + a particle has left or joined a neighbor list.
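+
+ A sketch of the list construction (Python, reusing the
+ \texttt{minimum\_image} sketch above; the simple double loop is itself
+ $\mathcal{O}(n^2)$, but its cost is amortized over many force
+ evaluations) follows:
+ \begin{verbatim}
+ def build_neighbor_list(positions, box_length, r_list):
+     n = len(positions)
+     neighbors = [[] for _ in range(n)]
+     for i in range(n):
+         for j in range(i + 1, n):
+             d2 = sum(minimum_image(positions[i][k] - positions[j][k],
+                                    box_length) ** 2 for k in range(3))
+             if d2 < r_list * r_list:
+                 neighbors[i].append(j)   # store each pair once
+     return neighbors
+ \end{verbatim}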
356 +
357 + \subsection{\label{introSec:MDintegrate}Integration of the Equations of Motion}
358 +
359 + A starting point for the discussion of molecular dynamics integrators
360 + is the Verlet algorithm. \cite{Frenkel1996} It begins with a Taylor
361 + expansion of position in time:
362 + \begin{equation}
363 + \mathbf{r}(t+\Delta t) = \mathbf{r}(t) + \mathbf{v}(t)\Delta t + \frac{\mathbf{f}(t)}{2m}\Delta t^2 + \frac{\Delta t^3}{3!}\frac{d^3\mathbf{r}}{dt^3}(t) + \mathcal{O}(\Delta t^4)
364 + \label{introEq:verletForward}
365 + \end{equation}
366 + As well as,
367 + \begin{equation}
368 + \mathbf{r}(t-\Delta t) = \mathbf{r}(t) - \mathbf{v}(t)\Delta t + \frac{\mathbf{f}(t)}{2m}\Delta t^2 - \frac{\Delta t^3}{3!}\frac{d^3\mathbf{r}}{dt^3}(t) + \mathcal{O}(\Delta t^4)
369 + \label{introEq:verletBack}
370 + \end{equation}
371 + Adding together Eq.~\ref{introEq:verletForward} and
372 + Eq.~\ref{introEq:verletBack} results in,
373 + \begin{equation}
374 + \mathbf{r}(t+\Delta t) + \mathbf{r}(t-\Delta t) = 2\mathbf{r}(t) + \frac{\mathbf{f}(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
375 + \label{introEq:verletSum}
376 + \end{equation}
377 + Or equivalently,
378 + \begin{equation}
379 + \mathbf{r}(t+\Delta t) \approx 2\mathbf{r}(t) - \mathbf{r}(t-\Delta t) + \frac{\mathbf{f}(t)}{m}\Delta t^2
380 + \label{introEq:verletFinal}
381 + \end{equation}
382 + This estimate of the new positions contains an error on the
383 + order of $\Delta t^4$.
384 +
385 + In practice, however, the simulations in this research were integrated
386 + with a velocity reformulation of the Verlet method. \cite{allen87:csl}
387 + \begin{equation}
388 + \mathbf{r}(t+\Delta t) = \mathbf{r}(t) + \mathbf{v}(t)\Delta t + \frac{\mathbf{f}(t)}{2m}\Delta t^2
389 + \label{introEq:MDvelVerletPos}
390 + \end{equation}
391 + \begin{equation}
392 + \mathbf{v}(t+\Delta t) = \mathbf{v}(t) + \frac{\Delta t}{2m}\left[ \mathbf{f}(t) + \mathbf{f}(t+\Delta t) \right]
393 + \label{introEq:MDvelVerletVel}
394 + \end{equation}
395 + The original Verlet algorithm can be regained by substituting the
396 + velocity back into Eq.~\ref{introEq:MDvelVerletPos}.  The Verlet
397 + formulations are chosen in this research because the algorithms have
398 + very little long-term drift in energy conservation.  Energy
399 + conservation in a molecular dynamics simulation is of extreme
400 + importance, as it is a measure of how closely one is following the
401 + ``true'' trajectory with the finite integration scheme.  An exact
402 + solution to the integration will conserve area in phase space, as well
403 + as be reversible in time; that is, the trajectory integrated forward
404 + or backwards will exactly match itself.  Having a finite algorithm
405 + that both conserves area in phase space and is time reversible
406 + therefore increases, but does not guarantee, the ``correctness'' of the
407 + integrated trajectory.
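+
+ A sketch of one velocity Verlet step (Python, a single degree of
+ freedom for brevity) makes the structure of
+ Eq.~\ref{introEq:MDvelVerletPos} and Eq.~\ref{introEq:MDvelVerletVel}
+ explicit:
+ \begin{verbatim}
+ def velocity_verlet_step(r, v, f, force_fn, m, dt):
+     v_half = v + 0.5 * dt * f / m          # half-step velocity move
+     r_new = r + dt * v_half                # full-step position move
+     f_new = force_fn(r_new)                # forces at the new position
+     v_new = v_half + 0.5 * dt * f_new / m  # complete the velocity move
+     return r_new, v_new, f_new
+
+ # Example: a harmonic oscillator with force = -r and m = 1.
+ r, v, f = 1.0, 0.0, -1.0
+ for _ in range(1000):
+     r, v, f = velocity_verlet_step(r, v, f, lambda x: -x, 1.0, 0.01)
+ \end{verbatim}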
408 +
409 + It can be shown\cite{Frenkel1996} that although the Verlet algorithm
410 + does not rigorously preserve the actual Hamiltonian, it does preserve
411 + a pseudo-Hamiltonian which shadows the real one in phase space.  This
412 + pseudo-Hamiltonian is provably area-conserving as well as time
413 + reversible.  The fact that it shadows the true Hamiltonian in phase
414 + space is acceptable in actual simulations as one is interested in the
415 + ensemble average of the observable being measured.  From the ergodic
416 + hypothesis (Sec.~\ref{introSec:statThermo}), it is known that the time
417 + average will match the ensemble average, so two similar
418 + trajectories in phase space should give matching statistical averages.
419 +
420 + \subsection{\label{introSec:MDfurther}Further Considerations}
421 + In the simulations presented in this research, a few additional
422 + parameters are needed to describe the motions.  The simulations
423 + involving water and phospholipids in Ch.~\ref{chaptLipids} are
424 + required to integrate the equations of motion for dipoles on atoms.
425 + This requires that an additional three parameters be specified for
426 + each dipole atom: $\phi$, $\theta$, and $\psi$.  These three angles
427 + are taken to be the Euler angles, where $\phi$ is a rotation about the
428 + $z$-axis, $\theta$ is a rotation about the new $x$-axis, and
429 + $\psi$ is a final rotation about the new $z$-axis (see
430 + Fig.~\ref{introFig:eulerAngles}).  This sequence of rotations can be
431 + accumulated into a single $3 \times 3$ matrix, $\mathbf{A}$,
432 + defined as follows:
433 + \begin{equation}
434 + \mathbf{A} = \begin{pmatrix} \cos\phi\cos\psi - \sin\phi\cos\theta\sin\psi & \sin\phi\cos\psi + \cos\phi\cos\theta\sin\psi & \sin\theta\sin\psi \\ -\cos\phi\sin\psi - \sin\phi\cos\theta\cos\psi & -\sin\phi\sin\psi + \cos\phi\cos\theta\cos\psi & \sin\theta\cos\psi \\ \sin\theta\sin\phi & -\sin\theta\cos\phi & \cos\theta \end{pmatrix}
435 + \label{introEq:EulerRotMat}
436 + \end{equation}
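+
+ A short Python sketch constructing $\mathbf{A}$ in this $z$-$x$-$z$
+ convention from the three angles follows:
+ \begin{verbatim}
+ import math
+
+ def euler_rotation_matrix(phi, theta, psi):
+     cp, sp = math.cos(phi), math.sin(phi)
+     ct, st = math.cos(theta), math.sin(theta)
+     cs, ss = math.cos(psi), math.sin(psi)
+     # Rows follow Eq. (introEq:EulerRotMat).
+     return [[ cp*cs - sp*ct*ss,  sp*cs + cp*ct*ss, st*ss],
+             [-cp*ss - sp*ct*cs, -sp*ss + cp*ct*cs, st*cs],
+             [ st*sp,            -st*cp,            ct   ]]
+ \end{verbatim}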
437 +
438 + The equations of motion for the Euler angles can be written down
439 + as\cite{allen87:csl}
440 + \begin{equation}
441 + \dot{\phi} = -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} + \omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} + \omega^s_z
442 + \label{introEq:MDeulerPhi}
443 + \end{equation}
+ \begin{equation}
+ \dot{\theta} = \omega^s_x \cos\phi + \omega^s_y \sin\phi
+ \label{introEq:MDeulerTheta}
+ \end{equation}
+ \begin{equation}
+ \dot{\psi} = \omega^s_x \frac{\sin\phi}{\sin\theta} - \omega^s_y \frac{\cos\phi}{\sin\theta}
+ \label{introEq:MDeulerPsi}
+ \end{equation}
444 + Where $\omega^s_i$ is the angular velocity in the lab-space frame
445 + along Cartesian coordinate $i$.  However, a difficulty arises when
446 + attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
447 + Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
448 + both equations means there is a non-physical instability present when
449 + $\theta$ is 0 or $\pi$.
450 +
451 + To correct for this, the simulations integrate the rotation matrix,
452 + $\mathbf{A}$, directly, thus avoiding the instability.
453 + This method was proposed by Dullweber
454 + \emph{et al.},\cite{Dullweber:1997} and is presented in
455 + Sec.~\ref{introSec:MDsymplecticRot}.
456 +
457 + \subsubsection{\label{introSec:MDliouville}Liouville Propagator}
458 +
459 + Before discussing the integration of the rotation matrix, it is
460 + necessary to understand the construction of a ``good'' integration
461 + scheme.  It has been previously
462 + discussed (Sec.~\ref{introSec:MDintegrate}) how it is desirable for an
463 + integrator to be symplectic and time reversible.  The following is an
464 + outline of the Trotter factorization of the Liouville Propagator as a
465 + scheme for generating symplectic integrators.\cite{Tuckerman:1992}
466 +
467 + For a system with $f$ degrees of freedom the Liouville operator can be
468 + defined as,
469 + \begin{equation}
470 + iL = \sum_{j=1}^{f} \left[ \dot{r}_j \frac{\partial}{\partial r_j} + f_j \frac{\partial}{\partial p_j} \right]
471 + \label{introEq:LiouvilleOperator}
472 + \end{equation}
473 + Here, $r_j$ and $p_j$ are the position and conjugate momenta of a
474 + degree of freedom, and $f_j$ is the force on that degree of freedom.
475 + $\Gamma$ is defined as the set of all positions and conjugate momenta,
476 + $\{r_j,p_j\}$, and the propagator, $U(t)$, is defined
477 + \begin{equation}
478 + U(t) = e^{iLt}
479 + \label{introEq:Lpropagator}
480 + \end{equation}
481 + This allows the specification of $\Gamma$ at any time $t$ as
482 + \begin{equation}
483 + \Gamma(t) = U(t)\, \Gamma(0)
484 + \label{introEq:Lp2}
485 + \end{equation}
486 + It is important to note that $U(t)$ is a unitary operator, meaning
487 + \begin{equation}
488 + U(-t)=U^{-1}(t)
489 + \label{introEq:Lp3}
490 + \end{equation}
491 +
492 + Decomposing $iL$ into two parts, $iL_1$ and $iL_2$, one can use the
493 + Trotter theorem to yield
494 + \begin{equation}
495 + e^{iLt} = \left[ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\, e^{iL_1\frac{\Delta t}{2}} \right]^P + \mathcal{O}\!\left(\frac{t^3}{P^2}\right)
496 + \label{introEq:Lp4}
497 + \end{equation}
498 + Where $\Delta t = \frac{t}{P}$.
499 + With this, a discrete time operator $G(\Delta t)$ can be defined:
500 + \begin{equation}
501 + G(\Delta t) = U_1\!\left(\tfrac{\Delta t}{2}\right) U_2(\Delta t)\, U_1\!\left(\tfrac{\Delta t}{2}\right), \qquad U_j(t) = e^{iL_j t}
502 + \label{introEq:Lp5}
503 + \end{equation}
504 + Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
505 + unitary, meaning an integrator based on this factorization will be
506 + reversible in time.
507 +
508 + As an example, consider the following decomposition of $iL$:
509 + \begin{equation}
510 + iL_1 = f(q) \frac{\partial}{\partial p}, \qquad iL_2 = \frac{p}{m} \frac{\partial}{\partial q}
511 + \label{introEq:Lp6}
512 + \end{equation}
513 + Operating $G(\Delta t)$ on $\Gamma(0)$, and utilizing the operator property
514 + \begin{equation}
515 + e^{c \frac{\partial}{\partial q}}\, g(q) = g(q+c)
516 + \label{introEq:Lp7}
517 + \end{equation}
518 + Where $c$ is independent of $q$, one obtains the following:
519 + \begin{equation}
520 + p\bigl(\tfrac{\Delta t}{2}\bigr) = p(0) + \tfrac{\Delta t}{2} f[q(0)], \quad q(\Delta t) = q(0) + \Delta t\, \frac{p(\Delta t/2)}{m}, \quad p(\Delta t) = p\bigl(\tfrac{\Delta t}{2}\bigr) + \tfrac{\Delta t}{2} f[q(\Delta t)]
521 + \label{introEq:Lp8}
522 + \end{equation}
523 + Or written another way,
524 + \begin{equation}
525 + q(\Delta t) = q(0) + \Delta t\, \frac{p(0)}{m} + \frac{\Delta t^2}{2m} f[q(0)], \qquad p(\Delta t) = p(0) + \frac{\Delta t}{2} \bigl\{ f[q(0)] + f[q(\Delta t)] \bigr\}
526 + \label{introEq:Lp9}
527 + \end{equation}
528 + This is the velocity Verlet formulation presented in
529 + Sec.~\ref{introSec:MDintegrate}.  Because this integration scheme is
530 + composed of unitary propagators, it is symplectic, and therefore area
531 + preserving in phase space.  From the preceding factorization, one can
532 + see that the integration of the equations of motion would follow:
533 + \begin{enumerate}
534 + \item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
535 +
536 + \item Use the half step velocities to move positions one whole step, $\Delta t$.
537 +
538 + \item Evaluate the forces at the new positions, $\mathbf{r}(\Delta t)$, and use the new forces to complete the velocity move.
539 +
540 + \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values (a numerical check of the scheme's reversibility follows this list).
541 + \end{enumerate}
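+
+ The time reversibility of this factorized scheme can be checked
+ numerically with the \texttt{velocity\_verlet\_step} sketch given
+ earlier: integrating forward, negating the velocity, and integrating
+ the same number of steps returns the starting point to within
+ roundoff.
+ \begin{verbatim}
+ r0, v0 = 1.0, 0.3
+ r, v, f = r0, v0, -r0            # harmonic force f = -r, m = 1
+ for _ in range(500):
+     r, v, f = velocity_verlet_step(r, v, f, lambda x: -x, 1.0, 0.01)
+ v = -v                           # reverse the momenta
+ for _ in range(500):
+     r, v, f = velocity_verlet_step(r, v, f, lambda x: -x, 1.0, 0.01)
+ print(abs(r - r0), abs(-v - v0)) # both vanish up to roundoff
+ \end{verbatim}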
542 +
543 + \subsubsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}
544 +
545 + Based on the factorization from the previous section,
546 + Dullweber \emph{et al.}\cite{Dullweber:1997} proposed a scheme for the
547 + symplectic propagation of the rotation matrix, $\mathbf{A}$, as an
548 + alternative method for the integration of orientational degrees of
549 + freedom. The method starts with a straightforward splitting of the
550 + Liouville operator:
551 + \begin{equation}
552 + eq here
553 + \label{introEq:SR1}
554 + \end{equation}
555 + Where $\boldsymbol{\tau}(\mathbf{A})$ are the torques on the system
556 + due to the configuration, and $\boldsymbol{\pi}$ are the conjugate
557 + angular momenta of the system. The propagator, $G(\Delta t)$, becomes
558 + \begin{equation}
559 + eq here
560 + \label{introEq:SR2}
561 + \end{equation}
562 + Propagation of the linear and angular momenta follows as in the Verlet
563 + scheme.  The propagation of positions also follows the Verlet scheme
564 + with the addition of a further symplectic splitting of the rotation
565 + matrix propagation, $\mathcal{G}_{\text{rot}}(\Delta t)$.
566 + \begin{equation}
567 + \mathcal{G}_{\text{rot}}(\Delta t) = \mathcal{G}_x\!\left(\tfrac{\Delta t}{2}\right) \mathcal{G}_y\!\left(\tfrac{\Delta t}{2}\right) \mathcal{G}_z(\Delta t)\, \mathcal{G}_y\!\left(\tfrac{\Delta t}{2}\right) \mathcal{G}_x\!\left(\tfrac{\Delta t}{2}\right)
568 + \label{introEq:SR3}
569 + \end{equation}
570 + Where $\mathcal{G}_j$ is a unitary rotation of $\mathbf{A}$ and
571 + $\boldsymbol{\pi}$ about each axis $j$.  As all propagations are now
572 + unitary and symplectic, the entire integration scheme is also
573 + symplectic and time reversible.
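+
+ A sketch of one such axis rotation $\mathcal{G}_j$ follows (Python;
+ the sign conventions and the principal-moment array are illustrative
+ only, as conventions differ between references):
+ \begin{verbatim}
+ import math
+
+ def rotate_about_axis(A, pi, j, dt, inertia):
+     # Free rotation about body axis j advances A and mixes the other
+     # two components of the angular momentum pi.
+     angle = dt * pi[j] / inertia[j]
+     c, s = math.cos(angle), math.sin(angle)
+     k, l = (j + 1) % 3, (j + 2) % 3
+     pi[k], pi[l] = c * pi[k] + s * pi[l], -s * pi[k] + c * pi[l]
+     for col in range(3):
+         A[k][col], A[l][col] = (c * A[k][col] + s * A[l][col],
+                                 -s * A[k][col] + c * A[l][col])
+ \end{verbatim}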
574 +
575   \section{\label{introSec:chapterLayout}Chapter Layout}
576  
577   \subsection{\label{introSec:RSA}Random Sequential Adsorption}
578  
579   \subsection{\label{introSec:OOPSE}The OOPSE Simulation Package}
580  
581 < \subsection{\label{introSec:bilayers}A Mesoscale Model for Phospholipid Bilayers}
581 > \subsection{\label{introSec:bilayers}A Mesoscale Model for
582 > Phospholipid Bilayers}

Diff Legend

Removed lines
+ Added lines
< Changed lines
> Changed lines