root/group/trunk/tengDissertation/Introduction.tex

Comparing trunk/tengDissertation/Introduction.tex (file contents):
Revision 2907 by tim, Thu Jun 29 16:57:37 2006 UTC vs.
Revision 2941 by tim, Mon Jul 17 20:01:05 2006 UTC

# Line 67 | Line 67 | All of these conserved quantities are important factor
67   \begin{equation}E = T + V. \label{introEquation:energyConservation}
68   \end{equation}
69   All of these conserved quantities are important factors to determine
70 < the quality of numerical integration schemes for rigid bodies
71 < \cite{Dullweber1997}.
70 > the quality of numerical integration schemes for rigid
71 > bodies.\cite{Dullweber1997}
72  
73   \subsection{\label{introSection:lagrangian}Lagrangian Mechanics}
74  
# Line 178 | Line 178 | equation of motion. Due to their symmetrical formula,
178   where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
179   Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
180   equation of motion. Due to their symmetrical formula, they are also
181 < known as the canonical equations of motions \cite{Goldstein2001}.
181 > known as the canonical equations of motion.\cite{Goldstein2001}
182  
183   An important difference between the Lagrangian approach and the
184   Hamiltonian approach is that the Lagrangian is considered to be a
# Line 188 | Line 188 | coordinate and its time derivative as independent vari
188   Hamiltonian Mechanics is more appropriate for application to
189   statistical mechanics and quantum mechanics, since it treats the
190   coordinates and their conjugate momenta as independent variables and it
191 < only works with 1st-order differential equations\cite{Marion1990}.
191 > only works with 1st-order differential equations.\cite{Marion1990}
192   In Newtonian Mechanics, a system described by conservative forces
193   conserves the total energy
194   (Eq.~\ref{introEquation:energyConservation}). It follows that
# Line 208 | Line 208 | The following section will give a brief introduction t
208   The thermodynamic behaviors and properties of Molecular Dynamics
209   simulations are governed by the principles of Statistical Mechanics.
210   The following section will give a brief introduction to some of the
211 < Statistical Mechanics concepts and theorem presented in this
211 > Statistical Mechanics concepts and theorems presented in this
212   dissertation.
213  
214   \subsection{\label{introSection:ensemble}Phase Space and Ensemble}
# Line 283 | Line 283 | space of the system,
283   \label{introEquation:ensembelAverage}
284   \end{equation}
285  
286 There are several different types of ensembles with different
287 statistical characteristics. As a function of macroscopic
288 parameters such as temperature, the partition function can be used
289 to describe the statistical properties of a system in thermodynamic
290 equilibrium. As an ensemble of systems, each of which is thermally
291 isolated and conserves energy, the microcanonical ensemble (NVE)
292 has the partition function
293 \begin{equation}
294 \Omega (N,V,E) = e^{\beta TS}. \label{introEquation:NVEPartition}
295 \end{equation}
296 A canonical ensemble (NVT) is an ensemble of systems, each of which
297 can share its energy with a large heat reservoir. The distribution
298 of the total energy amongst the possible dynamical states is given
299 by the partition function,
300 \begin{equation}
301 \Omega (N,V,T) = e^{ - \beta A}.
302 \label{introEquation:NVTPartition}
303 \end{equation}
304 Here, $A$ is the Helmholtz free energy, which is defined as $ A = U -
305 TS$. Since most experiments are carried out under constant pressure
306 conditions, the isothermal-isobaric ensemble (NPT) plays a very
307 important role in molecular simulations. The isothermal-isobaric
308 ensemble allows the system to exchange energy with a heat bath of
309 temperature $T$ and to change the volume as well. Its partition
310 function is given as
311 \begin{equation}
312 \Delta (N,P,T) = e^{ - \beta G}.
313 \label{introEquation:NPTPartition}
314 \end{equation}
315 Here, $G = U - TS + PV$ is the Gibbs free energy.
316
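As a toy numerical illustration of these relations (not taken from this
dissertation), the sketch below computes a canonical partition function
$Z = \sum_i e^{-\beta E_i}$ and the corresponding Helmholtz free energy
$A = -k_B T\ln Z$ for an assumed three-level system; the level energies
and temperature are arbitrary placeholders.
\begin{verbatim}
import numpy as np

# Toy illustration: canonical partition function and Helmholtz free
# energy for an assumed three-level system.
k_B = 1.380649e-23                            # Boltzmann constant, J/K
T = 300.0                                     # temperature, K (assumed)
energies = np.array([0.0, 1.0e-21, 2.5e-21])  # hypothetical level energies, J

beta = 1.0 / (k_B * T)
Z = np.sum(np.exp(-beta * energies))          # partition function
A = -k_B * T * np.log(Z)                      # Helmholtz free energy
populations = np.exp(-beta * energies) / Z    # Boltzmann populations
print(A, populations)
\end{verbatim}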
286   \subsection{\label{introSection:liouville}Liouville's theorem}
287  
288   Liouville's theorem is the foundation on which statistical mechanics
# Line 403 | Line 372 | $F$ and $G$ of the coordinates and momenta of a system
372   Liouville's theorem can be expressed in a variety of different forms
373   which are convenient within different contexts. For any two functions
374   $F$ and $G$ of the coordinates and momenta of a system, the Poisson
375 < bracket ${F, G}$ is defined as
375 > bracket $\{F,G\}$ is defined as
376   \begin{equation}
377   \left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial
378   F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} -
# Line 447 | Line 416 | average. It states that the time average and average o
416   many-body system in Statistical Mechanics. Fortunately, the Ergodic
417   Hypothesis makes a connection between time average and the ensemble
418   average. It states that the time average and average over the
419 < statistical ensemble are identical \cite{Frenkel1996, Leach2001}:
419 > statistical ensemble are identical:\cite{Frenkel1996, Leach2001}
420   \begin{equation}
421   \langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
422   \frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt = \int\limits_\Gamma
# Line 465 | Line 434 | Sec.~\ref{introSection:molecularDynamics} will be the
434   utilized. Or if the system lends itself to a time averaging
435   approach, the Molecular Dynamics techniques in
436   Sec.~\ref{introSection:molecularDynamics} will be the best
437 < choice\cite{Frenkel1996}.
437 > choice.\cite{Frenkel1996}
438  
439   \section{\label{introSection:geometricIntegratos}Geometric Integrators}
440   A variety of numerical integrators have been proposed to simulate
441   the motions of atoms in MD simulation. They usually begin with
442 < initial conditionals and move the objects in the direction governed
443 < by the differential equations. However, most of them ignore the
444 < hidden physical laws contained within the equations. Since 1990,
445 < geometric integrators, which preserve various phase-flow invariants
446 < such as symplectic structure, volume and time reversal symmetry,
447 < were developed to address this issue\cite{Dullweber1997,
448 < McLachlan1998, Leimkuhler1999}. The velocity Verlet method, which
449 < happens to be a simple example of symplectic integrator, continues
450 < to gain popularity in the molecular dynamics community. This fact
451 < can be partly explained by its geometric nature.
442 > initial conditions and move the objects in the direction governed by
443 > the differential equations. However, most of them ignore the hidden
444 > physical laws contained within the equations. Since 1990, geometric
445 > integrators, which preserve various phase-flow invariants such as
446 > symplectic structure, volume and time reversal symmetry, were
447 > developed to address this issue.\cite{Dullweber1997, McLachlan1998,
448 > Leimkuhler1999} The velocity Verlet method, which happens to be a
449 > simple example of a symplectic integrator, continues to gain
450 > popularity in the molecular dynamics community. This fact can be
451 > partly explained by its geometric nature.
452  
453   \subsection{\label{introSection:symplecticManifold}Symplectic Manifolds}
454   A \emph{manifold} is an abstract mathematical space. It looks
# Line 488 | Line 457 | viewed as a whole. A \emph{differentiable manifold} (a
457   surface of Earth. It seems to be flat locally, but it is round if
458   viewed as a whole. A \emph{differentiable manifold} (also known as
459   \emph{smooth manifold}) is a manifold on which it is possible to
460 < apply calculus\cite{Hirsch1997}. A \emph{symplectic manifold} is
460 > apply calculus.\cite{Hirsch1997} A \emph{symplectic manifold} is
461   defined as a pair $(M, \omega)$ which consists of a
462 < \emph{differentiable manifold} $M$ and a close, non-degenerated,
462 > \emph{differentiable manifold} $M$ and a closed, non-degenerate,
463   bilinear symplectic form, $\omega$. A symplectic form on a vector
464   space $V$ is a function $\omega(x, y)$ which satisfies
465   $\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+
466   \lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and
467 < $\omega(x, x) = 0$\cite{McDuff1998}. The cross product operation in
467 > $\omega(x, x) = 0$.\cite{McDuff1998} The cross product operation in
468   a vector field is an example of a symplectic form. One of the
469   motivations to study \emph{symplectic manifolds} in Hamiltonian
470   Mechanics is that a symplectic manifold can represent all possible
471   configurations of the system and the phase space of the system can
472 < be described by it's cotangent bundle\cite{Jost2002}. Every
472 > be described by its cotangent bundle.\cite{Jost2002} Every
473   symplectic manifold is even dimensional. For instance, in Hamilton's
474   equations, coordinates and momenta always appear in pairs.
475  
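As a standard concrete example (included here only for illustration), the
canonical symplectic form on the phase space $R^{2n}$ with coordinates
$(q_1 , \ldots ,q_n ,p_1 , \ldots ,p_n )$ can be written as
\[
\omega (x,y) = x^T Jy,\qquad J = \left( {\begin{array}{cc}
   0 & {I_n }  \\
   { - I_n } & 0  \\
\end{array}} \right),
\]
or equivalently $\omega  = \sum\nolimits_{i = 1}^n {dq_i  \wedge dp_i }$,
which is easily checked to satisfy the bilinearity, antisymmetry and
$\omega (x,x) = 0$ conditions listed above.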
# Line 510 | Line 479 | For an ordinary differential system defined as
479   \begin{equation}
480   \dot x = f(x)
481   \end{equation}
482 < where $x = x(q,p)^T$, this system is a canonical Hamiltonian, if
482 > where $x = x(q,p)$, this system is a canonical Hamiltonian system if
483   $f(x) = J\nabla _x H(x)$. Here, $H = H (q, p)$ is the Hamiltonian
484   function and $J$ is the skew-symmetric matrix
485   \begin{equation}
# Line 527 | Line 496 | called a \emph{Hamiltonian vector field}. Another gene
496   \label{introEquation:compactHamiltonian}
497   \end{equation}In this case, $f$ is
498   called a \emph{Hamiltonian vector field}. Another generalization of
499 < Hamiltonian dynamics is Poisson Dynamics\cite{Olver1986},
499 > Hamiltonian dynamics is Poisson Dynamics,\cite{Olver1986}
500   \begin{equation}
501   \dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian}
502   \end{equation}
503 < The most obvious change being that matrix $J$ now depends on $x$.
503 > where the most obvious change is that the matrix $J$ now depends on
504 > $x$.
505  
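As a quick numerical check of this structure (an illustration only; the
harmonic-oscillator Hamiltonian and all numbers below are assumptions, not
part of this work), one can build $f(x) = J\nabla _x H(x)$ for a
one-dimensional harmonic oscillator and verify that it reproduces
Hamilton's equations $\dot q = p/m$ and $\dot p =  - kq$:
\begin{verbatim}
import numpy as np

# Hamiltonian vector field f(x) = J grad H(x) for an assumed 1D
# harmonic oscillator, H(q, p) = p^2/(2m) + k q^2/2.
m, k = 1.0, 4.0
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])            # canonical skew-symmetric matrix

def grad_H(x):
    q, p = x
    return np.array([k * q, p / m])    # (dH/dq, dH/dp)

def f(x):
    return J @ grad_H(x)               # Hamiltonian vector field

q, p = 0.3, -1.2                       # arbitrary phase-space point
assert np.allclose(f(np.array([q, p])), [p / m, -k * q])
\end{verbatim}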
506   \subsection{\label{introSection:exactFlow}Exact Propagator}
507  
508   Let $x(t)$ be the exact solution of the ODE
509 < system,$\frac{{dx}}{{dt}} = f(x) \label{introEquation:ODE}$, we can
510 < define its exact propagator(solution) $\varphi_\tau$
509 > system,
510 > \begin{equation}
511 > \frac{{dx}}{{dt}} = f(x), \label{introEquation:ODE}
512 > \end{equation} we can
513 > define its exact propagator $\varphi_\tau$:
514   \[ x(t+\tau)
515   =\varphi_\tau(x(t))
516   \]
# Line 555 | Line 528 | Therefore, the exact propagator is self-adjoint,
528   \begin{equation}
529   \varphi _\tau   = \varphi _{ - \tau }^{ - 1}.
530   \end{equation}
531 < The exact propagator can also be written in terms of operator,
531 > The exact propagator can also be written as an operator,
532   \begin{equation}
533   \varphi _\tau  (x) = e^{\tau \sum\limits_i {f_i (x)\frac{\partial
534   }{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x).
# Line 609 | Line 582 | Using the chain rule, one may obtain,
582   \]
583   Using the chain rule, one may obtain,
584   \[
585 < \sum\limits_i {\frac{{dG}}{{dx_i }}} f_i (x) = f \dot \nabla G,
585 > \sum\limits_i {\frac{{dG}}{{dx_i }}} f_i (x) = f \cdot \nabla G,
586   \]
587   which is the condition for conserved quantities. For a canonical
588   Hamiltonian system, the time evolution of an arbitrary smooth
# Line 648 | Line 621 | variational methods can capture the decay of energy
621   Generating functions\cite{Channell1990} tend to lead to methods
622   which are cumbersome and difficult to use. In dissipative systems,
623   variational methods can capture the decay of energy
624 < accurately\cite{Kane2000}. Since they are geometrically unstable
624 > accurately.\cite{Kane2000} Since they are geometrically unstable
625   against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
626   methods are not suitable for Hamiltonian systems. Recently, various
627   high-order explicit Runge-Kutta methods \cite{Owren1992,Chen2003}
# Line 657 | Line 630 | accepted since they exploit natural decompositions of
630   methods, they have not attracted much attention from the Molecular
631   Dynamics community. Instead, splitting methods have been widely
632   accepted since they exploit natural decompositions of the
633 < system\cite{Tuckerman1992, McLachlan1998}.
633 > system.\cite{McLachlan1998, Tuckerman1992}
634  
635   \subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}
636  
# Line 701 | Line 674 | local errors proportional to $h^2$, while the Strang s
674   The Lie-Trotter
675   splitting (Eq.~\ref{introEquation:firstOrderSplitting}) introduces
676   local errors proportional to $h^2$, while the Strang splitting gives
677 < a second-order decomposition,
677 > a second-order decomposition,\cite{Strang1968}
678   \begin{equation}
679   \varphi _h  = \varphi _{1,h/2}  \circ \varphi _{2,h}  \circ \varphi
680   _{1,h/2} , \label{introEquation:secondOrderSplitting}
# Line 734 | Line 707 | known as \emph{velocity verlet} which is
707   \end{align}
708   where $F(t)$ is the force at time $t$. This integration scheme is
709   known as \emph{velocity verlet} which is
710 < symplectic(\ref{introEquation:SymplecticFlowComposition}),
711 < time-reversible(\ref{introEquation:timeReversible}) and
712 < volume-preserving (\ref{introEquation:volumePreserving}). These
710 > symplectic (Eq.~\ref{introEquation:SymplecticFlowComposition}),
711 > time-reversible (Eq.~\ref{introEquation:timeReversible}) and
712 > volume-preserving (Eq.~\ref{introEquation:volumePreserving}). These
713   geometric properties contribute to its long-time stability and its
714   popularity in the community. However, the most commonly used
715   velocity verlet integration scheme is written as below,
# Line 757 | Line 730 | the equations of motion would follow:
730  
731   \item Use the half step velocities to move positions one whole step, $\Delta t$.
732  
733 < \item Evaluate the forces at the new positions, $\mathbf{q}(\Delta t)$, and use the new forces to complete the velocity move.
733 > \item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.
734  
735   \item Repeat from step 1 with the new positions, velocities, and forces assuming the roles of the initial values (a short sketch of this loop follows the list).
736   \end{enumerate}
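For concreteness, the loop is sketched below for a single particle in an
assumed one-dimensional harmonic potential; the potential, mass and time
step are placeholders chosen purely for illustration and are not part of
the algorithm itself.
\begin{verbatim}
import numpy as np

# Minimal velocity Verlet loop for one particle in an assumed 1D
# harmonic potential U(q) = k q^2 / 2.
m, k, dt = 1.0, 1.0, 0.01
q, v = 1.0, 0.0
force = lambda q: -k * q

f = force(q)
for step in range(1000):
    v += 0.5 * dt * f / m          # move velocities a half step
    q += dt * v                    # move positions one whole step
    f = force(q)                   # forces at the new positions
    v += 0.5 * dt * f / m          # complete the velocity move

# The total energy drifts very little over long runs, reflecting the
# symplectic, time-reversible character of the scheme.
energy = 0.5 * m * v**2 + 0.5 * k * q**2
print(q, v, energy)
\end{verbatim}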
# Line 776 | Line 749 | q(\Delta t)} \right]. %
749  
750   \subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}
751  
752 < The Baker-Campbell-Hausdorff formula can be used to determine the
753 < local error of a splitting method in terms of the commutator of the
754 < operators(\ref{introEquation:exponentialOperator}) associated with
755 < the sub-propagator. For operators $hX$ and $hY$ which are associated
756 < with $\varphi_1(t)$ and $\varphi_2(t)$ respectively , we have
752 > The Baker-Campbell-Hausdorff formula\cite{Gilmore1974} can be used
753 > to determine the local error of a splitting method in terms of the
754 > commutator of the
755 > operators (Eq.~\ref{introEquation:exponentialOperator}) associated
756 > with the sub-propagators. For operators $hX$ and $hY$ which are
757 > associated with $\varphi_1(t)$ and $\varphi_2(t)$, respectively, we
758 > have
759   \begin{equation}
760   \exp (hX + hY) = \exp (hZ)
761   \end{equation}
# Line 810 | Line 785 | order methods. Yoshida proposed an elegant way to comp
785   \end{equation}
786   A careful choice of the coefficients $a_1 \ldots b_m$ will lead to higher
787   order methods. Yoshida proposed an elegant way to compose higher
788 < order methods based on symmetric splitting\cite{Yoshida1990}. Given
788 > order methods based on symmetric splitting.\cite{Yoshida1990} Given
789   a symmetric second order base method $ \varphi _h^{(2)} $, a
790   fourth-order symmetric method can be constructed by composing,
791   \[
# Line 862 | Line 837 | initialization of a simulation. Sec.~\ref{introSection
837   These three individual steps will be covered in the following
838   sections. Sec.~\ref{introSec:initialSystemSettings} deals with the
839   initialization of a simulation. Sec.~\ref{introSection:production}
840 < will discuss issues of production runs.
840 > discusses issues of production runs.
841   Sec.~\ref{introSection:Analysis} provides the theoretical tools for
842   analysis of trajectories.
843  
# Line 896 | Line 871 | surface and to locate the local minimum. While converg
871   minimization to find a more reasonable conformation. Several energy
872   minimization methods have been developed to explore the energy
873   surface and to locate the local minimum. While converging slowly
874 < near the minimum, steepest descent method is extremely robust when
874 > near the minimum, the steepest descent method is extremely robust when
875   systems are strongly anharmonic. Thus, it is often used to refine
876   structures from crystallographic data. Relying on the Hessian,
877   advanced methods like Newton-Raphson converge rapidly to a local
# Line 915 | Line 890 | end up setting the temperature of the system to a fina
890   temperature. Beginning at a lower temperature and gradually
891   increasing the temperature by assigning larger random velocities, we
892   end up setting the temperature of the system to a final temperature
893 < at which the simulation will be conducted. In heating phase, we
893 > at which the simulation will be conducted. In the heating phase, we
894   should also keep the system from drifting or rotating as a whole. To
895   do this, the net linear momentum and angular momentum of the system
896   are shifted to zero after each resampling from the Maxwell-Boltzmann
# Line 930 | Line 905 | equilibration process is long enough. However, these s
905   properties \textit{etc}, become independent of time. Strictly
906   speaking, minimization and heating are not necessary, provided the
907   equilibration process is long enough. However, these steps can serve
908 < as a means to arrive at an equilibrated structure in an effective
908 > as a means to arrive at an equilibrated structure in an effective
909   way.
910  
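As an illustration of the velocity resampling used during the heating
phase described above, the sketch below draws velocities from a
Maxwell-Boltzmann distribution at a target temperature and shifts the net
linear momentum to zero (angular momentum removal is omitted for
brevity); the particle number, mass and temperature are arbitrary
placeholder values.
\begin{verbatim}
import numpy as np

# Heating-phase sketch: Maxwell-Boltzmann velocity resampling with
# removal of the net linear momentum (assumed parameters throughout).
rng = np.random.default_rng(0)
k_B = 1.380649e-23        # J/K
T_target = 50.0           # K, an intermediate heating temperature
N, mass = 256, 6.63e-26   # particle number and mass (roughly argon)

sigma = np.sqrt(k_B * T_target / mass)     # per-component width
v = rng.normal(0.0, sigma, size=(N, 3))    # Maxwell-Boltzmann sampling
v -= v.mean(axis=0)                        # net linear momentum -> zero

# Instantaneous temperature from 3/2 N k_B T = sum(1/2 m v^2)
T_inst = mass * np.sum(v**2) / (3.0 * N * k_B)
print(T_inst)
\end{verbatim}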
911   \subsection{\label{introSection:production}Production}
# Line 971 | Line 946 | evaluation is to apply spherical cutoffs where particl
946   %cutoff and minimum image convention
947   Another important technique to improve the efficiency of force
948   evaluation is to apply spherical cutoffs where particles farther
949 < than a predetermined distance are not included in the calculation
950 < \cite{Frenkel1996}. The use of a cutoff radius will cause a
951 < discontinuity in the potential energy curve. Fortunately, one can
949 > than a predetermined distance are not included in the
950 > calculation.\cite{Frenkel1996} The use of a cutoff radius will cause
951 > a discontinuity in the potential energy curve. Fortunately, one can
952   shift a simple radial potential to ensure that the potential curve goes
953   smoothly to zero at the cutoff radius. The cutoff strategy works
954   well for Lennard-Jones interaction because of its short range
# Line 982 | Line 957 | with rapid and absolute convergence, has proved to min
957   in simulations. The Ewald summation, in which the slowly decaying
958   Coulomb potential is transformed into direct and reciprocal sums
959   with rapid and absolute convergence, has proved to minimize the
960 < periodicity artifacts in liquid simulations. Taking the advantages
961 < of the fast Fourier transform (FFT) for calculating discrete Fourier
962 < transforms, the particle mesh-based
960 > periodicity artifacts in liquid simulations. Taking advantage of
961 > fast Fourier transform (FFT) techniques for calculating discrete
962 > Fourier transforms, the particle mesh-based
963   methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
964   $O(N^{3/2})$ to $O(N \log N)$. An alternative approach is the
965   \emph{fast multipole method}\cite{Greengard1987, Greengard1994},
# Line 994 | Line 969 | charge-neutralized Coulomb potential method developed
969   simulation community, these two methods are difficult to implement
970   correctly and efficiently. Instead, we use a damped and
971   charge-neutralized Coulomb potential method developed by Wolf and
972 < his coworkers\cite{Wolf1999}. The shifted Coulomb potential for
972 > his coworkers.\cite{Wolf1999} The shifted Coulomb potential for
973   particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
974   \begin{equation}
975   V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
976   r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow
977   R_\textrm{c}}\left\{\frac{q_iq_j \textrm{erfc}(\alpha
978 < r_{ij})}{r_{ij}}\right\}. \label{introEquation:shiftedCoulomb}
978 > r_{ij})}{r_{ij}}\right\}, \label{introEquation:shiftedCoulomb}
979   \end{equation}
980   where $\alpha$ is the convergence parameter. Due to its lack of
981   inherent periodicity and its rapid convergence, this method is extremely
# Line 1017 | Line 992 | illustration of shifted Coulomb potential.}
992  
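A direct transcription of the shifted Coulomb potential in
Eq.~\ref{introEquation:shiftedCoulomb} into a short Python function is
sketched below; the convergence parameter, cutoff radius and charges are
arbitrary illustrative values, and reduced units are used (the Coulomb
prefactor is set to one).
\begin{verbatim}
import numpy as np
from scipy.special import erfc

# Damped, shifted Coulomb pair potential (Wolf et al.), reduced units.
alpha = 0.2     # convergence parameter (assumed)
r_cut = 9.0     # cutoff radius R_c (assumed)

def shifted_coulomb(r_ij, q_i, q_j):
    """Pair energy that goes to zero at the cutoff radius."""
    v = q_i * q_j * erfc(alpha * r_ij) / r_ij
    v_cut = q_i * q_j * erfc(alpha * r_cut) / r_cut   # value at R_c
    return np.where(r_ij < r_cut, v - v_cut, 0.0)

r = np.linspace(0.5, 12.0, 50)
print(shifted_coulomb(r, 1.0, -1.0))
\end{verbatim}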
993   \subsection{\label{introSection:Analysis} Analysis}
994  
995 < Recently, advanced visualization technique have become applied to
995 > Recently, advanced visualization techniques have been applied to
996   monitor the motions of molecules. Although the dynamics of the
997   system can be described qualitatively from animation, quantitative
998   trajectory analysis is more useful. According to the principles of
# Line 1057 | Line 1032 | Fourier transforming raw data from a series of neutron
1032   function}, is of most fundamental importance to liquid theory.
1033   Experimentally, pair distribution functions can be gathered by
1034   Fourier transforming raw data from a series of neutron diffraction
1035 < experiments and integrating over the surface factor
1036 < \cite{Powles1973}. The experimental results can serve as a criterion
1037 < to justify the correctness of a liquid model. Moreover, various
1038 < equilibrium thermodynamic and structural properties can also be
1039 < expressed in terms of the radial distribution function
1040 < \cite{Allen1987}. The pair distribution functions $g(r)$ gives the
1041 < probability that a particle $i$ will be located at a distance $r$
1042 < from a another particle $j$ in the system
1035 > experiments and integrating over the structure
1036 > factor.\cite{Powles1973} The experimental results can serve as a
1037 > criterion to justify the correctness of a liquid model. Moreover,
1038 > various equilibrium thermodynamic and structural properties can also
1039 > be expressed in terms of the radial distribution
1040 > function.\cite{Allen1987} The pair distribution function $g(r)$
1041 > gives the probability that a particle $i$ will be located at a
1042 > distance $r$ from another particle $j$ in the system
1043   \begin{equation}
1044   g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1045   \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
# Line 1087 | Line 1062 | If $A$ and $B$ refer to same variable, this kind of co
1062   \label{introEquation:timeCorrelationFunction}
1063   \end{equation}
1064   If $A$ and $B$ refer to same variable, this kind of correlation
1065 < function is called an \emph{autocorrelation function}. One example
1091 < of an auto correlation function is the velocity auto-correlation
1065 > function is called an \emph{autocorrelation function}. One typical example is the velocity autocorrelation
1066   function which is directly related to transport properties of
1067   molecular liquids:
1068 < \[
1068 > \begin{equation}
1069   D = \frac{1}{3}\int\limits_0^\infty  {\left\langle {v(t) \cdot v(0)}
1070   \right\rangle } dt
1071 < \]
1071 > \end{equation}
1072   where $D$ is the diffusion constant. Unlike the velocity autocorrelation
1073   function, which is averaged over time origins and over all the
1074   atoms, the dipole autocorrelation function is calculated for the
1075   entire system. The dipole autocorrelation function is given by:
1076 < \[
1076 > \begin{equation}
1077   c_{dipole}  = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
1078   \right\rangle
1079 < \]
1079 > \end{equation}
1080   Here $u_{tot}$ is the net dipole of the entire system and is given
1081   by
1082 < \[
1082 > \begin{equation}
1083   u_{tot} (t) = \sum\limits_i {u_i (t)}.
1084 < \]
1084 > \end{equation}
1085   In principle, many time correlation functions can be related to
1086   Fourier transforms of the infrared, Raman, and inelastic neutron
1087   scattering spectra of molecular liquids. In practice, one can
1088   extract the IR spectrum from the intensity of the molecular dipole
1089   fluctuation at each frequency using the following relationship:
1090 < \[
1090 > \begin{equation}
1091   \hat c_{dipole} (v) = \int_{ - \infty }^\infty  {c_{dipole} (t)e^{ -
1092   i2\pi vt} dt}.
1093 < \]
1093 > \end{equation}
1094  
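As a sketch of how the correlation functions above are evaluated in
practice, the following fragment estimates a velocity autocorrelation
function from stored velocities and integrates it to obtain a diffusion
constant. The trajectory here is synthetic random data standing in for
real simulation output, and the array shapes and time step are
assumptions.
\begin{verbatim}
import numpy as np

# Velocity autocorrelation function and Green-Kubo diffusion constant
# from a velocity trajectory of shape (n_frames, N, 3) (synthetic here).
rng = np.random.default_rng(1)
n_frames, N, dt = 2000, 64, 0.002
vel = rng.normal(size=(n_frames, N, 3))   # placeholder for MD velocities

max_lag = 200
vacf = np.empty(max_lag)
for lag in range(max_lag):
    # average over time origins and over all atoms
    dots = np.sum(vel[: n_frames - lag] * vel[lag:], axis=2)
    vacf[lag] = dots.mean()

D = np.sum(vacf) * dt / 3.0   # simple quadrature of (1/3) integral <v(t).v(0)> dt
print(D)
\end{verbatim}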
1095   \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
1096  
1097   Rigid bodies are frequently involved in the modeling of different
1098 < areas, from engineering, physics, to chemistry. For example,
1098 > areas, including engineering, physics and chemistry. For example,
1099   missiles and vehicles are usually modeled by rigid bodies.  The
1100   movement of the objects in 3D gaming engines or other physics
1101   simulators is governed by rigid body dynamics. In molecular
1102   simulations, rigid bodies are used to simplify protein-protein
1103 < docking studies\cite{Gray2003}.
1103 > docking studies.\cite{Gray2003}
1104  
1105   It is very important to develop stable and efficient methods to
1106   integrate the equations of motion for orientational degrees of
# Line 1138 | Line 1112 | still remain. A singularity-free representation utiliz
1112   angles can overcome this difficulty\cite{Barojas1973}, the
1113   computational penalty and the loss of angular momentum conservation
1114   still remain. A singularity-free representation utilizing
1115 < quaternions was developed by Evans in 1977\cite{Evans1977}.
1116 < Unfortunately, this approach uses a nonseparable Hamiltonian
1117 < resulting from the quaternion representation, which prevents the
1115 > quaternions was developed by Evans in 1977.\cite{Evans1977}
1116 > Unfortunately, this approach used a nonseparable Hamiltonian
1117 > resulting from the quaternion representation, which prevented the
1118   symplectic algorithm from being utilized. A different approach
1119   is to apply holonomic constraints to the atoms belonging to the
1120   rigid body. Each atom moves independently under the normal forces
1121   derived from the potential energy and constraint forces which are used
1122   to guarantee rigidity. However, due to their iterative nature,
1123   the SHAKE and Rattle algorithms also converge very slowly when the
1124 < number of constraints increases\cite{Ryckaert1977, Andersen1983}.
1124 > number of constraints increases.\cite{Ryckaert1977, Andersen1983}
1125  
1126   A breakthrough in the geometric literature suggests that, in order to
1127   develop a long-term integration scheme, one should preserve the
# Line 1157 | Line 1131 | An alternative method using the quaternion representat
1131   proposed to evolve the Hamiltonian system in a constraint manifold
1132   by iteratively satisfying the orthogonality constraint $Q^T Q = 1$.
1133   An alternative method using the quaternion representation was
1134 < developed by Omelyan\cite{Omelyan1998}. However, both of these
1134 > developed by Omelyan.\cite{Omelyan1998} However, both of these
1135   methods are iterative and inefficient. In this section, we describe a
1136   symplectic Lie-Poisson integrator for rigid bodies developed by
1137   Dullweber and his coworkers\cite{Dullweber1997} in depth.
1138  
1139   \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
1140 < The motion of a rigid body is Hamiltonian with the Hamiltonian
1167 < function
1140 > The Hamiltonian of a rigid body is given by
1141   \begin{equation}
1142   H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
1143   V(q,Q) + \frac{1}{2}tr[(QQ^T  - 1)\Lambda ].
1144   \label{introEquation:RBHamiltonian}
1145   \end{equation}
1146 < Here, $q$ and $Q$  are the position and rotation matrix for the
1147 < rigid-body, $p$ and $P$  are conjugate momenta to $q$  and $Q$ , and
1148 < $J$, a diagonal matrix, is defined by
1146 > Here, $q$ and $Q$  are the position vector and rotation matrix for
1147 > the rigid-body, $p$ and $P$  are conjugate momenta to $q$  and $Q$ ,
1148 > and $J$, a diagonal matrix, is defined by
1149   \[
1150   I_{ii}^{ - 1}  = \frac{1}{2}\sum\limits_{i \ne j} {J_{jj}^{ - 1} }
1151   \]
# Line 1182 | Line 1155 | Q^T Q = 1, \label{introEquation:orthogonalConstraint}
1155   \begin{equation}
1156   Q^T Q = 1, \label{introEquation:orthogonalConstraint}
1157   \end{equation}
1158 < which is used to ensure rotation matrix's unitarity. Differentiating
1159 < Eq.~\ref{introEquation:orthogonalConstraint} and using
1160 < Eq.~\ref{introEquation:RBMotionMomentum}, one may obtain,
1188 < \begin{equation}
1189 < Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0 . \\
1190 < \label{introEquation:RBFirstOrderConstraint}
1191 < \end{equation}
1192 < Using Equation (\ref{introEquation:motionHamiltonianCoordinate},
1193 < \ref{introEquation:motionHamiltonianMomentum}), one can write down
1158 > which is used to ensure the rotation matrix's unitarity. Using
1159 > Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
1160 > Eq.~\ref{introEquation:motionHamiltonianMomentum}, one can write down
1161   the equations of motion,
1162   \begin{eqnarray}
1163   \frac{{dq}}{{dt}} & = & \frac{p}{m}, \label{introEquation:RBMotionPosition}\\
# Line 1198 | Line 1165 | the equations of motion,
1165   \frac{{dQ}}{{dt}} & = & PJ^{ - 1},  \label{introEquation:RBMotionRotation}\\
1166   \frac{{dP}}{{dt}} & = & - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}
1167   \end{eqnarray}
1168 + Differentiating Eq.~\ref{introEquation:orthogonalConstraint} and
1169 + using Eq.~\ref{introEquation:RBMotionMomentum}, one may obtain,
1170 + \begin{equation}
1171 + Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0 . \\
1172 + \label{introEquation:RBFirstOrderConstraint}
1173 + \end{equation}
1174   In general, there are two ways to satisfy the holonomic constraints.
1175   We can use a constraint force provided by a Lagrange multiplier on
1176 < the normal manifold to keep the motion on constraint space. Or we
1177 < can simply evolve the system on the constraint manifold. These two
1178 < methods have been proved to be equivalent. The holonomic constraint
1179 < and equations of motions define a constraint manifold for rigid
1180 < bodies
1176 > the normal manifold to keep the motion on the constraint space. Or
1177 > we can simply evolve the system on the constraint manifold. These
1178 > two methods have been proved to be equivalent. The holonomic
1179 > constraint and equations of motion define a constraint manifold for
1180 > rigid bodies
1181   \[
1182   M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0}
1183   \right\}.
1184   \]
1185 < Unfortunately, this constraint manifold is not the cotangent bundle
1186 < $T^* SO(3)$ which can be consider as a symplectic manifold on Lie
1187 < rotation group $SO(3)$. However, it turns out that under symplectic
1188 < transformation, the cotangent space and the phase space are
1216 < diffeomorphic. By introducing
1185 > Unfortunately, this constraint manifold is not $T^* SO(3)$ which is
1186 > a symplectic manifold on the Lie rotation group $SO(3)$. However, it
1187 > turns out that under symplectic transformation, the cotangent space
1188 > and the phase space are diffeomorphic. By introducing
1189   \[
1190   \tilde Q = Q,\tilde P = \frac{1}{2}\left( {P - QP^T Q} \right),
1191   \]
1192 < the mechanical system subject to a holonomic constraint manifold $M$
1192 > the mechanical system subjected to a holonomic constraint manifold $M$
1193   can be re-formulated as a Hamiltonian system on the cotangent space
1194   \[
1195   T^* SO(3) = \left\{ {(\tilde Q,\tilde P):\tilde Q^T \tilde Q =
# Line 1279 | Line 1251 | motion. This unique property eliminates the requiremen
1251   Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies the
1252   Lagrange multiplier $\Lambda$ is absent from the equations of
1253   motion. This unique property eliminates the requirement of
1254 < iterations which can not be avoided in other methods\cite{Kol1997,
1255 < Omelyan1998}. Applying the hat-map isomorphism, we obtain the
1256 < equation of motion for angular momentum on body frame
1254 > iterations which cannot be avoided in other methods.\cite{Kol1997,
1255 > Omelyan1998} Applying the hat-map isomorphism, we obtain the
1256 > equation of motion for angular momentum in the body frame
1257   \begin{equation}
1258   \dot \pi  = \pi  \times I^{ - 1} \pi  + \sum\limits_i {\left( {Q^T
1259   F_i (r,Q)} \right) \times X_i }.
# Line 1294 | Line 1266 | given by
1266   \]
1267  
1268   \subsection{\label{introSection:SymplecticFreeRB}Symplectic
1269 < Lie-Poisson Integrator for Free Rigid Body}
1269 > Lie-Poisson Integrator for Free Rigid Bodies}
1270  
1271   If there are no external forces exerted on the rigid body, the only
1272   contribution to the rotational motion is from the kinetic energy
# Line 1346 | Line 1318 | To reduce the cost of computing expensive functions in
1318   \end{array}} \right),\theta _1  = \frac{{\pi _1 }}{{I_1 }}\Delta t.
1319   \]
1320   To reduce the cost of computing expensive functions in $e^{\Delta
1321 < tR_1 }$, we can use Cayley transformation to obtain a single-aixs
1322 < propagator,
1323 < \[
1324 < e^{\Delta tR_1 }  \approx (1 - \Delta tR_1 )^{ - 1} (1 + \Delta tR_1
1325 < ).
1326 < \]
1327 < The propagator maps for $T_2^r$ and $T_3^r$ can be found in the same
1321 > tR_1 }$, we can use the Cayley transformation to obtain a
1322 > single-axis propagator,
1323 > \begin{eqnarray*}
1324 > e^{\Delta tR_1 }  & \approx & (1 - \Delta tR_1 /2)^{ - 1} (1 + \Delta
1325 > tR_1 /2) \\
1326 > %
1327 > & \approx & \left( \begin{array}{ccc}
1328 > 1 & 0 & 0 \\
1329 > 0 & \frac{1-\theta^2 / 4}{1 + \theta^2 / 4}  & -\frac{\theta}{1+
1330 > \theta^2 / 4} \\
1331 > 0 & \frac{\theta}{1+ \theta^2 / 4} & \frac{1-\theta^2 / 4}{1 +
1332 > \theta^2 / 4}
1333 > \end{array}
1334 > \right).
1335 > \end{eqnarray*}
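The accuracy and orthogonality of this Cayley approximation can be
checked numerically with a few lines of Python; the sketch below compares
it against the exact single-axis rotation $e^{\Delta t R_1}$ for an
assumed small angle $\theta_1$ and is an illustration only, not the
integrator implementation used in this work.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Cayley (1,1) Pade approximation to the single-axis propagator
# exp(dt*R1); theta is an assumed small rotation angle.
theta = 0.05
dtR1 = np.array([[0.0,   0.0,    0.0],
                 [0.0,   0.0, -theta],
                 [0.0, theta,    0.0]])   # dt*R1, skew-symmetric

exact = expm(dtR1)                         # exact rotation
I3 = np.eye(3)
cayley = np.linalg.solve(I3 - 0.5 * dtR1, I3 + 0.5 * dtR1)

print(np.max(np.abs(exact - cayley)))      # O(theta^3) discrepancy
print(np.allclose(cayley.T @ cayley, I3))  # the Cayley map stays orthogonal
\end{verbatim}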
1336 > The propagators for $T_2^r$ and $T_3^r$ can be found in the same
1337   manner. In order to construct a second-order symplectic method, we
1338   split the angular kinetic Hamiltonian function into five terms
1339   \[
# Line 1368 | Line 1349 | _1 }.
1349   \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi
1350   _1 }.
1351   \]
1352 < The non-canonical Lie-Poisson bracket ${F, G}$ of two function
1372 < $F(\pi )$ and $G(\pi )$ is defined by
1352 > The non-canonical Lie-Poisson bracket $\{F, G\}$ of two functions $F(\pi )$ and $G(\pi )$ is defined by
1353   \[
1354   \{ F,G\} (\pi ) = [\nabla _\pi  F(\pi )]^T J(\pi )\nabla _\pi  G(\pi
1355   ).
# Line 1378 | Line 1358 | norm of the angular momentum, $\parallel \pi
1358   function $G$ is zero, $F$ is a \emph{Casimir}, which is a
1359   conserved quantity in a Poisson system. We can easily verify that the
1360   norm of the angular momentum, $\parallel \pi
1361 < \parallel$, is a \emph{Casimir}. Let$ F(\pi ) = S(\frac{{\parallel
1361 > \parallel$, is a \emph{Casimir}.\cite{McLachlan1993} Let $F(\pi ) = S(\frac{{\parallel
1362   \pi \parallel ^2 }}{2})$ for an arbitrary function $ S:R \to R$ ,
1363   then by the chain rule
1364   \[
# Line 1397 | Line 1377 | The Hamiltonian of rigid body can be separated in term
1377   Splitting for Rigid Body}
1378  
1379   The Hamiltonian of a rigid body can be separated in terms of kinetic
1380 < energy and potential energy,$H = T(p,\pi ) + V(q,Q)$. The equations
1380 > energy and potential energy, $H = T(p,\pi ) + V(q,Q)$. The equations
1381   of motion corresponding to potential energy and kinetic energy are
1382 < listed in the below table,
1382 > listed in Table~\ref{introTable:rbEquations}.
1383   \begin{table}
1384   \caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
1385 + \label{introTable:rbEquations}
1386   \begin{center}
1387   \begin{tabular}{|l|l|}
1388    \hline
# Line 1448 | Line 1429 | moving rigid bodies
1429   \begin{eqnarray}
1430   \varphi _{\Delta t}  &=& \varphi _{\Delta t/2,F}  \circ \varphi _{\Delta t/2,\tau }  \notag\\
1431    & & \circ \varphi _{\Delta t,T^t }  \circ \varphi _{\Delta t/2,\pi _1 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi _1 }  \notag\\
1432 <  & & \circ \varphi _{\Delta t/2,\tau }  \circ \varphi _{\Delta t/2,F}  .\\
1432 >  & & \circ \varphi _{\Delta t/2,\tau }  \circ \varphi _{\Delta t/2,F}  .
1433   \label{introEquation:overallRBFlowMaps}
1434   \end{eqnarray}
1435  
# Line 1456 | Line 1437 | has been applied in a variety of studies. This section
1437   As an alternative to Newtonian dynamics, Langevin dynamics, which
1438   mimics a simple heat bath with stochastic and dissipative forces,
1439   has been applied in a variety of studies. This section will review
1440 < the theory of Langevin dynamics. A brief derivation of generalized
1440 > the theory of Langevin dynamics. A brief derivation of the generalized
1441   Langevin equation will be given first. Following that, we will
1442 < discuss the physical meaning of the terms appearing in the equation
1462 < as well as the calculation of friction tensor from hydrodynamics
1463 < theory.
1442 > discuss the physical meaning of the terms appearing in the equation.
1443  
1444   \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}
1445  
# Line 1469 | Line 1448 | Harmonic bath model is the derivation of the Generaliz
1448   environment, has been widely used in quantum chemistry and
1449   statistical mechanics. One of the successful applications of
1450   the harmonic bath model is the derivation of the Generalized Langevin
1451 < Dynamics (GLE). Lets consider a system, in which the degree of
1451 > Equation (GLE). Consider a system in which the degree of
1452   freedom $x$ is assumed to couple to the bath linearly, giving a
1453   Hamiltonian of the form
1454   \begin{equation}
# Line 1480 | Line 1459 | H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_
1459   with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
1460   \[
1461   H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2
1462 < }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 }
1462 > }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 x_\alpha ^2 }
1463   \right\}}
1464   \]
1465   where the index $\alpha$ runs over all the bath degrees of freedom,
# Line 1525 | Line 1504 | differential equations into simple algebra problems wh
1504   differential equations, the Laplace transform is the appropriate tool
1505   to solve this problem. The basic idea is to transform the difficult
1506   differential equations into simple algebra problems which can be
1507 < solved easily. Then, by applying the inverse Laplace transform, also
1508 < known as the Bromwich integral, we can retrieve the solutions of the
1509 < original problems. Let $f(t)$ be a function defined on $ [0,\infty )
1510 < $, the Laplace transform of $f(t)$ is a new function defined as
1507 > solved easily. Then, by applying the inverse Laplace transform, we
1508 > can retrieve the solutions of the original problems. Let $f(t)$ be a
1509 > function defined on $ [0,\infty ) $, the Laplace transform of $f(t)$
1510 > is a new function defined as
1511   \[
1512   L(f(t)) \equiv F(p) = \int_0^\infty  {f(t)e^{ - pt} dt}
1513   \]
1514   where  $p$ is real and  $L$ is called the Laplace Transform
1515 < Operator. Below are some important properties of Laplace transform
1515 > Operator. Below are some important properties of the Laplace transform
1516   \begin{eqnarray*}
1517   L(x + y)  & = & L(x) + L(y) \\
1518   L(ax)     & = & aL(x) \\
# Line 1546 | Line 1525 | L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{\omega
1525   p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = & - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{\omega _\alpha  }}L(x), \\
1526   L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }}. \\
1527   \end{eqnarray*}
1528 < By the same way, the system coordinates become
1528 > In the same way, the system coordinates become
1529   \begin{eqnarray*}
1530   mL(\ddot x) & = &
1531    - \sum\limits_{\alpha  = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0) - \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}  \\
# Line 1570 | Line 1549 | x_\alpha (0) - \frac{{g_\alpha  }}{{m_\alpha  \omega _
1549   & & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha
1550   x_\alpha (0) - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha  }}}
1551   \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha
1552 < (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}}
1553 < \end{eqnarray*}
1554 < \begin{eqnarray*}
1555 < m\ddot x & = & - \frac{{\partial W(x)}}{{\partial x}} - \int_0^t
1556 < {\sum\limits_{\alpha  = 1}^N {\left( { - \frac{{g_\alpha ^2
1557 < }}{{m_\alpha  \omega _\alpha ^2 }}} \right)\cos (\omega _\alpha
1552 > (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}}\\
1553 > %
1554 > & = & -
1555 > \frac{{\partial W(x)}}{{\partial x}} - \int_0^t {\sum\limits_{\alpha
1556 > = 1}^N {\left( { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha
1557 > ^2 }}} \right)\cos (\omega _\alpha
1558   t)\dot x(t - \tau )d} \tau }  \\
1559   & & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha
1560   x_\alpha (0) - \frac{{g_\alpha }}{{m_\alpha \omega _\alpha  }}}
# Line 1602 | Line 1581 | m\ddot x =  - \frac{{\partial W}}{{\partial x}} - \int
1581   (t)\dot x(t - \tau )d\tau }  + R(t)
1582   \label{introEuqation:GeneralizedLangevinDynamics}
1583   \end{equation}
1584 < which is known as the \emph{generalized Langevin equation}.
1584 > which is known as the \emph{generalized Langevin equation} (GLE).
1585  
1586   \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}
1587  
1588   One may notice that $R(t)$ depends only on initial conditions, which
1589   implies it is completely deterministic within the context of a
1590   harmonic bath. However, it is easy to verify that $R(t)$ is totally
1591 < uncorrelated to $x$ and $\dot x$,$\left\langle {x(t)R(t)}
1591 > uncorrelated to $x$ and $\dot x$, $\left\langle {x(t)R(t)}
1592   \right\rangle  = 0, \left\langle {\dot x(t)R(t)} \right\rangle  =
1593   0.$ This property is what we expect from a truly random process. As
1594   long as the model chosen for $R(t)$ was a Gaussian distribution in
# Line 1638 | Line 1617 | taken as a $delta$ function in time:
1617   infinitely quickly to motions in the system. Thus, $\xi (t)$ can be
1618   taken as a $\delta$ function in time:
1619   \[
1620 < \xi (t) = 2\xi _0 \delta (t)
1620 > \xi (t) = 2\xi _0 \delta (t).
1621   \]
1622   Hence, the convolution integral becomes
1623   \[
# Line 1653 | Line 1632 | or be determined by Stokes' law for regular shaped par
1632   which is known as the Langevin equation. The static friction
1633   coefficient $\xi _0$ can either be calculated from the spectral density
1634   or be determined by Stokes' law for regularly shaped particles. A
1635 < briefly review on calculating friction tensor for arbitrary shaped
1635 > brief review of calculating friction tensors for arbitrarily shaped
1636   particles is given in Sec.~\ref{introSection:frictionTensor}.
1637  
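Before turning to the second fluctuation dissipation theorem, a minimal
numerical sketch of the Langevin equation above is given here. It uses a
simple Euler-type update with an assumed harmonic potential of mean
force, an assumed static friction coefficient and reduced units, so it
illustrates the structure of the equation rather than any production
integrator.
\begin{verbatim}
import numpy as np

# Sketch: m dv/dt = -dW/dx - xi0*v + R(t) with an Euler-Maruyama step.
# Potential of mean force, friction and units are assumed.
rng = np.random.default_rng(2)
m, k, xi0, k_B_T = 1.0, 1.0, 0.5, 1.0
dt, n_steps = 1.0e-3, 100000
x, v = 0.0, 0.0

sigma = np.sqrt(2.0 * xi0 * k_B_T / dt)   # random-force strength from
                                          # <R(t)R(0)> = 2 xi0 k_B T delta(t)
xs = np.empty(n_steps)
for i in range(n_steps):
    R = sigma * rng.normal()              # random force
    v += dt * (-k * x - xi0 * v + R) / m  # systematic + friction + random
    x += dt * v
    xs[i] = x

# The long-time variance of x should approach k_B T / k for this model.
print(xs[n_steps // 2:].var())
\end{verbatim}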
1638   \subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}}
# Line 1663 | Line 1642 | q_\alpha  (t) = x_\alpha  (t) - \frac{1}{{m_\alpha  \o
1642   q_\alpha  (t) = x_\alpha  (t) - \frac{1}{{m_\alpha  \omega _\alpha
1643   ^2 }}x(0),
1644   \]
1645 < we can rewrite $R(T)$ as
1645 > we can rewrite $R(t)$ as
1646   \[
1647   R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}.
1648   \]
# Line 1674 | Line 1653 | And since the $q$ coordinates are harmonic oscillators
1653   \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle & = &\delta _{\alpha \beta } \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle  \\
1654   \left\langle {R(t)R(0)} \right\rangle & = & \sum\limits_\alpha  {\sum\limits_\beta  {g_\alpha  g_\beta  \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle } }  \\
1655    & = &\sum\limits_\alpha  {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t)}  \\
1656 <  & = &kT\xi (t) \\
1656 >  & = &kT\xi (t)
1657   \end{eqnarray*}
1658   Thus, we recover the \emph{second fluctuation dissipation theorem}
1659   \begin{equation}
