root/group/trunk/tengDissertation/Introduction.tex

Comparing trunk/tengDissertation/Introduction.tex (file contents):
Revision 2907 by tim, Thu Jun 29 16:57:37 2006 UTC vs.
Revision 2942 by tim, Mon Jul 17 20:54:17 2006 UTC

# Line 67 | Line 67 | All of these conserved quantities are important factor
67   \begin{equation}E = T + V. \label{introEquation:energyConservation}
68   \end{equation}
69   All of these conserved quantities are important factors in determining
70 < the quality of numerical integration schemes for rigid bodies
71 < \cite{Dullweber1997}.
70 > the quality of numerical integration schemes for rigid
71 > bodies.\cite{Dullweber1997}
72  
73   \subsection{\label{introSection:lagrangian}Lagrangian Mechanics}
74  
# Line 178 | Line 178 | equation of motion. Due to their symmetrical formula,
178   where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
179   Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
180   equation of motion. Due to their symmetrical formula, they are also
181 < known as the canonical equations of motions \cite{Goldstein2001}.
181 > known as the canonical equations of motion.\cite{Goldstein2001}
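As a concrete illustration (a standard textbook case, added here only
for orientation), a single particle of mass $m$ moving in a potential
$V(q)$ has $H(q,p) = p^2 /2m + V(q)$, for which the canonical equations
give
\[
\dot q = \frac{\partial H}{\partial p} = \frac{p}{m},\qquad \dot p =  -
\frac{\partial H}{\partial q} =  - \frac{\partial V}{\partial q},
\]
recovering Newton's second law $m\ddot q =  - \partial V /\partial q$.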
182  
183   An important difference between the Lagrangian approach and the
184   Hamiltonian approach is that the Lagrangian is considered to be a
# Line 188 | Line 188 | coordinate and its time derivative as independent vari
188   Hamiltonian Mechanics is more appropriate for application to
189   statistical mechanics and quantum mechanics, since it treats the
190   coordinate and its conjugate momentum as independent variables and it
191 < only works with 1st-order differential equations\cite{Marion1990}.
191 > only works with 1st-order differential equations.\cite{Marion1990}
192   In Newtonian Mechanics, a system described by conservative forces
193   conserves the total energy
194   (Eq.~\ref{introEquation:energyConservation}). It follows that
# Line 208 | Line 208 | The following section will give a brief introduction t
208   The thermodynamic behavior and properties of Molecular Dynamics
209   simulations are governed by the principles of Statistical Mechanics.
210   The following section will give a brief introduction to some of the
211 < Statistical Mechanics concepts and theorem presented in this
211 > Statistical Mechanics concepts and theorems presented in this
212   dissertation.
213  
214   \subsection{\label{introSection:ensemble}Phase Space and Ensemble}
# Line 281 | Line 281 | space of the system,
281   (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}{{\int { \ldots \int {\rho
282   (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
283   \label{introEquation:ensembelAverage}
284 \end{equation}
285
286 There are several different types of ensembles with different
287 statistical characteristics. As a function of macroscopic
288 parameters, such as temperature \textit{etc}, the partition function
289 can be used to describe the statistical properties of a system in
290 thermodynamic equilibrium. As an ensemble of systems, each of which
291 is known to be thermally isolated and conserve energy, the
292 Microcanonical ensemble (NVE) has a partition function like,
293 \begin{equation}
294 \Omega (N,V,E) = e^{\beta TS}. \label{introEquation:NVEPartition}
295 \end{equation}
296 A canonical ensemble (NVT) is an ensemble of systems, each of which
297 can share its energy with a large heat reservoir. The distribution
298 of the total energy amongst the possible dynamical states is given
299 by the partition function,
300 \begin{equation}
301 \Omega (N,V,T) = e^{ - \beta A}.
302 \label{introEquation:NVTPartition}
284   \end{equation}
304 Here, $A$ is the Helmholtz free energy which is defined as $ A = U -
305 TS$. Since most experiments are carried out under constant pressure
306 condition, the isothermal-isobaric ensemble (NPT) plays a very
307 important role in molecular simulations. The isothermal-isobaric
308 ensemble allow the system to exchange energy with a heat bath of
309 temperature $T$ and to change the volume as well. Its partition
310 function is given as
311 \begin{equation}
312 \Delta (N,P,T) =  - e^{\beta G}.
313 \label{introEquation:NPTPartition}
314 \end{equation}
315 Here, $G = U - TS + PV$, and $G$ is called Gibbs free energy.
285  
286   \subsection{\label{introSection:liouville}Liouville's theorem}
287  
# Line 403 | Line 372 | $F$ and $G$ of the coordinates and momenta of a system
372   Liouville's theorem can be expressed in a variety of different forms
373   which are convenient within different contexts. For any two functions
374   $F$ and $G$ of the coordinates and momenta of a system, the Poisson
375 < bracket ${F, G}$ is defined as
375 > bracket $\{F,G\}$ is defined as
376   \begin{equation}
377   \left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial
378   F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} -
# Line 447 | Line 416 | average. It states that the time average and average o
416   many-body system in Statistical Mechanics. Fortunately, the Ergodic
417   Hypothesis makes a connection between time average and the ensemble
418   average. It states that the time average and average over the
419 < statistical ensemble are identical \cite{Frenkel1996, Leach2001}:
419 > statistical ensemble are identical:\cite{Frenkel1996, Leach2001}
420   \begin{equation}
421   \langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
422   \frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt = \int\limits_\Gamma
# Line 465 | Line 434 | Sec.~\ref{introSection:molecularDynamics} will be the
434   utilized. Or if the system lends itself to a time averaging
435   approach, the Molecular Dynamics techniques in
436   Sec.~\ref{introSection:molecularDynamics} will be the best
437 < choice\cite{Frenkel1996}.
437 > choice.\cite{Frenkel1996}
438  
439   \section{\label{introSection:geometricIntegratos}Geometric Integrators}
440   A variety of numerical integrators have been proposed to simulate
441   the motions of atoms in MD simulation. They usually begin with
442 < initial conditionals and move the objects in the direction governed
443 < by the differential equations. However, most of them ignore the
444 < hidden physical laws contained within the equations. Since 1990,
445 < geometric integrators, which preserve various phase-flow invariants
446 < such as symplectic structure, volume and time reversal symmetry,
447 < were developed to address this issue\cite{Dullweber1997,
448 < McLachlan1998, Leimkuhler1999}. The velocity Verlet method, which
449 < happens to be a simple example of symplectic integrator, continues
450 < to gain popularity in the molecular dynamics community. This fact
451 < can be partly explained by its geometric nature.
442 > initial conditions and move the objects in the direction governed by
443 > the differential equations. However, most of them ignore the hidden
444 > physical laws contained within the equations. Since 1990, geometric
445 > integrators, which preserve various phase-flow invariants such as
446 > symplectic structure, volume and time reversal symmetry, were
447 > developed to address this issue.\cite{Dullweber1997, McLachlan1998,
448 > Leimkuhler1999} The velocity Verlet method, which happens to be a
449 > simple example of a symplectic integrator, continues to gain
450 > popularity in the molecular dynamics community. This fact can be
451 > partly explained by its geometric nature.
452  
453   \subsection{\label{introSection:symplecticManifold}Symplectic Manifolds}
454   A \emph{manifold} is an abstract mathematical space. It looks
# Line 488 | Line 457 | viewed as a whole. A \emph{differentiable manifold} (a
457   surface of Earth. It seems to be flat locally, but it is round if
458   viewed as a whole. A \emph{differentiable manifold} (also known as
459   \emph{smooth manifold}) is a manifold on which it is possible to
460 < apply calculus\cite{Hirsch1997}. A \emph{symplectic manifold} is
460 > apply calculus.\cite{Hirsch1997} A \emph{symplectic manifold} is
461   defined as a pair $(M, \omega)$ which consists of a
462 < \emph{differentiable manifold} $M$ and a close, non-degenerated,
462 > \emph{differentiable manifold} $M$ and a closed, non-degenerate,
463   bilinear symplectic form, $\omega$. A symplectic form on a vector
464   space $V$ is a function $\omega(x, y)$ which satisfies
465   $\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+
466   \lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and
467 < $\omega(x, x) = 0$\cite{McDuff1998}. The cross product operation in
467 > $\omega(x, x) = 0$.\cite{McDuff1998} The cross product operation in
468   a vector field is an example of a symplectic form. One of the
469   motivations to study \emph{symplectic manifolds} in Hamiltonian
470   Mechanics is that a symplectic manifold can represent all possible
471   configurations of the system and the phase space of the system can
472 < be described by it's cotangent bundle\cite{Jost2002}. Every
472 > be described by its cotangent bundle.\cite{Jost2002} Every
473   symplectic manifold is even dimensional. For instance, in Hamilton's
474   equations, coordinates and momenta always appear in pairs.
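A standard example (stated here only to make the definition concrete)
is the phase space $R^{2f}$ with coordinates $(q_1 , \ldots ,q_f ,p_1 ,
\ldots ,p_f )$, equipped with the canonical symplectic form
\[
\omega  = \sum\limits_{i = 1}^f {dq_i  \wedge dp_i } ,
\]
which is closed and non-degenerate; by Darboux's theorem every
symplectic manifold looks locally like this example.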
475  
# Line 510 | Line 479 | For an ordinary differential system defined as
479   \begin{equation}
480   \dot x = f(x)
481   \end{equation}
482 < where $x = x(q,p)^T$, this system is a canonical Hamiltonian, if
482 > where $x = (q,p)$, this system is a canonical Hamiltonian system if
483   $f(x) = J\nabla _x H(x)$. Here, $H = H (q, p)$ is the Hamiltonian
484   function and $J$ is the skew-symmetric matrix
485   \begin{equation}
# Line 527 | Line 496 | called a \emph{Hamiltonian vector field}. Another gene
496   \label{introEquation:compactHamiltonian}
497   \end{equation}In this case, $f$ is
498   called a \emph{Hamiltonian vector field}. Another generalization of
499 < Hamiltonian dynamics is Poisson Dynamics\cite{Olver1986},
499 > Hamiltonian dynamics is Poisson Dynamics,\cite{Olver1986}
500   \begin{equation}
501   \dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian}
502   \end{equation}
503 < The most obvious change being that matrix $J$ now depends on $x$.
503 > where the most obvious change is that the matrix $J$ now depends on
504 > $x$.
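To make the canonical structure concrete, the following short sketch
(illustrative only; the one-degree-of-freedom $J$ and the harmonic
oscillator Hamiltonian are chosen just for this example and are not
part of this work) evaluates the Hamiltonian vector field
$f(x) = J\nabla _x H(x)$ numerically:
\begin{verbatim}
import numpy as np

# Canonical skew-symmetric matrix J for a single (q, p) pair.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def grad_H(x, m=1.0, k=1.0):
    """Gradient of the example Hamiltonian H(q,p) = p^2/(2m) + k q^2/2."""
    q, p = x
    return np.array([k * q, p / m])

def f(x):
    """Canonical Hamiltonian vector field f(x) = J grad_x H(x)."""
    return J @ grad_H(x)

# At x = (q, p) this returns (p/m, -k q), i.e. Hamilton's equations.
print(f(np.array([1.0, 0.5])))
\end{verbatim}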
505  
506   \subsection{\label{introSection:exactFlow}Exact Propagator}
507  
508   Let $x(t)$ be the exact solution of the ODE
509 < system,$\frac{{dx}}{{dt}} = f(x) \label{introEquation:ODE}$, we can
510 < define its exact propagator(solution) $\varphi_\tau$
509 > system,
510 > \begin{equation}
511 > \frac{{dx}}{{dt}} = f(x). \label{introEquation:ODE}
512 > \end{equation} Then we can
513 > define its exact propagator $\varphi_\tau$:
514   \[ x(t+\tau)
515   =\varphi_\tau(x(t))
516   \]
# Line 555 | Line 528 | Therefore, the exact propagator is self-adjoint,
528   \begin{equation}
529   \varphi _\tau   = \varphi _{ - \tau }^{ - 1}.
530   \end{equation}
531 < The exact propagator can also be written in terms of operator,
531 > The exact propagator can also be written as an operator,
532   \begin{equation}
533   \varphi _\tau  (x) = e^{\tau \sum\limits_i {f_i (x)\frac{\partial
534   }{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x).
# Line 609 | Line 582 | Using the chain rule, one may obtain,
582   \]
583   Using the chain rule, one may obtain,
584   \[
585 < \sum\limits_i {\frac{{dG}}{{dx_i }}} f_i (x) = f \dot \nabla G,
585 > \sum\limits_i {\frac{{dG}}{{dx_i }}} f_i (x) = f \cdot \nabla G,
586   \]
587   which is the condition for conserved quantities. For a canonical
588   Hamiltonian system, the time evolution of an arbitrary smooth
# Line 648 | Line 621 | variational methods can capture the decay of energy
621   Generating functions\cite{Channell1990} tend to lead to methods
622   which are cumbersome and difficult to use. In dissipative systems,
623   variational methods can capture the decay of energy
624 < accurately\cite{Kane2000}. Since they are geometrically unstable
624 > accurately.\cite{Kane2000} Since they are geometrically unstable
625   against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
626 < methods are not suitable for Hamiltonian system. Recently, various
627 < high-order explicit Runge-Kutta methods \cite{Owren1992,Chen2003}
628 < have been developed to overcome this instability. However, due to
629 < computational penalty involved in implementing the Runge-Kutta
630 < methods, they have not attracted much attention from the Molecular
631 < Dynamics community. Instead, splitting methods have been widely
632 < accepted since they exploit natural decompositions of the
633 < system\cite{Tuckerman1992, McLachlan1998}.
626 > methods are not suitable for Hamiltonian
627 > systems.\cite{Cartwright1992} Recently, various high-order explicit
628 > Runge-Kutta methods \cite{Owren1992,Chen2003} have been developed to
629 > overcome this instability. However, due to the computational penalty
630 > involved in implementing the Runge-Kutta methods, they have not
631 > attracted much attention from the Molecular Dynamics community.
632 > Instead, splitting methods have been widely accepted since they
633 > exploit natural decompositions of the system.\cite{McLachlan1998,
634 > Tuckerman1992}
635  
636   \subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}
637  
# Line 680 | Line 654 | simple first order expression is then given by the Lie
654   problem. If $H_1$ and $H_2$ can be integrated using exact
655   propagators $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a
656   simple first order expression is then given by the Lie-Trotter
657 < formula
657 > formula\cite{Trotter1959}
658   \begin{equation}
659   \varphi _h  = \varphi _{1,h}  \circ \varphi _{2,h},
660   \label{introEquation:firstOrderSplitting}
# Line 701 | Line 675 | local errors proportional to $h^2$, while the Strang s
675   The Lie-Trotter
676   splitting (Eq.~\ref{introEquation:firstOrderSplitting}) introduces
677   local errors proportional to $h^2$, while the Strang splitting gives
678 < a second-order decomposition,
678 > a second-order decomposition,\cite{Strang1968}
679   \begin{equation}
680   \varphi _h  = \varphi _{1,h/2}  \circ \varphi _{2,h}  \circ \varphi
681   _{1,h/2} , \label{introEquation:secondOrderSplitting}
# Line 734 | Line 708 | known as \emph{velocity verlet} which is
708   \end{align}
709   where $F(t)$ is the force at time $t$. This integration scheme is
710   known as \emph{velocity Verlet}, which is
711 < symplectic(\ref{introEquation:SymplecticFlowComposition}),
712 < time-reversible(\ref{introEquation:timeReversible}) and
713 < volume-preserving (\ref{introEquation:volumePreserving}). These
711 > symplectic (Eq.~\ref{introEquation:SymplecticFlowComposition}),
712 > time-reversible (Eq.~\ref{introEquation:timeReversible}) and
713 > volume-preserving (Eq.~\ref{introEquation:volumePreserving}). These
714   geometric properties contribute to its long-time stability and its
715   popularity in the community. However, the most commonly used
716   velocity Verlet integration scheme is written as follows,
# Line 757 | Line 731 | the equations of motion would follow:
731  
732   \item Use the half step velocities to move positions one whole step, $\Delta t$.
733  
734 < \item Evaluate the forces at the new positions, $\mathbf{q}(\Delta t)$, and use the new forces to complete the velocity move.
734 > \item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.
735  
736   \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
737   \end{enumerate}
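A minimal sketch of one such step (in Python, for illustration only;
the force routine, masses and time step are placeholders rather than
part of this work) is:
\begin{verbatim}
import numpy as np

def velocity_verlet_step(q, v, f, mass, dt, force):
    """One velocity Verlet step following the enumerated scheme above.

    q, v, f : position, velocity and force arrays at time t
    force   : user-supplied callable returning forces at given positions
    """
    v_half = v + 0.5 * dt * f / mass          # half-step velocity move
    q_new = q + dt * v_half                   # full-step position move
    f_new = force(q_new)                      # forces at the new positions
    v_new = v_half + 0.5 * dt * f_new / mass  # complete the velocity move
    return q_new, v_new, f_new
\end{verbatim}
Note that only one force evaluation is required per step, since the
forces computed at the end of one step are reused at the beginning of
the next.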
# Line 776 | Line 750 | q(\Delta t)} \right]. %
750  
751   \subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}
752  
753 < The Baker-Campbell-Hausdorff formula can be used to determine the
754 < local error of a splitting method in terms of the commutator of the
755 < operators(\ref{introEquation:exponentialOperator}) associated with
756 < the sub-propagator. For operators $hX$ and $hY$ which are associated
757 < with $\varphi_1(t)$ and $\varphi_2(t)$ respectively , we have
753 > The Baker-Campbell-Hausdorff formula\cite{Gilmore1974} can be used
754 > to determine the local error of a splitting method in terms of the
755 > commutator of the
755 > operators (Eq.~\ref{introEquation:exponentialOperator}) associated
756 > with the sub-propagators. For operators $hX$ and $hY$ which are
757 > associated with $\varphi_1(t)$ and $\varphi_2(t)$ respectively, we
759 > have
760   \begin{equation}
761   \exp (hX + hY) = \exp (hZ)
762   \end{equation}
# Line 810 | Line 786 | order methods. Yoshida proposed an elegant way to comp
786   \end{equation}
787   A careful choice of the coefficients $a_1 \ldots b_m$ will lead to higher
788   order methods. Yoshida proposed an elegant way to compose higher
789 < order methods based on symmetric splitting\cite{Yoshida1990}. Given
789 > order methods based on symmetric splitting.\cite{Yoshida1990} Given
790   a symmetric second order base method $ \varphi _h^{(2)} $, a
791   fourth-order symmetric method can be constructed by composing,
792   \[
# Line 862 | Line 838 | initialization of a simulation. Sec.~\ref{introSection
838   These three individual steps will be covered in the following
839   sections. Sec.~\ref{introSec:initialSystemSettings} deals with the
840   initialization of a simulation. Sec.~\ref{introSection:production}
841 < will discuss issues of production runs.
841 > discusses issues of production runs.
842   Sec.~\ref{introSection:Analysis} provides the theoretical tools for
843   analysis of trajectories.
844  
# Line 896 | Line 872 | surface and to locate the local minimum. While converg
872   minimization to find a more reasonable conformation. Several energy
873   minimization methods have been developed to explore the energy
874   surface and to locate the local minimum. While converging slowly
875 < near the minimum, steepest descent method is extremely robust when
875 > near the minimum, the steepest descent method is extremely robust when
876   systems are strongly anharmonic. Thus, it is often used to refine
877   structures from crystallographic data. Relying on the Hessian,
878   advanced methods like Newton-Raphson converge rapidly to a local
# Line 915 | Line 891 | end up setting the temperature of the system to a fina
891   temperature. Beginning at a lower temperature and gradually
892   increasing the temperature by assigning larger random velocities, we
893   end up setting the temperature of the system to a final temperature
894 < at which the simulation will be conducted. In heating phase, we
894 > at which the simulation will be conducted. In the heating phase, we
895   should also keep the system from drifting or rotating as a whole. To
896   do this, the net linear momentum and angular momentum of the system
897   are shifted to zero after each resampling from the Maxwell-Boltzmann
# Line 930 | Line 906 | equilibration process is long enough. However, these s
906   properties \textit{etc}, become independent of time. Strictly
907   speaking, minimization and heating are not necessary, provided the
908   equilibration process is long enough. However, these steps can serve
909 < as a means to arrive at an equilibrated structure in an effective
909 > as a means to arrive at an equilibrated structure in an effective
910   way.
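As an illustration of the heating step described above (a schematic
sketch only; the array shapes, units and temperature variable are
assumed, and the removal of the net angular momentum is omitted for
brevity), velocities can be resampled and the net drift removed as
follows:
\begin{verbatim}
import numpy as np

def resample_velocities(masses, T, kB=1.0):
    """Draw velocities from a Maxwell-Boltzmann distribution at temperature
    T and shift them so that the net linear momentum is zero.

    masses : (N,) array of atomic masses; returns an (N, 3) velocity array.
    """
    N = len(masses)
    sigma = np.sqrt(kB * T / masses)              # per-atom velocity width
    v = np.random.normal(size=(N, 3)) * sigma[:, None]
    p_net = (masses[:, None] * v).sum(axis=0)     # total linear momentum
    v -= p_net / masses.sum()                     # remove center-of-mass drift
    return v
\end{verbatim}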
911  
912   \subsection{\label{introSection:production}Production}
# Line 971 | Line 947 | evaluation is to apply spherical cutoffs where particl
947   %cutoff and minimum image convention
948   Another important technique to improve the efficiency of force
949   evaluation is to apply spherical cutoffs where particles farther
950 < than a predetermined distance are not included in the calculation
951 < \cite{Frenkel1996}. The use of a cutoff radius will cause a
952 < discontinuity in the potential energy curve. Fortunately, one can
950 > than a predetermined distance are not included in the
951 > calculation.\cite{Frenkel1996} The use of a cutoff radius will cause
952 > a discontinuity in the potential energy curve. Fortunately, one can
953   shift a simple radial potential to ensure the potential curve goes
954   smoothly to zero at the cutoff radius. The cutoff strategy works
955   well for Lennard-Jones interaction because of its short range
# Line 982 | Line 958 | with rapid and absolute convergence, has proved to min
958   in simulations. The Ewald summation, in which the slowly decaying
959   Coulomb potential is transformed into direct and reciprocal sums
960   with rapid and absolute convergence, has proved to minimize the
961 < periodicity artifacts in liquid simulations. Taking the advantages
962 < of the fast Fourier transform (FFT) for calculating discrete Fourier
963 < transforms, the particle mesh-based
961 > periodicity artifacts in liquid simulations. Taking advantage of
962 > fast Fourier transform (FFT) techniques for calculating discrete
963 > Fourier transforms, the particle mesh-based
964   methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
965   $O(N^{3/2})$ to $O(N \log N)$. An alternative approach is the
966   \emph{fast multipole method}\cite{Greengard1987, Greengard1994},
# Line 994 | Line 970 | charge-neutralized Coulomb potential method developed
970   simulation community, these two methods are difficult to implement
971   correctly and efficiently. Instead, we use a damped and
972   charge-neutralized Coulomb potential method developed by Wolf and
973 < his coworkers\cite{Wolf1999}. The shifted Coulomb potential for
973 > his coworkers.\cite{Wolf1999} The shifted Coulomb potential for
974   particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
975   \begin{equation}
976   V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
977   r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow
978   R_\textrm{c}}\left\{\frac{q_iq_j \textrm{erfc}(\alpha
979 < r_{ij})}{r_{ij}}\right\}. \label{introEquation:shiftedCoulomb}
979 > r_{ij})}{r_{ij}}\right\}, \label{introEquation:shiftedCoulomb}
980   \end{equation}
981   where $\alpha$ is the convergence parameter. Due to the lack of
982   inherent periodicity and rapid convergence, this method is extremely
# Line 1017 | Line 993 | illustration of shifted Coulomb potential.}
993  
994   \subsection{\label{introSection:Analysis} Analysis}
995  
996 < Recently, advanced visualization technique have become applied to
996 > Recently, advanced visualization techniques have been applied to
997   monitor the motions of molecules. Although the dynamics of the
998   system can be described qualitatively from animation, quantitative
999   trajectory analysis is more useful. According to the principles of
# Line 1057 | Line 1033 | Fourier transforming raw data from a series of neutron
1033   function}, is of fundamental importance to liquid theory.
1034   Experimentally, pair distribution functions can be gathered by
1035   Fourier transforming raw data from a series of neutron diffraction
1036 < experiments and integrating over the surface factor
1037 < \cite{Powles1973}. The experimental results can serve as a criterion
1038 < to justify the correctness of a liquid model. Moreover, various
1039 < equilibrium thermodynamic and structural properties can also be
1040 < expressed in terms of the radial distribution function
1041 < \cite{Allen1987}. The pair distribution functions $g(r)$ gives the
1042 < probability that a particle $i$ will be located at a distance $r$
1043 < from a another particle $j$ in the system
1036 > experiments and integrating over the structure
1037 > factor.\cite{Powles1973} The experimental results can serve as a
1038 > criterion to justify the correctness of a liquid model. Moreover,
1039 > various equilibrium thermodynamic and structural properties can also
1040 > be expressed in terms of the radial distribution
1041 > function.\cite{Allen1987} The pair distribution function $g(r)$
1042 > gives the probability that a particle $i$ will be located at a
1043 > distance $r$ from another particle $j$ in the system
1044   \begin{equation}
1045   g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1046   \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
# Line 1087 | Line 1063 | If $A$ and $B$ refer to same variable, this kind of co
1063   \label{introEquation:timeCorrelationFunction}
1064   \end{equation}
1065   If $A$ and $B$ refer to the same variable, such correlation
1066 < function is called an \emph{autocorrelation function}. One example
1091 < of an auto correlation function is the velocity auto-correlation
1066 > functions are called \emph{autocorrelation functions}. One typical example is the velocity autocorrelation
1067   function which is directly related to transport properties of
1068   molecular liquids:
1069 < \[
1069 > \begin{equation}
1070   D = \frac{1}{3}\int\limits_0^\infty  {\left\langle {v(t) \cdot v(0)}
1071   \right\rangle } dt
1072 < \]
1072 > \end{equation}
1073   where $D$ is the diffusion constant. Unlike the velocity autocorrelation
1074   function, which is averaged over time origins and over all the
1075   atoms, the dipole autocorrelation function is calculated for the
1076   entire system. The dipole autocorrelation function is given by:
1077 < \[
1077 > \begin{equation}
1078 > c_{dipole}  = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
1079   \right\rangle
1080 < \]
1080 > \end{equation}
1081   Here $u_{tot}$ is the net dipole of the entire system and is given
1082   by
1083 < \[
1083 > \begin{equation}
1084   u_{tot} (t) = \sum\limits_i {u_i (t)}.
1085 < \]
1085 > \end{equation}
1086   In principle, many time correlation functions can be related to
1087   Fourier transforms of the infrared, Raman, and inelastic neutron
1088   scattering spectra of molecular liquids. In practice, one can
1089   extract the IR spectrum from the intensity of the molecular dipole
1090   fluctuation at each frequency using the following relationship:
1091 < \[
1091 > \begin{equation}
1092   \hat c_{dipole} (v) = \int_{ - \infty }^\infty  {c_{dipole} (t)e^{ -
1093   i2\pi vt} dt}.
1094 < \]
1094 > \end{equation}
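As a practical illustration (a schematic sketch; the trajectory array,
its shape and the time step are assumed inputs and not part of this
work), the diffusion constant defined above can be estimated from a
stored trajectory by averaging the velocity autocorrelation function
over time origins and integrating it numerically:
\begin{verbatim}
import numpy as np

def diffusion_constant(velocities, dt):
    """Estimate D = (1/3) * integral of <v(t).v(0)> dt from a trajectory.

    velocities : array of shape (n_frames, n_atoms, 3)
    dt         : time interval between stored frames
    """
    n_frames = velocities.shape[0]
    n_lags = n_frames // 2
    n_origins = n_frames - n_lags
    vacf = np.zeros(n_lags)
    for lag in range(n_lags):
        # dot v(t0 + lag) with v(t0), averaged over time origins and atoms
        prod = np.sum(velocities[lag:lag + n_origins] *
                      velocities[:n_origins], axis=2)
        vacf[lag] = prod.mean()
    return np.trapz(vacf, dx=dt) / 3.0
\end{verbatim}
Averaging over many time origins, as in the loop above, substantially
reduces the statistical noise in the tail of the correlation function.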
1095  
1096   \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
1097  
1098   Rigid bodies are frequently involved in the modeling of different
1099 < areas, from engineering, physics, to chemistry. For example,
1099 > areas, including engineering, physics and chemistry. For example,
1100   missiles and vehicles are usually modeled by rigid bodies.  The
1101   movement of the objects in 3D gaming engines or other physics
1102   simulators is governed by rigid body dynamics. In molecular
1103   simulations, rigid bodies are used to simplify protein-protein
1104 < docking studies\cite{Gray2003}.
1104 > docking studies.\cite{Gray2003}
1105  
1106   It is very important to develop stable and efficient methods to
1107   integrate the equations of motion for orientational degrees of
# Line 1138 | Line 1113 | still remain. A singularity-free representation utiliz
1113   angles can overcome this difficulty\cite{Barojas1973}, the
1114   computational penalty and the loss of angular momentum conservation
1115   still remain. A singularity-free representation utilizing
1116 < quaternions was developed by Evans in 1977\cite{Evans1977}.
1117 < Unfortunately, this approach uses a nonseparable Hamiltonian
1118 < resulting from the quaternion representation, which prevents the
1116 > quaternions was developed by Evans in 1977.\cite{Evans1977}
1117 > Unfortunately, this approach used a nonseparable Hamiltonian
1118 > resulting from the quaternion representation, which prevented the
1119   symplectic algorithm from being utilized. Another different approach
1120   is to apply holonomic constraints to the atoms belonging to the
1121   rigid body. Each atom moves independently under the normal forces
1122   deriving from potential energy and constraint forces which are used
1123   to guarantee the rigidness. However, due to their iterative nature,
1124   the SHAKE and Rattle algorithms also converge very slowly when the
1125 < number of constraints increases\cite{Ryckaert1977, Andersen1983}.
1125 > number of constraints increases.\cite{Ryckaert1977, Andersen1983}
1126  
1127   A breakthrough in the geometric literature suggests that, in order to
1128   develop a long-term integration scheme, one should preserve the
# Line 1157 | Line 1132 | An alternative method using the quaternion representat
1132   proposed to evolve the Hamiltonian system in a constraint manifold
1133   by iteratively satisfying the orthogonality constraint $Q^T Q = 1$.
1134   An alternative method using the quaternion representation was
1135 < developed by Omelyan\cite{Omelyan1998}. However, both of these
1135 > developed by Omelyan.\cite{Omelyan1998} However, both of these
1136   methods are iterative and inefficient. In this section, we describe a
1137   symplectic Lie-Poisson integrator for rigid bodies developed by
1138   Dullweber and his coworkers\cite{Dullweber1997} in depth.
1139  
1140   \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
1141 < The motion of a rigid body is Hamiltonian with the Hamiltonian
1167 < function
1141 > The Hamiltonian of a rigid body is given by
1142   \begin{equation}
1143   H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
1144   V(q,Q) + \frac{1}{2}tr[(QQ^T  - 1)\Lambda ].
1145   \label{introEquation:RBHamiltonian}
1146   \end{equation}
1147 < Here, $q$ and $Q$  are the position and rotation matrix for the
1148 < rigid-body, $p$ and $P$  are conjugate momenta to $q$  and $Q$ , and
1149 < $J$, a diagonal matrix, is defined by
1147 > Here, $q$ and $Q$  are the position vector and rotation matrix for
1148 > the rigid-body, $p$ and $P$  are conjugate momenta to $q$  and $Q$ ,
1149 > and $J$, a diagonal matrix, is defined by
1150   \[
1151   I_{ii}^{ - 1}  = \frac{1}{2}\sum\limits_{i \ne j} {J_{jj}^{ - 1} }
1152   \]
# Line 1182 | Line 1156 | Q^T Q = 1, \label{introEquation:orthogonalConstraint}
1156   \begin{equation}
1157   Q^T Q = 1, \label{introEquation:orthogonalConstraint}
1158   \end{equation}
1159 > which is used to ensure the rotation matrix's orthogonality. Using
1160 < Eq.~\ref{introEquation:orthogonalConstraint} and using
1161 < Eq.~\ref{introEquation:RBMotionMomentum}, one may obtain,
1188 < \begin{equation}
1189 < Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0 . \\
1190 < \label{introEquation:RBFirstOrderConstraint}
1191 < \end{equation}
1192 < Using Equation (\ref{introEquation:motionHamiltonianCoordinate},
1193 < \ref{introEquation:motionHamiltonianMomentum}), one can write down
1159 > which is used to ensure the rotation matrix's unitarity. Using
1160 > Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
1161 > Eq.~\ref{introEquation:motionHamiltonianMomentum}, one can write down
1162   the equations of motion,
1163   \begin{eqnarray}
1164   \frac{{dq}}{{dt}} & = & \frac{p}{m}, \label{introEquation:RBMotionPosition}\\
# Line 1198 | Line 1166 | the equations of motion,
1166   \frac{{dQ}}{{dt}} & = & PJ^{ - 1},  \label{introEquation:RBMotionRotation}\\
1167   \frac{{dP}}{{dt}} & = & - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}
1168   \end{eqnarray}
1169 + Differentiating Eq.~\ref{introEquation:orthogonalConstraint} and
1170 + using Eq.~\ref{introEquation:RBMotionMomentum}, one may obtain,
1171 + \begin{equation}
1172 + Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0 . \\
1173 + \label{introEquation:RBFirstOrderConstraint}
1174 + \end{equation}
1175   In general, there are two ways to satisfy the holonomic constraints.
1176   We can use a constraint force provided by a Lagrange multiplier on
1177 < the normal manifold to keep the motion on constraint space. Or we
1178 < can simply evolve the system on the constraint manifold. These two
1179 < methods have been proved to be equivalent. The holonomic constraint
1180 < and equations of motions define a constraint manifold for rigid
1181 < bodies
1177 > the normal manifold to keep the motion on the constraint space. Or
1178 > we can simply evolve the system on the constraint manifold. These
1179 > two methods have been proved to be equivalent. The holonomic
1180 > constraint and equations of motions define a constraint manifold for
1181 > rigid bodies
1182   \[
1183   M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0}
1184   \right\}.
1185   \]
1186 < Unfortunately, this constraint manifold is not the cotangent bundle
1187 < $T^* SO(3)$ which can be consider as a symplectic manifold on Lie
1188 < rotation group $SO(3)$. However, it turns out that under symplectic
1189 < transformation, the cotangent space and the phase space are
1216 < diffeomorphic. By introducing
1186 > Unfortunately, this constraint manifold is not $T^* SO(3)$ which is
1187 > a symplectic manifold on the Lie rotation group $SO(3)$. However, it
1188 > turns out that under symplectic transformation, the cotangent space
1189 > and the phase space are diffeomorphic. By introducing
1190   \[
1191   \tilde Q = Q,\tilde P = \frac{1}{2}\left( {P - QP^T Q} \right),
1192   \]
1193 < the mechanical system subject to a holonomic constraint manifold $M$
1193 > the mechanical system subjected to a holonomic constraint manifold $M$
1194   can be re-formulated as a Hamiltonian system on the cotangent space
1195   \[
1196   T^* SO(3) = \left\{ {(\tilde Q,\tilde P):\tilde Q^T \tilde Q =
# Line 1279 | Line 1252 | motion. This unique property eliminates the requiremen
1252   Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies the
1253   Lagrange multiplier $\Lambda$ is absent from the equations of
1254   motion. This unique property eliminates the requirement of
1255 < iterations which can not be avoided in other methods\cite{Kol1997,
1256 < Omelyan1998}. Applying the hat-map isomorphism, we obtain the
1257 < equation of motion for angular momentum on body frame
1255 > iterations which cannot be avoided in other methods.\cite{Kol1997,
1256 > Omelyan1998} Applying the hat-map isomorphism, we obtain the
1257 > equation of motion for angular momentum in the body frame
1258   \begin{equation}
1259   \dot \pi  = \pi  \times I^{ - 1} \pi  + \sum\limits_i {\left( {Q^T
1260   F_i (r,Q)} \right) \times X_i }.
# Line 1294 | Line 1267 | given by
1267   \]
1268  
1269   \subsection{\label{introSection:SymplecticFreeRB}Symplectic
1270 < Lie-Poisson Integrator for Free Rigid Body}
1270 > Lie-Poisson Integrator for Free Rigid Bodies}
1271  
1272   If there are no external forces exerted on the rigid body, the only
1273   contribution to the rotational motion is from the kinetic energy
# Line 1346 | Line 1319 | To reduce the cost of computing expensive functions in
1319   \end{array}} \right),\theta _1  = \frac{{\pi _1 }}{{I_1 }}\Delta t.
1320   \]
1321   To reduce the cost of computing expensive functions in $e^{\Delta
1322 < tR_1 }$, we can use Cayley transformation to obtain a single-aixs
1323 < propagator,
1324 < \[
1325 < e^{\Delta tR_1 }  \approx (1 - \Delta tR_1 )^{ - 1} (1 + \Delta tR_1
1326 < ).
1327 < \]
1328 < The propagator maps for $T_2^r$ and $T_3^r$ can be found in the same
1322 > tR_1 }$, we can use the Cayley transformation to obtain a
1323 > single-axis propagator,
1324 > \begin{eqnarray*}
1325 > e^{\Delta tR_1 }  & \approx & (1 - \Delta tR_1 )^{ - 1} (1 + \Delta
1326 > tR_1 ) \\
1327 > %
1328 > & \approx & \left( \begin{array}{ccc}
1329 > 1 & 0 & 0 \\
1330 > 0 & \frac{1-\theta^2 / 4}{1 + \theta^2 / 4}  & -\frac{\theta}{1+
1331 > \theta^2 / 4} \\
1332 > 0 & \frac{\theta}{1+ \theta^2 / 4} & \frac{1-\theta^2 / 4}{1 +
1333 > \theta^2 / 4}
1334 > \end{array}
1335 > \right).
1336 > \end{eqnarray*}
1337 > The propagators for $T_2^r$ and $T_3^r$ can be found in the same
1338   manner. In order to construct a second-order symplectic method, we
1339   split the angular kinetic Hamiltonian function into five terms
1340   \[
# Line 1368 | Line 1350 | _1 }.
1350   \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi
1351   _1 }.
1352   \]
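To illustrate the structure of this composition (a schematic sketch
only: the rotation sense is assumed to follow the single-axis
propagator matrix displayed above, exact axis rotations are used in
place of the Cayley approximation, and the accompanying update of the
rotation matrix $Q$ is omitted), the body-frame angular momentum can be
advanced as:
\begin{verbatim}
import numpy as np

def axis_rotation(k, theta):
    """3x3 rotation by angle theta about body axis k (k = 0, 1 or 2)."""
    i, j = [(1, 2), (2, 0), (0, 1)][k]
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(3)
    R[i, i] = c; R[i, j] = -s
    R[j, i] = s; R[j, j] = c
    return R

def free_rotor_step(pi, I, dt):
    """Symmetric (Trotter) step for the body-frame angular momentum pi of a
    torque-free rigid body with principal moments I = (I1, I2, I3)."""
    for k, h in [(0, dt / 2), (1, dt / 2), (2, dt), (1, dt / 2), (0, dt / 2)]:
        theta = h * pi[k] / I[k]            # rotation angle for this sub-step
        pi = axis_rotation(k, theta) @ pi   # rotate pi about body axis k
    return pi
\end{verbatim}
Because each sub-step is an exact rotation, the norm of $\pi$ (the
Casimir discussed below) is preserved to machine precision.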
1353 < The non-canonical Lie-Poisson bracket ${F, G}$ of two function
1372 < $F(\pi )$ and $G(\pi )$ is defined by
1353 > The non-canonical Lie-Poisson bracket $\{F, G\}$ of two functions $F(\pi )$ and $G(\pi )$ is defined by
1354   \[
1355   \{ F,G\} (\pi ) = [\nabla _\pi  F(\pi )]^T J(\pi )\nabla _\pi  G(\pi
1356   ).
# Line 1378 | Line 1359 | norm of the angular momentum, $\parallel \pi
1360   function $G$ is zero, $F$ is a \emph{Casimir}, which is a
1361   conserved quantity in a Poisson system. We can easily verify that the
1361   norm of the angular momentum, $\parallel \pi
1362 < \parallel$, is a \emph{Casimir}. Let$ F(\pi ) = S(\frac{{\parallel
1362 > \parallel$, is a \emph{Casimir}.\cite{McLachlan1993} Let $F(\pi ) = S(\frac{{\parallel
1363   \pi \parallel ^2 }}{2})$ for an arbitrary function $ S:R \to R$ ,
1364   then by the chain rule
1365   \[
# Line 1397 | Line 1378 | The Hamiltonian of rigid body can be separated in term
1378   Splitting for Rigid Body}
1379  
1380   The Hamiltonian of a rigid body can be separated in terms of kinetic
1381 < energy and potential energy,$H = T(p,\pi ) + V(q,Q)$. The equations
1381 > energy and potential energy, $H = T(p,\pi ) + V(q,Q)$. The equations
1382   of motion corresponding to potential energy and kinetic energy are
1383 < listed in the below table,
1383 > listed in Table~\ref{introTable:rbEquations}.
1384   \begin{table}
1385   \caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
1386 + \label{introTable:rbEquations}
1387   \begin{center}
1388   \begin{tabular}{|l|l|}
1389    \hline
# Line 1448 | Line 1430 | moving rigid bodies
1430   \begin{eqnarray}
1431   \varphi _{\Delta t}  &=& \varphi _{\Delta t/2,F}  \circ \varphi _{\Delta t/2,\tau }  \notag\\
1432    & & \circ \varphi _{\Delta t,T^t }  \circ \varphi _{\Delta t/2,\pi _1 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi _1 }  \notag\\
1433 <  & & \circ \varphi _{\Delta t/2,\tau }  \circ \varphi _{\Delta t/2,F}  .\\
1433 >  & & \circ \varphi _{\Delta t/2,\tau }  \circ \varphi _{\Delta t/2,F}  .
1434   \label{introEquation:overallRBFlowMaps}
1435   \end{eqnarray}
1436  
# Line 1456 | Line 1438 | has been applied in a variety of studies. This section
1438   As an alternative to Newtonian dynamics, Langevin dynamics, which
1439   mimics a simple heat bath with stochastic and dissipative forces,
1440   has been applied in a variety of studies. This section will review
1441 < the theory of Langevin dynamics. A brief derivation of generalized
1441 > the theory of Langevin dynamics. A brief derivation of the generalized
1442   Langevin equation will be given first. Following that, we will
1443 < discuss the physical meaning of the terms appearing in the equation
1462 < as well as the calculation of friction tensor from hydrodynamics
1463 < theory.
1443 > discuss the physical meaning of the terms appearing in the equation.
1444  
1445   \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}
1446  
# Line 1469 | Line 1449 | Harmonic bath model is the derivation of the Generaliz
1449   environment, has been widely used in quantum chemistry and
1450   statistical mechanics. One of the successful applications of
1451   the harmonic bath model is the derivation of the Generalized Langevin
1452 < Dynamics (GLE). Lets consider a system, in which the degree of
1452 > Equation (GLE). Consider a system in which the degree of
1453   freedom $x$ is assumed to couple to the bath linearly, giving a
1454   Hamiltonian of the form
1455   \begin{equation}
# Line 1480 | Line 1460 | H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_
1460   with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
1461   \[
1462   H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2
1463 < }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 }
1463 > }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2  x_\alpha ^2 }
1464   \right\}}
1465   \]
1466   where the index $\alpha$ runs over all the bath degrees of freedom,
# Line 1525 | Line 1505 | differential equations into simple algebra problems wh
1505   differential equations, the Laplace transform is the appropriate tool
1506   to solve this problem. The basic idea is to transform the difficult
1507   differential equations into simple algebra problems which can be
1508 < solved easily. Then, by applying the inverse Laplace transform, also
1509 < known as the Bromwich integral, we can retrieve the solutions of the
1510 < original problems. Let $f(t)$ be a function defined on $ [0,\infty )
1511 < $, the Laplace transform of $f(t)$ is a new function defined as
1508 > solved easily. Then, by applying the inverse Laplace transform, we
1509 > can retrieve the solutions of the original problems. Let $f(t)$ be a
1510 > function defined on $ [0,\infty ) $, the Laplace transform of $f(t)$
1511 > is a new function defined as
1512   \[
1513   L(f(t)) \equiv F(p) = \int_0^\infty  {f(t)e^{ - pt} dt}
1514   \]
1515   where  $p$ is real and  $L$ is called the Laplace Transform
1516 < Operator. Below are some important properties of Laplace transform
1516 > Operator. Below are some important properties of the Laplace transform
1517   \begin{eqnarray*}
1518   L(x + y)  & = & L(x) + L(y) \\
1519   L(ax)     & = & aL(x) \\
# Line 1546 | Line 1526 | L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{\omega
1526   p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = & - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{\omega _\alpha  }}L(x), \\
1527   L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }}. \\
1528   \end{eqnarray*}
1529 < By the same way, the system coordinates become
1529 > In the same way, the system coordinates become
1530   \begin{eqnarray*}
1531   mL(\ddot x) & = &
1532    - \sum\limits_{\alpha  = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0) - \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}  \\
# Line 1570 | Line 1550 | x_\alpha (0) - \frac{{g_\alpha  }}{{m_\alpha  \omega _
1550   & & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha
1551   x_\alpha (0) - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha  }}}
1552   \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha
1553 < (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}}
1554 < \end{eqnarray*}
1555 < \begin{eqnarray*}
1556 < m\ddot x & = & - \frac{{\partial W(x)}}{{\partial x}} - \int_0^t
1557 < {\sum\limits_{\alpha  = 1}^N {\left( { - \frac{{g_\alpha ^2
1558 < }}{{m_\alpha  \omega _\alpha ^2 }}} \right)\cos (\omega _\alpha
1553 > (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}}\\
1554 > %
1555 > & = & -
1556 > \frac{{\partial W(x)}}{{\partial x}} - \int_0^t {\sum\limits_{\alpha
1557 > = 1}^N {\left( { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha
1558 > ^2 }}} \right)\cos (\omega _\alpha
1559   t)\dot x(t - \tau )d} \tau }  \\
1560   & & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha
1561   x_\alpha (0) - \frac{{g_\alpha }}{{m_\alpha \omega _\alpha  }}}
# Line 1602 | Line 1582 | m\ddot x =  - \frac{{\partial W}}{{\partial x}} - \int
1582   (t)\dot x(t - \tau )d\tau }  + R(t)
1583   \label{introEuqation:GeneralizedLangevinDynamics}
1584   \end{equation}
1585 < which is known as the \emph{generalized Langevin equation}.
1585 > which is known as the \emph{generalized Langevin equation} (GLE).
1586  
1587   \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}
1588  
1589   One may notice that $R(t)$ depends only on initial conditions, which
1590   implies it is completely deterministic within the context of a
1591   harmonic bath. However, it is easy to verify that $R(t)$ is totally
1592 < uncorrelated to $x$ and $\dot x$,$\left\langle {x(t)R(t)}
1592 > uncorrelated to $x$ and $\dot x$, $\left\langle {x(t)R(t)}
1593   \right\rangle  = 0, \left\langle {\dot x(t)R(t)} \right\rangle  =
1594   0.$ This property is what we expect from a truly random process. As
1595   long as the model chosen for $R(t)$ was a gaussian distribution in
# Line 1638 | Line 1618 | taken as a $delta$ function in time:
1618   infinitely quickly to motions in the system. Thus, $\xi (t)$ can be
1619   taken as a $\delta$-function in time:
1620   \[
1621 < \xi (t) = 2\xi _0 \delta (t)
1621 > \xi (t) = 2\xi _0 \delta (t).
1622   \]
1623   Hence, the convolution integral becomes
1624   \[
# Line 1653 | Line 1633 | or be determined by Stokes' law for regular shaped par
1633   which is known as the Langevin equation. The static friction
1634   coefficient $\xi _0$ can either be calculated from spectral density
1635   or be determined by Stokes' law for regularly shaped particles. A
1636 < briefly review on calculating friction tensor for arbitrary shaped
1636 > brief review of calculating friction tensors for arbitrarily shaped
1637   particles is given in Sec.~\ref{introSection:frictionTensor}.
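As a practical illustration of the Langevin equation above (a schematic
sketch only; the potential gradient, mass and friction coefficient are
placeholders, and a simple Euler-Maruyama discretization is used rather
than any particular scheme adopted in this work), a single step may be
written as:
\begin{verbatim}
import numpy as np

def langevin_step(x, v, dt, mass, xi0, kBT, grad_W):
    """One Euler-Maruyama step of m dv/dt = -dW/dx - xi0*v + R(t).

    The random force R is drawn so that <R(t)R(t')> = 2*xi0*kBT*delta(t-t'),
    consistent with the fluctuation-dissipation relation for xi(t) = 2*xi0*delta(t).
    """
    R = np.sqrt(2.0 * xi0 * kBT / dt) * np.random.normal(size=np.shape(x))
    a = (-grad_W(x) - xi0 * v + R) / mass
    v_new = v + dt * a
    x_new = x + dt * v_new
    return x_new, v_new
\end{verbatim}
The $1/\sqrt{\Delta t}$ scaling of the random force reflects the
$\delta$-correlated nature of $R(t)$: the impulse accumulated over a
step of length $\Delta t$ must have variance proportional to $\Delta t$.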
1638  
1639   \subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}}
# Line 1663 | Line 1643 | q_\alpha  (t) = x_\alpha  (t) - \frac{1}{{m_\alpha  \o
1643   q_\alpha  (t) = x_\alpha  (t) - \frac{1}{{m_\alpha  \omega _\alpha
1644   ^2 }}x(0),
1645   \]
1646 < we can rewrite $R(T)$ as
1646 > we can rewrite $R(t)$ as
1647   \[
1648   R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}.
1649   \]
# Line 1674 | Line 1654 | And since the $q$ coordinates are harmonic oscillators
1654   \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle & = &\delta _{\alpha \beta } \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle  \\
1655   \left\langle {R(t)R(0)} \right\rangle & = & \sum\limits_\alpha  {\sum\limits_\beta  {g_\alpha  g_\beta  \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle } }  \\
1656    & = &\sum\limits_\alpha  {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t)}  \\
1657 <  & = &kT\xi (t) \\
1657 >  & = &kT\xi (t)
1658   \end{eqnarray*}
1659   Thus, we recover the \emph{second fluctuation dissipation theorem}
1660   \begin{equation}
