
Comparing trunk/tengDissertation/Introduction.tex (file contents):
Revision 2692 by tim, Tue Apr 4 21:32:58 2006 UTC vs.
Revision 2942 by tim, Mon Jul 17 20:54:17 2006 UTC

1   \chapter{\label{chapt:introduction}INTRODUCTION AND THEORETICAL BACKGROUND}
2  
3 < \section{\label{introSection:molecularDynamics}Molecular Dynamics}
3 > \section{\label{introSection:classicalMechanics}Classical
4 > Mechanics}
5  
6 < As a special discipline of molecular modeling, Molecular dynamics
7 < has proven to be a powerful tool for studying the functions of
8 < biological systems, providing structural, thermodynamic and
9 < dynamical information.
6 > Molecular Dynamics simulations are carried out by integrating
7 > equations of motion derived from Classical Mechanics for a given
8 > system of particles. There are three fundamental ideas behind
9 > classical mechanics. Firstly, one can determine the state of a
10 > mechanical system at any time of interest; secondly, all the
11 > mechanical properties of the system at that time can be determined
12 > by combining knowledge of the properties of the system with the
13 > specification of this state; finally, the specification of the
14 > state, when further combined with the laws of mechanics, is
15 > sufficient to predict the future behavior of the system.
17  
18 < \subsection{\label{introSection:classicalMechanics}Classical Mechanics}
18 > \subsection{\label{introSection:newtonian}Newtonian Mechanics}
19 > Newton's three laws of mechanics, which govern the
20 > motion of particles, are the foundation of classical mechanics.
21 > Newton's first law defines a class of inertial frames. Inertial
22 > frames are reference frames where a particle not interacting with
23 > other bodies will move with constant speed in the same direction.
24 > With respect to inertial frames, Newton's second law has the form
25 > \begin{equation}
26 > F = \frac{dp}{dt} = m\frac{dv}{dt}
27 > \label{introEquation:newtonSecondLaw}
28 > \end{equation}
29 > A point mass interacting with other bodies accelerates along the
30 > direction of the net force acting on it. Let
31 > $F_{ij}$ be the force that particle $i$ exerts on particle $j$, and
32 > $F_{ji}$ be the force that particle $j$ exerts on particle $i$.
33 > Newton's third law states that
34 > \begin{equation}
35 > F_{ij} = -F_{ji}.
36 > \label{introEquation:newtonThirdLaw}
37 > \end{equation}
38 > Conservation laws of Newtonian Mechanics play important roles
39 > in solving mechanics problems. The linear momentum of a particle is
40 > conserved if it experiences no net force. The second
41 > conservation theorem concerns the angular momentum of a particle.
42 > The angular momentum $L$ of a particle with respect to an origin
43 > from which $r$ is measured is defined to be
44 > \begin{equation}
45 > L \equiv r \times p \label{introEquation:angularMomentumDefinition}
46 > \end{equation}
47 > The torque $\tau$ with respect to the same origin is defined to be
48 > \begin{equation}
49 > \tau \equiv r \times F \label{introEquation:torqueDefinition}
50 > \end{equation}
51 > Differentiating Eq.~\ref{introEquation:angularMomentumDefinition},
52 > \[
53 > \dot L = \frac{d}{{dt}}(r \times p) = (\dot r \times p) + (r \times
54 > \dot p)
55 > \]
56 > since
57 > \[
58 > \dot r \times p = \dot r \times mv = m\dot r \times \dot r \equiv 0
59 > \]
60 > thus,
61 > \begin{equation}
62 > \dot L = r \times \dot p = \tau
63 > \end{equation}
64 > If there are no external torques acting on a body, its angular
65 > momentum is conserved. The last conservation theorem states
66 > that if all forces are conservative, energy is conserved,
67 > \begin{equation}E = T + V. \label{introEquation:energyConservation}
68 > \end{equation}
69 > All of these conserved quantities are important measures of
70 > the quality of numerical integration schemes for rigid
71 > bodies.\cite{Dullweber1997}
72  
73 < Closely related to Classical Mechanics, Molecular Dynamics
13 < simulations are carried out by integrating the equations of motion
14 < for a given system of particles. There are three fundamental ideas
15 < behind classical mechanics. Firstly, One can determine the state of
16 < a mechanical system at any time of interest; Secondly, all the
17 < mechanical properties of the system at that time can be determined
18 < by combining the knowledge of the properties of the system with the
19 < specification of this state; Finally, the specification of the state
20 < when further combine with the laws of mechanics will also be
21 < sufficient to predict the future behavior of the system.
73 > \subsection{\label{introSection:lagrangian}Lagrangian Mechanics}
74  
75 < \subsubsection{\label{introSection:newtonian}Newtonian Mechanics}
75 > Newtonian Mechanics suffers from an important limitation: motion
76 > can only be described in Cartesian coordinate systems, which makes
77 > it impossible to predict analytically the properties of the system
78 > even if we know all of the details of the interaction. In order to
79 > overcome some of the practical difficulties which arise in attempts
80 > to apply Newton's equations to complex systems, approximate
81 > numerical procedures may be developed.
82  
83 < \subsubsection{\label{introSection:lagrangian}Lagrangian Mechanics}
83 > \subsubsection{\label{introSection:halmiltonPrinciple}\textbf{Hamilton's
84 > Principle}}
85  
27 Newtonian Mechanics suffers from two important limitations: it
28 describes their motion in special cartesian coordinate systems.
29 Another limitation of Newtonian mechanics becomes obvious when we
30 try to describe systems with large numbers of particles. It becomes
31 very difficult to predict the properties of the system by carrying
32 out calculations involving the each individual interaction between
33 all the particles, even if we know all of the details of the
34 interaction. In order to overcome some of the practical difficulties
35 which arise in attempts to apply Newton's equation to complex
36 system, alternative procedures may be developed.
37
38 \subsubsubsection{\label{introSection:halmiltonPrinciple}Hamilton's
39 Principle}
40
86   Hamilton introduced the dynamical principle upon which it is
87 < possible to base all of mechanics and, indeed, most of classical
88 < physics. Hamilton's Principle may be stated as follow,
89 <
90 < The actual trajectory, along which a dynamical system may move from
91 < one point to another within a specified time, is derived by finding
92 < the path which minimizes the time integral of the difference between
48 < the kinetic, $K$, and potential energies, $U$.
87 > possible to base all of mechanics and most of classical physics.
88 > Hamilton's Principle may be stated as follows: the trajectory, along
89 > which a dynamical system may move from one point to another within a
90 > specified time, is derived by finding the path which minimizes the
91 > time integral of the difference between the kinetic, $K$, and
92 > potential, $U$, energies,
93   \begin{equation}
94 < \delta \int_{t_1 }^{t_2 } {(K - U)dt = 0} ,
95 < \lable{introEquation:halmitonianPrinciple1}
94 > \delta \int_{t_1 }^{t_2 } {(K - U)dt = 0}.
95 > \label{introEquation:halmitonianPrinciple1}
96   \end{equation}
53
97   For simple mechanical systems, where the forces acting on the
98 < different part are derivable from a potential and the velocities are
99 < small compared with that of light, the Lagrangian function $L$ can
100 < be define as the difference between the kinetic energy of the system
58 < and its potential energy,
98 > different parts are derivable from a potential, the Lagrangian
99 > function $L$ can be defined as the difference between the kinetic
100 > energy of the system and its potential energy,
101   \begin{equation}
102 < L \equiv K - U = L(q_i ,\dot q_i ) ,
102 > L \equiv K - U = L(q_i ,\dot q_i ).
103   \label{introEquation:lagrangianDef}
104   \end{equation}
105 < then Eq.~\ref{introEquation:halmitonianPrinciple1} becomes
105 > Thus, Eq.~\ref{introEquation:halmitonianPrinciple1} becomes
106   \begin{equation}
107 < \delta \int_{t_1 }^{t_2 } {K dt = 0} ,
108 < \lable{introEquation:halmitonianPrinciple2}
107 > \delta \int_{t_1 }^{t_2 } {L dt = 0} .
108 > \label{introEquation:halmitonianPrinciple2}
109   \end{equation}
110  
111 < \subsubsubsection{\label{introSection:equationOfMotionLagrangian}The
112 < Equations of Motion in Lagrangian Mechanics}
111 > \subsubsection{\label{introSection:equationOfMotionLagrangian}\textbf{The
112 > Equations of Motion in Lagrangian Mechanics}}
113  
114 < for a holonomic system of $f$ degrees of freedom, the equations of
115 < motion in the Lagrangian form is
114 > For a system of $f$ degrees of freedom, the equations of motion in
115 > the Lagrangian form are
116   \begin{equation}
117   \frac{d}{{dt}}\frac{{\partial L}}{{\partial \dot q_i }} -
118   \frac{{\partial L}}{{\partial q_i }} = 0,{\rm{ }}i = 1, \ldots,f
119 < \lable{introEquation:eqMotionLagrangian}
119 > \label{introEquation:eqMotionLagrangian}
120   \end{equation}
121   where $q_{i}$ is the generalized coordinate and $\dot{q_{i}}$ is the
122   generalized velocity.
123  
124 < \subsubsection{\label{introSection:hamiltonian}Hamiltonian Mechanics}
124 > \subsection{\label{introSection:hamiltonian}Hamiltonian Mechanics}
125  
126   Arising from Lagrangian Mechanics, Hamiltonian Mechanics was
127   introduced by William Rowan Hamilton in 1833 as a re-formulation of
128   classical mechanics. If the potential energy of a system is
129 < independent of generalized velocities, the generalized momenta can
88 < be defined as
129 > independent of velocities, the momenta can be defined as
130   \begin{equation}
131   p_i = \frac{\partial L}{\partial \dot q_i}
132   \label{introEquation:generalizedMomenta}
133   \end{equation}
134 < With the help of these momenta, we may now define a new quantity $H$
94 < by the equation
134 > The Lagrange equations of motion are then expressed by
135   \begin{equation}
136 < H = p_1 \dot q_1  +  \ldots  + p_f \dot q_f  - L,
136 > \dot p_i  = \frac{{\partial L}}{{\partial q_i }}
137 > \label{introEquation:generalizedMomentaDot}
138 > \end{equation}
139 > With the help of the generalized momenta, we may now define a new
140 > quantity $H$ by the equation
141 > \begin{equation}
142 > H = \sum\limits_k {p_k \dot q_k }  - L ,
143   \label{introEquation:hamiltonianDefByLagrangian}
144   \end{equation}
145   where $ \dot q_1  \ldots \dot q_f $ are generalized velocities and
146 < $L$ is the Lagrangian function for the system.
146 > $L$ is the Lagrangian function for the system. Differentiating
147 > Eq.~\ref{introEquation:hamiltonianDefByLagrangian}, one can obtain
148 > \begin{equation}
149 > dH = \sum\limits_k {\left( {p_k d\dot q_k  + \dot q_k dp_k  -
150 > \frac{{\partial L}}{{\partial q_k }}dq_k  - \frac{{\partial
151 > L}}{{\partial \dot q_k }}d\dot q_k } \right)}  - \frac{{\partial
152 > L}}{{\partial t}}dt . \label{introEquation:diffHamiltonian1}
153 > \end{equation}
154 > Making use of Eq.~\ref{introEquation:generalizedMomenta}, the first
155 > and fourth terms in the parentheses cancel, while
156 > Eq.~\ref{introEquation:generalizedMomentaDot} allows $\partial
> L/\partial q_k$ to be replaced by $\dot p_k$. Therefore,
> Eq.~\ref{introEquation:diffHamiltonian1} can be rewritten as
157 > \begin{equation}
158 > dH = \sum\limits_k {\left( {\dot q_k dp_k  - \dot p_k dq_k }
159 > \right)}  - \frac{{\partial L}}{{\partial t}}dt .
160 > \label{introEquation:diffHamiltonian2}
161 > \end{equation}
162 > By identifying the coefficients of $dq_k$, $dp_k$ and $dt$, we can
163 > find
164 > \begin{equation}
165 > \frac{{\partial H}}{{\partial p_k }} = \dot {q_k}
166 > \label{introEquation:motionHamiltonianCoordinate}
167 > \end{equation}
168 > \begin{equation}
169 > \frac{{\partial H}}{{\partial q_k }} =  - \dot {p_k}
170 > \label{introEquation:motionHamiltonianMomentum}
171 > \end{equation}
172 > and
173 > \begin{equation}
174 > \frac{{\partial H}}{{\partial t}} =  - \frac{{\partial L}}{{\partial
175 > t}}
176 > \label{introEquation:motionHamiltonianTime}
177 > \end{equation}
178 > where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
179 > Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
180 > equations of motion. Due to their symmetrical form, they are also
181 > known as the canonical equations of motion.\cite{Goldstein2001}
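>
> As a concrete illustration, the sketch below integrates Hamilton's
> first-order equations for an assumed one-dimensional harmonic
> oscillator, $H = p^2/2m + kq^2/2$, using a simple forward-Euler
> update (for illustration only; the symplectic integrators discussed
> later in this chapter are preferred in practice):
> \begin{verbatim}
> # minimal sketch: Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq
> m, k = 1.0, 1.0
> q, p = 1.0, 0.0
> dt = 0.001
> for _ in range(1000):
>     dq = (p / m) * dt        # dH/dp = p/m
>     dp = (-k * q) * dt       # -dH/dq = -k q
>     q, p = q + dq, p + dp
> print(q, p)
> \end{verbatim}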
182  
183   An important difference between Lagrangian approach and the
184   Hamiltonian approach is that the Lagrangian is considered to be a
185 < function of the generalized velocities $\dot q_i$ and the
186 < generalized coordinates $q_i$, while the Hamiltonian is considered
187 < to be a function of the generalized momenta $p_i$ and the conjugate
188 < generalized coordinate $q_i$. Hamiltonian Mechanics is more
189 < appropriate for application to statistical mechanics and quantum
190 < mechanics, since it treats the coordinate and its time derivative as
191 < independent variables and it only works with 1st-order differential
192 < equations.
193 <
194 <
195 < \subsubsection{\label{introSection:canonicalTransformation}Canonical Transformation}
196 <
197 < \subsection{\label{introSection:statisticalMechanics}Statistical Mechanics}
185 > function of the generalized velocities $\dot q_i$ and coordinates
186 > $q_i$, while the Hamiltonian is considered to be a function of the
187 > generalized momenta $p_i$ and the conjugate coordinates $q_i$.
188 > Hamiltonian Mechanics is more appropriate for application to
189 > statistical mechanics and quantum mechanics, since it treats the
190 > coordinates and momenta as independent variables and it
191 > only works with first-order differential equations.\cite{Marion1990}
192 > In Newtonian Mechanics, a system described by conservative forces
193 > conserves the total energy
194 > (Eq.~\ref{introEquation:energyConservation}). It follows that
195 > Hamilton's equations of motion conserve the total Hamiltonian
196 > \begin{equation}
197 > \frac{{dH}}{{dt}} = \sum\limits_i {\left( {\frac{{\partial
198 > H}}{{\partial q_i }}\dot q_i  + \frac{{\partial H}}{{\partial p_i
199 > }}\dot p_i } \right)}  = \sum\limits_i {\left( {\frac{{\partial
200 > H}}{{\partial q_i }}\frac{{\partial H}}{{\partial p_i }} -
201 > \frac{{\partial H}}{{\partial p_i }}\frac{{\partial H}}{{\partial
202 > q_i }}} \right) = 0}. \label{introEquation:conserveHalmitonian}
203 > \end{equation}
204  
205 < The thermodynamic behaviors and properties  of Molecular Dynamics
205 > \section{\label{introSection:statisticalMechanics}Statistical
206 > Mechanics}
207 >
208 > The thermodynamic behavior and properties of a Molecular Dynamics
209   simulation are governed by the principles of Statistical Mechanics.
210   The following section will give a brief introduction to some of the
211 < Statistical Mechanics concepts presented in this dissertation.
211 > Statistical Mechanics concepts and theorems presented in this
212 > dissertation.
213  
214 < \subsubsection{\label{introSection::ensemble}Ensemble}
214 > \subsection{\label{introSection:ensemble}Phase Space and Ensemble}
215  
216 < \subsubsection{\label{introSection:ergodic}The Ergodic Hypothesis}
216 > Mathematically, phase space is the space which represents all
217 > possible states of a system. Each possible state of the system
218 > corresponds to one unique point in the phase space. For mechanical
219 > systems, the phase space usually consists of all possible values of
220 > position and momentum variables. Consider a dynamic system of $f$
221 > particles in a cartesian space, where each of the $6f$ coordinates
222 > and momenta is assigned to one of $6f$ mutually orthogonal axes, the
223 > phase space of this system is a $6f$ dimensional space. A point, $x
224 > =
225 > (\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}
226 > \over q} _1 , \ldots
227 > ,\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}
228 > \over q} _f
229 > ,\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}
230 > \over p} _1  \ldots
231 > ,\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}
232 > \over p} _f )$ , with a unique set of values of $6f$ coordinates and
233 > momenta is a phase space vector.
234 > %%%fix me
235  
236 < \subsection{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
236 > In statistical mechanics, the condition of an ensemble at any time
237 > can be regarded as appropriately specified by the density $\rho$
238 > with which representative points are distributed over the phase
239 > space. The density distribution for an ensemble with $f$ degrees of
240 > freedom is defined as,
241 > \begin{equation}
242 > \rho  = \rho (q_1 , \ldots ,q_f ,p_1 , \ldots ,p_f ,t).
243 > \label{introEquation:densityDistribution}
244 > \end{equation}
245 > Governed by the principles of mechanics, the phase points change
246 > their locations, which changes the density at any point in phase
247 > space. Hence, the density distribution is also to be taken as a
248 > function of time. The number of systems $\delta N$ at time $t$
249 > can be determined by,
250 > \begin{equation}
251 > \delta N = \rho (q,p,t)dq_1  \ldots dq_f dp_1  \ldots dp_f.
252 > \label{introEquation:deltaN}
253 > \end{equation}
254 > Assuming there are enough copies of the system, $\delta N$ can be
255 > approximated as varying continuously, without discontinuity, as we
256 > go from one region of the phase space to another. By integrating
257 > over the whole phase space,
258 > \begin{equation}
259 > N = \int { \ldots \int {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f
260 > \label{introEquation:totalNumberSystem}
261 > \end{equation}
262 > gives us an expression for the total number of copies. Hence, the
263 > probability per unit volume in the phase space can be obtained by,
264 > \begin{equation}
265 > \frac{{\rho (q,p,t)}}{N} = \frac{{\rho (q,p,t)}}{{\int { \ldots \int
266 > {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
267 > \label{introEquation:unitProbability}
268 > \end{equation}
269 > With the help of Eq.~\ref{introEquation:unitProbability} and the
270 > knowledge of the system, it is possible to calculate the average
271 > value of any desired quantity which depends on the coordinates and
272 > momenta of the system. Even when the dynamics of the real system are
273 > complex, or stochastic, or even discontinuous, the average
274 > properties of the ensemble of possibilities as a whole remain well
275 > defined. For a classical system in thermal equilibrium with its
276 > environment, the ensemble average of a mechanical quantity, $\langle
277 > A(q , p) \rangle_t$, takes the form of an integral over the phase
278 > space of the system,
279 > \begin{equation}
280 > \langle  A(q , p) \rangle_t = \frac{{\int { \ldots \int {A(q,p)\rho
281 > (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}{{\int { \ldots \int {\rho
282 > (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
283 > \label{introEquation:ensembelAverage}
284 > \end{equation}
285  
286 < \subsection{\label{introSection:correlationFunctions}Correlation Functions}
286 > \subsection{\label{introSection:liouville}Liouville's Theorem}
287  
288 < \section{\label{introSection:langevinDynamics}Langevin Dynamics}
288 > Liouville's theorem is the foundation on which statistical mechanics
289 > rests. It describes the time evolution of the phase space
290 > distribution function. In order to calculate the rate of change of
291 > $\rho$, we begin from Eq.~\ref{introEquation:deltaN}. Consider the
292 > two faces of a volume element perpendicular to the $q_1$ axis,
293 > which are located at $q_1$ and $q_1 + \delta q_1$; the number of
294 > phase points leaving through the face at $q_1 + \delta q_1$ per
> unit time is given by the expression,
295 > \begin{equation}
296 > \left( {\rho  + \frac{{\partial \rho }}{{\partial q_1 }}\delta q_1 }
297 > \right)\left( {\dot q_1  + \frac{{\partial \dot q_1 }}{{\partial q_1
298 > }}\delta q_1 } \right)\delta q_2  \ldots \delta q_f \delta p_1
299 > \ldots \delta p_f .
300 > \end{equation}
301 > Summing the contributions over all faces of the volume element, we obtain
302 > \begin{equation}
303 > \frac{{d(\delta N)}}{{dt}} =  - \sum\limits_{i = 1}^f {\left[ {\rho
304 > \left( {\frac{{\partial \dot q_i }}{{\partial q_i }} +
305 > \frac{{\partial \dot p_i }}{{\partial p_i }}} \right) + \left(
306 > {\frac{{\partial \rho }}{{\partial q_i }}\dot q_i  + \frac{{\partial
307 > \rho }}{{\partial p_i }}\dot p_i } \right)} \right]} \delta q_1
308 > \ldots \delta q_f \delta p_1  \ldots \delta p_f .
309 > \end{equation}
310 > Differentiating the equations of motion in Hamiltonian formalism
311 > (\ref{introEquation:motionHamiltonianCoordinate},
312 > \ref{introEquation:motionHamiltonianMomentum}), we can show,
313 > \begin{equation}
314 > \sum\limits_i {\left( {\frac{{\partial \dot q_i }}{{\partial q_i }}
315 > + \frac{{\partial \dot p_i }}{{\partial p_i }}} \right)}  = 0 ,
316 > \end{equation}
317 > which cancels the first term on the right hand side. Furthermore,
318 > dividing both sides by $ \delta q_1  \ldots \delta q_f \delta p_1
319 > \ldots \delta p_f $, we can write out Liouville's theorem in a
320 > simple form,
321 > \begin{equation}
322 > \frac{{\partial \rho }}{{\partial t}} + \sum\limits_{i = 1}^f
323 > {\left( {\frac{{\partial \rho }}{{\partial q_i }}\dot q_i  +
324 > \frac{{\partial \rho }}{{\partial p_i }}\dot p_i } \right)}  = 0 .
325 > \label{introEquation:liouvilleTheorem}
326 > \end{equation}
327 > Liouville's theorem states that the distribution function is
328 > constant along any trajectory in phase space. In classical
329 > statistical mechanics, since the number of system copies in an
330 > ensemble is huge and constant, we can assume the local density has
331 > no reason (other than classical mechanics) to change,
332 > \begin{equation}
333 > \frac{{\partial \rho }}{{\partial t}} = 0.
334 > \label{introEquation:stationary}
335 > \end{equation}
336 > In such a stationary system, the density distribution $\rho$ can be
337 > connected to the Hamiltonian $H$ through the Maxwell-Boltzmann
338 > distribution,
339 > \begin{equation}
340 > \rho  \propto e^{ - \beta H}
341 > \label{introEquation:densityAndHamiltonian}
342 > \end{equation}
343  
344 < \subsection{\label{introSection:generalizedLangevinDynamics}Generalized Langevin Dynamics}
344 > \subsubsection{\label{introSection:phaseSpaceConservation}\textbf{Conservation of Phase Space}}
345 > Let us consider a region in the phase space,
346 > \begin{equation}
347 > \delta v = \int { \ldots \int {dq_1 } \ldots dq_f dp_1 } \ldots dp_f .
348 > \end{equation}
349 > If this region is small enough, the density $\rho$ can be regarded
350 > as uniform over the whole integral. Thus, the number of phase points
351 > inside this region is given by,
352 > \begin{equation}
353 > \delta N = \rho \delta v = \rho \int { \ldots \int {dq_1 } \ldots dq_f
354 > dp_1 } \ldots dp_f.
355 > \end{equation}
356  
> Since phase points are neither created nor destroyed, $\delta N$ is
> conserved as the region evolves in time. Differentiating $\delta N =
> \rho \delta v$ with respect to time gives,
357 < \subsection{\label{introSection:hydroynamics}Hydrodynamics}
357 > \begin{equation}
358 > \frac{{d(\delta N)}}{{dt}} = \frac{{d\rho }}{{dt}}\delta v + \rho
359 > \frac{d}{{dt}}(\delta v) = 0.
360 > \end{equation}
361 > With the help of the stationary assumption
362 > (Eq.~\ref{introEquation:stationary}), we obtain the principle of
363 > \emph{conservation of volume in phase space},
364 > \begin{equation}
365 > \frac{d}{{dt}}(\delta v) = \frac{d}{{dt}}\int { \ldots \int {dq_1 }
366 > ...dq_f dp_1 } ..dp_f  = 0.
367 > \label{introEquation:volumePreserving}
368 > \end{equation}
369 >
370 > \subsubsection{\label{introSection:liouvilleInOtherForms}\textbf{Liouville's Theorem in Other Forms}}
371 >
372 > Liouville's theorem can be expressed in a variety of different forms
373 > which are convenient within different contexts. For any two functions
374 > $F$ and $G$ of the coordinates and momenta of a system, the Poisson
375 > bracket $\{F,G\}$ is defined as
376 > \begin{equation}
377 > \left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial
378 > F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} -
379 > \frac{{\partial F}}{{\partial p_i }}\frac{{\partial G}}{{\partial
380 > q_i }}} \right)}.
381 > \label{introEquation:poissonBracket}
382 > \end{equation}
383 > Substituting the equations of motion in Hamiltonian formalism
384 > (Eq.~\ref{introEquation:motionHamiltonianCoordinate},
385 > Eq.~\ref{introEquation:motionHamiltonianMomentum}) into
386 > Eq.~\ref{introEquation:liouvilleTheorem}, we can rewrite
387 > Liouville's theorem using Poisson bracket notation,
388 > \begin{equation}
389 > \left( {\frac{{\partial \rho }}{{\partial t}}} \right) =  - \left\{
390 > {\rho ,H} \right\}.
391 > \label{introEquation:liouvilleTheromInPoissin}
392 > \end{equation}
393 > Moreover, the Liouville operator is defined as
394 > \begin{equation}
395 > iL = \sum\limits_{i = 1}^f {\left( {\frac{{\partial H}}{{\partial
396 > p_i }}\frac{\partial }{{\partial q_i }} - \frac{{\partial
397 > H}}{{\partial q_i }}\frac{\partial }{{\partial p_i }}} \right)}
398 > \label{introEquation:liouvilleOperator}
399 > \end{equation}
400 > In terms of the Liouville operator, Liouville's equation can also be
401 > expressed as
402 > \begin{equation}
403 > \left( {\frac{{\partial \rho }}{{\partial t}}} \right) =  - iL\rho
404 > \label{introEquation:liouvilleTheoremInOperator}
405 > \end{equation}
406 > which can help define a propagator $\rho (t) = e^{-iLt} \rho (0)$.
407 > \subsection{\label{introSection:ergodic}The Ergodic Hypothesis}
408 >
409 > Various thermodynamic properties can be calculated from Molecular
410 > Dynamics simulation. By comparing experimental values with the
411 > calculated properties, one can determine the accuracy of the
412 > simulation and the quality of the underlying model. However, both
413 > experiments and computer simulations are usually performed over a
414 > finite time interval, and the measurements are averaged over a
415 > period of time, whereas Statistical Mechanics describes the average
416 > behavior of a many-body system as an ensemble average. Fortunately,
417 > the Ergodic Hypothesis makes a connection between the time average
418 > and the ensemble average. It states that the time average and the
419 > average over the statistical ensemble are identical:\cite{Frenkel1996, Leach2001}
420 > \begin{equation}
421 > \langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
422 > \frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt}  = \int\limits_\Gamma
423 > {A(q,p)\rho (q,p)dqdp}
424 > \end{equation}
425 > where $\langle  A(q , p) \rangle_t$ is the equilibrium value of a
426 > physical quantity and $\rho (q,p)$ is the equilibrium
427 > distribution function. If an observation is averaged over a
428 > sufficiently long time (longer than the relaxation time), all
429 > accessible microstates in phase space are assumed to be equally
430 > probed, giving a properly weighted statistical average. This allows
431 > the researcher freedom of choice when deciding how best to measure a
432 > given observable. If an ensemble-averaged approach is most
433 > appropriate, the Monte Carlo methods\cite{Metropolis1949} can be
434 > utilized; if the system lends itself to a time-averaging
435 > approach, the Molecular Dynamics techniques in
436 > Sec.~\ref{introSection:molecularDynamics} will be the best
437 > choice.\cite{Frenkel1996}
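>
> The sketch below illustrates this equivalence numerically for an
> assumed model, a one-dimensional harmonic oscillator coupled to a
> Langevin heat bath: the long time average of $q^2$ along a single
> trajectory is compared with the Boltzmann ensemble average $\langle
> q^2 \rangle = k_B T/k$ from equipartition (reduced units, $k_B = 1$):
> \begin{verbatim}
> import numpy as np
>
> rng = np.random.default_rng(0)
> kT, k, m, gamma, dt = 1.0, 1.0, 1.0, 1.0, 0.01
> q, v, qsq = 0.0, 0.0, 0.0
> nsteps = 200000
> for _ in range(nsteps):
>     # Euler-Maruyama step of the Langevin equation
>     noise = np.sqrt(2.0 * gamma * kT / m * dt) * rng.normal()
>     v += (-k * q / m - gamma * v) * dt + noise
>     q += v * dt
>     qsq += q * q
> print(qsq / nsteps)   # time average, ~1.0
> print(kT / k)         # ensemble average, 1.0
> \end{verbatim}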
438 >
439 > \section{\label{introSection:geometricIntegratos}Geometric Integrators}
440 > A variety of numerical integrators have been proposed to simulate
441 > the motions of atoms in MD simulation. They usually begin with
442 > initial conditions and move the objects in the direction governed by
443 > the differential equations. However, most of them ignore the hidden
444 > physical laws contained within the equations. Since 1990, geometric
445 > integrators, which preserve various phase-flow invariants such as
446 > symplectic structure, volume and time reversal symmetry, have been
447 > developed to address this issue.\cite{Dullweber1997, McLachlan1998,
448 > Leimkuhler1999} The velocity Verlet method, which happens to be a
449 > simple example of a symplectic integrator, continues to gain
450 > popularity in the molecular dynamics community. This fact can be
451 > partly explained by its geometric nature.
452 >
453 > \subsection{\label{introSection:symplecticManifold}Symplectic Manifolds}
454 > A \emph{manifold} is an abstract mathematical space. It looks
455 > locally like Euclidean space, but when viewed globally, it may have
456 > a more complicated structure. A good example of a manifold is the
457 > surface of the Earth. It seems to be flat locally, but it is round
458 > when viewed as a whole. A \emph{differentiable manifold} (also known
459 > as a \emph{smooth manifold}) is a manifold on which it is possible
460 > to apply calculus.\cite{Hirsch1997} A \emph{symplectic manifold} is
461 > defined as a pair $(M, \omega)$ which consists of a
462 > \emph{differentiable manifold} $M$ and a closed, non-degenerate,
463 > bilinear symplectic form, $\omega$. A symplectic form on a vector
464 > space $V$ is a function $\omega(x, y)$ which satisfies
465 > $\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+
466 > \lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and
467 > $\omega(x, x) = 0$.\cite{McDuff1998} The two-dimensional cross
468 > product, $\omega(x, y) = x_1 y_2 - x_2 y_1$, is an example of a
469 > symplectic form. One of the motivations to study \emph{symplectic
470 > manifolds} in Hamiltonian Mechanics is that a symplectic manifold
471 > can represent all possible configurations of the system and the
472 > phase space of the system can be described by its cotangent
473 > bundle.\cite{Jost2002} Every symplectic manifold is even
474 > dimensional. For instance, in Hamilton's equations, coordinates and
> momenta always appear in pairs.
475 >
476 > \subsection{\label{introSection:ODE}Ordinary Differential Equations}
477 >
478 > For an ordinary differential system defined as
479 > \begin{equation}
480 > \dot x = f(x)
481 > \end{equation}
482 > where $x = x(q,p)$, this system is a canonical Hamiltonian system if
483 > $f(x) = J\nabla _x H(x)$. Here, $H = H (q, p)$ is the Hamiltonian
484 > function and $J$ is the skew-symmetric matrix
485 > \begin{equation}
486 > J = \left( {\begin{array}{*{20}c}
487 >   0 & I  \\
488 >   { - I} & 0  \\
489 > \end{array}} \right)
490 > \label{introEquation:canonicalMatrix}
491 > \end{equation}
492 > where $I$ is the identity matrix. Using this notation, the
493 > Hamiltonian system can be rewritten as,
494 > \begin{equation}
495 > \frac{d}{{dt}}x = J\nabla _x H(x).
496 > \label{introEquation:compactHamiltonian}
497 > \end{equation}In this case, $f$ is
498 > called a \emph{Hamiltonian vector field}. Another generalization of
499 > Hamiltonian dynamics is Poisson Dynamics,\cite{Olver1986}
500 > \begin{equation}
501 > \dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian}
502 > \end{equation}
503 > with the most obvious change being that the matrix $J$ now depends
504 > on $x$.
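>
> A minimal sketch of the canonical form above (assuming, for
> illustration, a one-dimensional harmonic oscillator with $H = (q^2 +
> p^2)/2$, so that $\nabla_x H = x$):
> \begin{verbatim}
> import numpy as np
>
> d = 1                            # number of coordinates
> I = np.eye(d)
> J = np.block([[np.zeros((d, d)), I],
>               [-I, np.zeros((d, d))]])
>
> def grad_H(x):                   # gradient of H = (q^2 + p^2)/2
>     return x
>
> x = np.array([1.0, 0.0])         # x = (q, p)
> print(J @ grad_H(x))             # f(x) = (dq/dt, dp/dt) = (0, -1)
> \end{verbatim}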
505 >
506 > \subsection{\label{introSection:exactFlow}Exact Propagator}
507 >
508 > Let $x(t)$ be the exact solution of the ODE
509 > system,
510 > \begin{equation}
511 > \frac{{dx}}{{dt}} = f(x), \label{introEquation:ODE}
512 > \end{equation} we can
513 > define its exact propagator $\varphi_\tau$:
514 > \[ x(t+\tau)
515 > =\varphi_\tau(x(t))
516 > \]
517 > where $\tau$ is a fixed time step and $\varphi$ is a map from phase
518 > space to itself. The propagator has the continuous group property,
519 > \begin{equation}
520 > \varphi _{\tau _1 }  \circ \varphi _{\tau _2 }  = \varphi _{\tau _1
521 > + \tau _2 } .
522 > \end{equation}
523 > In particular,
524 > \begin{equation}
525 > \varphi _\tau   \circ \varphi _{ - \tau }  = I
526 > \end{equation}
527 > Therefore, the exact propagator is self-adjoint,
528 > \begin{equation}
529 > \varphi _\tau   = \varphi _{ - \tau }^{ - 1}.
530 > \end{equation}
531 > The exact propagator can also be written as an operator,
532 > \begin{equation}
533 > \varphi _\tau  (x) = e^{\tau \sum\limits_i {f_i (x)\frac{\partial
534 > }{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x).
535 > \label{introEquation:exponentialOperator}
536 > \end{equation}
537 > In most cases, it is not easy to find the exact propagator
538 > $\varphi_\tau$. Instead, we use an approximate map, $\psi_\tau$,
539 > which is usually called an integrator. The order of an integrator
540 > $\psi_\tau$ is $p$, if the Taylor series of $\psi_\tau$ and
541 > $\varphi_\tau$ agree to order $p$,
542 > \begin{equation}
543 > \psi_\tau(x) = \varphi_\tau(x) + O(\tau^{p+1})
544 > \end{equation}
545 >
546 > \subsection{\label{introSection:geometricProperties}Geometric Properties}
547 >
548 > The hidden geometric properties\cite{Budd1999, Marsden1998} of an
549 > ODE and its propagator play important roles in numerical studies.
550 > Many of them can be found in systems which occur naturally in
551 > applications. Let $\varphi$ be the propagator of a Hamiltonian vector
552 > field; $\varphi$ is a \emph{symplectic} propagator if it satisfies,
553 > \begin{equation}
554 > {\varphi '}^T J \varphi ' = J.
555 > \end{equation}
556 > According to Liouville's theorem, the symplectic volume is invariant
557 > under a Hamiltonian propagator, which is the basis for classical
558 > statistical mechanics. Furthermore, the propagator of a Hamiltonian
559 > vector field on a symplectic manifold can be shown to be a
560 > symplectomorphism. For the Poisson system,
561 > \begin{equation}
562 > {\varphi '}^T J \varphi ' = J \circ \varphi
563 > \end{equation}
564 > is the property that must be preserved by the integrator. It is
565 > possible to construct a \emph{volume-preserving} propagator for a
566 > source free ODE ($ \nabla \cdot f = 0 $), if the propagator
567 > satisfies $ \det d\varphi  = 1$. One can show easily that a
568 > symplectic propagator will be volume-preserving. Changing the
569 > variables $y = h(x)$ in an ODE (Eq.~\ref{introEquation:ODE}) will
570 > result in a new system,
571 > \[
572 > \dot y = \tilde f(y) = ((dh \cdot f)h^{ - 1} )(y).
573 > \]
574 > The vector field $f$ has a reversing symmetry $h$ if $f = - \tilde f$.
575 > In other words, the propagator of this vector field is reversible if
576 > and only if $ h \circ \varphi ^{ - 1}  = \varphi  \circ h $. A
577 > conserved quantity of a general differential equation is a function
578 > $ G:R^{2d}  \to R $ which is constant for all solutions of the ODE
579 > $\frac{{dx}}{{dt}} = f(x)$,
580 > \[
581 > \frac{{dG(x(t))}}{{dt}} = 0.
582 > \]
583 > Using the chain rule, one may obtain,
584 > \[
585 > \sum\limits_i {\frac{{\partial G}}{{\partial x_i }}} f_i (x) = f \cdot \nabla G = 0,
586 > \]
587 > which is the condition for conserved quantities. For a canonical
588 > Hamiltonian system, the time evolution of an arbitrary smooth
589 > function $G$ is given by,
590 > \begin{eqnarray}
591 > \frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \notag\\
592 >                        & = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)).
593 > \label{introEquation:firstIntegral1}
594 > \end{eqnarray}
595 > Using Poisson bracket notation, Eq.~\ref{introEquation:firstIntegral1}
596 > can be rewritten as
597 > \[
598 > \frac{d}{{dt}}G(x(t)) = \left\{ {G,H} \right\}(x(t)).
599 > \]
600 > Therefore, the sufficient condition for $G$ to be a conserved
601 > quantity of a Hamiltonian system is $\left\{ {G,H} \right\} = 0.$ As
602 > is well known, the Hamiltonian (or energy) $H$ of a Hamiltonian
603 > system is a conserved quantity, which is due to the fact $\{ H,H\}  = 0$.
604 > When designing any numerical methods, one should always try to
605 > preserve the structural properties of the original ODE and its
606 > propagator.
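>
> These properties can be checked numerically. The sketch below
> estimates the Jacobian of one velocity-Verlet step (a method
> introduced in the following sections) for an assumed harmonic
> oscillator and verifies the symplectic condition ${\varphi'}^T J
> \varphi' = J$ and volume preservation, $\det \varphi' = 1$:
> \begin{verbatim}
> import numpy as np
>
> h, m, k = 0.1, 1.0, 1.0
> def step(x):                       # one velocity-Verlet step
>     q, p = x
>     p += 0.5 * h * (-k * q)        # half kick
>     q += h * p / m                 # drift
>     p += 0.5 * h * (-k * q)        # half kick
>     return np.array([q, p])
>
> J = np.array([[0.0, 1.0], [-1.0, 0.0]])
> x0, eps = np.array([1.0, 0.5]), 1e-6
> # finite-difference Jacobian of the one-step map
> D = np.column_stack([(step(x0 + eps * e) - step(x0 - eps * e))
>                      / (2 * eps) for e in np.eye(2)])
> print(np.allclose(D.T @ J @ D, J))   # True
> print(np.linalg.det(D))              # ~1.0
> \end{verbatim}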
607 >
608 > \subsection{\label{introSection:constructionSymplectic}Construction of Symplectic Methods}
609 > A number of well-established and very effective numerical methods
610 > have been successful precisely because of their symplectic nature even
611 > though this fact was not recognized when they were first
612 > constructed. The most famous example is the Verlet-leapfrog method
613 > in molecular dynamics. In general, symplectic integrators can be
614 > constructed using one of four different methods.
615 > \begin{enumerate}
616 > \item Generating functions
617 > \item Variational methods
618 > \item Runge-Kutta methods
619 > \item Splitting methods
620 > \end{enumerate}
621 > Generating functions\cite{Channell1990} tend to lead to methods
622 > which are cumbersome and difficult to use. In dissipative systems,
623 > variational methods can capture the decay of energy
624 > accurately.\cite{Kane2000} Since they are geometrically unstable
625 > against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
626 > methods are not suitable for Hamiltonian
627 > systems.\cite{Cartwright1992} Recently, various high-order explicit
628 > Runge-Kutta methods \cite{Owren1992,Chen2003} have been developed to
629 > overcome this instability. However, due to the computational penalty
630 > involved in implementing the Runge-Kutta methods, they have not
631 > attracted much attention from the Molecular Dynamics community.
632 > Instead, splitting methods have been widely accepted since they
633 > exploit natural decompositions of the system.\cite{McLachlan1998,
634 > Tuckerman1992}
635 >
636 > \subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}
637 >
638 > The main idea behind splitting methods is to decompose the discrete
639 > propagator $\varphi_h$ as a composition of simpler propagators,
640 > \begin{equation}
641 > \varphi _h  = \varphi _{h_1 }  \circ \varphi _{h_2 }  \ldots  \circ
642 > \varphi _{h_n }
643 > \label{introEquation:FlowDecomposition}
644 > \end{equation}
645 > where each of the sub-propagators is chosen such that it represents
646 > a simpler integration of the system. Suppose that a Hamiltonian
647 > system takes the form,
648 > \[
649 > H = H_1 + H_2.
650 > \]
651 > Here, $H_1$ and $H_2$ may represent different physical processes of
652 > the system. For instance, they may relate to kinetic and potential
653 > energy respectively, which is a natural decomposition of the
654 > problem. If $H_1$ and $H_2$ can be integrated using exact
655 > propagators $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a
656 > simple first order expression is then given by the Lie-Trotter
657 > formula\cite{Trotter1959}
658 > \begin{equation}
659 > \varphi _h  = \varphi _{1,h}  \circ \varphi _{2,h},
660 > \label{introEquation:firstOrderSplitting}
661 > \end{equation}
662 > where $\varphi _h$ is the result of applying the corresponding
663 > continuous $\varphi _i$ over a time $h$. By definition, as
664 > $\varphi_i(t)$ is the exact solution of a Hamiltonian system, it
665 > must follow that each operator $\varphi_i(t)$ is a symplectic map.
666 > It is easy to show that any composition of symplectic propagators
667 > yields a symplectic map,
668 > \begin{equation}
669 > (\varphi '\phi ')^T J\varphi '\phi ' = \phi '^T \varphi '^T J\varphi
670 > '\phi ' = \phi '^T J\phi ' = J,
671 > \label{introEquation:SymplecticFlowComposition}
672 > \end{equation}
673 > where $\varphi$ and $\phi$ are both symplectic maps. Thus operator
674 > splitting in this context automatically generates a symplectic map.
675 > The Lie-Trotter
676 > splitting (Eq.~\ref{introEquation:firstOrderSplitting}) introduces
677 > local errors proportional to $h^2$, while the Strang splitting gives
678 > a second-order decomposition,\cite{Strang1968}
679 > \begin{equation}
680 > \varphi _h  = \varphi _{1,h/2}  \circ \varphi _{2,h}  \circ \varphi
681 > _{1,h/2} , \label{introEquation:secondOrderSplitting}
682 > \end{equation}
683 > which has a local error proportional to $h^3$. The Strang
684 > splitting's popularity in the molecular simulation community is
685 > attributable to its symmetry property,
686 > \begin{equation}
687 > \varphi _h^{ - 1} = \varphi _{ - h}.
688 > \label{introEquation:timeReversible}
689 > \end{equation}
690 >
691 > \subsubsection{\label{introSection:exampleSplittingMethod}\textbf{Examples of the Splitting Method}}
692 > The classical equations of motion for a system of interacting
693 > particles can be written in Hamiltonian form,
694 > \[
695 > H = T + V
696 > \]
697 > where $T$ is the kinetic energy and $V$ is the potential energy.
698 > Setting $H_1 = T, H_2 = V$ and applying the Strang splitting, one
699 > obtains the following:
700 > \begin{align}
701 > q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
702 >    \frac{F[q(0)]}{m}\frac{\Delta t^2}{2}, %
703 > \label{introEquation:Lp10a} \\%
704 > %
705 > \dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
706 >    \biggl [F[q(0)] + F[q(\Delta t)] \biggr]. %
707 > \label{introEquation:Lp10b}
708 > \end{align}
709 > where $F(t)$ is the force at time $t$. This integration scheme is
710 > known as \emph{velocity Verlet}, which is
711 > symplectic (Eq.~\ref{introEquation:SymplecticFlowComposition}),
712 > time-reversible (Eq.~\ref{introEquation:timeReversible}) and
713 > volume-preserving (Eq.~\ref{introEquation:volumePreserving}). These
714 > geometric properties contribute to its long-time stability and its
715 > popularity in the community. However, the most commonly used form of
716 > the velocity Verlet integration scheme is written as below,
717 > \begin{align}
718 > \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
719 >    \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)], \label{introEquation:Lp9a}\\%
720 > %
721 > q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr ),%
722 >    \label{introEquation:Lp9b}\\%
723 > %
724 > \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
725 >    \frac{\Delta t}{2m}\, F[q(t)]. \label{introEquation:Lp9c}
726 > \end{align}
727 > From the preceding splitting, one can see that the integration of
728 > the equations of motion would follow these steps (a minimal
> implementation sketch is given after the list):
729 > \begin{enumerate}
730 > \item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
731 >
732 > \item Use the half step velocities to move positions one whole step, $\Delta t$.
733 >
734 > \item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.
735 >
736 > \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
737 > \end{enumerate}
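>
> A minimal sketch of these steps in code (assuming a generic
> force(q) callable; here a harmonic trap stands in for a real force
> field):
> \begin{verbatim}
> import numpy as np
>
> def force(q, k=1.0):
>     return -k * q                  # placeholder force law
>
> def velocity_verlet(q, v, m, dt, nsteps):
>     f = force(q)
>     for _ in range(nsteps):
>         v += 0.5 * dt * f / m      # half-step velocity update
>         q += dt * v                # full-step position update
>         f = force(q)               # forces at the new positions
>         v += 0.5 * dt * f / m      # complete the velocity move
>     return q, v
>
> q, v = velocity_verlet(np.array([1.0, 0.0, -1.0]),
>                        np.zeros(3), m=1.0, dt=0.01, nsteps=1000)
> \end{verbatim}
>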
738 > By simply switching the order of the propagators in the splitting
739 > and composing a new integrator, the \emph{position Verlet}
740 > integrator can be generated,
741 > \begin{align}
742 > \dot q(\Delta t) &= \dot q(0) + \frac{{\Delta t}}{m}F\left[ {q(0) +
743 > \frac{{\Delta t}}{2}\dot q(0)} \right], %
744 > \label{introEquation:positionVerlet1} \\%
745 > %
746 > q(\Delta t) &= q(0) + \frac{{\Delta t}}{2}\left[ {\dot q(0) + \dot
747 > q(\Delta t)} \right]. %
748 > \label{introEquation:positionVerlet2}
749 > \end{align}
750 >
751 > \subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}
752 >
753 > The Baker-Campbell-Hausdorff formula\cite{Gilmore1974} can be used
754 > to determine the local error of a splitting method in terms of the
755 > commutator of the
756 > operators (Eq.~\ref{introEquation:exponentialOperator}) associated
757 > with the sub-propagators. For operators $hX$ and $hY$ which are
758 > associated with $\varphi_1(t)$ and $\varphi_2(t)$ respectively, we
759 > have
760 > \begin{equation}
761 > \exp (hX + hY) = \exp (hZ)
762 > \end{equation}
763 > where
764 > \begin{equation}
765 > hZ = hX + hY + \frac{{h^2 }}{2}[X,Y] + \frac{{h^3 }}{{12}}\left(
766 > {[X,[X,Y]] + [Y,[Y,X]]} \right) +  \ldots .
767 > \end{equation}
768 > Here, $[X,Y]$ is the commutator of operator $X$ and $Y$ given by
769 > \[
770 > [X,Y] = XY - YX .
771 > \]
772 > Applying the Baker-Campbell-Hausdorff formula\cite{Varadarajan1974}
773 > to the Strang splitting, we can obtain
774 > \begin{eqnarray*}
775 > \exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 [X,Y]/4 + h^2 [Y,X]/4 \\
776 >                                   &   & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\
777 >                                   &   & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots
778 >                                   ).
779 > \end{eqnarray*}
780 > Since $ [X,Y] + [Y,X] = 0$ and $ [X,X] = 0$, the dominant local
781 > error of Strang splitting is proportional to $h^3$. The same
782 > procedure can be applied to a general splitting of the form
783 > \begin{equation}
784 > \varphi _{b_m h}^2  \circ \varphi _{a_m h}^1  \circ \varphi _{b_{m -
785 > 1} h}^2  \circ  \ldots  \circ \varphi _{a_1 h}^1 .
786 > \end{equation}
787 > A careful choice of the coefficients $a_1 \ldots b_m$ will lead to
788 > higher order methods. Yoshida proposed an elegant way to compose higher
789 > order methods based on symmetric splitting.\cite{Yoshida1990} Given
790 > a symmetric second order base method $ \varphi _h^{(2)} $, a
791 > fourth-order symmetric method can be constructed by composing,
792 > \[
793 > \varphi _h^{(4)}  = \varphi _{\alpha h}^{(2)}  \circ \varphi _{\beta
794 > h}^{(2)}  \circ \varphi _{\alpha h}^{(2)}
795 > \]
796 > where $ \alpha  = \frac{1}{{2 - 2^{1/3} }}$ and $ \beta
797 > =  - \frac{{2^{1/3} }}{{2 - 2^{1/3} }}$. Moreover, a symmetric
798 > integrator $ \varphi _h^{(2n + 2)}$ can be composed by
799 > \begin{equation}
800 > \varphi _h^{(2n + 2)}  = \varphi _{\alpha h}^{(2n)}  \circ \varphi
801 > _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)},
802 > \end{equation}
803 > if the weights are chosen as
804 > \[
805 > \alpha  = \frac{1}{{2 - 2^{1/(2n + 1)} }},\beta =
806 >  - \frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }} .
807 > \]
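>
> A minimal sketch of this composition (building a fourth-order step
> from the second-order velocity-Verlet step of the earlier sketch,
> with a harmonic force as a placeholder):
> \begin{verbatim}
> def verlet_step(q, p, h, k=1.0, m=1.0):
>     p += 0.5 * h * (-k * q)
>     q += h * p / m
>     p += 0.5 * h * (-k * q)
>     return q, p
>
> def yoshida4_step(q, p, h):
>     alpha = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
>     beta = -2.0 ** (1.0 / 3.0) / (2.0 - 2.0 ** (1.0 / 3.0))
>     q, p = verlet_step(q, p, alpha * h)
>     q, p = verlet_step(q, p, beta * h)   # note the negative substep
>     q, p = verlet_step(q, p, alpha * h)
>     return q, p
> \end{verbatim}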
808 >
809 > \section{\label{introSection:molecularDynamics}Molecular Dynamics}
810 >
811 > As one of the principal tools of molecular modeling, Molecular
812 > Dynamics has proven to be powerful for studying the functions
813 > of biological systems, providing structural, thermodynamic and
814 > dynamical information. The basic idea of molecular dynamics is that
815 > macroscopic properties are related to microscopic behavior and
816 > microscopic behavior can be calculated from the trajectories in
817 > simulations. For instance, the instantaneous temperature of a
818 > Hamiltonian system of $N$ particles can be measured by
819 > \[
820 > T = \sum\limits_{i = 1}^N {\frac{{m_i v_i^2 }}{{fk_B }}}
821 > \]
822 > where $m_i$ and $v_i$ are the mass and velocity of the $i$th particle
823 > respectively, $f$ is the number of degrees of freedom, and $k_B$ is
824 > the Boltzmann constant.
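>
> A minimal sketch of this measurement (reduced units with $k_B = 1$
> assumed; velocities stored as an $N \times 3$ array):
> \begin{verbatim}
> import numpy as np
>
> def instantaneous_temperature(masses, velocities, k_B=1.0):
>     mv2 = np.sum(masses[:, None] * velocities ** 2)  # sum of m v^2
>     f = 3 * len(masses)       # degrees of freedom, no constraints
>     return mv2 / (f * k_B)
>
> rng = np.random.default_rng(1)
> v = rng.normal(size=(100, 3))     # kT/m = 1 in reduced units
> print(instantaneous_temperature(np.ones(100), v))   # ~1.0
> \end{verbatim}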
825 >
826 > A typical molecular dynamics run consists of three essential steps:
827 > \begin{enumerate}
828 >  \item Initialization
829 >    \begin{enumerate}
830 >    \item Preliminary preparation
831 >    \item Minimization
832 >    \item Heating
833 >    \item Equilibration
834 >    \end{enumerate}
835 >  \item Production
836 >  \item Analysis
837 > \end{enumerate}
838 > These three individual steps will be covered in the following
839 > sections. Sec.~\ref{introSec:initialSystemSettings} deals with the
840 > initialization of a simulation. Sec.~\ref{introSection:production}
841 > discusses issues of production runs.
842 > Sec.~\ref{introSection:Analysis} provides the theoretical tools for
843 > analysis of trajectories.
844 >
845 > \subsection{\label{introSec:initialSystemSettings}Initialization}
846 >
847 > \subsubsection{\textbf{Preliminary preparation}}
848 >
849 > When selecting the starting structure of a molecule for molecular
850 > simulation, one may retrieve its Cartesian coordinates from public
851 > databases such as the RCSB Protein Data Bank. Although
852 > thousands of crystal structures of molecules are discovered every
853 > year, many more remain unknown due to the difficulties of
854 > purification and crystallization. Even for molecules with known
855 > structures, some important information is often missing. For example,
856 > a missing hydrogen atom which acts as a donor in hydrogen bonding
857 > must be added. Moreover, in order to include electrostatic interactions,
858 > one may need to specify the partial charges for individual atoms.
859 > Under some circumstances, we may even need to prepare the system in
860 > a special configuration. For instance, when studying transport
861 > phenomena in membrane systems, we may prepare the lipids in a
862 > bilayer structure instead of placing lipids randomly in solvent,
863 > since we are not interested in the slow self-aggregation process.
864 >
865 > \subsubsection{\textbf{Minimization}}
866 >
867 > It is quite possible that some of the molecules in the system from
868 > preliminary preparation may overlap with each other. This
869 > close proximity leads to high initial potential energy which
870 > consequently jeopardizes any molecular dynamics simulations. To
871 > remove these steric overlaps, one typically performs energy
872 > minimization to find a more reasonable conformation. Several energy
873 > minimization methods have been developed to explore the energy
874 > surface and to locate a local minimum. While converging slowly
875 > near the minimum, the steepest descent method is extremely robust when
876 > systems are strongly anharmonic. Thus, it is often used to refine
877 > structures from crystallographic data. Relying on the Hessian,
878 > advanced methods like Newton-Raphson converge rapidly to a local
879 > minimum, but become unstable if the energy surface is far from
880 > quadratic. Another factor that must be taken into account, when
881 > choosing energy minimization method, is the size of the system.
882 > Steepest descent and conjugate gradient can deal with models of any
883 > size. Because of the limits on computer memory to store the Hessian
884 > matrix and the computing power needed to diagonalize these matrices,
885 > most Newton-Raphson methods cannot be used with very large systems.
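>
> A minimal sketch of steepest-descent minimization with a fixed step
> size (a simple quadratic bowl stands in for a real force-field
> energy and gradient):
> \begin{verbatim}
> import numpy as np
>
> def energy(x):
>     return 0.5 * np.sum(x ** 2)        # placeholder energy surface
>
> def gradient(x):
>     return x
>
> def steepest_descent(x, step=0.1, tol=1e-8, max_iter=10000):
>     for _ in range(max_iter):
>         g = gradient(x)
>         if np.linalg.norm(g) < tol:    # converged: gradient ~ 0
>             break
>         x = x - step * g               # move downhill along -g
>     return x
>
> print(energy(steepest_descent(np.array([3.0, -2.0, 1.0]))))  # ~0.0
> \end{verbatim}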
886 >
887 > \subsubsection{\textbf{Heating}}
888 >
889 > Typically, heating is performed by assigning random velocities
890 > according to a Maxwell-Boltzmann distribution for a desired
891 > temperature. Beginning at a lower temperature and gradually
892 > increasing the temperature by assigning larger random velocities, we
893 > end up setting the temperature of the system to a final temperature
894 > at which the simulation will be conducted. In the heating phase, we
895 > should also keep the system from drifting or rotating as a whole. To
896 > do this, the net linear momentum and angular momentum of the system
897 > are shifted to zero after each resampling from the Maxwell-Boltzmann
898 > distribution.
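>
> A minimal sketch of this resampling (reduced units with $k_B = 1$
> assumed; only the net linear momentum is removed here, with angular
> momentum removal omitted for brevity):
> \begin{verbatim}
> import numpy as np
>
> def assign_velocities(masses, T, rng=np.random.default_rng()):
>     n = len(masses)
>     # each component is Gaussian with variance k_B T / m_i
>     v = rng.normal(size=(n, 3)) * np.sqrt(T / masses)[:, None]
>     v_com = np.sum(masses[:, None] * v, axis=0) / np.sum(masses)
>     return v - v_com      # zero the center-of-mass momentum
>
> v = assign_velocities(np.ones(64), T=0.8)
> print(np.sum(v, axis=0))  # ~[0, 0, 0]
> \end{verbatim}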
899 >
900 > \subsubsection{\textbf{Equilibration}}
901 >
902 > The purpose of equilibration is to allow the system to evolve
903 > spontaneously for a period of time and reach equilibrium. The
904 > procedure is continued until various statistical properties, such as
905 > temperature, pressure, energy, volume and other structural
906 > properties, become independent of time. Strictly
907 > speaking, minimization and heating are not necessary, provided the
908 > equilibration process is long enough. However, these steps can serve
909 > as a means to arrive at an equilibrated structure in an effective
910 > way.
911 >
912 > \subsection{\label{introSection:production}Production}
913 >
914 > The production run is the most important step of the simulation, in
915 > which the equilibrated structure is used as a starting point and the
916 > motions of the molecules are collected for later analysis. In order
917 > to capture the macroscopic properties of the system, the molecular
918 > dynamics simulation must be performed by sampling correctly and
919 > efficiently from the relevant thermodynamic ensemble.
920 >
921 > The most expensive part of a molecular dynamics simulation is the
922 > calculation of non-bonded forces, such as van der Waals and
923 > Coulombic forces. For a system of $N$ particles, the
924 > complexity of the algorithm for pair-wise interactions is $O(N^2 )$,
925 > which makes large simulations prohibitive in the absence of any
926 > algorithmic tricks. A natural approach to avoid system size issues
927 > is to represent the bulk behavior by a finite number of
928 > particles. However, this approach will suffer from surface effects
929 > at the edges of the simulation. To offset this, \textit{Periodic
930 > boundary conditions} (see Fig.~\ref{introFig:pbc}) were developed to
931 > simulate bulk properties with a relatively small number of
932 > particles. In this method, the simulation box is replicated
933 > throughout space to form an infinite lattice. During the simulation,
934 > when a particle moves in the primary cell, its images in the other
935 > cells move in exactly the same direction with exactly the same
936 > orientation. Thus, as a particle leaves the primary cell, one of its
937 > images will enter through the opposite face.
938 > \begin{figure}
939 > \centering
940 > \includegraphics[width=\linewidth]{pbc.eps}
941 > \caption[An illustration of periodic boundary conditions]{A 2-D
942 > illustration of periodic boundary conditions. As one particle leaves
943 > the left of the simulation box, an image of it enters the right.}
944 > \label{introFig:pbc}
945 > \end{figure}
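>
> A minimal sketch of coordinate wrapping and the minimum image
> convention for an assumed cubic box of side $L$ (general triclinic
> cells require more care):
> \begin{verbatim}
> import numpy as np
>
> def wrap(q, L):
>     return q - L * np.floor(q / L)      # map into [0, L)
>
> def minimum_image_distance(qi, qj, L):
>     d = qi - qj
>     d -= L * np.round(d / L)            # nearest periodic image
>     return np.linalg.norm(d)
>
> qi = np.array([9.5, 0.0, 0.0])
> qj = np.array([0.5, 0.0, 0.0])
> print(minimum_image_distance(qi, qj, L=10.0))   # 1.0, not 9.0
> \end{verbatim}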
946 >
947 > %cutoff and minimum image convention
948 > Another important technique to improve the efficiency of force
949 > evaluation is to apply spherical cutoffs where particles farther
950 > than a predetermined distance are not included in the
951 > calculation.\cite{Frenkel1996} The use of a cutoff radius will cause
952 > a discontinuity in the potential energy curve. Fortunately, one can
953 > shift a simple radial potential to ensure the potential curve goes
954 > smoothly to zero at the cutoff radius. The cutoff strategy works
955 > well for the Lennard-Jones interaction because of its short-range
956 > nature. However, simply truncating the electrostatic interaction
957 > with the use of cutoffs has been shown to lead to severe artifacts
958 > in simulations. The Ewald summation, in which the slowly decaying
959 > Coulomb potential is transformed into direct and reciprocal sums
960 > with rapid and absolute convergence, has proved to minimize the
961 > periodicity artifacts in liquid simulations. Taking advantage of
962 > fast Fourier transform (FFT) techniques for calculating discrete
963 > Fourier transforms, the particle mesh-based
964 > methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
965 > $O(N^{3/2})$ to $O(N \log N)$. An alternative approach is the
966 > \emph{fast multipole method}\cite{Greengard1987, Greengard1994},
967 > which treats Coulombic interactions exactly at short range, and
968 > approximates the potential at long range through multipolar
969 > expansion. In spite of their wide acceptance in the molecular
970 > simulation community, these two methods are difficult to implement
971 > correctly and efficiently. Instead, we use a damped and
972 > charge-neutralized Coulomb potential method developed by Wolf and
973 > his coworkers.\cite{Wolf1999} The shifted Coulomb potential for
974 > particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
975 > \begin{equation}
976 > V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
977 > r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow
978 > R_\textrm{c}}\left\{\frac{q_iq_j \textrm{erfc}(\alpha
979 > r_{ij})}{r_{ij}}\right\}, \label{introEquation:shiftedCoulomb}
980 > \end{equation}
981 > where $\alpha$ is the convergence parameter. Due to its lack of
982 > inherent periodicity and its rapid convergence, this method is
983 > extremely efficient and easy to implement.
984 > \begin{figure}
985 > \centering
986 > \includegraphics[width=\linewidth]{shifted_coulomb.eps}
987 > \caption[An illustration of shifted Coulomb potential]{An
988 > illustration of shifted Coulomb potential.}
989 > \label{introFigure:shiftedCoulomb}
990 > \end{figure}
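>
> A minimal sketch of the shifted pair potential in
> Eq.~\ref{introEquation:shiftedCoulomb} (charges in reduced units;
> $\alpha$ is the damping parameter and $R_c$ the cutoff radius):
> \begin{verbatim}
> import math
>
> def wolf_potential(qi, qj, r, alpha, r_cut):
>     if r >= r_cut:
>         return 0.0
>     shift = qi * qj * math.erfc(alpha * r_cut) / r_cut
>     return qi * qj * math.erfc(alpha * r) / r - shift
>
> print(wolf_potential(1.0, -1.0, 2.5, alpha=0.2, r_cut=9.0))
> \end{verbatim}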
991 >
992 > %multiple time step
993 >
994 > \subsection{\label{introSection:Analysis} Analysis}
995 >
996 > Recently, advanced visualization techniques have been applied to
997 > monitor the motions of molecules. Although the dynamics of the
998 > system can be described qualitatively from animation, quantitative
999 > trajectory analysis is more useful. According to the principles of
1000 > Statistical Mechanics in
1001 > Sec.~\ref{introSection:statisticalMechanics}, one can compute
1002 > thermodynamic properties, analyze fluctuations of structural
1003 > parameters, and investigate time-dependent processes of the molecule
1004 > from the trajectories.
1005 >
1006 > \subsubsection{\label{introSection:thermodynamicsProperties}\textbf{Thermodynamic Properties}}
1007 >
1008 > Thermodynamic properties, which can be expressed in terms of some
1009 > function of the coordinates and momenta of all particles in the
1010 > system, can be directly computed from molecular dynamics. The usual
1011 > way to measure the pressure is based on the virial theorem of
1012 > Clausius, which states that the virial is equal to $-3Nk_BT$. For a
1013 > system with forces between particles, the total virial, $W$,
1014 > contains contributions from both the external pressure and the
1015 > interactions between the particles:
1016 > \[
1017 > W =  - 3PV + \left\langle {\sum\limits_{i < j} {r{}_{ij} \cdot
1018 > f_{ij} } } \right\rangle
1019 > \]
1020 > where $f_{ij}$ is the force between particle $i$ and $j$ at a
1021 > distance $r_{ij}$. Thus, the expression for the pressure is given
1022 > by:
1023 > \begin{equation}
1024 > P = \frac{{Nk_B T}}{V} - \frac{1}{{3V}}\left\langle {\sum\limits_{i
1025 > < j} {r{}_{ij} \cdot f_{ij} } } \right\rangle
1026 > \end{equation}
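As an illustration, a minimal Python sketch of this pressure estimator
for a single stored configuration follows; the pair separations and
forces are assumed to be precomputed, the sign convention follows the
equation above, and all names are ours.
\begin{verbatim}
import numpy as np

def pressure(N, V, T, r_pairs, f_pairs, kB=1.0):
    """Instantaneous pressure from the virial expression.

    r_pairs, f_pairs : (M, 3) arrays holding r_ij and f_ij for
    every i < j pair, in reduced units (kB = 1)."""
    virial = np.sum(r_pairs * f_pairs)  # sum of r_ij . f_ij
    return N * kB * T / V - virial / (3.0 * V)
\end{verbatim}
In a production code this estimate would, of course, be averaged over
many configurations rather than evaluated for a single frame.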
1027 >
1028 > \subsubsection{\label{introSection:structuralProperties}\textbf{Structural Properties}}
1029 >
1030 > Structural Properties of a simple fluid can be described by a set of
1031 > distribution functions. Among these functions, the \emph{pair
1032 > distribution function}, also known as \emph{radial distribution
1033 > function}, is of most fundamental importance to liquid theory.
1034 > Experimentally, pair distribution functions can be gathered by
1035 > Fourier transforming raw data from a series of neutron diffraction
1036 > experiments and integrating over the structure
1037 > factor.\cite{Powles1973} The experimental results can serve as a
1038 > criterion to justify the correctness of a liquid model. Moreover,
1039 > various equilibrium thermodynamic and structural properties can also
1040 > be expressed in terms of the radial distribution
1041 > function.\cite{Allen1987} The pair distribution function $g(r)$
1042 > gives the probability that a particle $i$ will be located at a
1043 > distance $r$ from another particle $j$ in the system
1044 > \begin{equation}
1045 > g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1046 > \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
1047 > (r)}{\rho}.
1048 > \end{equation}
1049 > Note that the delta function can be replaced by a histogram in
1050 > computer simulations. Peaks in $g(r)$ represent solvent shells, and
1051 > the heights of these peaks gradually decay to 1 at large distances,
1052 > where the local density approaches the bulk density.
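The histogram estimator is straightforward to sketch; the Python
fragment below handles a single frame in an orthorhombic periodic box
(function and variable names are ours, and no attempt is made at
efficiency).
\begin{verbatim}
import numpy as np

def radial_distribution(pos, box, nbins=100):
    """Histogram estimate of g(r) for one configuration.

    pos : (N, 3) array of coordinates; box : (3,) box lengths."""
    N = len(pos)
    rmax = box.min() / 2.0  # limit of the minimum image convention
    edges = np.linspace(0.0, rmax, nbins + 1)
    hist = np.zeros(nbins)
    for i in range(N - 1):
        dr = pos[i + 1:] - pos[i]
        dr -= box * np.round(dr / box)  # minimum image convention
        r = np.linalg.norm(dr, axis=1)
        hist += np.histogram(r[r < rmax], bins=edges)[0]
    # normalize by the ideal-gas pair count in each spherical shell
    rho = N / np.prod(box)
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = 0.5 * N * rho * shell
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal
\end{verbatim}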
1053 >
1054 >
1055 > \subsubsection{\label{introSection:timeDependentProperties}\textbf{Time-dependent
1056 > Properties}}
1057 >
1058 > Time-dependent properties are usually calculated using \emph{time
1059 > correlation functions}, which correlate random variables $A$ and $B$
1060 > at two different times,
1061 > \begin{equation}
1062 > C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle.
1063 > \label{introEquation:timeCorrelationFunction}
1064 > \end{equation}
1065 > If $A$ and $B$ refer to the same variable, this kind of correlation
1066 > function is called an \emph{autocorrelation function}. One typical example is the velocity autocorrelation
1067 > function which is directly related to transport properties of
1068 > molecular liquids:
1069 > \begin{equation}
1070 > D = \frac{1}{3}\int\limits_0^\infty  {\left\langle {v(t) \cdot v(0)}
1071 > \right\rangle } dt
1072 > \end{equation}
1073 > where $D$ is the diffusion constant. Unlike the velocity
1074 > autocorrelation function, which is averaged over time origins and
1075 > over all the atoms, the dipole autocorrelation function is
1076 > calculated for the entire system and is given by:
1077 > \begin{equation}
1078 > c_{dipole} (t) = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
1079 > \right\rangle
1080 > \end{equation}
1081 > Here $u_{tot}$ is the net dipole of the entire system and is given
1082 > by
1083 > \begin{equation}
1084 > u_{tot} (t) = \sum\limits_i {u_i (t)}.
1085 > \end{equation}
1086 > In principle, many time correlation functions can be related to
1087 > Fourier transforms of the infrared, Raman, and inelastic neutron
1088 > scattering spectra of molecular liquids. In practice, one can
1089 > extract the IR spectrum from the intensity of the molecular dipole
1090 > fluctuation at each frequency using the following relationship:
1091 > \begin{equation}
1092 > \hat c_{dipole} (v) = \int_{ - \infty }^\infty  {c_{dipole} (t)e^{ -
1093 > i2\pi vt} dt}.
1094 > \end{equation}
1095 >
1096 > \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
1097 >
1098 > Rigid bodies are frequently used in modeling across many
1099 > areas, including engineering, physics and chemistry. For example,
1100 > missiles and vehicles are usually modeled by rigid bodies.  The
1101 > movement of the objects in 3D gaming engines or other physics
1102 > simulators is governed by rigid body dynamics. In molecular
1103 > simulations, rigid bodies are used to simplify protein-protein
1104 > docking studies.\cite{Gray2003}
1105 >
1106 > It is very important to develop stable and efficient methods to
1107 > integrate the equations of motion for orientational degrees of
1108 > freedom. Euler angles are the natural choice to describe the
1109 > rotational degrees of freedom. However, due to $\frac{1}{\sin
1110 > \theta}$ singularities, the numerical integration of the
1111 > corresponding equations of motion is inefficient and inaccurate.
1112 > Although an alternative integrator using multiple sets of Euler
1113 > angles can overcome this difficulty\cite{Barojas1973}, the
1114 > computational penalty and the loss of angular momentum conservation
1115 > still remain. A singularity-free representation utilizing
1116 > quaternions was developed by Evans in 1977.\cite{Evans1977}
1117 > Unfortunately, this approach used a nonseparable Hamiltonian
1118 > resulting from the quaternion representation, which prevented the
1119 > symplectic algorithm from being utilized. A different approach
1120 > is to apply holonomic constraints to the atoms belonging to the
1121 > rigid body. Each atom moves independently under the normal forces
1122 > derived from the potential energy, plus constraint forces that
1123 > guarantee the rigidity. However, due to their iterative nature,
1124 > the SHAKE and RATTLE algorithms also converge very slowly when the
1125 > number of constraints increases.\cite{Ryckaert1977, Andersen1983}
1126 >
1127 > A breakthrough in the geometric integration literature suggests that, in order to
1128 > develop a long-term integration scheme, one should preserve the
1129 > symplectic structure of the propagator. By introducing a conjugate
1130 > momentum to the rotation matrix $Q$ and re-formulating Hamilton's
1131 > equations, a symplectic integrator, RSHAKE\cite{Kol1997}, was
1132 > proposed to evolve the Hamiltonian system on a constraint manifold
1133 > by iteratively satisfying the orthogonality constraint $Q^T Q = 1$.
1134 > An alternative method using the quaternion representation was
1135 > developed by Omelyan.\cite{Omelyan1998} However, both of these
1136 > methods are iterative and inefficient. In this section, we describe a
1137 > symplectic Lie-Poisson integrator for rigid bodies developed by
1138 > Dullweber and his coworkers\cite{Dullweber1997} in depth.
1139 >
1140 > \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
1141 > The Hamiltonian of a rigid body is given by
1142 > \begin{equation}
1143 > H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
1144 > V(q,Q) + \frac{1}{2}tr[(QQ^T  - 1)\Lambda ].
1145 > \label{introEquation:RBHamiltonian}
1146 > \end{equation}
1147 > Here, $q$ and $Q$ are the position vector and rotation matrix for
1148 > the rigid body, $p$ and $P$ are the conjugate momenta to $q$ and $Q$,
1149 > and $J$, a diagonal matrix, is defined by
1150 > \[
1151 > I_{ii}^{ - 1}  = \frac{1}{2}\sum\limits_{j \ne i} {J_{jj}^{ - 1} }
1152 > \]
1153 > where $I_{ii}$ is the diagonal element of the inertia tensor. This
1154 > constrained Hamiltonian equation is subjected to a holonomic
1155 > constraint,
1156 > \begin{equation}
1157 > Q^T Q = 1, \label{introEquation:orthogonalConstraint}
1158 > \end{equation}
1159 > which is used to ensure the orthogonality of the rotation matrix.
1160 > Using Eqs.~\ref{introEquation:motionHamiltonianCoordinate} and
1161 > \ref{introEquation:motionHamiltonianMomentum}, one can write down
1162 > the equations of motion,
1163 > \begin{eqnarray}
1164 > \frac{{dq}}{{dt}} & = & \frac{p}{m}, \label{introEquation:RBMotionPosition}\\
1165 > \frac{{dp}}{{dt}} & = & - \nabla _q V(q,Q), \label{introEquation:RBMotionMomentum}\\
1166 > \frac{{dQ}}{{dt}} & = & PJ^{ - 1},  \label{introEquation:RBMotionRotation}\\
1167 > \frac{{dP}}{{dt}} & = & - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}
1168 > \end{eqnarray}
1169 > Differentiating Eq.~\ref{introEquation:orthogonalConstraint} and
1170 > using Eq.~\ref{introEquation:RBMotionMomentum}, one may obtain,
1171 > \begin{equation}
1172 > Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0.
1173 > \label{introEquation:RBFirstOrderConstraint}
1174 > \end{equation}
1175 > In general, there are two ways to satisfy the holonomic constraints.
1176 > We can use a constraint force provided by a Lagrange multiplier on
1177 > the normal manifold to keep the motion on the constraint space, or
1178 > we can simply evolve the system on the constraint manifold. These
1179 > two methods have been proven to be equivalent. The holonomic
1180 > constraint and equations of motions define a constraint manifold for
1181 > rigid bodies
1182 > \[
1183 > M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0}
1184 > \right\}.
1185 > \]
1186 > Unfortunately, this constraint manifold is not the cotangent bundle
1187 > $T^* SO(3)$, which is a symplectic manifold on the Lie rotation
1188 > group $SO(3)$. However, it turns out that under a symplectic
1189 > transformation, the cotangent space and the phase space are diffeomorphic. By introducing
1190 > \[
1191 > \tilde Q = Q,\tilde P = \frac{1}{2}\left( {P - QP^T Q} \right),
1192 > \]
1193 > the mechanical system constrained to the manifold $M$
1194 > can be re-formulated as a Hamiltonian system on the cotangent space
1195 > \[
1196 > T^* SO(3) = \left\{ {(\tilde Q,\tilde P):\tilde Q^T \tilde Q =
1197 > 1,\tilde Q^T \tilde PJ^{ - 1}  + J^{ - 1} \tilde P^T \tilde Q = 0} \right\}.
1198 > \]
1199 > For a body fixed vector $X_i$ with respect to the center of mass of
1200 > the rigid body, its corresponding lab-fixed vector $X_i^{lab}$ is
1201 > given as
1202 > \begin{equation}
1203 > X_i^{lab} = Q X_i + q.
1204 > \end{equation}
1205 > Therefore, the potential energy $V(q,Q)$ is defined in terms of these lab-fixed positions,
1206 > \[
1207 > V(q,Q) = V(Q X_i + q).
1208 > \]
1209 > Hence, the force and torque are given by
1210 > \[
1211 > \nabla _q V(q,Q) = F(q,Q) = \sum\limits_i {F_i (q,Q)},
1212 > \]
1213 > and
1214 > \[
1215 > \nabla _Q V(q,Q) = \sum\limits_i {F_i (q,Q)X_i^T }
1216 > \]
1217 > respectively. As a common choice to describe the rotation dynamics
1218 > of the rigid body, the angular momentum in the body-fixed frame, $\Pi
1219 > = Q^T P$, is introduced to rewrite the equations of motion,
1220 > \begin{equation}
1221 > \begin{array}{l}
1222 > \dot \Pi  = J^{ - 1} \Pi ^T \Pi  + Q^T \sum\limits_i {F_i (q,Q)X_i^T }  - \Lambda,  \\
1223 > \dot Q  = Q\Pi {\rm{ }}J^{ - 1},  \\
1224 > \end{array}
1225 > \label{introEqaution:RBMotionPI}
1226 > \end{equation}
1227 > as well as the holonomic constraints $\Pi J^{ - 1}  + J^{ - 1} \Pi ^T  =
1228 > 0$ and $Q^T Q = 1$. For a vector $v(v_1 ,v_2 ,v_3 ) \in R^3$ and a
1229 > skew-symmetric matrix $\hat v \in so(3)$, the hat-map isomorphism,
1230 > \begin{equation}
1231 > v(v_1 ,v_2 ,v_3 ) \Leftrightarrow \hat v = \left(
1232 > {\begin{array}{*{20}c}
1233 >   0 & { - v_3 } & {v_2 }  \\
1234 >   {v_3 } & 0 & { - v_1 }  \\
1235 >   { - v_2 } & {v_1 } & 0  \\
1236 > \end{array}} \right),
1237 > \label{introEquation:hatmapIsomorphism}
1238 > \end{equation}
1239 > will let us associate the matrix products with traditional vector
1240 > operations
1241 > \[
1242 > \hat vu = v \times u.
1243 > \]
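As a quick sanity check of Eq.~\ref{introEquation:hatmapIsomorphism}, a
few lines of Python (the helper name is ours) confirm that $\hat v u =
v \times u$:
\begin{verbatim}
import numpy as np

def hat(v):
    """Hat map: vector in R^3 -> 3x3 skew-symmetric matrix."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

v = np.array([1.0, 2.0, 3.0])
u = np.array([-0.5, 0.7, 0.2])
assert np.allclose(hat(v) @ u, np.cross(v, u))
\end{verbatim}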
1244 > Using Eq.~\ref{introEqaution:RBMotionPI}, one can construct a skew
1245 > matrix,
1246 > \begin{eqnarray}
1247 > (\dot \Pi  - \dot \Pi ^T )&= &(\Pi  - \Pi ^T )(J^{ - 1} \Pi  + \Pi J^{ - 1} ) \notag \\
1248 > & & + \sum\limits_i {[Q^T F_i (r,Q)X_i^T  - X_i F_i (r,Q)^T Q]}  -
1249 > (\Lambda  - \Lambda ^T ). \label{introEquation:skewMatrixPI}
1250 > \end{eqnarray}
1251 > Since $\Lambda$ is symmetric, the last term of
1252 > Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies the
1253 > Lagrange multiplier $\Lambda$ is absent from the equations of
1254 > motion. This unique property eliminates the requirement for
1255 > iterations, which cannot be avoided in other methods.\cite{Kol1997,
1256 > Omelyan1998} Applying the hat-map isomorphism, we obtain the
1257 > equation of motion for angular momentum in the body frame
1258 > \begin{equation}
1259 > \dot \pi  = \pi  \times I^{ - 1} \pi  + \sum\limits_i {\left( {Q^T
1260 > F_i (r,Q)} \right) \times X_i }.
1261 > \label{introEquation:bodyAngularMotion}
1262 > \end{equation}
1263 > In the same manner, the equation of motion for rotation matrix is
1264 > given by
1265 > \[
1266 > \dot Q = Q\,\mathrm{skew}(I^{ - 1} \pi ).
1267 > \]
1268 >
1269 > \subsection{\label{introSection:SymplecticFreeRB}Symplectic
1270 > Lie-Poisson Integrator for Free Rigid Bodies}
1271 >
1272 > If there are no external forces exerted on the rigid body, the only
1273 > contribution to the rotational motion is from the kinetic energy
1274 > (the first term of Eq.~\ref{introEquation:bodyAngularMotion}). The free
1275 > rigid body is an example of a Lie-Poisson system with Hamiltonian
1276 > function
1277 > \begin{equation}
1278 > T^r (\pi ) = T_1 ^r (\pi _1 ) + T_2^r (\pi _2 ) + T_3^r (\pi _3 )
1279 > \label{introEquation:rotationalKineticRB}
1280 > \end{equation}
1281 > where $T_i^r (\pi _i ) = \frac{{\pi _i ^2 }}{{2I_i }}$ and
1282 > Lie-Poisson structure matrix,
1283 > \begin{equation}
1284 > J(\pi ) = \left( {\begin{array}{*{20}c}
1285 >   0 & {\pi _3 } & { - \pi _2 }  \\
1286 >   { - \pi _3 } & 0 & {\pi _1 }  \\
1287 >   {\pi _2 } & { - \pi _1 } & 0  \\
1288 > \end{array}} \right).
1289 > \end{equation}
1290 > Thus, the dynamics of free rigid body is governed by
1291 > \begin{equation}
1292 > \frac{d}{{dt}}\pi  = J(\pi )\nabla _\pi  T^r (\pi ).
1293 > \end{equation}
1294 > One may notice that the motion generated by each $T_i^r$ in
1295 > Eq.~\ref{introEquation:rotationalKineticRB} can be solved exactly.
1296 > For instance, the equations of motion due to $T_1^r$ are given by
1297 > \begin{equation}
1298 > \frac{d}{{dt}}\pi  = R_1 \pi ,\frac{d}{{dt}}Q = QR_1
1299 > \label{introEqaution:RBMotionSingleTerm}
1300 > \end{equation}
1301 > with
1302 > \[ R_1  = \left( {\begin{array}{*{20}c}
1303 >   0 & 0 & 0  \\
1304 >   0 & 0 & {\pi _1 }  \\
1305 >   0 & { - \pi _1 } & 0  \\
1306 > \end{array}} \right).
1307 > \]
1308 > The solution of Eq.~\ref{introEqaution:RBMotionSingleTerm} is
1309 > \[
1310 > \pi (\Delta t) = e^{\Delta tR_1 } \pi (0),Q(\Delta t) =
1311 > Q(0)e^{\Delta tR_1 }
1312 > \]
1313 > with
1314 > \[
1315 > e^{\Delta tR_1 }  = \left( {\begin{array}{*{20}c}
1316 >   1 & 0 & 0  \\
1317 >   0 & {\cos \theta _1 } & {\sin \theta _1 }  \\
1318 >   0 & { - \sin \theta _1 } & {\cos \theta _1 }  \\
1319 > \end{array}} \right),\theta _1  = \frac{{\pi _1 }}{{I_1 }}\Delta t.
1320 > \]
1321 > To reduce the cost of computing expensive functions in $e^{\Delta
1322 > tR_1 }$, we can use the Cayley transformation to obtain a
1323 > single-axis propagator,
1324 > \begin{eqnarray*}
1325 > e^{\Delta tR_1 }  & \approx & \left(1 - \frac{{\Delta t}}{2}R_1 \right)^{ - 1}
1326 > \left(1 + \frac{{\Delta t}}{2}R_1 \right) \\
1327 > %
1328 > & \approx & \left( \begin{array}{ccc}
1329 > 1 & 0 & 0 \\
1330 > 0 & \frac{1-\theta^2 / 4}{1 + \theta^2 / 4}  & -\frac{\theta}{1+
1331 > \theta^2 / 4} \\
1332 > 0 & \frac{\theta}{1+ \theta^2 / 4} & \frac{1-\theta^2 / 4}{1 +
1333 > \theta^2 / 4}
1334 > \end{array}
1335 > \right).
1336 > \end{eqnarray*}
1337 > The propagators for $T_2^r$ and $T_3^r$ can be found in the same
1338 > manner. In order to construct a second-order symplectic method, we
1339 > split the angular kinetic Hamiltonian function into five terms
1340 > \[
1341 > T^r (\pi ) = \frac{1}{2}T_1 ^r (\pi _1 ) + \frac{1}{2}T_2^r (\pi _2
1342 > ) + T_3^r (\pi _3 ) + \frac{1}{2}T_2^r (\pi _2 ) + \frac{1}{2}T_1 ^r
1343 > (\pi _1 ).
1344 > \]
1345 > By concatenating the propagators corresponding to these five terms,
1346 > we can obtain a symplectic integrator,
1347 > \[
1348 > \varphi _{\Delta t,T^r }  = \varphi _{\Delta t/2,\pi _1 }  \circ
1349 > \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }
1350 > \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi
1351 > _1 }.
1352 > \]
1353 > The non-canonical Lie-Poisson bracket $\{F, G\}$ of two functions $F(\pi )$ and $G(\pi )$ is defined by
1354 > \[
1355 > \{ F,G\} (\pi ) = [\nabla _\pi  F(\pi )]^T J(\pi )\nabla _\pi  G(\pi
1356 > ).
1357 > \]
1358 > If the Poisson bracket of a function $F$ with an arbitrary smooth
1359 > function $G$ is zero, $F$ is a \emph{Casimir}, which is a
1360 > conserved quantity in a Poisson system. We can easily verify that the
1361 > norm of the angular momentum, $\parallel \pi
1362 > \parallel$, is a \emph{Casimir}.\cite{McLachlan1993} Let $F(\pi ) = S(\frac{{\parallel
1363 > \pi \parallel ^2 }}{2})$ for an arbitrary function $ S:R \to R$ ,
1364 > then by the chain rule
1365 > \[
1366 > \nabla _\pi  F(\pi ) = S'(\frac{{\parallel \pi \parallel ^2
1367 > }}{2})\pi.
1368 > \]
1369 > Thus, $ [\nabla _\pi  F(\pi )]^T J(\pi ) =  - S'(\frac{{\parallel
1370 > \pi
1371 > \parallel ^2 }}{2})\pi  \times \pi  = 0 $. This explicit
1372 > Lie-Poisson integrator is found to be both extremely efficient and
1373 > stable. These properties can be explained by the fact that the
1374 > small-angle approximation is used and that the norm of the angular
1375 > momentum is conserved.
1376 >
1377 > \subsection{\label{introSection:RBHamiltonianSplitting} Hamiltonian
1378 > Splitting for Rigid Body}
1379 >
1380 > The Hamiltonian of a rigid body can be separated into kinetic
1381 > and potential energy terms, $H = T(p,\pi ) + V(q,Q)$. The equations
1382 > of motion corresponding to potential energy and kinetic energy are
1383 > listed in Table~\ref{introTable:rbEquations}.
1384 > \begin{table}
1385 > \caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
1386 > \label{introTable:rbEquations}
1387 > \begin{center}
1388 > \begin{tabular}{|l|l|}
1389 >  \hline
1390 >  % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
1391 >  Potential & Kinetic \\ \hline
1392 >  $\frac{{dq}}{{dt}} = 0$ & $\frac{{dq}}{{dt}} = \frac{p}{m}$ \\
1393 >  $\frac{d}{{dt}}p =  - \frac{{\partial V}}{{\partial q}}$ & $ \frac{d}{{dt}}p = 0$ \\
1394 >  $\frac{d}{{dt}}Q = 0$ & $ \frac{d}{{dt}}Q = Q\,\mathrm{skew}(I^{ - 1} \pi )$ \\
1395 >  $ \frac{d}{{dt}}\pi  = \sum\limits_i {\left( {Q^T F_i (r,Q)} \right) \times X_i }$ & $\frac{d}{{dt}}\pi  = \pi  \times I^{ - 1} \pi$\\
1396 >  \hline
1397 > \end{tabular}
1398 > \end{center}
1399 > \end{table}
1400 > A second-order symplectic method is now obtained by the composition
1401 > of the position and velocity propagators,
1402 > \[
1403 > \varphi _{\Delta t}  = \varphi _{\Delta t/2,V}  \circ \varphi
1404 > _{\Delta t,T}  \circ \varphi _{\Delta t/2,V}.
1405 > \]
1406 > Moreover, $\varphi _{\Delta t/2,V}$ can be divided into two
1407 > sub-propagators corresponding to the force and the torque,
1408 > respectively,
1409 > \[
1410 > \varphi _{\Delta t/2,V}  = \varphi _{\Delta t/2,F}  \circ \varphi
1411 > _{\Delta t/2,\tau }.
1412 > \]
1413 > Since the associated operators of $\varphi _{\Delta t/2,F} $ and
1414 > $\varphi _{\Delta t/2,\tau }$ commute, the composition order
1415 > inside $\varphi _{\Delta t/2,V}$ does not matter. Furthermore, the
1416 > kinetic energy can be separated into a translational kinetic term, $T^t
1417 > (p)$, and a rotational kinetic term, $T^r (\pi )$,
1418 > \begin{equation}
1419 > T(p,\pi ) = T^t (p) + T^r (\pi ),
1420 > \end{equation}
1421 > where $ T^t (p) = \frac{1}{2}p^T m^{ - 1} p $ and $T^r (\pi )$ is
1422 > defined by Eq.~\ref{introEquation:rotationalKineticRB}. Therefore,
1423 > the corresponding propagators are given by
1424 > \[
1425 > \varphi _{\Delta t,T}  = \varphi _{\Delta t,T^t }  \circ \varphi
1426 > _{\Delta t,T^r }.
1427 > \]
1428 > Finally, we obtain the overall symplectic propagators for freely
1429 > moving rigid bodies
1430 > \begin{eqnarray}
1431 > \varphi _{\Delta t}  &=& \varphi _{\Delta t/2,F}  \circ \varphi _{\Delta t/2,\tau }  \notag\\
1432 >  & & \circ \varphi _{\Delta t,T^t }  \circ \varphi _{\Delta t/2,\pi _1 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi _1 }  \notag\\
1433 >  & & \circ \varphi _{\Delta t/2,\tau }  \circ \varphi _{\Delta t/2,F}  .
1434 > \label{introEquation:overallRBFlowMaps}
1435 > \end{eqnarray}
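Putting the pieces together, one step of
Eq.~\ref{introEquation:overallRBFlowMaps} might look like the following
sketch, which reuses free_rotor_step from above; the force and torque
routines, names, and units are ours.
\begin{verbatim}
def rigid_body_step(q, p, Q, pi, m, I, force, torque, dt):
    """One step of the overall symplectic rigid-body propagator.

    force(q, Q) returns the net lab-frame force; torque(q, Q)
    returns the body-frame torque sum_i (Q^T F_i) x X_i."""
    p = p + 0.5 * dt * force(q, Q)         # phi_{dt/2, F}
    pi = pi + 0.5 * dt * torque(q, Q)      # phi_{dt/2, tau}
    q = q + dt * p / m                     # phi_{dt, T^t}
    pi, Q = free_rotor_step(pi, Q, I, dt)  # phi_{dt, T^r}
    pi = pi + 0.5 * dt * torque(q, Q)      # phi_{dt/2, tau}
    p = p + 0.5 * dt * force(q, Q)         # phi_{dt/2, F}
    return q, p, Q, pi
\end{verbatim}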
1436 >
1437 > \section{\label{introSection:langevinDynamics}Langevin Dynamics}
1438 > As an alternative to Newtonian dynamics, Langevin dynamics, which
1439 > mimics a simple heat bath with stochastic and dissipative forces,
1440 > has been applied in a variety of studies. This section will review
1441 > the theory of Langevin dynamics. A brief derivation of the generalized
1442 > Langevin equation will be given first. Following that, we will
1443 > discuss the physical meaning of the terms appearing in the equation.
1444 >
1445 > \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}
1446 >
1447 > A harmonic bath model, in which an effective set of harmonic
1448 > oscillators is used to mimic the effect of a linearly responding
1449 > environment, has been widely used in quantum chemistry and
1450 > statistical mechanics. One of the successful applications of the
1451 > harmonic bath model is the derivation of the Generalized Langevin
1452 > Equation (GLE). Consider a system in which the degree of
1453 > freedom $x$ is assumed to couple to the bath linearly, giving a
1454 > Hamiltonian of the form
1455 > \begin{equation}
1456 > H = \frac{{p^2 }}{{2m}} + U(x) + H_B  + \Delta U(x,x_1 , \ldots x_N)
1457 > \label{introEquation:bathGLE}.
1458 > \end{equation}
1459 > Here $p$ is a momentum conjugate to $x$, $m$ is the mass associated
1460 > with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
1461 > \[
1462 > H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2
1463 > }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 x_\alpha ^2 }
1464 > \right\}}
1465 > \]
1466 > where the index $\alpha$ runs over all the bath degrees of freedom,
1467 > $\omega _\alpha$ are the harmonic bath frequencies, $m_\alpha$ are
1468 > the harmonic bath masses, and $\Delta U$ is a bilinear system-bath
1469 > coupling,
1470 > \[
1471 > \Delta U =  - \sum\limits_{\alpha  = 1}^N {g_\alpha  x_\alpha  x}
1472 > \]
1473 > where $g_\alpha$ are the coupling constants between the bath
1474 > coordinates ($x_ \alpha$) and the system coordinate ($x$).
1475 > Introducing
1476 > \[
1477 > W(x) = U(x) - \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2
1478 > }}{{2m_\alpha  \omega _\alpha ^2 }}} x^2
1479 > \]
1480 > and combining the last two terms in Eq.~\ref{introEquation:bathGLE}, we may rewrite the total Hamiltonian as
1481 > \[
1482 > H = \frac{{p^2 }}{{2m}} + W(x) + \sum\limits_{\alpha  = 1}^N
1483 > {\left\{ {\frac{{p_\alpha ^2 }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha
1484 > \omega _\alpha ^2 \left( {x_\alpha   - \frac{{g_\alpha  }}{{m_\alpha
1485 > \omega _\alpha ^2 }}x} \right)^2 } \right\}}.
1486 > \]
1487 > Since the first two terms of the new Hamiltonian depend only on the
1488 > system coordinates, we can get the equations of motion for
1489 > Generalized Langevin Dynamics by Hamilton's equations,
1490 > \begin{equation}
1491 > m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} +
1492 > \sum\limits_{\alpha  = 1}^N {g_\alpha  \left( {x_\alpha   -
1493 > \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha ^2 }}x} \right)},
1494 > \label{introEquation:coorMotionGLE}
1495 > \end{equation}
1496 > and
1497 > \begin{equation}
1498 > m_\alpha  \ddot x_\alpha   =  - m_\alpha  \omega _\alpha ^2 \left( {x_\alpha   -
1499 > \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha ^2 }}x} \right).
1500 > \label{introEquation:bathMotionGLE}
1501 > \end{equation}
1502 > In order to derive an equation for $x$, the dynamics of the bath
1503 > variables $x_\alpha$ must be solved exactly first. As an integral
1504 > transform which is particularly useful in solving linear ordinary
1505 > differential equations, the Laplace transform is the appropriate tool
1506 > to solve this problem. The basic idea is to transform the difficult
1507 > differential equations into simple algebra problems which can be
1508 > solved easily. Then, by applying the inverse Laplace transform, we
1509 > can retrieve the solutions of the original problems. Let $f(t)$ be a
1510 > function defined on $ [0,\infty ) $, the Laplace transform of $f(t)$
1511 > is a new function defined as
1512 > \[
1513 > L(f(t)) \equiv F(p) = \int_0^\infty  {f(t)e^{ - pt} dt}
1514 > \]
1515 > where  $p$ is real and  $L$ is called the Laplace Transform
1516 > Operator. Below are some important properties of the Laplace transform
1517 > \begin{eqnarray*}
1518 > L(x + y)  & = & L(x) + L(y) \\
1519 > L(ax)     & = & aL(x) \\
1520 > L(\dot x) & = & pL(x) - px(0) \\
1521 > L(\ddot x)& = & p^2 L(x) - px(0) - \dot x(0) \\
1522 > L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right)& = & G(p)H(p) \\
1523 > \end{eqnarray*}
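These properties, together with the transform pairs listed further
below, are easy to verify symbolically; for instance, a short (purely
illustrative) check with the sympy library:
\begin{verbatim}
from sympy import symbols, cos, sin, laplace_transform

t = symbols('t', positive=True)
p, a = symbols('p a', positive=True)

# L(cos at) = p/(p^2 + a^2), L(sin at) = a/(p^2 + a^2)
print(laplace_transform(cos(a*t), t, p, noconds=True))
print(laplace_transform(sin(a*t), t, p, noconds=True))
\end{verbatim}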
1524 > Applying the Laplace transform to the bath coordinates, we obtain
1525 > \begin{eqnarray*}
1526 > p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = & - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{m_\alpha  }}L(x), \\
1527 > L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{m_\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }}. \\
1528 > \end{eqnarray*}
1529 > In the same way, the transform of the system coordinate becomes
1530 > \begin{eqnarray*}
1531 > mL(\ddot x) & = &
1532 >  \sum\limits_{\alpha  = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x) + \frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0) + \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}  \\
1533 >  & & - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}}.
1534 > \end{eqnarray*}
1535 > With the help of some useful inverse Laplace
1536 > transforms:
1537 > \[
1538 > \begin{array}{c}
1539 > L(\cos at) = \frac{p}{{p^2  + a^2 }} \\
1540 > L(\sin at) = \frac{a}{{p^2  + a^2 }} \\
1541 > L(1) = \frac{1}{p} \\
1542 > \end{array}
1543 > \]
1544 > we obtain
1545 > \begin{eqnarray*}
1546 > m\ddot x & =  & - \frac{{\partial W(x)}}{{\partial x}} -
1547 > \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{g_\alpha ^2
1548 > }}{{m_\alpha  \omega _\alpha ^2 }}\int_0^t {\cos (\omega
1549 > _\alpha  \tau )\dot x(t - \tau )d\tau } } \right\}}  \\
1550 > & & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha
1551 > x_\alpha (0) - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}x(0)}
1552 > \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha
1553 > (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}}\\
1554 > %
1555 > & = & -
1556 > \frac{{\partial W(x)}}{{\partial x}} - \int_0^t {\sum\limits_{\alpha
1557 > = 1}^N {\frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha
1558 > ^2 }}\cos (\omega _\alpha
1559 > \tau )\dot x(t - \tau )d} \tau }  \\
1560 > & & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha
1561 > x_\alpha (0) - \frac{{g_\alpha ^2 }}{{m_\alpha \omega _\alpha ^2 }}x(0)}
1562 > \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha
1563 > (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}}
1564 > \end{eqnarray*}
1565 > Introducing a \emph{dynamic friction kernel}
1566 > \begin{equation}
1567 > \xi (t) = \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2
1568 > }}{{m_\alpha  \omega _\alpha ^2 }}\cos (\omega _\alpha  t)}
1569 > \label{introEquation:dynamicFrictionKernelDefinition}
1570 > \end{equation}
1571 > and a \emph{random force},
1572 > \begin{equation}
1573 > R(t) = \sum\limits_{\alpha  = 1}^N {\left\{ {\left( {g_\alpha  x_\alpha  (0)
1574 > - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}x(0)}
1575 > \right)\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha
1576 > (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}},
1577 > \label{introEquation:randomForceDefinition}
1578 > \end{equation}
1579 > the equation of motion can be rewritten as
1580 > \begin{equation}
1581 > m\ddot x =  - \frac{{\partial W}}{{\partial x}} - \int_0^t {\xi
1582 > (\tau )\dot x(t - \tau )d\tau }  + R(t)
1583 > \label{introEuqation:GeneralizedLangevinDynamics}
1584 > \end{equation}
1585 > which is known as the \emph{generalized Langevin equation} (GLE).
1586 >
1587 > \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}
1588 >
1589 > One may notice that $R(t)$ depends only on initial conditions, which
1590 > implies it is completely deterministic within the context of a
1591 > harmonic bath. However, it is easy to verify that $R(t)$ is totally
1592 > uncorrelated with $x$ and $\dot x$: $\left\langle {x(t)R(t)}
1593 > \right\rangle  = 0, \left\langle {\dot x(t)R(t)} \right\rangle  =
1594 > 0.$ This property is what we expect from a truly random process. As
1595 > long as the model chosen for $R(t)$ is a Gaussian distribution, the
1596 > stochastic nature of the GLE remains.
1597 > %dynamic friction kernel
1598 > The convolution integral
1599 > \[
1600 > \int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }
1601 > \]
1602 > depends on the entire history of the evolution of $x$, which implies
1603 > that the bath retains memory of previous motions. In other words,
1604 > the bath requires a finite time to respond to changes in the motion
1605 > of the system. For a sluggish bath which responds slowly to changes
1606 > in the system coordinate, we may regard $\xi(t)$ as a constant,
1607 > $\xi(t) = \xi _0$. Hence, the convolution integral becomes
1608 > \[
1609 > \int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }  = \xi _0 (x(t) - x(0))
1610 > \]
1611 > and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
1612 > \[
1613 > m\ddot x =  - \frac{\partial }{{\partial x}}\left( {W(x) +
1614 > \frac{1}{2}\xi _0 (x - x(0) )^2 } \right) + R(t),
1615 > \]
1616 > which can be used to describe the effect of dynamic caging in
1617 > viscous solvents. The other extreme is the bath that responds
1618 > infinitely quickly to motions in the system. Thus, $\xi (t)$ can be
1619 > taken as a $\delta$-function in time:
1620 > \[
1621 > \xi (t) = 2\xi _0 \delta (t).
1622 > \]
1623 > Hence, the convolution integral becomes
1624 > \[
1625 > \int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }  = 2\xi _0 \int_0^t
1626 > {\delta (\tau )\dot x(t - \tau )d\tau }  = \xi _0 \dot x(t),
1627 > \]
1628 > and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
1629 > \begin{equation}
1630 > m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} - \xi _0 \dot
1631 > x(t) + R(t) \label{introEquation:LangevinEquation}
1632 > \end{equation}
1633 > which is known as the Langevin equation. The static friction
1634 > coefficient $\xi _0$ can either be calculated from the spectral
1635 > density or determined from Stokes' law for regularly shaped
1636 > particles. A brief review of methods for calculating friction
1637 > tensors of arbitrarily shaped particles is given in Sec.~\ref{introSection:frictionTensor}.
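For the memoryless limit just derived, a simple stochastic integrator
can be sketched in a few lines; the Euler-Maruyama discretization below
is purely illustrative (our own names, reduced units with $k_B = 1$),
and more careful schemes are used in practice.
\begin{verbatim}
import numpy as np

def langevin_step(x, v, m, xi0, T, dt, force, rng):
    """One Euler-Maruyama step of the Langevin equation.

    The discrete random force approximates a process with
    <R(t)R(0)> = 2 xi0 kB T delta(t)."""
    R = rng.normal(0.0, np.sqrt(2.0 * xi0 * T / dt))
    a = (force(x) - xi0 * v + R) / m
    return x + v * dt, v + a * dt

rng = np.random.default_rng(0)
x, v = 1.0, 0.0
for _ in range(1000):  # harmonic well, W(x) = x^2 / 2
    x, v = langevin_step(x, v, 1.0, 0.5, 1.0, 0.01, lambda y: -y, rng)
\end{verbatim}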
1638 >
1639 > \subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}}
1640 >
1641 > Defining a new set of coordinates
1642 > \[
1643 > q_\alpha  (t) = x_\alpha  (t) - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha
1644 > ^2 }}x(0),
1645 > \]
1646 > we can rewrite $R(t)$ as
1647 > \[
1648 > R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}.
1649 > \]
1650 > Since the $q$ coordinates are harmonic oscillators,
1651 > \begin{eqnarray*}
1652 > \left\langle {q_\alpha ^2 } \right\rangle  & = & \frac{{kT}}{{m_\alpha  \omega _\alpha ^2 }} \\
1653 > \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle & = & \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t) \\
1654 > \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle & = &\delta _{\alpha \beta } \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle  \\
1655 > \left\langle {R(t)R(0)} \right\rangle & = & \sum\limits_\alpha  {\sum\limits_\beta  {g_\alpha  g_\beta  \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle } }  \\
1656 >  & = &\sum\limits_\alpha  {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t)}  \\
1657 >  & = &kT\xi (t)
1658 > \end{eqnarray*}
1659 > Thus, we recover the \emph{second fluctuation dissipation theorem}
1660 > \begin{equation}
1661 > \xi (t) = \frac{1}{{kT}}\left\langle {R(t)R(0)} \right\rangle
1662 > \label{introEquation:secondFluctuationDissipation},
1663 > \end{equation}
1664 > which acts as a constraint on the possible ways in which one can
1665 > model the random force and friction kernel.
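This relation is easy to check numerically for a small discrete bath:
sampling the bath initial conditions from the Boltzmann distribution and
averaging $R(t)R(0)$ over many realizations should reproduce $kT\xi
(t)$. A rough Python sketch with arbitrary bath parameters (all names
ours) follows:
\begin{verbatim}
import numpy as np

kT, nbath, nsamp = 1.0, 50, 20000
rng = np.random.default_rng(1)
g = rng.uniform(0.1, 1.0, nbath)   # coupling constants g_alpha
w = rng.uniform(0.5, 5.0, nbath)   # bath frequencies omega_alpha
m = np.ones(nbath)                 # bath masses m_alpha
t = np.linspace(0.0, 5.0, 200)

# Boltzmann-distributed initial conditions (with x(0) = 0)
x0 = rng.normal(0.0, np.sqrt(kT / (m * w**2)), (nsamp, nbath))
v0 = rng.normal(0.0, np.sqrt(kT / m), (nsamp, nbath))

# R(t) for each realization, summed over bath oscillators
R = ((g * x0)[:, :, None] * np.cos(w[:, None] * t)
     + (g * v0 / w)[:, :, None] * np.sin(w[:, None] * t)).sum(axis=1)
corr = (R * R[:, :1]).mean(axis=0)               # <R(t)R(0)>
kTxi = kT * (g**2 / (m * w**2)) @ np.cos(np.outer(w, t))
print(np.abs(corr - kTxi).max() / kTxi[0])       # small: FDT holds
\end{verbatim}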
