| 6 |  | Closely related to Classical Mechanics, Molecular Dynamics | 
| 7 |  | simulations are carried out by integrating the equations of motion | 
| 8 |  | for a given system of particles. There are three fundamental ideas | 
| 9 | < | behind classical mechanics. Firstly, One can determine the state of | 
| 9 | > | behind classical mechanics. Firstly, one can determine the state of | 
| 10 |  | a mechanical system at any time of interest; secondly, all the | 
| 11 |  | mechanical properties of the system at that time can be determined | 
| 12 |  | by combining the knowledge of the properties of the system with the | 
| 17 |  | \subsection{\label{introSection:newtonian}Newtonian Mechanics} | 
| 18 |  | The discovery of Newton's three laws of mechanics which govern the | 
| 19 |  | motion of particles is the foundation of classical mechanics. | 
| 20 | < | Newton¡¯s first law defines a class of inertial frames. Inertial | 
| 20 | > | Newton's first law defines a class of inertial frames. Inertial | 
| 21 |  | frames are reference frames where a particle not interacting with | 
| 22 |  | other bodies will move with constant speed in the same direction. | 
| 23 | < | With respect to inertial frames Newton¡¯s second law has the form | 
| 23 | > | With respect to inertial frames, Newton's second law has the form | 
| 24 |  | \begin{equation} | 
| 25 | < | F = \frac {dp}{dt} = \frac {mv}{dt} | 
| 25 | > | F = \frac {dp}{dt} = m\frac {dv}{dt} | 
| 26 |  | \label{introEquation:newtonSecondLaw} | 
| 27 |  | \end{equation} | 
| 28 |  | A point mass interacting with other bodies moves with the | 
| 29 |  | acceleration along the direction of the force acting on it. Let | 
| 30 |  | $F_{ij}$ be the force that particle $i$ exerts on particle $j$, and | 
| 31 |  | $F_{ji}$ be the force that particle $j$ exerts on particle $i$. | 
| 32 | < | Newton¡¯s third law states that | 
| 32 | > | Newton's third law states that | 
| 33 |  | \begin{equation} | 
| 34 |  | F_{ij} = -F_{ji} | 
| 35 |  | \label{introEquation:newtonThirdLaw} | 
| 46 |  | \end{equation} | 
| 47 |  | The torque $\tau$ with respect to the same origin is defined to be | 
| 48 |  | \begin{equation} | 
| 49 | < | N \equiv r \times F \label{introEquation:torqueDefinition} | 
| 49 | > | \tau \equiv r \times F \label{introEquation:torqueDefinition} | 
| 50 |  | \end{equation} | 
| 51 |  | Differentiating Eq.~\ref{introEquation:angularMomentumDefinition}, | 
| 52 |  | \[ | 
| 59 |  | \] | 
| 60 |  | thus, | 
| 61 |  | \begin{equation} | 
| 62 | < | \dot L = r \times \dot p = N | 
| 62 | > | \dot L = r \times \dot p = \tau | 
| 63 |  | \end{equation} | 
| 64 |  | If there are no external torques acting on a body, its angular | 
| 65 |  | momentum is conserved. The last conservation theorem states | 
| 68 |  | \end{equation} | 
| 69 |  | is conserved. All of these conserved quantities are | 
| 70 |  | important factors to determine the quality of numerical integration | 
| 71 | < | scheme for rigid body \cite{Dullweber1997}. | 
| 71 | > | schemes for rigid bodies \cite{Dullweber1997}. | 
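Because drifts in these conserved quantities are a common diagnostic for integrator quality, a small Python sketch of how they might be monitored is given below; the mass, position and velocity arrays and the toy potential are hypothetical placeholders rather than quantities defined in this chapter.
\begin{verbatim}
import numpy as np

def total_energy(m, r, v, potential):
    """Kinetic plus potential energy of a set of point particles."""
    kinetic = 0.5 * np.sum(m * np.sum(v**2, axis=1))
    return kinetic + potential(r)

def total_linear_momentum(m, v):
    return np.sum(m[:, None] * v, axis=0)

def total_angular_momentum(m, r, v):
    """L = sum_i r_i x (m_i v_i), taken about the coordinate origin."""
    return np.sum(np.cross(r, m[:, None] * v), axis=0)

# Hypothetical usage: record these at every step and watch for drift.
m = np.array([1.0, 2.0])
r = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
v = np.array([[0.0, 1.0, 0.0], [0.0, -0.5, 0.0]])
U = lambda r: 0.5 * np.sum(r**2)          # toy potential
print(total_energy(m, r, v, U),
      total_linear_momentum(m, v),
      total_angular_momentum(m, r, v))
\end{verbatim}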
| 72 |  |  | 
| 73 |  | \subsection{\label{introSection:lagrangian}Lagrangian Mechanics} | 
| 74 |  |  | 
| 75 | < | Newtonian Mechanics suffers from two important limitations: it | 
| 76 | < | describes their motion in special cartesian coordinate systems. | 
| 77 | < | Another limitation of Newtonian mechanics becomes obvious when we | 
| 78 | < | try to describe systems with large numbers of particles. It becomes | 
| 79 | < | very difficult to predict the properties of the system by carrying | 
| 80 | < | out calculations involving the each individual interaction between | 
| 81 | < | all the particles, even if we know all of the details of the | 
| 82 | < | interaction. In order to overcome some of the practical difficulties | 
| 83 | < | which arise in attempts to apply Newton's equation to complex | 
| 84 | < | system, alternative procedures may be developed. | 
| 75 | > | Newtonian Mechanics suffers from two important limitations: motions | 
| 76 | > | can only be described in cartesian coordinate systems, and it | 
| 77 | > | becomes impossible to predict analytically the properties of a | 
| 78 | > | system containing many particles even if we know all of the details | 
| 79 | > | of the interactions. In order to overcome some of the practical | 
| 80 | > | difficulties which arise in attempts to apply Newton's equation to | 
| 81 | > | complex systems, approximate numerical procedures may be developed. | 
| 82 |  |  | 
| 83 | < | \subsubsection{\label{introSection:halmiltonPrinciple}Hamilton's | 
| 84 | < | Principle} | 
| 83 | > | \subsubsection{\label{introSection:halmiltonPrinciple}\textbf{Hamilton's | 
| 84 | > | Principle}} | 
| 85 |  |  | 
| 86 |  | Hamilton introduced the dynamical principle upon which it is | 
| 87 | < | possible to base all of mechanics and, indeed, most of classical | 
| 88 | < | physics. Hamilton's Principle may be stated as follow, | 
| 87 | > | possible to base all of mechanics and most of classical physics. | 
| 88 | > | Hamilton's Principle may be stated as follows, | 
| 89 |  |  | 
| 90 |  | The actual trajectory, along which a dynamical system may move from | 
| 91 |  | one point to another within a specified time, is derived by finding | 
| 92 |  | the path which minimizes the time integral of the difference between | 
| 93 | < | the kinetic, $K$, and potential energies, $U$ \cite{tolman79}. | 
| 93 | > | the kinetic, $K$, and potential energies, $U$. | 
| 94 |  | \begin{equation} | 
| 95 |  | \delta \int_{t_1 }^{t_2 } {(K - U)dt = 0} , | 
| 96 |  | \label{introEquation:halmitonianPrinciple1} | 
| 97 |  | \end{equation} | 
| 98 |  |  | 
| 99 |  | For simple mechanical systems, where the forces acting on the | 
| 100 | < | different part are derivable from a potential and the velocities are | 
| 101 | < | small compared with that of light, the Lagrangian function $L$ can | 
| 102 | < | be define as the difference between the kinetic energy of the system | 
| 106 | < | and its potential energy, | 
| 100 | > | different parts are derivable from a potential, the Lagrangian | 
| 101 | > | function $L$ can be defined as the difference between the kinetic | 
| 102 | > | energy of the system and its potential energy, | 
| 103 |  | \begin{equation} | 
| 104 |  | L \equiv K - U = L(q_i ,\dot q_i ) , | 
| 105 |  | \label{introEquation:lagrangianDef} | 
| 110 |  | \label{introEquation:halmitonianPrinciple2} | 
| 111 |  | \end{equation} | 
| 112 |  |  | 
| 113 | < | \subsubsection{\label{introSection:equationOfMotionLagrangian}The | 
| 114 | < | Equations of Motion in Lagrangian Mechanics} | 
| 113 | > | \subsubsection{\label{introSection:equationOfMotionLagrangian}\textbf{The | 
| 114 | > | Equations of Motion in Lagrangian Mechanics}} | 
| 115 |  |  | 
| 116 | < | For a holonomic system of $f$ degrees of freedom, the equations of | 
| 117 | < | motion in the Lagrangian form is | 
| 116 | > | For a system of $f$ degrees of freedom, the equations of motion in | 
| 117 | > | the Lagrangian form are | 
| 118 |  | \begin{equation} | 
| 119 |  | \frac{d}{{dt}}\frac{{\partial L}}{{\partial \dot q_i }} - | 
| 120 |  | \frac{{\partial L}}{{\partial q_i }} = 0,{\rm{ }}i = 1, \ldots,f | 
| 128 |  | Arising from Lagrangian Mechanics, Hamiltonian Mechanics was | 
| 129 |  | introduced by William Rowan Hamilton in 1833 as a re-formulation of | 
| 130 |  | classical mechanics. If the potential energy of a system is | 
| 131 | < | independent of generalized velocities, the generalized momenta can | 
| 136 | < | be defined as | 
| 131 | > | independent of velocities, the momenta can be defined as | 
| 132 |  | \begin{equation} | 
| 133 |  | p_i = \frac{\partial L}{\partial \dot q_i} | 
| 134 |  | \label{introEquation:generalizedMomenta} | 
| 167 |  | By identifying the coefficients of $dq_k$, $dp_k$ and $dt$, we can | 
| 168 |  | find | 
| 169 |  | \begin{equation} | 
| 170 | < | \frac{{\partial H}}{{\partial p_k }} = q_k | 
| 170 | > | \frac{{\partial H}}{{\partial p_k }} = \dot {q_k} | 
| 171 |  | \label{introEquation:motionHamiltonianCoordinate} | 
| 172 |  | \end{equation} | 
| 173 |  | \begin{equation} | 
| 174 | < | \frac{{\partial H}}{{\partial q_k }} =  - p_k | 
| 174 | > | \frac{{\partial H}}{{\partial q_k }} =  - \dot {p_k} | 
| 175 |  | \label{introEquation:motionHamiltonianMomentum} | 
| 176 |  | \end{equation} | 
| 177 |  | and | 
| 184 |  | Eq.~\ref{introEquation:motionHamiltonianCoordinate} and | 
| 185 |  | Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's | 
| 186 |  | equations of motion. Due to their symmetric form, they are also | 
| 187 | < | known as the canonical equations of motions \cite{Goldstein01}. | 
| 187 | > | known as the canonical equations of motion \cite{Goldstein2001}. | 
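As a concrete illustration (a standard textbook example rather than a system treated elsewhere in this chapter), consider a one-dimensional harmonic oscillator with Hamiltonian
\[
H(q,p) = \frac{p^2}{2m} + \frac{1}{2}kq^2 .
\]
Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
Eq.~\ref{introEquation:motionHamiltonianMomentum} then give
\[
\dot q = \frac{\partial H}{\partial p} = \frac{p}{m}, \qquad
\dot p =  - \frac{\partial H}{\partial q} =  - kq ,
\]
which together are equivalent to Newton's equation of motion $m\ddot q = -kq$.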
| 188 |  |  | 
| 189 |  | An important difference between the Lagrangian approach and the | 
| 190 |  | Hamiltonian approach is that the Lagrangian is considered to be a | 
| 191 | < | function of the generalized velocities $\dot q_i$ and the | 
| 192 | < | generalized coordinates $q_i$, while the Hamiltonian is considered | 
| 193 | < | to be a function of the generalized momenta $p_i$ and the conjugate | 
| 194 | < | generalized coordinate $q_i$. Hamiltonian Mechanics is more | 
| 195 | < | appropriate for application to statistical mechanics and quantum | 
| 196 | < | mechanics, since it treats the coordinate and its time derivative as | 
| 197 | < | independent variables and it only works with 1st-order differential | 
| 203 | < | equations\cite{Marion90}. | 
| 191 | > | function of the generalized velocities $\dot q_i$ and coordinates | 
| 192 | > | $q_i$, while the Hamiltonian is considered to be a function of the | 
| 193 | > | generalized momenta $p_i$ and the conjugate coordinates $q_i$. | 
| 194 | > | Hamiltonian Mechanics is more appropriate for application to | 
| 195 | > | statistical mechanics and quantum mechanics, since it treats the | 
| 196 | > | coordinates and their conjugate momenta as independent variables and it | 
| 197 | > | only works with first-order differential equations \cite{Marion1990}. | 
| 198 |  |  | 
| 199 |  | In Newtonian Mechanics, a system described by conservative forces | 
| 200 |  | conserves the total energy (Eq.~\ref{introEquation:energyConservation}). | 
| 224 |  | possible states. Each possible state of the system corresponds to | 
| 225 |  | one unique point in the phase space. For mechanical systems, the | 
| 226 |  | phase space usually consists of all possible values of position and | 
| 227 | < | momentum variables. Consider a dynamic system in a cartesian space, | 
| 228 | < | where each of the $6f$ coordinates and momenta is assigned to one of | 
| 229 | < | $6f$ mutually orthogonal axes, the phase space of this system is a | 
| 230 | < | $6f$ dimensional space. A point, $x = (q_1 , \ldots ,q_f ,p_1 , | 
| 231 | < | \ldots ,p_f )$, with a unique set of values of $6f$ coordinates and | 
| 232 | < | momenta is a phase space vector. | 
| 227 | > | momentum variables. Consider a dynamic system of $f$ particles in a | 
| 228 | > | cartesian space, where each of the $6f$ coordinates and momenta is | 
| 229 | > | assigned to one of $6f$ mutually orthogonal axes; the phase space of | 
| 230 | > | this system is a $6f$ dimensional space. A point, $x = (q_1 , \ldots | 
| 231 | > | ,q_f ,p_1 , \ldots ,p_f )$, with a unique set of values of $6f$ | 
| 232 | > | coordinates and momenta is a phase space vector. | 
| 233 |  |  | 
| 234 | + | %%%fix me | 
| 235 |  | A microscopic state or microstate of a classical system is a | 
| 236 |  | specification of the complete phase space vector of a system at any | 
| 237 |  | instant in time. An ensemble is defined as a collection of systems | 
| 252 |  | regions of the phase space. The condition of an ensemble at any time | 
| 253 |  | can be regarded as appropriately specified by the density $\rho$ | 
| 254 |  | with which representative points are distributed over the phase | 
| 255 | < | space. The density of distribution for an ensemble with $f$ degrees | 
| 256 | < | of freedom is defined as, | 
| 255 | > | space. The density distribution for an ensemble with $f$ degrees of | 
| 256 | > | freedom is defined as, | 
| 257 |  | \begin{equation} | 
| 258 |  | \rho  = \rho (q_1 , \ldots ,q_f ,p_1 , \ldots ,p_f ,t). | 
| 259 |  | \label{introEquation:densityDistribution} | 
| 260 |  | \end{equation} | 
| 261 |  | Governed by the principles of mechanics, the phase points change | 
| 262 | < | their value which would change the density at any time at phase | 
| 263 | < | space. Hence, the density of distribution is also to be taken as a | 
| 262 | > | their locations, which changes the density at any point in phase | 
| 263 | > | space. Hence, the density distribution is also to be taken as a | 
| 264 |  | function of time. | 
| 265 |  |  | 
| 266 |  | The number of systems $\delta N$ at time $t$ can be determined by, | 
| 268 |  | \delta N = \rho (q,p,t)dq_1  \ldots dq_f dp_1  \ldots dp_f. | 
| 269 |  | \label{introEquation:deltaN} | 
| 270 |  | \end{equation} | 
| 271 | < | Assuming a large enough population of systems are exploited, we can | 
| 272 | < | sufficiently approximate $\delta N$ without introducing | 
| 273 | < | discontinuity when we go from one region in the phase space to | 
| 274 | < | another. By integrating over the whole phase space, | 
| 271 | > | Assuming a large enough population of systems, we can sufficiently | 
| 272 | > | approximate $\delta N$ without introducing a discontinuity when we go | 
| 273 | > | from one region in the phase space to another. By integrating over | 
| 274 | > | the whole phase space, | 
| 275 |  | \begin{equation} | 
| 276 |  | N = \int { \ldots \int {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f | 
| 277 |  | \label{introEquation:totalNumberSystem} | 
| 283 |  | {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}. | 
| 284 |  | \label{introEquation:unitProbability} | 
| 285 |  | \end{equation} | 
| 286 | < | With the help of Equation(\ref{introEquation:unitProbability}) and | 
| 287 | < | the knowledge of the system, it is possible to calculate the average | 
| 286 | > | With the help of Eq.~\ref{introEquation:unitProbability} and the | 
| 287 | > | knowledge of the system, it is possible to calculate the average | 
| 288 |  | value of any desired quantity which depends on the coordinates and | 
| 289 |  | momenta of the system. Even when the dynamics of the real system is | 
| 290 |  | complex, or stochastic, or even discontinuous, the average | 
| 291 | < | properties of the ensemble of possibilities as a whole may still | 
| 292 | < | remain well defined. For a classical system in thermal equilibrium | 
| 293 | < | with its environment, the ensemble average of a mechanical quantity, | 
| 294 | < | $\langle A(q , p) \rangle_t$, takes the form of an integral over the | 
| 295 | < | phase space of the system, | 
| 291 | > | properties of the ensemble of possibilities as a whole may remain | 
| 292 | > | well defined. For a classical system in thermal equilibrium with its | 
| 293 | > | environment, the ensemble average of a mechanical quantity, $\langle | 
| 294 | > | A(q , p) \rangle_t$, takes the form of an integral over the phase | 
| 295 | > | space of the system, | 
| 296 |  | \begin{equation} | 
| 297 |  | \langle  A(q , p) \rangle_t = \frac{{\int { \ldots \int {A(q,p)\rho | 
| 298 |  | (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}{{\int { \ldots \int {\rho | 
| 302 |  |  | 
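A discrete analogue of this phase space average is simply a weighted mean over sampled states. The short Python sketch below illustrates the idea; the three-state system, its energies and the Boltzmann-like weights are invented purely for illustration and do not correspond to anything defined above.
\begin{verbatim}
import numpy as np

def ensemble_average(A_values, rho_values):
    """Discrete analogue of <A>: sum_i A_i rho_i / sum_i rho_i."""
    A = np.asarray(A_values, dtype=float)
    rho = np.asarray(rho_values, dtype=float)
    return np.sum(A * rho) / np.sum(rho)

# Hypothetical example: mean energy of a three-state system with
# Boltzmann-like weights (the energies and beta are made up).
energies = np.array([0.0, 1.0, 2.0])
beta = 1.0
weights = np.exp(-beta * energies)
print(ensemble_average(energies, weights))
\end{verbatim}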
| 303 |  | There are several different types of ensembles with different | 
| 304 |  | statistical characteristics. As a function of macroscopic | 
| 305 | < | parameters, such as temperature \textit{etc}, partition function can | 
| 306 | < | be used to describe the statistical properties of a system in | 
| 305 | > | parameters, such as temperature \textit{etc}, the partition function | 
| 306 | > | can be used to describe the statistical properties of a system in | 
| 307 |  | thermodynamic equilibrium. | 
| 308 |  |  | 
| 309 |  | As an ensemble of systems, each of which is known to be thermally | 
| 310 | < | isolated and conserve energy, Microcanonical ensemble(NVE) has a | 
| 311 | < | partition function like, | 
| 310 | > | isolated and to conserve energy, the microcanonical ensemble (NVE) has | 
| 311 | > | a partition function of the form, | 
| 312 |  | \begin{equation} | 
| 313 |  | \Omega (N,V,E) = e^{\beta TS} \label{introEquation:NVEPartition}. | 
| 314 |  | \end{equation} | 
| 315 | < | A canonical ensemble(NVT)is an ensemble of systems, each of which | 
| 315 | > | A canonical ensemble (NVT) is an ensemble of systems, each of which | 
| 316 |  | can share its energy with a large heat reservoir. The distribution | 
| 317 |  | of the total energy amongst the possible dynamical states is given | 
| 318 |  | by the partition function, | 
| 321 |  | \label{introEquation:NVTPartition} | 
| 322 |  | \end{equation} | 
| 323 |  | Here, $A$ is the Helmholtz free energy which is defined as $ A = U - | 
| 324 | < | TS$. Since most experiment are carried out under constant pressure | 
| 325 | < | condition, isothermal-isobaric ensemble(NPT) play a very important | 
| 326 | < | role in molecular simulation. The isothermal-isobaric ensemble allow | 
| 327 | < | the system to exchange energy with a heat bath of temperature $T$ | 
| 328 | < | and to change the volume as well. Its partition function is given as | 
| 324 | > | TS$. Since most experiments are carried out under constant pressure | 
| 325 | > | conditions, the isothermal-isobaric ensemble (NPT) plays a very | 
| 326 | > | important role in molecular simulations. The isothermal-isobaric | 
| 327 | > | ensemble allows the system to exchange energy with a heat bath of | 
| 328 | > | temperature $T$ and to change the volume as well. Its partition | 
| 329 | > | function is given as | 
| 330 |  | \begin{equation} | 
| 331 |  | \Delta (N,P,T) = e^{ - \beta G}. | 
| 332 |  | \label{introEquation:NPTPartition} | 
| 335 |  |  | 
| 336 |  | \subsection{\label{introSection:liouville}Liouville's theorem} | 
| 337 |  |  | 
| 338 | < | The Liouville's theorem is the foundation on which statistical | 
| 339 | < | mechanics rests. It describes the time evolution of phase space | 
| 338 | > | Liouville's theorem is the foundation on which statistical mechanics | 
| 339 | > | rests. It describes the time evolution of the phase space | 
| 340 |  | distribution function. In order to calculate the rate of change of | 
| 341 | < | $\rho$, we begin from Equation(\ref{introEquation:deltaN}). If we | 
| 342 | < | consider the two faces perpendicular to the $q_1$ axis, which are | 
| 343 | < | located at $q_1$ and $q_1 + \delta q_1$, the number of phase points | 
| 344 | < | leaving the opposite face is given by the expression, | 
| 341 | > | $\rho$, we begin from Eq.~\ref{introEquation:deltaN}. If we consider | 
| 342 | > | the two faces perpendicular to the $q_1$ axis, which are located at | 
| 343 | > | $q_1$ and $q_1 + \delta q_1$, the number of phase points leaving the | 
| 344 | > | opposite face is given by the expression, | 
| 345 |  | \begin{equation} | 
| 346 |  | \left( {\rho  + \frac{{\partial \rho }}{{\partial q_1 }}\delta q_1 } | 
| 347 |  | \right)\left( {\dot q_1  + \frac{{\partial \dot q_1 }}{{\partial q_1 | 
| 365 |  | + \frac{{\partial \dot p_i }}{{\partial p_i }}} \right)}  = 0 , | 
| 366 |  | \end{equation} | 
| 367 |  | which cancels the first terms of the right hand side. Furthermore, | 
| 368 | < | divining $ \delta q_1  \ldots \delta q_f \delta p_1  \ldots \delta | 
| 368 | > | dividing both sides by $ \delta q_1  \ldots \delta q_f \delta p_1  \ldots \delta | 
| 369 |  | p_f $, we can write out Liouville's theorem in a | 
| 370 |  | simple form, | 
| 371 |  | \begin{equation} | 
| 377 |  |  | 
| 378 |  | Liouville's theorem states that the distribution function is | 
| 379 |  | constant along any trajectory in phase space. In classical | 
| 380 | < | statistical mechanics, since the number of particles in the system | 
| 381 | < | is huge, we may be able to believe the system is stationary, | 
| 380 | > | statistical mechanics, since the number of members in an ensemble is | 
| 381 | > | huge and constant, we can assume the local density has no reason | 
| 382 | > | (other than classical mechanics) to change, | 
| 383 |  | \begin{equation} | 
| 384 |  | \frac{{\partial \rho }}{{\partial t}} = 0. | 
| 385 |  | \label{introEquation:stationary} | 
| 392 |  | \label{introEquation:densityAndHamiltonian} | 
| 393 |  | \end{equation} | 
| 394 |  |  | 
| 395 | < | \subsubsection{\label{introSection:phaseSpaceConservation}Conservation of Phase Space} | 
| 395 | > | \subsubsection{\label{introSection:phaseSpaceConservation}\textbf{Conservation of Phase Space}} | 
| 396 |  | Let us consider a region in phase space, | 
| 397 |  | \begin{equation} | 
| 398 |  | \delta v = \int { \ldots \int {dq_1 } ...dq_f dp_1 } ..dp_f . | 
| 399 |  | \end{equation} | 
| 400 |  | If this region is small enough, the density $\rho$ can be regarded | 
| 401 | < | as uniform over the whole phase space. Thus, the number of phase | 
| 402 | < | points inside this region is given by, | 
| 401 | > | as uniform over the whole region. Thus, the number of phase points | 
| 402 | > | inside this region is given by, | 
| 403 |  | \begin{equation} | 
| 404 |  | \delta N = \rho \delta v = \rho \int { \ldots \int {dq_1 } ...dq_f | 
| 405 |  | dp_1 } ..dp_f. | 
| 411 |  | \end{equation} | 
| 412 |  | With the help of the stationary assumption | 
| 413 |  | (\ref{introEquation:stationary}), we obtain the principle of the | 
| 414 | < | \emph{conservation of extension in phase space}, | 
| 414 | > | \emph{conservation of volume in phase space}, | 
| 415 |  | \begin{equation} | 
| 416 |  | \frac{d}{{dt}}(\delta v) = \frac{d}{{dt}}\int { \ldots \int {dq_1 } | 
| 417 |  | ...dq_f dp_1 } ..dp_f  = 0. | 
| 418 |  | \label{introEquation:volumePreserving} | 
| 419 |  | \end{equation} | 
| 420 |  |  | 
| 421 | < | \subsubsection{\label{introSection:liouvilleInOtherForms}Liouville's Theorem in Other Forms} | 
| 421 | > | \subsubsection{\label{introSection:liouvilleInOtherForms}\textbf{Liouville's Theorem in Other Forms}} | 
| 422 |  |  | 
| 423 |  | Liouville's theorem can be expressed in a variety of different forms | 
| 424 |  | which are convenient within different contexts. For any two functions | 
| 432 |  | \label{introEquation:poissonBracket} | 
| 433 |  | \end{equation} | 
| 434 |  | Substituting the equations of motion in the Hamiltonian formalism | 
| 435 | < | \ref{introEquation:motionHamiltonianCoordinate} , | 
| 436 | < | \ref{introEquation:motionHamiltonianMomentum} ) into | 
| 437 | < | (\ref{introEquation:liouvilleTheorem}), we can rewrite Liouville's | 
| 438 | < | theorem using Poisson bracket notion, | 
| 435 | > | (Eq.~\ref{introEquation:motionHamiltonianCoordinate} and | 
| 436 | > | Eq.~\ref{introEquation:motionHamiltonianMomentum}) into | 
| 437 | > | Eq.~\ref{introEquation:liouvilleTheorem}, we can rewrite | 
| 438 | > | Liouville's theorem using Poisson bracket notation, | 
| 439 |  | \begin{equation} | 
| 440 |  | \left( {\frac{{\partial \rho }}{{\partial t}}} \right) =  - \left\{ | 
| 441 |  | {\rho ,H} \right\}. | 
| 460 |  | Various thermodynamic properties can be calculated from Molecular | 
| 461 |  | Dynamics simulation. By comparing experimental values with the | 
| 462 |  | calculated properties, one can determine the accuracy of the | 
| 463 | < | simulation and the quality of the underlying model. However, both of | 
| 464 | < | experiment and computer simulation are usually performed during a | 
| 463 | > | simulation and the quality of the underlying model. However, both | 
| 464 | > | experiments and computer simulations are usually performed during a | 
| 465 |  | certain time interval and the measurements are averaged over a | 
| 466 |  | period of time, which is different from the average behavior of a | 
| 467 | < | many-body system in Statistical Mechanics. Fortunately, Ergodic | 
| 468 | < | Hypothesis is proposed to make a connection between time average and | 
| 469 | < | ensemble average. It states that time average and average over the | 
| 470 | < | statistical ensemble are identical \cite{Frenkel1996, leach01:mm}. | 
| 467 | > | many-body system in Statistical Mechanics. Fortunately, the Ergodic | 
| 468 | > | Hypothesis makes a connection between time average and the ensemble | 
| 469 | > | average. It states that the time average and average over the | 
| 470 | > | statistical ensemble are identical \cite{Frenkel1996, Leach2001}. | 
| 471 |  | \begin{equation} | 
| 472 |  | \langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty } | 
| 473 |  | \frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt = \int\limits_\Gamma | 
| 481 |  | a properly weighted statistical average. This allows the researcher | 
| 482 |  | freedom of choice when deciding how best to measure a given | 
| 483 |  | observable. In case an ensemble averaged approach sounds most | 
| 484 | < | reasonable, the Monte Carlo techniques\cite{metropolis:1949} can be | 
| 484 | > | reasonable, the Monte Carlo techniques\cite{Metropolis1949} can be | 
| 485 |  | utilized. Or if the system lends itself to a time averaging | 
| 486 |  | approach, the Molecular Dynamics techniques in | 
| 487 |  | Sec.~\ref{introSection:molecularDynamics} will be the best | 
| 488 |  | choice\cite{Frenkel1996}. | 
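To make the time-averaging route concrete, the following Python sketch evaluates the discrete analogue of the time average defined above over a stored trajectory; the trajectory list and the observable function are hypothetical stand-ins, not objects produced elsewhere in this introduction.
\begin{verbatim}
import numpy as np

def time_average(trajectory, observable):
    """Approximate the time average of A(q,p) by a mean over
    equally spaced snapshots (q, p) of a trajectory."""
    values = [observable(q, p) for (q, p) in trajectory]
    return np.mean(values)

# Hypothetical usage: kinetic energy of one unit-mass particle
# along a randomly generated stand-in "trajectory".
rng = np.random.default_rng(1)
trajectory = [(rng.normal(size=3), rng.normal(size=3)) for _ in range(1000)]
kinetic = lambda q, p: 0.5 * np.dot(p, p)
print(time_average(trajectory, kinetic))
\end{verbatim}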
| 489 |  |  | 
| 490 |  | \section{\label{introSection:geometricIntegratos}Geometric Integrators} | 
| 491 | < | A variety of numerical integrators were proposed to simulate the | 
| 492 | < | motions. They usually begin with an initial conditionals and move | 
| 493 | < | the objects in the direction governed by the differential equations. | 
| 494 | < | However, most of them ignore the hidden physical law contained | 
| 495 | < | within the equations. Since 1990, geometric integrators, which | 
| 496 | < | preserve various phase-flow invariants such as symplectic structure, | 
| 497 | < | volume and time reversal symmetry, are developed to address this | 
| 498 | < | issue. The velocity verlet method, which happens to be a simple | 
| 499 | < | example of symplectic integrator, continues to gain its popularity | 
| 500 | < | in molecular dynamics community. This fact can be partly explained | 
| 501 | < | by its geometric nature. | 
| 491 | > | A variety of numerical integrators have been proposed to simulate | 
| 492 | > | the motions of atoms in MD simulation. They usually begin with | 
| 493 | > | initial conditions and move the objects in the direction governed | 
| 494 | > | by the differential equations. However, most of them ignore the | 
| 495 | > | hidden physical laws contained within the equations. Since 1990, | 
| 496 | > | geometric integrators, which preserve various phase-flow invariants | 
| 497 | > | such as symplectic structure, volume and time reversal symmetry, have | 
| 498 | > | been developed to address this issue\cite{Dullweber1997, McLachlan1998, | 
| 499 | > | Leimkuhler1999}. The velocity Verlet method, which happens to be a | 
| 500 | > | simple example of a symplectic integrator, continues to gain | 
| 501 | > | popularity in the molecular dynamics community. This fact can be | 
| 502 | > | partly explained by its geometric nature. | 
| 503 |  |  | 
| 504 | < | \subsection{\label{introSection:symplecticManifold}Symplectic Manifold} | 
| 505 | < | A \emph{manifold} is an abstract mathematical space. It locally | 
| 506 | < | looks like Euclidean space, but when viewed globally, it may have | 
| 507 | < | more complicate structure. A good example of manifold is the surface | 
| 508 | < | of Earth. It seems to be flat locally, but it is round if viewed as | 
| 509 | < | a whole. A \emph{differentiable manifold} (also known as | 
| 510 | < | \emph{smooth manifold}) is a manifold with an open cover in which | 
| 511 | < | the covering neighborhoods are all smoothly isomorphic to one | 
| 512 | < | another. In other words,it is possible to apply calculus on | 
| 515 | < | \emph{differentiable manifold}. A \emph{symplectic manifold} is | 
| 516 | < | defined as a pair $(M, \omega)$ which consisting of a | 
| 504 | > | \subsection{\label{introSection:symplecticManifold}Symplectic Manifolds} | 
| 505 | > | A \emph{manifold} is an abstract mathematical space. It looks | 
| 506 | > | locally like Euclidean space, but when viewed globally, it may have | 
| 507 | > | more complicated structure. A good example of manifold is the | 
| 508 | > | surface of Earth. It seems to be flat locally, but it is round if | 
| 509 | > | viewed as a whole. A \emph{differentiable manifold} (also known as | 
| 510 | > | \emph{smooth manifold}) is a manifold on which it is possible to | 
| 511 | > | apply calculus. A \emph{symplectic | 
| 512 | > | manifold} is defined as a pair $(M, \omega)$ which consists of a | 
| 513 |  | \emph{differentiable manifold} $M$ and a closed, non-degenerate, | 
| 514 |  | bilinear symplectic form, $\omega$. A symplectic form on a vector | 
| 515 |  | space $V$ is a function $\omega(x, y)$ which satisfies | 
| 516 |  | $\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+ | 
| 517 |  | \lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and | 
| 518 | < | $\omega(x, x) = 0$. Cross product operation in vector field is an | 
| 519 | < | example of symplectic form. | 
| 518 | > | $\omega(x, x) = 0$. The cross product operation in a vector field is | 
| 519 | > | an example of a symplectic form. | 
| 520 |  |  | 
| 521 | < | One of the motivations to study \emph{symplectic manifold} in | 
| 521 | > | One of the motivations to study \emph{symplectic manifolds} in | 
| 522 |  | Hamiltonian Mechanics is that a symplectic manifold can represent | 
| 523 |  | all possible configurations of the system and the phase space of the | 
| 524 |  | system can be described by its cotangent bundle. Every symplectic | 
| 525 |  | manifold is even dimensional. For instance, in Hamilton equations, | 
| 526 |  | coordinate and momentum always appear in pairs. | 
| 527 |  |  | 
| 532 | – | Let  $(M,\omega)$ and $(N, \eta)$ be symplectic manifolds. A map | 
| 533 | – | \[ | 
| 534 | – | f : M \rightarrow N | 
| 535 | – | \] | 
| 536 | – | is a \emph{symplectomorphism} if it is a \emph{diffeomorphims} and | 
| 537 | – | the \emph{pullback} of $\eta$ under f is equal to $\omega$. | 
| 538 | – | Canonical transformation is an example of symplectomorphism in | 
| 539 | – | classical mechanics. | 
| 540 | – |  | 
| 528 |  | \subsection{\label{introSection:ODE}Ordinary Differential Equations} | 
| 529 |  |  | 
| 530 | < | For a ordinary differential system defined as | 
| 530 | > | For an ordinary differential system defined as | 
| 531 |  | \begin{equation} | 
| 532 |  | \dot x = f(x) | 
| 533 |  | \end{equation} | 
| 534 | < | where $x = x(q,p)^T$, this system is canonical Hamiltonian, if | 
| 534 | > | where $x = x(q,p)^T$, this system is a canonical Hamiltonian, if | 
| 535 |  | \begin{equation} | 
| 536 |  | f(r) = J\nabla _x H(r). | 
| 537 |  | \end{equation} | 
| 552 |  | \end{equation}In this case, $f$ is | 
| 553 |  | called a \emph{Hamiltonian vector field}. | 
| 554 |  |  | 
| 555 | < | Another generalization of Hamiltonian dynamics is Poisson Dynamics, | 
| 555 | > | Another generalization of Hamiltonian dynamics is Poisson | 
| 556 | > | Dynamics\cite{Olver1986}, | 
| 557 |  | \begin{equation} | 
| 558 |  | \dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian} | 
| 559 |  | \end{equation} | 
| 591 |  | \end{equation} | 
| 592 |  |  | 
| 593 |  | In most cases, it is not easy to find the exact flow $\varphi_\tau$. | 
| 594 | < | Instead, we use a approximate map, $\psi_\tau$, which is usually | 
| 594 | > | Instead, we use an approximate map, $\psi_\tau$, which is usually | 
| 595 |  | called an integrator. The order of an integrator $\psi_\tau$ is $p$, if | 
| 596 |  | its Taylor series agrees with that of the exact flow $\varphi_\tau$ to order $p$, | 
| 597 |  | \begin{equation} | 
| 598 | < | \psi_tau(x) = x + \tau f(x) + O(\tau^{p+1}) | 
| 598 | > | \psi_\tau(x) = \varphi_\tau(x) + O(\tau^{p+1}) | 
| 599 |  | \end{equation} | 
| 600 |  |  | 
| 601 |  | \subsection{\label{introSection:geometricProperties}Geometric Properties} | 
| 602 |  |  | 
| 603 | < | The hidden geometric properties of ODE and its flow play important | 
| 604 | < | roles in numerical studies. Many of them can be found in systems | 
| 605 | < | which occur naturally in applications. | 
| 603 | > | The hidden geometric properties\cite{Budd1999, Marsden1998} of an | 
| 604 | > | ODE and its flow play important roles in numerical studies. Many of | 
| 605 | > | them can be found in systems which occur naturally in applications. | 
| 606 |  |  | 
| 607 |  | Let $\varphi$ be the flow of a Hamiltonian vector field; $\varphi$ is | 
| 608 |  | a \emph{symplectic} flow if it satisfies, | 
| 617 |  | \begin{equation} | 
| 618 |  | {\varphi '}^T J \varphi ' = J \circ \varphi | 
| 619 |  | \end{equation} | 
| 620 | < | is the property must be preserved by the integrator. | 
| 620 | > | is the property that must be preserved by the integrator. | 
| 621 |  |  | 
| 622 |  | It is possible to construct a \emph{volume-preserving} flow for a | 
| 623 | < | source free($ \nabla \cdot f = 0 $) ODE, if the flow satisfies $ | 
| 623 | > | source free ODE ($ \nabla \cdot f = 0 $), if the flow satisfies $ | 
| 624 |  | \det d\varphi  = 1$. One can show easily that a symplectic flow will | 
| 625 |  | be volume-preserving. | 
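As a small numerical check of this condition (using the unit-mass, unit-frequency harmonic oscillator as a stand-in Hamiltonian, which is not a system discussed above), one can compute the Jacobian of a single leapfrog step and verify that it satisfies the symplectic condition and preserves phase space volume:
\begin{verbatim}
import numpy as np

h = 0.1
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def leapfrog_step(x, h):
    """One velocity-Verlet step for H = p^2/2 + q^2/2 (force F = -q)."""
    q, p = x
    p_half = p - 0.5 * h * q           # half kick
    q_new = q + h * p_half             # full drift
    p_new = p_half - 0.5 * h * q_new   # half kick
    return np.array([q_new, p_new])

# The step is linear, so its Jacobian is obtained column by column.
M = np.column_stack([leapfrog_step(e, h) for e in np.eye(2)])
print(np.allclose(M.T @ J @ M, J))   # True: the map is symplectic
print(np.linalg.det(M))              # 1.0: volume preserving
\end{verbatim}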
| 626 |  |  | 
| 627 | < | Changing the variables $y = h(x)$ in a ODE\ref{introEquation:ODE} | 
| 628 | < | will result in a new system, | 
| 627 | > | Changing the variables $y = h(x)$ in an ODE | 
| 628 | > | (Eq.~\ref{introEquation:ODE}) will result in a new system, | 
| 629 |  | \[ | 
| 630 |  | \dot y = \tilde f(y) = ((dh \cdot f)h^{ - 1} )(y). | 
| 631 |  | \] | 
| 646 |  | which is the condition for conserving a \emph{first integral}. For a | 
| 647 |  | canonical Hamiltonian system, the time evolution of an arbitrary | 
| 648 |  | smooth function $G$ is given by, | 
| 649 | < | \begin{equation} | 
| 650 | < | \begin{array}{c} | 
| 651 | < | \frac{{dG(x(t))}}{{dt}} = [\nabla _x G(x(t))]^T \dot x(t) \\ | 
| 652 | < | = [\nabla _x G(x(t))]^T J\nabla _x H(x(t)). \\ | 
| 665 | < | \end{array} | 
| 649 | > |  | 
| 650 | > | \begin{eqnarray} | 
| 651 | > | \frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \nonumber \\ | 
| 652 | > | & = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)). | 
| 653 |  | \label{introEquation:firstIntegral1} | 
| 654 | < | \end{equation} | 
| 654 | > | \end{eqnarray} | 
| 655 | > |  | 
| 656 | > |  | 
| 657 |  | Using Poisson bracket notation, | 
| 658 |  | Eq.~\ref{introEquation:firstIntegral1} can be rewritten as | 
| 659 |  | \[ | 
| 668 |  | is a \emph{first integral}, which is due to the fact $\{ H,H\}  = | 
| 669 |  | 0$. | 
| 670 |  |  | 
| 671 | < |  | 
| 683 | < | When designing any numerical methods, one should always try to | 
| 671 | > | When designing any numerical methods, one should always try to | 
| 672 |  | preserve the structural properties of the original ODE and its flow. | 
| 673 |  |  | 
| 674 |  | \subsection{\label{introSection:constructionSymplectic}Construction of Symplectic Methods} | 
| 675 |  | A lot of well established and very effective numerical methods have | 
| 676 |  | been successful precisely because of their symplecticity even | 
| 677 |  | though this fact was not recognized when they were first | 
| 678 | < | constructed. The most famous example is leapfrog methods in | 
| 679 | < | molecular dynamics. In general, symplectic integrators can be | 
| 678 | > | constructed. The most famous example is the Verlet-leapfrog method | 
| 679 | > | in molecular dynamics. In general, symplectic integrators can be | 
| 680 |  | constructed using one of four different methods. | 
| 681 |  | \begin{enumerate} | 
| 682 |  | \item Generating functions | 
| 685 |  | \item Splitting methods | 
| 686 |  | \end{enumerate} | 
| 687 |  |  | 
| 688 | < | Generating function tends to lead to methods which are cumbersome | 
| 689 | < | and difficult to use. In dissipative systems, variational methods | 
| 690 | < | can capture the decay of energy accurately. Since their | 
| 691 | < | geometrically unstable nature against non-Hamiltonian perturbations, | 
| 692 | < | ordinary implicit Runge-Kutta methods are not suitable for | 
| 693 | < | Hamiltonian system. Recently, various high-order explicit | 
| 694 | < | Runge--Kutta methods have been developed to overcome this | 
| 688 | > | Generating functions\cite{Channell1990} tend to lead to methods | 
| 689 | > | which are cumbersome and difficult to use. In dissipative systems, | 
| 690 | > | variational methods can capture the decay of energy | 
| 691 | > | accurately\cite{Kane2000}. Because of their geometric instability | 
| 692 | > | against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta | 
| 693 | > | methods are not suitable for Hamiltonian systems. Recently, various | 
| 694 | > | high-order explicit Runge-Kutta methods | 
| 695 | > | \cite{Owren1992,Chen2003} have been developed to overcome this | 
| 696 |  | instability. However, due to the computational penalty involved in | 
| 697 | < | implementing the Runge-Kutta methods, they do not attract too much | 
| 698 | < | attention from Molecular Dynamics community. Instead, splitting have | 
| 699 | < | been widely accepted since they exploit natural decompositions of | 
| 700 | < | the system\cite{Tuckerman92}. | 
| 697 | > | implementing the Runge-Kutta methods, they have not attracted much | 
| 698 | > | attention from the Molecular Dynamics community. Instead, splitting | 
| 699 | > | methods have been widely accepted since they exploit natural | 
| 700 | > | decompositions of the system\cite{Tuckerman1992, McLachlan1998}. | 
| 701 |  |  | 
| 702 | < | \subsubsection{\label{introSection:splittingMethod}Splitting Method} | 
| 702 | > | \subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}} | 
| 703 |  |  | 
| 704 |  | The main idea behind splitting methods is to decompose the discrete | 
| 705 |  | $\varphi_h$ as a composition of simpler flows, | 
| 720 |  | energy respectively, which is a natural decomposition of the | 
| 721 |  | problem. If $H_1$ and $H_2$ can be integrated using exact flows | 
| 722 |  | $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a simple first | 
| 723 | < | order is then given by the Lie-Trotter formula | 
| 723 | > | order method is then given by the Lie-Trotter formula | 
| 724 |  | \begin{equation} | 
| 725 |  | \varphi _h  = \varphi _{1,h}  \circ \varphi _{2,h}, | 
| 726 |  | \label{introEquation:firstOrderSplitting} | 
| 746 |  | \varphi _h  = \varphi _{1,h/2}  \circ \varphi _{2,h}  \circ \varphi | 
| 747 |  | _{1,h/2} , \label{introEquation:secondOrderSplitting} | 
| 748 |  | \end{equation} | 
| 749 | < | which has a local error proportional to $h^3$. Sprang splitting's | 
| 750 | < | popularity in molecular simulation community attribute to its | 
| 751 | < | symmetric property, | 
| 749 | > | which has a local error proportional to $h^3$. The Strang | 
| 750 | > | splitting's popularity in the molecular simulation community is | 
| 751 | > | attributable to its symmetric property, | 
| 752 |  | \begin{equation} | 
| 753 |  | \varphi _h^{ - 1} = \varphi _{ - h}. | 
| 754 |  | \label{introEquation:timeReversible} | 
| 755 |  | \end{equation} | 
| 756 |  |  | 
| 757 | < | \subsubsection{\label{introSection:exampleSplittingMethod}Example of Splitting Method} | 
| 757 | > | \subsubsection{\label{introSection:exampleSplittingMethod}\textbf{Examples of the Splitting Method}} | 
| 758 |  | The classical equation for a system consisting of interacting | 
| 759 |  | particles can be written in Hamiltonian form, | 
| 760 |  | \[ | 
| 761 |  | H = T + V | 
| 762 |  | \] | 
| 763 |  | where $T$ is the kinetic energy and $V$ is the potential energy. | 
| 764 | < | Setting $H_1 = T, H_2 = V$ and applying Strang splitting, one | 
| 764 | > | Setting $H_1 = T, H_2 = V$ and applying the Strang splitting, one | 
| 765 |  | obtains the following: | 
| 766 |  | \begin{align} | 
| 767 |  | q(\Delta t) &= q(0) + \dot{q}(0)\Delta t + | 
| 788 |  | \label{introEquation:Lp9b}\\% | 
| 789 |  | % | 
| 790 |  | \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) + | 
| 791 | < | \frac{\Delta t}{2m}\, F[q(0)]. \label{introEquation:Lp9c} | 
| 791 | > | \frac{\Delta t}{2m}\, F[q(\Delta t)]. \label{introEquation:Lp9c} | 
| 792 |  | \end{align} | 
| 793 |  | From the preceding splitting, one can see that the integration of | 
| 794 |  | the equations of motion would follow: | 
| 797 |  |  | 
| 798 |  | \item Use the half step velocities to move positions one whole step, $\Delta t$. | 
| 799 |  |  | 
| 800 | < | \item Evaluate the forces at the new positions, $\mathbf{r}(\Delta t)$, and use the new forces to complete the velocity move. | 
| 800 | > | \item Evaluate the forces at the new positions, $\mathbf{q}(\Delta t)$, and use the new forces to complete the velocity move. | 
| 801 |  |  | 
| 802 |  | \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values. | 
| 803 |  | \end{enumerate} | 
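The enumerated steps above translate directly into a few lines of code. The following minimal Python sketch is one possible realization; the force function, masses and time step are hypothetical arguments supplied by the caller rather than quantities defined in this chapter.
\begin{verbatim}
import numpy as np

def velocity_verlet(q, v, force, m, dt, nsteps):
    """Integrate m q'' = F(q) by the velocity Verlet scheme:
    half kick -- drift -- recompute forces -- half kick."""
    F = force(q)
    for _ in range(nsteps):
        v = v + 0.5 * dt * F / m     # move velocities half a step
        q = q + dt * v               # move positions one whole step
        F = force(q)                 # forces at the new positions
        v = v + 0.5 * dt * F / m     # complete the velocity move
    return q, v

# Hypothetical usage: a single particle in a harmonic well, F(q) = -k q.
k, m = 1.0, 1.0
q, v = np.array([1.0]), np.array([0.0])
q, v = velocity_verlet(q, v, lambda q: -k * q, m, dt=0.01, nsteps=1000)
print(q, v)
\end{verbatim}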
| 804 |  |  | 
| 805 | < | Simply switching the order of splitting and composing, a new | 
| 806 | < | integrator, the \emph{position verlet} integrator, can be generated, | 
| 805 | > | By simply switching the order of the propagators in the splitting | 
| 806 | > | and composing, a new integrator, the \emph{position verlet} | 
| 807 | > | integrator, can be generated, | 
| 808 |  | \begin{align} | 
| 809 |  | \dot q(\Delta t) &= \dot q(0) + \Delta tF(q(0))\left[ {q(0) + | 
| 810 |  | \frac{{\Delta t}}{{2m}}\dot q(0)} \right], % | 
| 815 |  | \label{introEquation:positionVerlet2} | 
| 816 |  | \end{align} | 
| 817 |  |  | 
| 818 | < | \subsubsection{\label{introSection:errorAnalysis}Error Analysis and Higher Order Methods} | 
| 818 | > | \subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}} | 
| 819 |  |  | 
| 820 | < | Baker-Campbell-Hausdorff formula can be used to determine the local | 
| 821 | < | error of splitting method in terms of commutator of the | 
| 820 | > | The Baker-Campbell-Hausdorff formula can be used to determine the | 
| 821 | > | local error of the splitting method in terms of the commutator of the | 
| 822 |  | operators (\ref{introEquation:exponentialOperator}) associated with | 
| 823 | < | the sub-flow. For operators $hX$ and $hY$ which are associate to | 
| 823 | > | the sub-flows. For operators $hX$ and $hY$ which are associated with | 
| 824 |  | $\varphi_1(t)$ and $\varphi_2(t)$ respectively, we have | 
| 825 |  | \begin{equation} | 
| 826 |  | \exp (hX + hY) = \exp (hZ) | 
| 834 |  | \[ | 
| 835 |  | [X,Y] = XY - YX . | 
| 836 |  | \] | 
| 837 | < | Applying Baker-Campbell-Hausdorff formula to Sprang splitting, we | 
| 838 | < | can obtain | 
| 839 | < | \begin{equation} | 
| 840 | < | \exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 | 
| 841 | < | [X,Y]/4 + h^2 [Y,X]/4 \\ & & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\ | 
| 842 | < | & & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3 [X,[X,Y]]/24 & & \mbox{} + | 
| 843 | < | \ldots ) | 
| 844 | < | \end{equation} | 
| 855 | < | Since \[ [X,Y] + [Y,X] = 0\] and \[ [X,X] = 0\], the dominant local | 
| 837 | > | Applying the Baker-Campbell-Hausdorff formula\cite{Varadarajan1974} | 
| 838 | > | to the Strang splitting, we can obtain | 
| 839 | > | \begin{eqnarray*} | 
| 840 | > | \exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 [X,Y]/4 + h^2 [Y,X]/4 \\ | 
| 841 | > | &   & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\ | 
| 842 | > | &   & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots ) | 
| 843 | > | \end{eqnarray*} | 
| 844 | > | Since $ [X,Y] + [Y,X] = 0$ and $ [X,X] = 0$, the dominant local | 
| 845 |  | error of the Strang splitting is proportional to $h^3$. The same | 
| 846 | < | procedure can be applied to general splitting,  of the form | 
| 846 | > | procedure can be applied to a general splitting of the form | 
| 847 |  | \begin{equation} | 
| 848 |  | \varphi _{b_m h}^2  \circ \varphi _{a_m h}^1  \circ \varphi _{b_{m - | 
| 849 |  | 1} h}^2  \circ  \ldots  \circ \varphi _{a_1 h}^1 . | 
| 850 |  | \end{equation} | 
| 851 | < | Careful choice of coefficient $a_1 ,\ldot , b_m$ will lead to higher | 
| 852 | < | order method. Yoshida proposed an elegant way to compose higher | 
| 853 | < | order methods based on symmetric splitting. Given a symmetric second | 
| 854 | < | order base method $ \varphi _h^{(2)} $, a fourth-order symmetric | 
| 855 | < | method can be constructed by composing, | 
| 851 | > | A careful choice of the coefficients $a_1, \ldots, b_m$ will lead to higher | 
| 852 | > | order methods. Yoshida proposed an elegant way to compose higher | 
| 853 | > | order methods based on symmetric splitting\cite{Yoshida1990}. Given | 
| 854 | > | a symmetric second order base method $ \varphi _h^{(2)} $, a | 
| 855 | > | fourth-order symmetric method can be constructed by composing, | 
| 856 |  | \[ | 
| 857 |  | \varphi _h^{(4)}  = \varphi _{\alpha h}^{(2)}  \circ \varphi _{\beta | 
| 858 |  | h}^{(2)}  \circ \varphi _{\alpha h}^{(2)} | 
| 862 |  | integrator $ \varphi _h^{(2n + 2)}$ can be composed by | 
| 863 |  | \begin{equation} | 
| 864 |  | \varphi _h^{(2n + 2)}  = \varphi _{\alpha h}^{(2n)}  \circ \varphi | 
| 865 | < | _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)} | 
| 865 | > | _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)}, | 
| 866 |  | \end{equation} | 
| 867 | < | , if the weights are chosen as | 
| 867 | > | if the weights are chosen as | 
| 868 |  | \[ | 
| 869 |  | \alpha  = \frac{1}{{2 - 2^{1/(2n + 1)} }},\beta = | 
| 870 |  |  - \frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }} . | 
| 901 |  | \end{enumerate} | 
| 902 |  | These three individual steps will be covered in the following | 
| 903 |  | sections. Sec.~\ref{introSec:initialSystemSettings} deals with the | 
| 904 | < | initialization of a simulation. Sec.~\ref{introSec:production} will | 
| 905 | < | discusses issues in production run. Sec.~\ref{introSection:Analysis} | 
| 906 | < | provides the theoretical tools for trajectory analysis. | 
| 904 | > | initialization of a simulation. Sec.~\ref{introSection:production} | 
| 905 | > | will discuss issues in the production run. | 
| 906 | > | Sec.~\ref{introSection:Analysis} provides the theoretical tools for | 
| 907 | > | trajectory analysis. | 
| 908 |  |  | 
| 909 |  | \subsection{\label{introSec:initialSystemSettings}Initialization} | 
| 910 |  |  | 
| 911 | < | \subsubsection{Preliminary preparation} | 
| 911 | > | \subsubsection{\textbf{Preliminary preparation}} | 
| 912 |  |  | 
| 913 |  | When selecting the starting structure of a molecule for molecular | 
| 914 |  | simulation, one may retrieve its Cartesian coordinates from public | 
| 915 |  | databases, such as RCSB Protein Data Bank \textit{etc}. Although | 
| 916 |  | thousands of crystal structures of molecules are discovered every | 
| 917 |  | year, many more remain unknown due to the difficulties of | 
| 918 | < | purification and crystallization. Even for the molecule with known | 
| 919 | < | structure, some important information is missing. For example, the | 
| 918 | > | purification and crystallization. Even for molecules with known | 
| 919 | > | structure, some important information is missing. For example, a | 
| 920 |  | missing hydrogen atom which acts as a donor in hydrogen bonding must | 
| 921 |  | be added. Moreover, in order to include electrostatic interactions, | 
| 922 |  | one may need to specify the partial charges for individual atoms. | 
| 923 |  | Under some circumstances, we may even need to prepare the system in | 
| 924 | < | a special setup. For instance, when studying transport phenomenon in | 
| 925 | < | membrane system, we may prepare the lipids in bilayer structure | 
| 926 | < | instead of placing lipids randomly in solvent, since we are not | 
| 927 | < | interested in self-aggregation and it takes a long time to happen. | 
| 924 | > | a special configuration. For instance, when studying transport | 
| 925 | > | phenomena in membrane systems, we may prepare the lipids in a | 
| 926 | > | bilayer structure instead of placing lipids randomly in solvent, | 
| 927 | > | since we are not interested in the slow self-aggregation process. | 
| 928 |  |  | 
| 929 | < | \subsubsection{Minimization} | 
| 929 | > | \subsubsection{\textbf{Minimization}} | 
| 930 |  |  | 
| 931 |  | It is quite possible that some of the molecules in the system from | 
| 932 | < | preliminary preparation may be overlapped with each other. This | 
| 933 | < | close proximity leads to high potential energy which consequently | 
| 934 | < | jeopardizes any molecular dynamics simulations. To remove these | 
| 935 | < | steric overlaps, one typically performs energy minimization to find | 
| 936 | < | a more reasonable conformation. Several energy minimization methods | 
| 937 | < | have been developed to exploit the energy surface and to locate the | 
| 938 | < | local minimum. While converging slowly near the minimum, steepest | 
| 939 | < | descent method is extremely robust when systems are far from | 
| 940 | < | harmonic. Thus, it is often used to refine structure from | 
| 941 | < | crystallographic data. Relied on the gradient or hessian, advanced | 
| 942 | < | methods like conjugate gradient and Newton-Raphson converge rapidly | 
| 943 | < | to a local minimum, while become unstable if the energy surface is | 
| 944 | < | far from quadratic. Another factor must be taken into account, when | 
| 932 | > | preliminary preparation may be overlapping with each other. This | 
| 933 | > | close proximity leads to high initial potential energy which | 
| 934 | > | consequently jeopardizes any molecular dynamics simulations. To | 
| 935 | > | remove these steric overlaps, one typically performs energy | 
| 936 | > | minimization to find a more reasonable conformation. Several energy | 
| 937 | > | minimization methods have been developed to explore the energy | 
| 938 | > | surface and to locate the local minimum. While converging slowly | 
| 939 | > | near the minimum, the steepest descent method is extremely robust when | 
| 940 | > | systems are strongly anharmonic. Thus, it is often used to refine | 
| 941 | > | structures from crystallographic data. Relying on the gradient or | 
| 942 | > | hessian, advanced methods like Newton-Raphson converge rapidly to a | 
| 943 | > | local minimum, but become unstable if the energy surface is far from | 
| 944 | > | quadratic. Another factor that must be taken into account, when | 
| 945 |  | choosing an energy minimization method, is the size of the system. | 
| 946 |  | Steepest descent and conjugate gradient can deal with models of any | 
| 947 | < | size. Because of the limit of computation power to calculate hessian | 
| 948 | < | matrix and insufficient storage capacity to store them, most | 
| 949 | < | Newton-Raphson methods can not be used with very large models. | 
| 947 | > | size. Because of the limits on computer memory to store the hessian | 
| 948 | > | matrix and the computing power needed to diagonalize these | 
| 949 | > | matrices, most Newton-Raphson methods can not be used with very | 
| 950 | > | large systems. | 
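A bare-bones version of the steepest descent idea is sketched below in Python; the fixed step size, the tolerance and the toy pair potential are invented for illustration and are far simpler than what a production minimizer would use.
\begin{verbatim}
import numpy as np

def steepest_descent(x, gradient, step=1.0e-3, tol=1.0e-6, max_iter=10000):
    """Walk downhill along -gradient until the gradient norm is below tol."""
    for _ in range(max_iter):
        g = gradient(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x

# Hypothetical example: relax one particle interacting with a fixed particle
# at the origin through U(r) = 1/r^12 - 2/r^6 (minimum at r = 1).
def gradient(x):
    r = np.linalg.norm(x)
    dUdr = -12.0 / r**13 + 12.0 / r**7
    return dUdr * x / r

x0 = np.array([0.9, 0.0, 0.0])          # too close: strongly repulsive
print(steepest_descent(x0, gradient))    # relaxes toward |x| = 1
\end{verbatim}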
| 951 |  |  | 
| 952 | < | \subsubsection{Heating} | 
| 952 | > | \subsubsection{\textbf{Heating}} | 
| 953 |  |  | 
| 954 |  | Typically, heating is performed by assigning random velocities | 
| 955 | < | according to a Gaussian distribution for a temperature. Beginning at | 
| 956 | < | a lower temperature and gradually increasing the temperature by | 
| 957 | < | assigning greater random velocities, we end up with setting the | 
| 958 | < | temperature of the system to a final temperature at which the | 
| 959 | < | simulation will be conducted. In heating phase, we should also keep | 
| 960 | < | the system from drifting or rotating as a whole. Equivalently, the | 
| 961 | < | net linear momentum and angular momentum of the system should be | 
| 962 | < | shifted to zero. | 
| 955 | > | according to a Maxwell-Boltzmann distribution for a desired | 
| 956 | > | temperature. Beginning at a lower temperature and gradually | 
| 957 | > | increasing the temperature by assigning larger random velocities, we | 
| 958 | > | end up setting the temperature of the system to a final | 
| 959 | > | temperature at which the simulation will be conducted. In the | 
| 960 | > | heating phase, we should also keep the system from drifting or | 
| 961 | > | rotating as a whole. To do this, the net linear momentum and angular | 
| 962 | > | momentum of the system are shifted to zero after each resampling | 
| 963 | > | from the Maxwell-Boltzmann distribution. | 
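A minimal Python sketch of this velocity assignment is given below; the function name, the use of reduced units with $k_B = 1$ and the uniform masses are assumptions made only for illustration. Removing the net angular momentum would follow the same pattern but also requires the positions and the inertia tensor.
\begin{verbatim}
import numpy as np

def assign_velocities(m, T, kB=1.0, seed=0):
    """Draw velocities from a Maxwell-Boltzmann distribution at temperature T
    and shift the net linear momentum of the system to zero."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(kB * T / m)[:, None]              # per-particle width
    v = rng.normal(size=(len(m), 3)) * sigma
    v -= np.sum(m[:, None] * v, axis=0) / np.sum(m)   # remove drift
    return v

# Hypothetical usage: 100 unit-mass particles at a reduced temperature of 0.5.
m = np.ones(100)
v = assign_velocities(m, T=0.5)
print(np.sum(m[:, None] * v, axis=0))   # net momentum ~ (0, 0, 0)
\end{verbatim}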
| 964 |  |  | 
| 965 | < | \subsubsection{Equilibration} | 
| 965 | > | \subsubsection{\textbf{Equilibration}} | 
| 966 |  |  | 
| 967 |  | The purpose of equilibration is to allow the system to evolve | 
| 968 |  | spontaneously for a period of time and reach equilibrium. The | 
| 976 |  |  | 
| 977 |  | \subsection{\label{introSection:production}Production} | 
| 978 |  |  | 
| 979 | < | Production run is the most important steps of the simulation, in | 
| 979 | > | The production run is the most important step of the simulation, in | 
| 980 |  | which the equilibrated structure is used as a starting point and the | 
| 981 |  | motions of the molecules are collected for later analysis. In order | 
| 982 |  | to capture the macroscopic properties of the system, the molecular | 
| 983 | < | dynamics simulation must be performed in correct and efficient way. | 
| 983 | > | dynamics simulation must be performed by sampling correctly and | 
| 984 | > | efficiently from the relevant thermodynamic ensemble. | 
| 985 |  |  | 
| 986 |  | The most expensive part of a molecular dynamics simulation is the | 
| 987 |  | calculation of non-bonded forces, such as van der Waals force and | 
| 988 |  | Coulombic forces \textit{etc}. For a system of $N$ particles, the | 
| 989 |  | complexity of the algorithm for pair-wise interactions is $O(N^2 )$, | 
| 990 |  | which makes large simulations prohibitive in the absence of any | 
| 991 | < | computation saving techniques. | 
| 991 | > | algorithmic tricks. | 
| 992 |  |  | 
| 993 | < | A natural approach to avoid system size issue is to represent the | 
| 993 | > | A natural approach to avoid system size issues is to represent the | 
| 994 |  | bulk behavior by a finite number of particles. However, this | 
| 995 | < | approach will suffer from the surface effect. To offset this, | 
| 996 | < | \textit{Periodic boundary condition} is developed to simulate bulk | 
| 995 | > | approach will suffer from the surface effect at the edges of the | 
| 996 | > | simulation. To offset this, \textit{Periodic boundary conditions} | 
| 997 | > | (see Fig.~\ref{introFig:pbc}) are used to simulate bulk | 
| 998 |  | properties with a relatively small number of particles. In this | 
| 999 |  | method, the simulation box is replicated throughout space to form an | 
| 1000 |  | infinite lattice. During the simulation, when a particle moves in | 
| 1002 |  | direction with exactly the same orientation. Thus, as a particle | 
| 1003 |  | leaves the primary cell, one of its images will enter through the | 
| 1004 |  | opposite face. | 
| 1005 | < | %\begin{figure} | 
| 1006 | < | %\centering | 
| 1007 | < | %\includegraphics[width=\linewidth]{pbcFig.eps} | 
| 1008 | < | %\caption[An illustration of periodic boundary conditions]{A 2-D | 
| 1009 | < | %illustration of periodic boundary conditions. As one particle leaves | 
| 1010 | < | %the right of the simulation box, an image of it enters the left.} | 
| 1011 | < | %\label{introFig:pbc} | 
| 1012 | < | %\end{figure} | 
| 1005 | > | \begin{figure} | 
| 1006 | > | \centering | 
| 1007 | > | \includegraphics[width=\linewidth]{pbc.eps} | 
| 1008 | > | \caption[An illustration of periodic boundary conditions]{A 2-D | 
| 1009 | > | illustration of periodic boundary conditions. As one particle leaves | 
| 1010 | > | the left of the simulation box, an image of it enters the right.} | 
| 1011 | > | \label{introFig:pbc} | 
| 1012 | > | \end{figure} | 
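A minimal Python sketch of how such a periodic wrapping might be applied is given below; the cubic box and the function names are assumptions made for this example only. The same rounding construction also yields the nearest periodic image of a separation vector, which is what is used when interparticle distances are computed under periodic boundary conditions.
\begin{verbatim}
import numpy as np

def wrap_positions(r, box):
    """Map particle positions back into the primary cubic cell [0, box)."""
    return r - box * np.floor(r / box)

def minimum_image(d, box):
    """Nearest periodic image of a separation vector d in a cubic box."""
    return d - box * np.round(d / box)

# Hypothetical usage: a particle that left the box re-enters on the other side.
box = 10.0
r = np.array([[10.3, -0.2, 5.0]])
print(wrap_positions(r, box))                          # [[0.3, 9.8, 5.0]]
print(minimum_image(np.array([9.0, 0.0, 0.0]), box))   # [-1.  0.  0.]
\end{verbatim}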
| 1013 |  |  | 
| 1014 |  | %cutoff and minimum image convention | 
| 1015 |  | Another important technique to improve the efficiency of force | 
| 1016 | < | evaluation is to apply cutoff where particles farther than a | 
| 1017 | < | predetermined distance, are not included in the calculation | 
| 1016 | > | evaluation is to apply a spherical cutoff where particles farther than | 
| 1017 | > | a predetermined distance are not included in the calculation | 
| 1018 |  | \cite{Frenkel1996}. The use of a cutoff radius will cause a | 
| 1019 |  | discontinuity in the potential energy curve. Fortunately, one can | 
| 1020 | < | shift the potential to ensure the potential curve go smoothly to | 
| 1021 | < | zero at the cutoff radius. Cutoff strategy works pretty well for | 
| 1022 | < | Lennard-Jones interaction because of its short range nature. | 
| 1023 | < | However, simply truncating the electrostatic interaction with the | 
| 1024 | < | use of cutoff has been shown to lead to severe artifacts in | 
| 1025 | < | simulations. Ewald summation, in which the slowly conditionally | 
| 1026 | < | convergent Coulomb potential is transformed into direct and | 
| 1027 | < | reciprocal sums with rapid and absolute convergence, has proved to | 
| 1028 | < | minimize the periodicity artifacts in liquid simulations. Taking the | 
| 1029 | < | advantages of the fast Fourier transform (FFT) for calculating | 
| 1030 | < | discrete Fourier transforms, the particle mesh-based methods are | 
| 1031 | < | accelerated from $O(N^{3/2})$ to $O(N logN)$. An alternative | 
| 1032 | < | approach is \emph{fast multipole method}, which treats Coulombic | 
| 1033 | < | interaction exactly at short range, and approximate the potential at | 
| 1034 | < | long range through multipolar expansion. In spite of their wide | 
| 1035 | < | acceptances at the molecular simulation community, these two methods | 
| 1036 | < | are hard to be implemented correctly and efficiently. Instead, we | 
| 1037 | < | use a damped and charge-neutralized Coulomb potential method | 
| 1038 | < | developed by Wolf and his coworkers. The shifted Coulomb potential | 
| 1039 | < | for particle $i$ and particle $j$ at distance $r_{rj}$ is given by: | 
| 1020 | > | shift a simple radial potential to ensure that the potential curve goes | 
| 1021 | > | smoothly to zero at the cutoff radius. The cutoff strategy works | 
| 1022 | > | well for the Lennard-Jones interaction because of its short-range | 
| 1023 | > | nature. However, simply truncating the electrostatic interaction | 
| 1024 | > | with the use of cutoffs has been shown to lead to severe artifacts | 
| 1025 | > | in simulations. The Ewald summation, in which the slowly decaying | 
| 1026 | > | Coulomb potential is transformed into direct and reciprocal sums | 
| 1027 | > | with rapid and absolute convergence, has proved to minimize the | 
| 1028 | > | periodicity artifacts in liquid simulations. Taking advantage | 
| 1029 | > | of the fast Fourier transform (FFT) for calculating discrete Fourier | 
| 1030 | > | transforms, the particle mesh-based | 
| 1031 | > | methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from | 
| 1032 | > | $O(N^{3/2})$ to $O(N \log N)$. An alternative approach is the | 
| 1033 | > | \emph{fast multipole method}\cite{Greengard1987, Greengard1994}, | 
| 1034 | > | which treats Coulombic interactions exactly at short range, and | 
| 1035 | > | approximates the potential at long range through a multipolar | 
| 1036 | > | expansion. In spite of their wide acceptance in the molecular | 
| 1037 | > | simulation community, these two methods are difficult to implement | 
| 1038 | > | correctly and efficiently. Instead, we use a damped and | 
| 1039 | > | charge-neutralized Coulomb potential method developed by Wolf and | 
| 1040 | > | his coworkers\cite{Wolf1999}. The shifted Coulomb potential for | 
| 1041 | > | particle $i$ and particle $j$ at distance $r_{ij}$ is given by: | 
| 1042 |  | \begin{equation} | 
| 1043 |  | V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha | 
| 1044 |  | r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow | 
| 1048 |  | where $\alpha$ is the convergence parameter. Due to the lack of | 
| 1049 |  | inherent periodicity and its rapid convergence, this method is extremely | 
| 1050 |  | efficient and easy to implement. | 
| 1051 | < | %\begin{figure} | 
| 1052 | < | %\centering | 
| 1053 | < | %\includegraphics[width=\linewidth]{pbcFig.eps} | 
| 1054 | < | %\caption[An illustration of shifted Coulomb potential]{An illustration of shifted Coulomb potential.} | 
| 1055 | < | %\label{introFigure:shiftedCoulomb} | 
| 1056 | < | %\end{figure} | 
| 1051 | > | \begin{figure} | 
| 1052 | > | \centering | 
| 1053 | > | \includegraphics[width=\linewidth]{shifted_coulomb.eps} | 
| 1054 | > | \caption[An illustration of shifted Coulomb potential]{An | 
| 1055 | > | illustration of shifted Coulomb potential.} | 
| 1056 | > | \label{introFigure:shiftedCoulomb} | 
| 1057 | > | \end{figure} | 
| 1058 |  |  | 
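|  | + | A minimal sketch of this damped, shifted pair potential is given | 
|  | + | below; it uses the complementary error function from \texttt{scipy}, | 
|  | + | omits electrostatic unit prefactors, and the damping parameter and | 
|  | + | cutoff radius are arbitrary illustrative values rather than | 
|  | + | recommended ones. | 
|  | + | \begin{verbatim} | 
|  | + | import numpy as np | 
|  | + | from scipy.special import erfc | 
|  | + |  | 
|  | + | alpha = 0.2                 # damping (convergence) parameter (assumed value) | 
|  | + | r_cut = 9.0                 # cutoff radius (assumed value) | 
|  | + |  | 
|  | + | def wolf_potential(qi, qj, rij): | 
|  | + |     """Damped Coulomb interaction shifted so that it vanishes at r_cut.""" | 
|  | + |     damped = qi * qj * erfc(alpha * rij) / rij | 
|  | + |     shift = qi * qj * erfc(alpha * r_cut) / r_cut | 
|  | + |     return np.where(rij < r_cut, damped - shift, 0.0) | 
|  | + |  | 
|  | + | r = np.linspace(1.0, 12.0, 5) | 
|  | + | print(wolf_potential(+1.0, -1.0, r))        # zero beyond the cutoff | 
|  | + | \end{verbatim} | 
|  | + |  | 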
| 1059 |  | %multiple time step | 
| 1060 |  |  | 
| 1061 |  | \subsection{\label{introSection:Analysis} Analysis} | 
| 1062 |  |  | 
| 1063 | < | Recently, advanced visualization technique are widely applied to | 
| 1063 | > | Recently, advanced visualization techniques have been widely applied to | 
| 1064 |  | monitor the motions of molecules. Although the dynamics of the | 
| 1065 |  | system can be described qualitatively from animation, quantitative | 
| 1066 | < | trajectory analysis are more appreciable. According to the | 
| 1067 | < | principles of Statistical Mechanics, | 
| 1068 | < | Sec.~\ref{introSection:statisticalMechanics}, one can compute | 
| 1069 | < | thermodynamics properties, analyze fluctuations of structural | 
| 1070 | < | parameters, and investigate time-dependent processes of the molecule | 
| 1074 | < | from the trajectories. | 
| 1066 | > | trajectory analyses are more useful. According to the principles of | 
| 1067 | > | Statistical Mechanics, Sec.~\ref{introSection:statisticalMechanics}, | 
| 1068 | > | one can compute thermodynamic properties, analyze fluctuations of | 
| 1069 | > | structural parameters, and investigate time-dependent processes of | 
| 1070 | > | the molecule from the trajectories. | 
| 1071 |  |  | 
| 1072 | < | \subsubsection{\label{introSection:thermodynamicsProperties}Thermodynamics Properties} | 
| 1072 | > | \subsubsection{\label{introSection:thermodynamicsProperties}\textbf{Thermodynamic Properties}} | 
| 1073 |  |  | 
| 1074 | < | Thermodynamics properties, which can be expressed in terms of some | 
| 1074 | > | Thermodynamic properties, which can be expressed in terms of some | 
| 1075 |  | function of the coordinates and momenta of all particles in the | 
| 1076 |  | system, can be directly computed from molecular dynamics. The usual | 
| 1077 |  | way to measure the pressure is based on the virial theorem of Clausius | 
| 1091 |  | < j} {r{}_{ij} \cdot f_{ij} } } \right\rangle | 
| 1092 |  | \end{equation} | 
| 1093 |  |  | 
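|  | + | As a sketch of how such an estimator is evaluated in practice, the | 
|  | + | snippet below computes a single-configuration estimate from the | 
|  | + | ideal-gas contribution plus the pair virial, assuming the form | 
|  | + | $P = N k_B T/V + \frac{1}{3V}\sum_{i<j} r_{ij} \cdot f_{ij}$; the | 
|  | + | pair vectors and forces are placeholder toy data in reduced units, | 
|  | + | not output from an actual simulation. | 
|  | + | \begin{verbatim} | 
|  | + | import numpy as np | 
|  | + |  | 
|  | + | def virial_pressure(kT, volume, n_particles, rij, fij): | 
|  | + |     """Instantaneous pressure: N*kT/V + (1/3V) * sum_{i<j} r_ij . f_ij.""" | 
|  | + |     pair_virial = np.sum(np.einsum('ij,ij->i', rij, fij)) | 
|  | + |     return n_particles * kT / volume + pair_virial / (3.0 * volume) | 
|  | + |  | 
|  | + | # toy data: two pairs with separation vectors and pair forces (reduced units) | 
|  | + | rij = np.array([[1.0, 0.0, 0.0], [0.0, 1.2, 0.0]]) | 
|  | + | fij = np.array([[0.5, 0.0, 0.0], [0.0, -0.1, 0.0]]) | 
|  | + | print(virial_pressure(kT=1.0, volume=1000.0, n_particles=100, rij=rij, fij=fij)) | 
|  | + | \end{verbatim} | 
|  | + |  | 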
| 1094 | < | \subsubsection{\label{introSection:structuralProperties}Structural Properties} | 
| 1094 | > | \subsubsection{\label{introSection:structuralProperties}\textbf{Structural Properties}} | 
| 1095 |  |  | 
| 1096 |  | Structural properties of a simple fluid can be described by a set of | 
| 1097 | < | distribution functions. Among these functions,\emph{pair | 
| 1097 | > | distribution functions. Among these functions, the \emph{pair | 
| 1098 |  | distribution function}, also known as \emph{radial distribution | 
| 1099 | < | function}, is of most fundamental importance to liquid-state theory. | 
| 1100 | < | Pair distribution function can be gathered by Fourier transforming | 
| 1101 | < | raw data from a series of neutron diffraction experiments and | 
| 1102 | < | integrating over the surface factor \cite{Powles73}. The experiment | 
| 1103 | < | result can serve as a criterion to justify the correctness of the | 
| 1104 | < | theory. Moreover, various equilibrium thermodynamic and structural | 
| 1105 | < | properties can also be expressed in terms of radial distribution | 
| 1106 | < | function \cite{allen87:csl}. | 
| 1099 | > | function}, is of most fundamental importance to liquid theory. | 
| 1100 | > | Experimentally, the pair distribution function can be obtained by | 
| 1101 | > | Fourier transforming raw data from a series of neutron diffraction | 
| 1102 | > | experiments and integrating over the structure factor | 
| 1103 | > | \cite{Powles1973}. The experimental results can serve as a criterion | 
| 1104 | > | to justify the correctness of a liquid model. Moreover, various | 
| 1105 | > | equilibrium thermodynamic and structural properties can also be | 
| 1106 | > | expressed in terms of the radial distribution function \cite{Allen1987}. | 
| 1107 |  |  | 
| 1108 | < | A pair distribution functions $g(r)$ gives the probability that a | 
| 1108 | > | The pair distribution functions $g(r)$ gives the probability that a | 
| 1109 |  | particle $i$ will be located at a distance $r$ from another | 
| 1110 |  | particle $j$ in the system | 
| 1111 |  | \[ | 
| 1112 |  | g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j | 
| 1113 | < | \ne i} {\delta (r - r_{ij} )} } } \right\rangle. | 
| 1113 | > | \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho | 
| 1114 | > | (r)}{\rho}. | 
| 1115 |  | \] | 
| 1116 |  | Note that the delta function can be replaced by a histogram in | 
| 1117 | < | computer simulation. Figure | 
| 1118 | < | \ref{introFigure:pairDistributionFunction} shows a typical pair | 
| 1119 | < | distribution function for the liquid argon system. The occurrence of | 
| 1123 | < | several peaks in the plot of $g(r)$ suggests that it is more likely | 
| 1124 | < | to find particles at certain radial values than at others. This is a | 
| 1125 | < | result of the attractive interaction at such distances. Because of | 
| 1126 | < | the strong repulsive forces at short distance, the probability of | 
| 1127 | < | locating particles at distances less than about 2.5{\AA} from each | 
| 1128 | < | other is essentially zero. | 
| 1117 | > | computer simulation. Peaks in $g(r)$ represent solvent shells, and | 
| 1118 | > | the heights of these peaks gradually decay to 1 at large distances, | 
| 1119 | > | where the local density approaches the bulk density. | 
| 1120 |  |  | 
| 1130 | – | %\begin{figure} | 
| 1131 | – | %\centering | 
| 1132 | – | %\includegraphics[width=\linewidth]{pdf.eps} | 
| 1133 | – | %\caption[Pair distribution function for the liquid argon | 
| 1134 | – | %]{Pair distribution function for the liquid argon} | 
| 1135 | – | %\label{introFigure:pairDistributionFunction} | 
| 1136 | – | %\end{figure} | 
| 1121 |  |  | 
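|  | + | In a simulation the ensemble average above is accumulated as a | 
|  | + | histogram of pair distances and normalized by the corresponding | 
|  | + | ideal-gas counts; a minimal single-configuration sketch for a cubic | 
|  | + | periodic box (box size, particle number, and bin width are arbitrary | 
|  | + | here) is: | 
|  | + | \begin{verbatim} | 
|  | + | import numpy as np | 
|  | + |  | 
|  | + | def pair_distribution(pos, box, dr=0.1): | 
|  | + |     """Histogram estimate of g(r) for one configuration in a cubic box.""" | 
|  | + |     n = len(pos) | 
|  | + |     bins = np.arange(0.0, box / 2.0 + dr, dr) | 
|  | + |     hist = np.zeros(len(bins) - 1) | 
|  | + |     for i in range(n - 1): | 
|  | + |         rij = pos[i + 1:] - pos[i] | 
|  | + |         rij -= box * np.round(rij / box)                 # minimum image convention | 
|  | + |         hist += np.histogram(np.linalg.norm(rij, axis=1), bins=bins)[0] | 
|  | + |     shell = 4.0 / 3.0 * np.pi * (bins[1:] ** 3 - bins[:-1] ** 3) | 
|  | + |     ideal = shell * n * (n - 1) / 2.0 / box ** 3         # ideal-gas pair counts | 
|  | + |     return 0.5 * (bins[1:] + bins[:-1]), hist / ideal | 
|  | + |  | 
|  | + | pos = np.random.rand(200, 3) * 10.0                      # an ideal-gas configuration | 
|  | + | r, g = pair_distribution(pos, box=10.0) | 
|  | + | print(g.round(2))    # approaches 1 for an ideal gas, apart from noisy small-r bins | 
|  | + | \end{verbatim} | 
|  | + |  | 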
| 1122 | < | \subsubsection{\label{introSection:timeDependentProperties}Time-dependent | 
| 1123 | < | Properties} | 
| 1122 | > | \subsubsection{\label{introSection:timeDependentProperties}\textbf{Time-dependent | 
| 1123 | > | Properties}} | 
| 1124 |  |  | 
| 1125 |  | Time-dependent properties are usually calculated using \emph{time | 
| 1126 | < | correlation function}, which correlates random variables $A$ and $B$ | 
| 1127 | < | at two different time | 
| 1126 | > | correlation functions}, which correlate random variables $A$ and $B$ | 
| 1127 | > | at two different times, | 
| 1128 |  | \begin{equation} | 
| 1129 |  | C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle. | 
| 1130 |  | \label{introEquation:timeCorrelationFunction} | 
| 1131 |  | \end{equation} | 
| 1132 |  | If $A$ and $B$ refer to the same variable, this kind of correlation | 
| 1133 | < | function is called \emph{auto correlation function}. One example of | 
| 1134 | < | auto correlation function is velocity auto-correlation function | 
| 1135 | < | which is directly related to transport properties of molecular | 
| 1136 | < | liquids: | 
| 1133 | > | function is called an \emph{autocorrelation function}. One example | 
| 1134 | > | of an autocorrelation function is the velocity autocorrelation | 
| 1135 | > | function which is directly related to transport properties of | 
| 1136 | > | molecular liquids: | 
| 1137 |  | \[ | 
| 1138 |  | D = \frac{1}{3}\int\limits_0^\infty  {\left\langle {v(t) \cdot v(0)} | 
| 1139 |  | \right\rangle } dt | 
| 1140 |  | \] | 
| 1141 | < | where $D$ is diffusion constant. Unlike velocity autocorrelation | 
| 1142 | < | function which is averaging over time origins and over all the | 
| 1143 | < | atoms, dipole autocorrelation are calculated for the entire system. | 
| 1144 | < | The dipole autocorrelation function is given by: | 
| 1141 | > | where $D$ is the diffusion constant. Unlike the velocity autocorrelation | 
| 1142 | > | function, which is averaged over time origins and over all the | 
| 1143 | > | atoms, the dipole autocorrelation function is calculated for the | 
| 1144 | > | entire system. The dipole autocorrelation function is given by: | 
| 1145 |  | \[ | 
| 1146 |  | c_{dipole}  = \left\langle {u_{tot} (t) \cdot u_{tot} (0)} | 
| 1147 |  | \right\rangle | 
| 1167 |  | areas, from engineering, physics, to chemistry. For example, | 
| 1168 |  | missiles and vehicles are usually modeled by rigid bodies.  The | 
| 1169 |  | movement of the objects in a 3D gaming engine or other physics | 
| 1170 | < | simulator is governed by the rigid body dynamics. In molecular | 
| 1171 | < | simulation, rigid body is used to simplify the model in | 
| 1172 | < | protein-protein docking study{\cite{Gray03}}. | 
| 1170 | > | simulator is governed by rigid body dynamics. In molecular | 
| 1171 | > | simulations, rigid bodies are used to simplify protein-protein | 
| 1172 | > | docking studies\cite{Gray2003}. | 
| 1173 |  |  | 
| 1174 |  | It is very important to develop stable and efficient methods to | 
| 1175 | < | integrate the equations of motion of orientational degrees of | 
| 1176 | < | freedom. Euler angles are the nature choice to describe the | 
| 1177 | < | rotational degrees of freedom. However, due to its singularity, the | 
| 1178 | < | numerical integration of corresponding equations of motion is very | 
| 1179 | < | inefficient and inaccurate. Although an alternative integrator using | 
| 1180 | < | different sets of Euler angles can overcome this difficulty\cite{}, | 
| 1181 | < | the computational penalty and the lost of angular momentum | 
| 1182 | < | conservation still remain. A singularity free representation | 
| 1183 | < | utilizing quaternions was developed by Evans in 1977. Unfortunately, | 
| 1184 | < | this approach suffer from the nonseparable Hamiltonian resulted from | 
| 1175 | > | integrate the equations of motion for orientational degrees of | 
| 1176 | > | freedom. Euler angles are the natural choice to describe the | 
| 1177 | > | rotational degrees of freedom. However, due to $\frac{1}{\sin | 
| 1178 | > | \theta}$ singularities, the numerical integration of the corresponding | 
| 1179 | > | equations of motion is very inefficient and inaccurate. Although an | 
| 1180 | > | alternative integrator using multiple sets of Euler angles can | 
| 1181 | > | overcome this difficulty\cite{Barojas1973}, the computational | 
| 1182 | > | penalty and the loss of angular momentum conservation still remain. | 
| 1183 | > | A singularity-free representation utilizing quaternions was | 
| 1184 | > | developed by Evans in 1977\cite{Evans1977}. Unfortunately, this | 
| 1185 | > | approach uses a nonseparable Hamiltonian resulting from the | 
| 1186 |  | quaternion representation, which prevents the symplectic algorithm | 
| 1187 |  | from being utilized. A different approach is to apply holonomic | 
| 1188 |  | constraints to the atoms belonging to the rigid body. Each atom | 
| 1189 |  | moves independently under the normal forces derived from the potential | 
| 1190 |  | energy and constraint forces which are used to guarantee the | 
| 1191 | < | rigidness. However, due to their iterative nature, SHAKE and Rattle | 
| 1192 | < | algorithm converge very slowly when the number of constraint | 
| 1193 | < | increases. | 
| 1191 | > | rigidness. However, due to their iterative nature, the SHAKE and | 
| 1192 | > | Rattle algorithms also converge very slowly when the number of | 
| 1193 | > | constraints increases\cite{Ryckaert1977, Andersen1983}. | 
| 1194 |  |  | 
| 1195 | < | The break through in geometric literature suggests that, in order to | 
| 1195 | > | A breakthrough in the geometric literature suggests that, in order to | 
| 1196 |  | develop a long-term integration scheme, one should preserve the | 
| 1197 | < | symplectic structure of the flow. Introducing conjugate momentum to | 
| 1198 | < | rotation matrix $Q$ and re-formulating Hamiltonian's equation, a | 
| 1199 | < | symplectic integrator, RSHAKE, was proposed to evolve the | 
| 1200 | < | Hamiltonian system in a constraint manifold by iteratively | 
| 1201 | < | satisfying the orthogonality constraint $Q_T Q = 1$. An alternative | 
| 1202 | < | method using quaternion representation was developed by Omelyan. | 
| 1203 | < | However, both of these methods are iterative and inefficient. In | 
| 1204 | < | this section, we will present a symplectic Lie-Poisson integrator | 
| 1205 | < | for rigid body developed by Dullweber and his | 
| 1206 | < | coworkers\cite{Dullweber1997} in depth. | 
| 1197 | > | symplectic structure of the flow. By introducing a conjugate | 
| 1198 | > | momentum to the rotation matrix $Q$ and re-formulating Hamiltonian's | 
| 1199 | > | equation, a symplectic integrator, RSHAKE\cite{Kol1997}, was | 
| 1200 | > | proposed to evolve the Hamiltonian system in a constraint manifold | 
| 1201 | > | by iteratively satisfying the orthogonality constraint $Q^T Q = 1$. | 
| 1202 | > | An alternative method using the quaternion representation was | 
| 1203 | > | developed by Omelyan\cite{Omelyan1998}. However, both of these | 
| 1204 | > | methods are iterative and inefficient. In this section, we describe a | 
| 1205 | > | symplectic Lie-Poisson integrator for rigid body developed by | 
| 1206 | > | Dullweber and his coworkers\cite{Dullweber1997} in depth. | 
| 1207 |  |  | 
| 1208 | < | \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Body} | 
| 1209 | < | The motion of the rigid body is Hamiltonian with the Hamiltonian | 
| 1208 | > | \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies} | 
| 1209 | > | The motion of a rigid body is Hamiltonian with the Hamiltonian | 
| 1210 |  | function | 
| 1211 |  | \begin{equation} | 
| 1212 |  | H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) + | 
| 1220 |  | I_{ii}^{ - 1}  = \frac{1}{2}\sum\limits_{j \ne i} {J_{jj}^{ - 1} } | 
| 1221 |  | \] | 
| 1222 |  | where $I_{ii}$ is the diagonal element of the inertia tensor. This | 
| 1223 | < | constrained Hamiltonian equation subjects to a holonomic constraint, | 
| 1223 | > | constrained Hamiltonian equation is subject to a holonomic | 
| 1224 | > | constraint, | 
| 1225 |  | \begin{equation} | 
| 1226 |  | Q^T Q = 1, \label{introEquation:orthogonalConstraint} | 
| 1227 |  | \end{equation} | 
| 1228 | < | which is used to ensure rotation matrix's orthogonality. | 
| 1229 | < | Differentiating \ref{introEquation:orthogonalConstraint} and using | 
| 1230 | < | Equation \ref{introEquation:RBMotionMomentum}, one may obtain, | 
| 1228 | > | which is used to ensure the rotation matrix's unitarity. Differentiating | 
| 1229 | > | \ref{introEquation:orthogonalConstraint} and using Equation | 
| 1230 | > | \ref{introEquation:RBMotionMomentum}, one may obtain, | 
| 1231 |  | \begin{equation} | 
| 1232 |  | Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0 . \\ | 
| 1233 |  | \label{introEquation:RBFirstOrderConstraint} | 
| 1236 |  | Using Equation (\ref{introEquation:motionHamiltonianCoordinate}, | 
| 1237 |  | \ref{introEquation:motionHamiltonianMomentum}), one can write down | 
| 1238 |  | the equations of motion, | 
| 1253 | – | \[ | 
| 1254 | – | \begin{array}{c} | 
| 1255 | – | \frac{{dq}}{{dt}} = \frac{p}{m} \label{introEquation:RBMotionPosition}\\ | 
| 1256 | – | \frac{{dp}}{{dt}} =  - \nabla _q V(q,Q) \label{introEquation:RBMotionMomentum}\\ | 
| 1257 | – | \frac{{dQ}}{{dt}} = PJ^{ - 1}  \label{introEquation:RBMotionRotation}\\ | 
| 1258 | – | \frac{{dP}}{{dt}} =  - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}\\ | 
| 1259 | – | \end{array} | 
| 1260 | – | \] | 
| 1239 |  |  | 
| 1240 | + | \begin{eqnarray} | 
| 1241 | + | \frac{{dq}}{{dt}} & = & \frac{p}{m} \label{introEquation:RBMotionPosition}\\ | 
| 1242 | + | \frac{{dp}}{{dt}} & = & - \nabla _q V(q,Q) \label{introEquation:RBMotionMomentum}\\ | 
| 1243 | + | \frac{{dQ}}{{dt}} & = & PJ^{ - 1}  \label{introEquation:RBMotionRotation}\\ | 
| 1244 | + | \frac{{dP}}{{dt}} & = & - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP} | 
| 1245 | + | \end{eqnarray} | 
| 1246 | + |  | 
| 1247 |  | In general, there are two ways to satisfy the holonomic constraints. | 
| 1248 | < | We can use constraint force provided by lagrange multiplier on the | 
| 1249 | < | normal manifold to keep the motion on constraint space. Or we can | 
| 1250 | < | simply evolve the system in constraint manifold. The two method are | 
| 1251 | < | proved to be equivalent. The holonomic constraint and equations of | 
| 1252 | < | motions define a constraint manifold for rigid body | 
| 1248 | > | We can use a constraint force provided by a Lagrange multiplier on | 
| 1249 | > | the normal manifold to keep the motion on the constraint space, or we | 
| 1250 | > | can simply evolve the system on the constraint manifold. These two | 
| 1251 | > | methods have been proved to be equivalent. The holonomic constraint | 
| 1252 | > | and equations of motions define a constraint manifold for rigid | 
| 1253 | > | bodies | 
| 1254 |  | \[ | 
| 1255 |  | M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0} | 
| 1256 |  | \right\}. | 
| 1259 |  | Unfortunately, this constraint manifold is not the cotangent bundle | 
| 1260 |  | $T_{\star}SO(3)$. However, it turns out that under symplectic | 
| 1261 |  | transformation, the cotangent space and the phase space are | 
| 1262 | < | diffeomorphic. Introducing | 
| 1262 | > | diffeomorphic. By introducing | 
| 1263 |  | \[ | 
| 1264 |  | \tilde Q = Q,\tilde P = \frac{1}{2}\left( {P - QP^T Q} \right), | 
| 1265 |  | \] | 
| 1291 |  | respectively. | 
| 1292 |  |  | 
| 1293 |  | As a common choice to describe the rotation dynamics of the rigid | 
| 1294 | < | body, angular momentum on body frame $\Pi  = Q^t P$ is introduced to | 
| 1295 | < | rewrite the equations of motion, | 
| 1294 | > | body, the angular momentum in the body-fixed frame $\Pi  = Q^T P$ is | 
| 1295 | > | introduced to rewrite the equations of motion, | 
| 1296 |  | \begin{equation} | 
| 1297 |  | \begin{array}{l} | 
| 1298 |  | \mathop \Pi \limits^ \bullet   = J^{ - 1} \Pi ^T \Pi  + Q^T \sum\limits_i {F_i (q,Q)X_i^T }  - \Lambda  \\ | 
| 1324 |  | \[ | 
| 1325 |  | \hat vu = v \times u | 
| 1326 |  | \] | 
| 1341 | – |  | 
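|  | + | A quick numerical check of this identity (a small \texttt{numpy} | 
|  | + | sketch with arbitrary vectors) is: | 
|  | + | \begin{verbatim} | 
|  | + | import numpy as np | 
|  | + |  | 
|  | + | def hat(v): | 
|  | + |     """Hat map: hat(v) @ u equals the cross product v x u.""" | 
|  | + |     return np.array([[0.0, -v[2], v[1]], | 
|  | + |                      [v[2], 0.0, -v[0]], | 
|  | + |                      [-v[1], v[0], 0.0]]) | 
|  | + |  | 
|  | + | v = np.array([1.0, 2.0, 3.0]) | 
|  | + | u = np.array([-0.5, 0.4, 2.0]) | 
|  | + | print(np.allclose(hat(v) @ u, np.cross(v, u)))   # True | 
|  | + | \end{verbatim} | 
|  | + |  | 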
| 1327 |  | Using \ref{introEqaution:RBMotionPI}, one can construct a skew | 
| 1328 |  | matrix, | 
| 1329 |  | \begin{equation} | 
| 1330 | < | (\mathop \Pi \limits^ \bullet   - \mathop \Pi \limits^ \bullet  ^T | 
| 1330 | > | (\mathop \Pi \limits^ \bullet   - \mathop \Pi \limits^ {\bullet  ^T} | 
| 1331 |  | ){\rm{ }} = {\rm{ }}(\Pi  - \Pi ^T ){\rm{ }}(J^{ - 1} \Pi  + \Pi J^{ | 
| 1332 |  | - 1} ) + \sum\limits_i {[Q^T F_i (r,Q)X_i^T  - X_i F_i (r,Q)^T Q]} - | 
| 1333 |  | (\Lambda  - \Lambda ^T ) . \label{introEquation:skewMatrixPI} | 
| 1335 |  | Since $\Lambda$ is symmetric, the last term of Equation | 
| 1336 |  | \ref{introEquation:skewMatrixPI} is zero, which implies the Lagrange | 
| 1337 |  | multiplier $\Lambda$ is absent from the equations of motion. This | 
| 1338 | < | unique property eliminate the requirement of iterations which can | 
| 1339 | < | not be avoided in other methods\cite{}. | 
| 1338 | > | unique property eliminates the requirement of iterations which can | 
| 1339 | > | not be avoided in other methods\cite{Kol1997, Omelyan1998}. | 
| 1340 |  |  | 
| 1341 | < | Applying hat-map isomorphism, we obtain the equation of motion for | 
| 1342 | < | angular momentum on body frame | 
| 1341 | > | Applying the hat-map isomorphism, we obtain the equation of motion | 
| 1342 | > | for the angular momentum in the body frame | 
| 1343 |  | \begin{equation} | 
| 1344 |  | \dot \pi  = \pi  \times I^{ - 1} \pi  + \sum\limits_i {\left( {Q^T | 
| 1345 |  | F_i (r,Q)} \right) \times X_i }. | 
| 1354 |  | \subsection{\label{introSection:SymplecticFreeRB}Symplectic | 
| 1355 |  | Lie-Poisson Integrator for Free Rigid Body} | 
| 1356 |  |  | 
| 1357 | < | If there is not external forces exerted on the rigid body, the only | 
| 1358 | < | contribution to the rotational is from the kinetic potential (the | 
| 1359 | < | first term of \ref{ introEquation:bodyAngularMotion}). The free | 
| 1360 | < | rigid body is an example of Lie-Poisson system with Hamiltonian | 
| 1357 | > | If there are no external forces exerted on the rigid body, the only | 
| 1358 | > | contribution to the rotational motion is from the kinetic energy | 
| 1359 | > | (the first term of \ref{introEquation:bodyAngularMotion}). The free | 
| 1360 | > | rigid body is an example of a Lie-Poisson system with Hamiltonian | 
| 1361 |  | function | 
| 1362 |  | \begin{equation} | 
| 1363 |  | T^r (\pi ) = T_1 ^r (\pi _1 ) + T_2^r (\pi _2 ) + T_3^r (\pi _3 ) | 
| 1405 |  | \end{array}} \right),\theta _1  = \frac{{\pi _1 }}{{I_1 }}\Delta t. | 
| 1406 |  | \] | 
| 1407 |  | To reduce the cost of computing expensive functions in $e^{\Delta | 
| 1408 | < | tR_1 }$, we can use Cayley transformation, | 
| 1408 | > | tR_1 }$, we can use Cayley transformation to obtain a single-aixs | 
| 1409 | > | propagator, | 
| 1410 |  | \[ | 
| 1411 |  | e^{\Delta tR_1 }  \approx (1 - \Delta tR_1 )^{ - 1} (1 + \Delta tR_1 | 
| 1412 |  | ) | 
| 1413 |  | \] | 
| 1414 |  | The flow maps for $T_2^r$ and $T_3^r$ can be found in the same | 
| 1415 | < | manner. | 
| 1416 | < |  | 
| 1431 | < | In order to construct a second-order symplectic method, we split the | 
| 1432 | < | angular kinetic Hamiltonian function can into five terms | 
| 1415 | > | manner. In order to construct a second-order symplectic method, we | 
| 1416 | > | split the angular kinetic Hamiltonian function into five terms | 
| 1417 |  | \[ | 
| 1418 |  | T^r (\pi ) = \frac{1}{2}T_1 ^r (\pi _1 ) + \frac{1}{2}T_2^r (\pi _2 | 
| 1419 |  | ) + T_3^r (\pi _3 ) + \frac{1}{2}T_2^r (\pi _2 ) + \frac{1}{2}T_1 ^r | 
| 1420 | < | (\pi _1 ) | 
| 1421 | < | \]. | 
| 1422 | < | Concatenating flows corresponding to these five terms, we can obtain | 
| 1423 | < | an symplectic integrator, | 
| 1420 | > | (\pi _1 ). | 
| 1421 | > | \] | 
| 1422 | > | By concatenating the propagators corresponding to these five terms, | 
| 1423 | > | we can obtain a symplectic integrator, | 
| 1424 |  | \[ | 
| 1425 |  | \varphi _{\Delta t,T^r }  = \varphi _{\Delta t/2,\pi _1 }  \circ | 
| 1426 |  | \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 } | 
| 1447 |  | \] | 
| 1448 |  | Thus $ [\nabla _\pi  F(\pi )]^T J(\pi ) =  - S'(\frac{{\parallel \pi | 
| 1449 |  | \parallel ^2 }}{2})\pi  \times \pi  = 0 $. This explicit | 
| 1450 | < | Lie-Poisson integrator is found to be extremely efficient and stable | 
| 1451 | < | which can be explained by the fact the small angle approximation is | 
| 1452 | < | used and the norm of the angular momentum is conserved. | 
| 1450 | > | Lie-Poisson integrator is found to be both extremely efficient and | 
| 1451 | > | stable. These properties can be explained by the fact that the small | 
| 1452 | > | angle approximation is used and the norm of the angular momentum is | 
| 1453 | > | conserved. | 
| 1454 |  |  | 
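|  | + | A small sketch of this splitting for the angular momentum alone is | 
|  | + | given below (it uses the Cayley form of each single-axis rotation | 
|  | + | and arbitrary moments of inertia; sign conventions and the | 
|  | + | accompanying update of the rotation matrix are omitted). Because the | 
|  | + | Cayley transform of a skew-symmetric matrix is still orthogonal, the | 
|  | + | norm of $\pi$ is conserved exactly, as noted above. | 
|  | + | \begin{verbatim} | 
|  | + | import numpy as np | 
|  | + |  | 
|  | + | def hat(v): | 
|  | + |     """Skew-symmetric (hat map) matrix of a 3-vector.""" | 
|  | + |     return np.array([[0.0, -v[2], v[1]], | 
|  | + |                      [v[2], 0.0, -v[0]], | 
|  | + |                      [-v[1], v[0], 0.0]]) | 
|  | + |  | 
|  | + | def cayley_rotation(axis, theta): | 
|  | + |     """Cayley approximation of a rotation by theta about a body axis; | 
|  | + |        the result is exactly orthogonal, so |pi| is preserved.""" | 
|  | + |     a = np.zeros(3) | 
|  | + |     a[axis] = 0.5 * theta | 
|  | + |     A = hat(a) | 
|  | + |     return np.linalg.solve(np.eye(3) - A, np.eye(3) + A) | 
|  | + |  | 
|  | + | def free_rigid_body_step(pi, inertia, dt): | 
|  | + |     """One 1-2-3-2-1 splitting step for the torque-free angular momentum.""" | 
|  | + |     for axis, h in [(0, dt/2), (1, dt/2), (2, dt), (1, dt/2), (0, dt/2)]: | 
|  | + |         pi = cayley_rotation(axis, h * pi[axis] / inertia[axis]) @ pi | 
|  | + |     return pi | 
|  | + |  | 
|  | + | inertia = np.array([1.0, 2.0, 3.0])     # assumed principal moments of inertia | 
|  | + | pi = np.array([0.3, -1.1, 0.7])         # assumed body-frame angular momentum | 
|  | + | for _ in range(1000): | 
|  | + |     pi = free_rigid_body_step(pi, inertia, dt=0.05) | 
|  | + | print(np.linalg.norm(pi))               # equals the initial norm to machine precision | 
|  | + | \end{verbatim} | 
|  | + |  | 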
| 1455 |  | \subsection{\label{introSection:RBHamiltonianSplitting} Hamiltonian | 
| 1456 |  | Splitting for Rigid Body} | 
| 1462 |  | \] | 
| 1463 |  | The equations of motion corresponding to potential energy and | 
| 1464 |  | kinetic energy are listed in the table below, | 
| 1465 | + | \begin{table} | 
| 1466 | + | \caption{Equations of motion due to Potential and Kinetic Energies} | 
| 1467 |  | \begin{center} | 
| 1468 |  | \begin{tabular}{|l|l|} | 
| 1469 |  | \hline | 
| 1476 |  | \hline | 
| 1477 |  | \end{tabular} | 
| 1478 |  | \end{center} | 
| 1479 | + | \end{table} | 
| 1480 |  | A second-order symplectic method is now obtained by the composition | 
| 1481 | < | of the flow maps, | 
| 1481 | > | of the position and velocity propagators, | 
| 1482 |  | \[ | 
| 1483 |  | \varphi _{\Delta t}  = \varphi _{\Delta t/2,V}  \circ \varphi | 
| 1484 |  | _{\Delta t,T}  \circ \varphi _{\Delta t/2,V}. | 
| 1485 |  | \] | 
| 1486 |  | Moreover, $\varphi _{\Delta t/2,V}$ can be divided into two | 
| 1487 | < | sub-flows which corresponding to force and torque respectively, | 
| 1487 | > | sub-propagators which corresponding to force and torque | 
| 1488 | > | respectively, | 
| 1489 |  | \[ | 
| 1490 |  | \varphi _{\Delta t/2,V}  = \varphi _{\Delta t/2,F}  \circ \varphi | 
| 1491 |  | _{\Delta t/2,\tau }. | 
| 1492 |  | \] | 
| 1493 |  | Since the associated operators of $\varphi _{\Delta t/2,F} $ and | 
| 1494 | < | $\circ \varphi _{\Delta t/2,\tau }$ are commuted, the composition | 
| 1495 | < | order inside $\varphi _{\Delta t/2,V}$ does not matter. | 
| 1496 | < |  | 
| 1497 | < | Furthermore, kinetic potential can be separated to translational | 
| 1509 | < | kinetic term, $T^t (p)$, and rotational kinetic term, $T^r (\pi )$, | 
| 1494 | > | $\varphi _{\Delta t/2,\tau }$ commute, the composition order | 
| 1495 | > | inside $\varphi _{\Delta t/2,V}$ does not matter. Furthermore, the | 
| 1496 | > | kinetic energy can be separated into a translational kinetic term, $T^t | 
| 1497 | > | (p)$, and a rotational kinetic term, $T^r (\pi )$, | 
| 1498 |  | \begin{equation} | 
| 1499 |  | T(p,\pi ) =T^t (p) + T^r (\pi ). | 
| 1500 |  | \end{equation} | 
| 1501 |  | where $ T^t (p) = \frac{1}{2}p^T m^{ - 1} p $ and $T^r (\pi )$ is | 
| 1502 |  | defined by \ref{introEquation:rotationalKineticRB}. Therefore, the | 
| 1503 | < | corresponding flow maps are given by | 
| 1503 | > | corresponding propagators are given by | 
| 1504 |  | \[ | 
| 1505 |  | \varphi _{\Delta t,T}  = \varphi _{\Delta t,T^t }  \circ \varphi | 
| 1506 |  | _{\Delta t,T^r }. | 
| 1507 |  | \] | 
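|  | + | Putting the pieces together, one full time step has a | 
|  | + | kick-drift-kick structure; the schematic sketch below uses | 
|  | + | placeholder force, torque, and rotational-drift routines (a real | 
|  | + | rotational drift would use the single-axis splitting shown earlier), | 
|  | + | so it only illustrates the ordering of the sub-propagators. | 
|  | + | \begin{verbatim} | 
|  | + | import numpy as np | 
|  | + |  | 
|  | + | def force(q, Q):  return np.zeros(3)     # placeholder: no external forces | 
|  | + | def torque(q, Q): return np.zeros(3)     # placeholder: no external torques | 
|  | + |  | 
|  | + | def rotational_drift(Q, pi, inertia, dt): | 
|  | + |     """Placeholder for the free rigid body propagator (1-2-3-2-1 splitting).""" | 
|  | + |     return Q, pi | 
|  | + |  | 
|  | + | def step(q, p, Q, pi, mass, inertia, dt): | 
|  | + |     """Kick-drift-kick composition  phi_{dt/2,V} o phi_{dt,T} o phi_{dt/2,V}.""" | 
|  | + |     p, pi = p + 0.5 * dt * force(q, Q), pi + 0.5 * dt * torque(q, Q) | 
|  | + |     q = q + dt * p / mass                                  # translational drift | 
|  | + |     Q, pi = rotational_drift(Q, pi, inertia, dt)           # rotational drift | 
|  | + |     p, pi = p + 0.5 * dt * force(q, Q), pi + 0.5 * dt * torque(q, Q) | 
|  | + |     return q, p, Q, pi | 
|  | + |  | 
|  | + | state = step(np.zeros(3), np.array([1.0, 0.0, 0.0]), np.eye(3), | 
|  | + |              np.array([0.2, 0.1, 0.0]), mass=1.0, inertia=np.ones(3), dt=0.01) | 
|  | + | print(state[0])                                            # [0.01 0.   0.  ] | 
|  | + | \end{verbatim} | 
|  | + |  | 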
| 1508 | < | Finally, we obtain the overall symplectic flow maps for free moving | 
| 1509 | < | rigid body | 
| 1508 | > | Finally, we obtain the overall symplectic propagators for freely | 
| 1509 | > | moving rigid bodies | 
| 1510 |  | \begin{equation} | 
| 1511 |  | \begin{array}{c} | 
| 1512 |  | \varphi _{\Delta t}  = \varphi _{\Delta t/2,F}  \circ \varphi _{\Delta t/2,\tau }  \\ | 
| 1520 |  | As an alternative to Newtonian dynamics, Langevin dynamics, which | 
| 1521 |  | mimics a simple heat bath with stochastic and dissipative forces, | 
| 1522 |  | has been applied in a variety of studies. This section will review | 
| 1523 | < | the theory of Langevin dynamics simulation. A brief derivation of | 
| 1524 | < | generalized Langevin equation will be given first. Follow that, we | 
| 1525 | < | will discuss the physical meaning of the terms appearing in the | 
| 1526 | < | equation as well as the calculation of friction tensor from | 
| 1527 | < | hydrodynamics theory. | 
| 1523 | > | the theory of Langevin dynamics. A brief derivation of the generalized | 
| 1524 | > | Langevin equation will be given first. Following that, we will | 
| 1525 | > | discuss the physical meaning of the terms appearing in the equation | 
| 1526 | > | as well as the calculation of the friction tensor from hydrodynamic | 
| 1527 | > | theory. | 
| 1528 |  |  | 
| 1529 |  | \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation} | 
| 1530 |  |  | 
| 1531 | < | Harmonic bath model, in which an effective set of harmonic | 
| 1531 | > | A harmonic bath model, in which an effective set of harmonic | 
| 1532 |  | oscillators are used to mimic the effect of a linearly responding | 
| 1533 |  | environment, has been widely used in quantum chemistry and | 
| 1534 |  | statistical mechanics. One of the successful applications of | 
| 1535 | < | Harmonic bath model is the derivation of Deriving Generalized | 
| 1536 | < | Langevin Dynamics. Lets consider a system, in which the degree of | 
| 1535 | > | the harmonic bath model is the derivation of the Generalized Langevin | 
| 1536 | > | Equation (GLE). Let us consider a system, in which the degree of | 
| 1537 |  | freedom $x$ is assumed to couple to the bath linearly, giving a | 
| 1538 |  | Hamiltonian of the form | 
| 1539 |  | \begin{equation} | 
| 1540 |  | H = \frac{{p^2 }}{{2m}} + U(x) + H_B  + \Delta U(x,x_1 , \ldots x_N) | 
| 1541 |  | \label{introEquation:bathGLE}. | 
| 1542 |  | \end{equation} | 
| 1543 | < | Here $p$ is a momentum conjugate to $q$, $m$ is the mass associated | 
| 1544 | < | with this degree of freedom, $H_B$ is harmonic bath Hamiltonian, | 
| 1543 | > | Here $p$ is a momentum conjugate to $x$, $m$ is the mass associated | 
| 1544 | > | with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian, | 
| 1545 |  | \[ | 
| 1546 |  | H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2 | 
| 1547 |  | }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 } | 
| 1549 |  | \] | 
| 1550 |  | where the index $\alpha$ runs over all the bath degrees of freedom, | 
| 1551 |  | $\omega _\alpha$ are the harmonic bath frequencies, $m_\alpha$ are | 
| 1552 | < | the harmonic bath masses, and $\Delta U$ is bilinear system-bath | 
| 1552 | > | the harmonic bath masses, and $\Delta U$ is a bilinear system-bath | 
| 1553 |  | coupling, | 
| 1554 |  | \[ | 
| 1555 |  | \Delta U =  - \sum\limits_{\alpha  = 1}^N {g_\alpha  x_\alpha  x} | 
| 1556 |  | \] | 
| 1557 | < | where $g_\alpha$ are the coupling constants between the bath and the | 
| 1558 | < | coordinate $x$. Introducing | 
| 1557 | > | where $g_\alpha$ are the coupling constants between the bath | 
| 1558 | > | coordinates ($x_ \alpha$) and the system coordinate ($x$). | 
| 1559 | > | Introducing | 
| 1560 |  | \[ | 
| 1561 |  | W(x) = U(x) - \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2 | 
| 1562 |  | }}{{2m_\alpha  w_\alpha ^2 }}} x^2 | 
| 1571 |  | \] | 
| 1572 |  | Since the first two terms of the new Hamiltonian depend only on the | 
| 1573 |  | system coordinates, we can get the equations of motion for | 
| 1574 | < | Generalized Langevin Dynamics by Hamilton's equations | 
| 1586 | < | \ref{introEquation:motionHamiltonianCoordinate, | 
| 1587 | < | introEquation:motionHamiltonianMomentum}, | 
| 1574 | > | Generalized Langevin Dynamics by Hamilton's equations, | 
| 1575 |  | \begin{equation} | 
| 1576 |  | m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} - | 
| 1577 |  | \sum\limits_{\alpha  = 1}^N {g_\alpha  \left( {x_\alpha   - | 
| 1588 |  | In order to derive an equation for $x$, the dynamics of the bath | 
| 1589 |  | variables $x_\alpha$ must be solved exactly first. As an integral | 
| 1590 |  | transform which is particularly useful in solving linear ordinary | 
| 1591 | < | differential equations, Laplace transform is the appropriate tool to | 
| 1592 | < | solve this problem. The basic idea is to transform the difficult | 
| 1591 | > | differential equations, the Laplace transform is the appropriate tool | 
| 1592 | > | to solve this problem. The basic idea is to transform the difficult | 
| 1593 |  | differential equations into simple algebra problems which can be | 
| 1594 | < | solved easily. Then applying inverse Laplace transform, also known | 
| 1595 | < | as the Bromwich integral, we can retrieve the solutions of the | 
| 1594 | > | solved easily. Then, by applying the inverse Laplace transform, also | 
| 1595 | > | known as the Bromwich integral, we can retrieve the solutions of the | 
| 1596 |  | original problems. | 
| 1597 |  |  | 
| 1598 |  | Let $f(t)$ be a function defined on $ [0,\infty ) $. The Laplace | 
| 1602 |  | \] | 
| 1603 |  | where  $p$ is real and  $L$ is called the Laplace Transform | 
| 1604 |  | Operator. Below are some important properties of the Laplace transform: | 
| 1618 | – | \begin{equation} | 
| 1619 | – | \begin{array}{c} | 
| 1620 | – | L(x + y) = L(x) + L(y) \\ | 
| 1621 | – | L(ax) = aL(x) \\ | 
| 1622 | – | L(\dot x) = pL(x) - px(0) \\ | 
| 1623 | – | L(\ddot x) = p^2 L(x) - px(0) - \dot x(0) \\ | 
| 1624 | – | L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right) = G(p)H(p) \\ | 
| 1625 | – | \end{array} | 
| 1626 | – | \end{equation} | 
| 1605 |  |  | 
| 1606 | < | Applying Laplace transform to the bath coordinates, we obtain | 
| 1607 | < | \[ | 
| 1608 | < | \begin{array}{c} | 
| 1609 | < | p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) =  - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) \\ | 
| 1610 | < | L(x_\alpha  ) = \frac{{\frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }} \\ | 
| 1611 | < | \end{array} | 
| 1612 | < | \] | 
| 1606 | > | \begin{eqnarray*} | 
| 1607 | > | L(x + y)  & = & L(x) + L(y) \\ | 
| 1608 | > | L(ax)     & = & aL(x) \\ | 
| 1609 | > | L(\dot x) & = & pL(x) - px(0) \\ | 
| 1610 | > | L(\ddot x)& = & p^2 L(x) - px(0) - \dot x(0) \\ | 
| 1611 | > | L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right)& = & G(p)H(p) \\ | 
| 1612 | > | \end{eqnarray*} | 
| 1613 | > |  | 
| 1614 | > |  | 
| 1615 | > | Applying the Laplace transform to the bath coordinates, we obtain | 
| 1616 | > | \begin{eqnarray*} | 
| 1617 | > | p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = & - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) \\ | 
| 1618 | > | L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }} \\ | 
| 1619 | > | \end{eqnarray*} | 
| 1620 | > |  | 
| 1621 | > | In the same way, the system coordinates become | 
| 1622 | < | \[ | 
| 1623 | < | \begin{array}{c} | 
| 1624 | < | mL(\ddot x) =  - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}} \\ | 
| 1625 | < | - \sum\limits_{\alpha  = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0) - \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}  \\ | 
| 1640 | < | \end{array} | 
| 1641 | < | \] | 
| 1622 | > | \begin{eqnarray*} | 
| 1623 | > | mL(\ddot x) & = & - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}} \\ | 
| 1624 | > | & & \mbox{} - \sum\limits_{\alpha  = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0) - \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}  \\ | 
| 1625 | > | \end{eqnarray*} | 
| 1626 |  |  | 
| 1627 |  | With the help of some important inverse Laplace | 
| 1628 |  | transformations: | 
| 1634 |  | \end{array} | 
| 1635 |  | \] | 
| 1636 |  | we obtain | 
| 1637 | < | \begin{align} | 
| 1638 | < | m\ddot x &=  - \frac{{\partial W(x)}}{{\partial x}} - | 
| 1637 | > | \begin{eqnarray*} | 
| 1638 | > | m\ddot x & =  & - \frac{{\partial W(x)}}{{\partial x}} - | 
| 1639 |  | \sum\limits_{\alpha  = 1}^N {\left\{ {\left( { - \frac{{g_\alpha ^2 | 
| 1640 |  | }}{{m_\alpha  \omega _\alpha ^2 }}} \right)\int_0^t {\cos (\omega | 
| 1641 | < | _\alpha  t)\dot x(t - \tau )d\tau  - \left[ {g_\alpha  x_\alpha  (0) | 
| 1642 | < | - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha  }}} \right]\cos | 
| 1643 | < | (\omega _\alpha  t) - \frac{{g_\alpha  \dot x_\alpha  (0)}}{{\omega | 
| 1644 | < | _\alpha  }}\sin (\omega _\alpha  t)} } \right\}} | 
| 1645 | < | % | 
| 1646 | < | &= - \frac{{\partial W(x)}}{{\partial x}} - \int_0^t | 
| 1641 | > | _\alpha  t)\dot x(t - \tau )d\tau } } \right\}}  \\ | 
| 1642 | > | & & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha | 
| 1643 | > | x_\alpha (0) - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha  }}} | 
| 1644 | > | \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha | 
| 1645 | > | (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}} | 
| 1646 | > | \end{eqnarray*} | 
| 1647 | > | \begin{eqnarray*} | 
| 1648 | > | m\ddot x & = & - \frac{{\partial W(x)}}{{\partial x}} - \int_0^t | 
| 1649 |  | {\sum\limits_{\alpha  = 1}^N {\left( { - \frac{{g_\alpha ^2 | 
| 1650 |  | }}{{m_\alpha  \omega _\alpha ^2 }}} \right)\cos (\omega _\alpha | 
| 1651 | < | t)\dot x(t - \tau )d} \tau }  + \sum\limits_{\alpha  = 1}^N {\left\{ | 
| 1652 | < | {\left[ {g_\alpha  x_\alpha  (0) - \frac{{g_\alpha  }}{{m_\alpha | 
| 1653 | < | \omega _\alpha  }}} \right]\cos (\omega _\alpha  t) + | 
| 1654 | < | \frac{{g_\alpha  \dot x_\alpha  (0)}}{{\omega _\alpha  }}\sin | 
| 1655 | < | (\omega _\alpha  t)} \right\}} | 
| 1656 | < | \end{align} | 
| 1671 | < |  | 
| 1651 | > | t)\dot x(t - \tau )d} \tau }  \\ | 
| 1652 | > | & & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha | 
| 1653 | > | x_\alpha (0) - \frac{{g_\alpha }}{{m_\alpha \omega _\alpha  }}} | 
| 1654 | > | \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha | 
| 1655 | > | (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}} | 
| 1656 | > | \end{eqnarray*} | 
| 1657 |  | Introducing a \emph{dynamic friction kernel} | 
| 1658 |  | \begin{equation} | 
| 1659 |  | \xi (t) = \sum\limits_{\alpha  = 1}^N {\left( { - \frac{{g_\alpha ^2 | 
| 1676 |  | \end{equation} | 
| 1677 |  | which is known as the \emph{generalized Langevin equation}. | 
| 1678 |  |  | 
| 1679 | < | \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}Random Force and Dynamic Friction Kernel} | 
| 1679 | > | \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}} | 
| 1680 |  |  | 
| 1681 |  | One may notice that $R(t)$ depends only on initial conditions, which | 
| 1682 |  | implies it is completely deterministic within the context of a | 
| 1689 |  | \end{array} | 
| 1690 |  | \] | 
| 1691 |  | This property is what we expect from a truly random process. As long | 
| 1692 | < | as the model, which is gaussian distribution in general, chosen for | 
| 1693 | < | $R(t)$ is a truly random process, the stochastic nature of the GLE | 
| 1709 | < | still remains. | 
| 1692 | > | as the model chosen for $R(t)$ is a truly random process, typically a | 
| 1693 | > | Gaussian distribution, the stochastic nature of the GLE still remains. | 
| 1694 |  |  | 
| 1695 |  | %dynamic friction kernel | 
| 1696 |  | The convolution integral | 
| 1711 |  | m\ddot x =  - \frac{\partial }{{\partial x}}\left( {W(x) + | 
| 1712 |  | \frac{1}{2}\xi _0 (x - x_0 )^2 } \right) + R(t), | 
| 1713 |  | \] | 
| 1714 | < | which can be used to describe dynamic caging effect. The other | 
| 1715 | < | extreme is the bath that responds infinitely quickly to motions in | 
| 1716 | < | the system. Thus, $\xi (t)$ can be taken as a $delta$ function in | 
| 1717 | < | time: | 
| 1714 | > | which can be used to describe the effect of dynamic caging in | 
| 1715 | > | viscous solvents. The other extreme is the bath that responds | 
| 1716 | > | infinitely quickly to motions in the system. Thus, $\xi (t)$ can be | 
| 1717 | > | taken as a $\delta$ function in time: | 
| 1718 |  | \[ | 
| 1719 |  | \xi (t) = 2\xi _0 \delta (t) | 
| 1720 |  | \] | 
| 1730 |  | \end{equation} | 
| 1731 |  | which is known as the Langevin equation. The static friction | 
| 1732 |  | coefficient $\xi _0$ can either be calculated from spectral density | 
| 1733 | < | or be determined by Stokes' law for regular shaped particles.A | 
| 1733 | > | or be determined by Stokes' law for regular shaped particles. A | 
| 1734 |  | brief review of calculating the friction tensor for arbitrarily shaped | 
| 1735 |  | particles is given in Sec.~\ref{introSection:frictionTensor}. | 
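|  | + |  | 
|  | + | As a minimal illustration of integrating this equation numerically, | 
|  | + | the one-dimensional sketch below uses a harmonic potential | 
|  | + | $W(x)=\frac{1}{2}kx^2$ and a simple Euler-type discretization (not | 
|  | + | the integrator used in this work); the variance of the discrete | 
|  | + | random force, $2\xi_0 k_B T/\Delta t$, anticipates the | 
|  | + | fluctuation-dissipation relation discussed below. | 
|  | + | \begin{verbatim} | 
|  | + | import numpy as np | 
|  | + |  | 
|  | + | m, xi0, kT, k, dt = 1.0, 2.0, 1.0, 1.0, 1.0e-3      # assumed reduced-unit parameters | 
|  | + | rng = np.random.default_rng(0) | 
|  | + |  | 
|  | + | x, v, xs = 1.0, 0.0, [] | 
|  | + | for _ in range(200000): | 
|  | + |     R = rng.normal(0.0, np.sqrt(2.0 * xi0 * kT / dt))   # random force | 
|  | + |     a = (-k * x - xi0 * v + R) / m                       # m dv/dt = -dW/dx - xi0 v + R | 
|  | + |     v += a * dt | 
|  | + |     x += v * dt | 
|  | + |     xs.append(x) | 
|  | + | print(np.var(xs))        # tends toward kT/k = 1 (equipartition) for long runs | 
|  | + | \end{verbatim} | 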
| 1736 |  |  | 
| 1737 | < | \subsubsection{\label{introSection:secondFluctuationDissipation}The Second Fluctuation Dissipation Theorem} | 
| 1737 | > | \subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}} | 
| 1738 |  |  | 
| 1739 |  | Defining a new set of coordinates, | 
| 1740 |  | \[ | 
| 1746 |  | R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}. | 
| 1747 |  | \] | 
| 1748 |  | And since the $q$ coordinates are harmonic oscillators, | 
| 1749 | < | \[ | 
| 1750 | < | \begin{array}{c} | 
| 1751 | < | \left\langle {q_\alpha ^2 } \right\rangle  = \frac{{kT}}{{m_\alpha  \omega _\alpha ^2 }} \\ | 
| 1752 | < | \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle  = \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t) \\ | 
| 1753 | < | \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle  = \delta _{\alpha \beta } \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle  \\ | 
| 1754 | < | \left\langle {R(t)R(0)} \right\rangle  = \sum\limits_\alpha  {\sum\limits_\beta  {g_\alpha  g_\beta  \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle } }  \\ | 
| 1755 | < | = \sum\limits_\alpha  {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t)}  \\ | 
| 1756 | < | = kT\xi (t) \\ | 
| 1757 | < | \end{array} | 
| 1758 | < | \] | 
| 1749 | > |  | 
| 1750 | > | \begin{eqnarray*} | 
| 1751 | > | \left\langle {q_\alpha ^2 } \right\rangle  & = & \frac{{kT}}{{m_\alpha  \omega _\alpha ^2 }} \\ | 
| 1752 | > | \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle & = & \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t) \\ | 
| 1753 | > | \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle & = &\delta _{\alpha \beta } \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle  \\ | 
| 1754 | > | \left\langle {R(t)R(0)} \right\rangle & = & \sum\limits_\alpha  {\sum\limits_\beta  {g_\alpha  g_\beta  \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle } }  \\ | 
| 1755 | > | & = &\sum\limits_\alpha  {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t)}  \\ | 
| 1756 | > | & = &kT\xi (t) \\ | 
| 1757 | > | \end{eqnarray*} | 
| 1758 | > |  | 
| 1759 |  | Thus, we recover the \emph{second fluctuation dissipation theorem} | 
| 1760 |  | \begin{equation} | 
| 1761 |  | \xi (t) = \left\langle {R(t)R(0)} \right\rangle | 
| 1763 |  | \end{equation} | 
| 1764 |  | In effect, it acts as a constraint on the possible ways in which one | 
| 1765 |  | can model the random force and friction kernel. | 
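|  | + |  | 
|  | + | This relation can be checked directly for the harmonic bath | 
|  | + | introduced above; the sketch below draws bath initial conditions | 
|  | + | from their equilibrium distributions, builds | 
|  | + | $R(t)=\sum_\alpha g_\alpha q_\alpha(t)$, and compares its | 
|  | + | correlation function with $k_B T\,\xi(t)$ (all frequencies, masses, | 
|  | + | and couplings are arbitrary). | 
|  | + | \begin{verbatim} | 
|  | + | import numpy as np | 
|  | + |  | 
|  | + | rng = np.random.default_rng(1) | 
|  | + | kT, nb = 1.0, 200 | 
|  | + | omega = rng.uniform(0.5, 5.0, nb)                # bath frequencies (arbitrary) | 
|  | + | mass = np.ones(nb) | 
|  | + | g = 0.1 * rng.uniform(-1.0, 1.0, nb)             # coupling constants (arbitrary) | 
|  | + |  | 
|  | + | t = np.linspace(0.0, 5.0, 100) | 
|  | + | xi = ((g**2 / (mass * omega**2))[:, None] * np.cos(omega[:, None] * t)).sum(axis=0) | 
|  | + |  | 
|  | + | corr = np.zeros_like(t) | 
|  | + | n_samples = 2000 | 
|  | + | for _ in range(n_samples): | 
|  | + |     q0 = rng.normal(0.0, np.sqrt(kT / (mass * omega**2)))    # <q^2> = kT/(m w^2) | 
|  | + |     p0 = rng.normal(0.0, np.sqrt(kT * mass)) | 
|  | + |     qt = q0 * np.cos(omega * t[:, None]) + (p0 / (mass * omega)) * np.sin(omega * t[:, None]) | 
|  | + |     corr += (g * qt).sum(axis=1) * np.dot(g, q0) | 
|  | + | corr /= n_samples | 
|  | + | # statistical estimate of <R(t)R(0)> - kT*xi(t); shrinks as n_samples grows | 
|  | + | print(np.max(np.abs(corr - kT * xi))) | 
|  | + | \end{verbatim} | 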
| 1782 | – |  | 
| 1783 | – | \subsection{\label{introSection:frictionTensor} Friction Tensor} | 
| 1784 | – | Theoretically, the friction kernel can be determined using the velocity | 
| 1785 | – | autocorrelation function. However, this approach becomes impractical | 
| 1786 | – | when the system becomes more and more complicated. Instead, various | 
| 1787 | – | approaches based on hydrodynamics have been developed to calculate | 
| 1788 | – | the friction coefficients. The friction effect is isotropic in | 
| 1789 | – | the friction coefficients. If the friction effect is isotropic, | 
| 1790 | – | $\zeta$ can be taken as a scalar. In general, the friction | 
| 1791 | – | tensor $\Xi$ is a $6\times 6$ matrix given by | 
| 1792 | – | \Xi  = \left( {\begin{array}{*{20}c} | 
| 1793 | – | {\Xi _{}^{tt} } & {\Xi _{}^{rt} }  \\ | 
| 1794 | – | {\Xi _{}^{tr} } & {\Xi _{}^{rr} }  \\ | 
| 1795 | – | \end{array}} \right). | 
| 1796 | – | \] | 
| 1797 | – | Here, $ {\Xi^{tt} }$ and $ {\Xi^{rr} }$ are translational friction | 
| 1798 | – | tensor and rotational resistance (friction) tensor respectively, | 
| 1799 | – | while ${\Xi^{tr} }$ is translation-rotation coupling tensor and $ | 
| 1800 | – | {\Xi^{rt} }$ is rotation-translation coupling tensor. When a | 
| 1801 | – | particle moves in a fluid, it may experience friction force or | 
| 1802 | – | torque along the opposite direction of the velocity or angular | 
| 1803 | – | velocity, | 
| 1804 | – | \[ | 
| 1805 | – | \left( \begin{array}{l} | 
| 1806 | – | F_R  \\ | 
| 1807 | – | \tau _R  \\ | 
| 1808 | – | \end{array} \right) =  - \left( {\begin{array}{*{20}c} | 
| 1809 | – | {\Xi ^{tt} } & {\Xi ^{rt} }  \\ | 
| 1810 | – | {\Xi ^{tr} } & {\Xi ^{rr} }  \\ | 
| 1811 | – | \end{array}} \right)\left( \begin{array}{l} | 
| 1812 | – | v \\ | 
| 1813 | – | w \\ | 
| 1814 | – | \end{array} \right) | 
| 1815 | – | \] | 
| 1816 | – | where $F_R$ is the friction force and $\tau _R$ is the friction | 
| 1817 | – | torque. | 
| 1818 | – |  | 
| 1819 | – | \subsubsection{\label{introSection:resistanceTensorRegular}The Resistance Tensor for Regular Shape} | 
| 1820 | – |  | 
| 1821 | – | For a spherical particle, the translational and rotational friction | 
| 1822 | – | constant can be calculated from Stokes' law, | 
| 1823 | – | \[ | 
| 1824 | – | \Xi ^{tt}  = \left( {\begin{array}{*{20}c} | 
| 1825 | – | {6\pi \eta R} & 0 & 0  \\ | 
| 1826 | – | 0 & {6\pi \eta R} & 0  \\ | 
| 1827 | – | 0 & 0 & {6\pi \eta R}  \\ | 
| 1828 | – | \end{array}} \right) | 
| 1829 | – | \] | 
| 1830 | – | and | 
| 1831 | – | \[ | 
| 1832 | – | \Xi ^{rr}  = \left( {\begin{array}{*{20}c} | 
| 1833 | – | {8\pi \eta R^3 } & 0 & 0  \\ | 
| 1834 | – | 0 & {8\pi \eta R^3 } & 0  \\ | 
| 1835 | – | 0 & 0 & {8\pi \eta R^3 }  \\ | 
| 1836 | – | \end{array}} \right) | 
| 1837 | – | \] | 
| 1838 | – | where $\eta$ is the viscosity of the solvent and $R$ is the | 
| 1839 | – | hydrodynamics radius. | 
| 1840 | – |  | 
| 1841 | – | Other non-spherical shapes, such as cylinders and ellipsoids, | 
| 1842 | – | are widely used as references for developing new | 
| 1843 | – | hydrodynamic theories, because their properties can be calculated | 
| 1844 | – | exactly. In 1936, Perrin extended Stokes's law to general ellipsoid, | 
| 1845 | – | also called a triaxial ellipsoid, which is given in Cartesian | 
| 1846 | – | coordinates by | 
| 1847 | – | \[ | 
| 1848 | – | \frac{{x^2 }}{{a^2 }} + \frac{{y^2 }}{{b^2 }} + \frac{{z^2 }}{{c^2 | 
| 1849 | – | }} = 1 | 
| 1850 | – | \] | 
| 1851 | – | where the semi-axes are of lengths $a$, $b$, and $c$. Unfortunately, | 
| 1852 | – | due to the complexity of the elliptic integral, only the ellipsoid | 
| 1853 | – | with the restriction of two axes having to be equal, \textit{i.e.} | 
| 1854 | – | prolate($ a \ge b = c$) and oblate ($ a < b = c $), can be solved | 
| 1855 | – | exactly. Introducing an elliptic integral parameter $S$ for prolate, | 
| 1856 | – | \[ | 
| 1857 | – | S = \frac{2}{{\sqrt {a^2  - b^2 } }}\ln \frac{{a + \sqrt {a^2  - b^2 | 
| 1858 | – | } }}{b}, | 
| 1859 | – | \] | 
| 1860 | – | and oblate, | 
| 1861 | – | \[ | 
| 1862 | – | S = \frac{2}{{\sqrt {b^2  - a^2 } }}arctg\frac{{\sqrt {b^2  - a^2 } | 
| 1863 | – | }}{a} | 
| 1864 | – | \], | 
| 1865 | – | one can write down the translational and rotational resistance | 
| 1866 | – | tensors | 
| 1867 | – | \[ | 
| 1868 | – | \begin{array}{l} | 
| 1869 | – | \Xi _a^{tt}  = 16\pi \eta \frac{{a^2  - b^2 }}{{(2a^2  - b^2 )S - 2a}} \\ | 
| 1870 | – | \Xi _b^{tt}  = \Xi _c^{tt}  = 32\pi \eta \frac{{a^2  - b^2 }}{{(2a^2  - 3b^2 )S + 2a}} \\ | 
| 1871 | – | \end{array}, | 
| 1872 | – | \] | 
| 1873 | – | and | 
| 1874 | – | \[ | 
| 1875 | – | \begin{array}{l} | 
| 1876 | – | \Xi _a^{rr}  = \frac{{32\pi }}{3}\eta \frac{{(a^2  - b^2 )b^2 }}{{2a - b^2 S}} \\ | 
| 1877 | – | \Xi _b^{rr}  = \Xi _c^{rr}  = \frac{{32\pi }}{3}\eta \frac{{(a^4  - b^4 )}}{{(2a^2  - b^2 )S - 2a}} \\ | 
| 1878 | – | \end{array}. | 
| 1879 | – | \] | 
| 1880 | – |  | 
| 1881 | – | \subsubsection{\label{introSection:resistanceTensorRegularArbitrary}The Resistance Tensor for Arbitrary Shape} | 
| 1882 | – |  | 
| 1883 | – | Unlike spherical and other regularly shaped molecules, there is no | 
| 1884 | – | analytical solution for the friction tensor of arbitrarily shaped | 
| 1885 | – | rigid molecules. The ellipsoid of revolution model and general | 
| 1886 | – | triaxial ellipsoid model have been used to approximate the | 
| 1887 | – | hydrodynamic properties of rigid bodies. However, since the mapping | 
| 1888 | – | from all possible ellipsoidal space, $r$-space, to all possible | 
| 1889 | – | combination of rotational diffusion coefficients, $D$-space is not | 
| 1890 | – | unique\cite{Wegener79} as well as the intrinsic coupling between | 
| 1891 | – | translational and rotational motion of rigid body\cite{}, general | 
| 1892 | – | ellipsoid is not always suitable for modeling arbitrarily shaped | 
| 1893 | – | rigid molecule. A number of studies have been devoted to determine | 
| 1894 | – | the friction tensor for irregularly shaped rigid bodies using more | 
| 1895 | – | advanced method\cite{} where the molecule of interest was modeled by | 
| 1896 | – | combinations of spheres(beads)\cite{} and the hydrodynamics | 
| 1897 | – | properties of the molecule can be calculated using the hydrodynamic | 
| 1898 | – | interaction tensor. Let us consider a rigid assembly of $N$ beads | 
| 1899 | – | immersed in a continuous medium. Due to hydrodynamic interactions, | 
| 1900 | – | the ``net'' velocity of the $i$th bead, $v'_i$, is different from its | 
| 1901 | – | unperturbed velocity $v_i$, | 
| 1902 | – | \[ | 
| 1903 | – | v'_i  = v_i  - \sum\limits_{j \ne i} {T_{ij} F_j } | 
| 1904 | – | \] | 
| 1905 | – | where $F_i$ is the frictional force, and $T_{ij}$ is the | 
| 1906 | – | hydrodynamic interaction tensor. The friction force on the $i$th bead is | 
| 1907 | – | proportional to its ``net'' velocity | 
| 1908 | – | \begin{equation} | 
| 1909 | – | F_i  = \zeta _i v_i  - \zeta _i \sum\limits_{j \ne i} {T_{ij} F_j }. | 
| 1910 | – | \label{introEquation:tensorExpression} | 
| 1911 | – | \end{equation} | 
| 1912 | – | This equation is the basis for deriving the hydrodynamic tensor. In | 
| 1913 | – | 1930, Oseen and Burgers gave a simple solution to Equation | 
| 1914 | – | \ref{introEquation:tensorExpression} | 
| 1915 | – | \begin{equation} | 
| 1916 | – | T_{ij}  = \frac{1}{{8\pi \eta r_{ij} }}\left( {I + \frac{{R_{ij} | 
| 1917 | – | R_{ij}^T }}{{R_{ij}^2 }}} \right). | 
| 1918 | – | \label{introEquation:oseenTensor} | 
| 1919 | – | \end{equation} | 
| 1920 | – | Here $R_{ij}$ is the distance vector between bead $i$ and bead $j$. | 
| 1921 | – | A second-order expression for elements of different sizes was | 
| 1922 | – | introduced by Rotne and Prager\cite{} and improved by Garc\'{i}a de | 
| 1923 | – | la Torre and Bloomfield, | 
| 1924 | – | \begin{equation} | 
| 1925 | – | T_{ij}  = \frac{1}{{8\pi \eta R_{ij} }}\left[ {\left( {I + | 
| 1926 | – | \frac{{R_{ij} R_{ij}^T }}{{R_{ij}^2 }}} \right) + R\frac{{\sigma | 
| 1927 | – | _i^2  + \sigma _j^2 }}{{r_{ij}^2 }}\left( {\frac{I}{3} - | 
| 1928 | – | \frac{{R_{ij} R_{ij}^T }}{{R_{ij}^2 }}} \right)} \right]. | 
| 1929 | – | \label{introEquation:RPTensorNonOverlapped} | 
| 1930 | – | \end{equation} | 
| 1931 | – | Both Equation \ref{introEquation:oseenTensor} and Equation | 
| 1932 | – | \ref{introEquation:RPTensorNonOverlapped} assume that $R_{ij} | 
| 1933 | – | \ge \sigma _i  + \sigma _j$. An alternative expression for | 
| 1934 | – | overlapping beads with the same radius, $\sigma$, is given by | 
| 1935 | – | \begin{equation} | 
| 1936 | – | T_{ij}  = \frac{1}{{6\pi \eta R_{ij} }}\left[ {\left( {1 - | 
| 1937 | – | \frac{2}{{32}}\frac{{R_{ij} }}{\sigma }} \right)I + | 
| 1938 | – | \frac{2}{{32}}\frac{{R_{ij} R_{ij}^T }}{{R_{ij} \sigma }}} \right] | 
| 1939 | – | \label{introEquation:RPTensorOverlapped} | 
| 1940 | – | \end{equation} | 
| 1941 | – |  | 
| 1942 | – | To calculate the resistance tensor at an arbitrary origin $O$, we | 
| 1943 | – | construct a $3N \times 3N$ matrix consisting of $N \times N$ | 
| 1944 | – | $B_{ij}$ blocks | 
| 1945 | – | \begin{equation} | 
| 1946 | – | B = \left( {\begin{array}{*{20}c} | 
| 1947 | – | {B_{11} } &  \ldots  & {B_{1N} }  \\ | 
| 1948 | – | \vdots  &  \ddots  &  \vdots   \\ | 
| 1949 | – | {B_{N1} } &  \cdots  & {B_{NN} }  \\ | 
| 1950 | – | \end{array}} \right), | 
| 1951 | – | \end{equation} | 
| 1952 | – | where $B_{ij}$ is given by | 
| 1953 | – | \[ | 
| 1954 | – | B_{ij}  = \delta _{ij} \frac{I}{{6\pi \eta R}} + (1 - \delta _{ij} | 
| 1955 | – | )T_{ij} | 
| 1956 | – | \] | 
| 1957 | – | where $\delta _{ij}$ is the Kronecker delta function. Inverting the matrix | 
| 1958 | – | $B$, we obtain | 
| 1959 | – |  | 
| 1960 | – | \[ | 
| 1961 | – | C = B^{ - 1}  = \left( {\begin{array}{*{20}c} | 
| 1962 | – | {C_{11} } &  \ldots  & {C_{1N} }  \\ | 
| 1963 | – | \vdots  &  \ddots  &  \vdots   \\ | 
| 1964 | – | {C_{N1} } &  \cdots  & {C_{NN} }  \\ | 
| 1965 | – | \end{array}} \right) | 
| 1966 | – | \] | 
| 1967 | – | which can be partitioned into $N \times N$ blocks of $3 \times 3$ matrices | 
| 1968 | – | $C_{ij}$. With the help of $C_{ij}$ and skew matrix $U_i$ | 
| 1969 | – | \[ | 
| 1970 | – | U_i  = \left( {\begin{array}{*{20}c} | 
| 1971 | – | 0 & { - z_i } & {y_i }  \\ | 
| 1972 | – | {z_i } & 0 & { - x_i }  \\ | 
| 1973 | – | { - y_i } & {x_i } & 0  \\ | 
| 1974 | – | \end{array}} \right) | 
| 1975 | – | \] | 
| 1976 | – | where $x_i$, $y_i$, $z_i$ are the components of the vector joining | 
| 1977 | – | bead $i$ and the origin $O$. Hence, the elements of the resistance | 
| 1978 | – | tensor at an arbitrary origin $O$ can be written as | 
| 1979 | – | \begin{equation} | 
| 1980 | – | \begin{array}{l} | 
| 1981 | – | \Xi ^{tt}  = \sum\limits_i {\sum\limits_j {C_{ij} } } , \\ | 
| 1982 | – | \Xi ^{tr}  = \Xi ^{rt}  = \sum\limits_i {\sum\limits_j {U_i C_{ij} } } , \\ | 
| 1983 | – | \Xi ^{rr}  =  - \sum\limits_i {\sum\limits_j {U_i C_{ij} U_j } }  \\ | 
| 1984 | – | \end{array} | 
| 1985 | – | \label{introEquation:ResistanceTensorArbitraryOrigin} | 
| 1986 | – | \end{equation} | 
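|  | – |  | 
|  | – | As a minimal numerical sketch of this procedure (illustrative only, | 
|  | – | not a reference to any existing package), the assembly of $B$, its | 
|  | – | inversion to $C$, and the double sums of Equation | 
|  | – | \ref{introEquation:ResistanceTensorArbitraryOrigin} might look as | 
|  | – | follows, assuming bead positions relative to $O$, bead radii | 
|  | – | $\sigma_i$, and one of the pair tensors above supplied as a callable | 
|  | – | \texttt{mobility(i, j)}. | 
|  | – | \begin{verbatim} | 
|  | – | import numpy as np | 
|  | – |  | 
|  | – | def resistance_tensors(positions, sigma, eta, mobility): | 
|  | – |     """Build B, invert it, and contract the 3x3 blocks C_ij into | 
|  | – |     Xi^tt, Xi^tr and Xi^rr at the origin O (sketch only).""" | 
|  | – |     N = len(positions) | 
|  | – |     B = np.zeros((3 * N, 3 * N)) | 
|  | – |     for i in range(N): | 
|  | – |         for j in range(N): | 
|  | – |             if i == j: | 
|  | – |                 block = np.eye(3) / (6.0 * np.pi * eta * sigma[i]) | 
|  | – |             else: | 
|  | – |                 block = mobility(i, j)   # T_ij from one of the forms above | 
|  | – |             B[3*i:3*i+3, 3*j:3*j+3] = block | 
|  | – |     C = np.linalg.inv(B) | 
|  | – |  | 
|  | – |     def U(r):   # skew matrix U_i built from the bead position | 
|  | – |         x, y, z = r | 
|  | – |         return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]) | 
|  | – |  | 
|  | – |     Xi_tt = np.zeros((3, 3)) | 
|  | – |     Xi_tr = np.zeros((3, 3)) | 
|  | – |     Xi_rr = np.zeros((3, 3)) | 
|  | – |     for i in range(N): | 
|  | – |         for j in range(N): | 
|  | – |             Cij = C[3*i:3*i+3, 3*j:3*j+3] | 
|  | – |             Xi_tt += Cij | 
|  | – |             Xi_tr += U(positions[i]) @ Cij | 
|  | – |             Xi_rr -= U(positions[i]) @ Cij @ U(positions[j]) | 
|  | – |     return Xi_tt, Xi_tr, Xi_rr | 
|  | – | \end{verbatim} | 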
| 1987 | – |  | 
| 1988 | – | The resistance tensors depend on the origin to which they refer. The | 
| 1989 | – | proper location for applying the friction force is the center of | 
| 1990 | – | resistance (reaction), at which the trace of the rotational | 
| 1991 | – | resistance tensor, $ \Xi ^{rr}$, reaches its minimum. Mathematically, | 
| 1992 | – | the center of resistance is defined as the unique point of the rigid | 
| 1993 | – | body at which the translation-rotation coupling tensor is symmetric, | 
| 1994 | – | \begin{equation} | 
| 1995 | – | \Xi^{tr}  = \left( {\Xi^{tr} } \right)^T | 
| 1996 | – | \label{introEquation:definitionCR} | 
| 1997 | – | \end{equation} | 
| 1998 | – | From Equation \ref{introEquation:ResistanceTensorArbitraryOrigin}, | 
| 1999 | – | it is easy to see that the translational resistance tensor is | 
| 2000 | – | origin independent, while the rotational resistance tensor and the | 
| 2001 | – | translation-rotation coupling resistance tensor depend on the | 
| 2002 | – | origin. Given the resistance tensor at an arbitrary origin $O$ and a | 
| 2003 | – | vector $r_{OP}(x_{OP}, y_{OP}, z_{OP})$ from $O$ to $P$, we can | 
| 2004 | – | obtain the resistance tensor at $P$ by | 
| 2005 | – | \begin{equation} | 
| 2006 | – | \begin{array}{l} | 
| 2007 | – | \Xi _P^{tt}  = \Xi _O^{tt}  \\ | 
| 2008 | – | \Xi _P^{tr}  = \Xi _P^{rt}  = \Xi _O^{tr}  - U_{OP} \Xi _O^{tt}  \\ | 
| 2009 | – | \Xi _P^{rr}  = \Xi _O^{rr}  - U_{OP} \Xi _O^{tt} U_{OP}  + \Xi _O^{tr} U_{OP}  - U_{OP} \left( {\Xi _O^{tr} } \right)^T  \\ | 
| 2010 | – | \end{array} | 
| 2011 | – | \label{introEquation:resistanceTensorTransformation} | 
| 2012 | – | \end{equation} | 
| 2013 | – | where | 
| 2014 | – | \[ | 
| 2015 | – | U_{OP}  = \left( {\begin{array}{*{20}c} | 
| 2016 | – | 0 & { - z_{OP} } & {y_{OP} }  \\ | 
| 2017 | – | {z_{OP} } & 0 & { - x_{OP} }  \\ | 
| 2018 | – | { - y_{OP} } & {x_{OP} } & 0  \\ | 
| 2019 | – | \end{array}} \right) | 
| 2020 | – | \] | 
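|  | – | A correspondingly minimal sketch of this origin transformation, | 
|  | – | assuming the $3 \times 3$ blocks computed above (names are | 
|  | – | illustrative only): | 
|  | – | \begin{verbatim} | 
|  | – | import numpy as np | 
|  | – |  | 
|  | – | def skew(r): | 
|  | – |     """Skew matrix U_OP built from the vector r_OP = (x, y, z).""" | 
|  | – |     x, y, z = r | 
|  | – |     return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]) | 
|  | – |  | 
|  | – | def translate_resistance(Xi_tt, Xi_tr, Xi_rr, r_OP): | 
|  | – |     """Move the resistance tensors from O to P | 
|  | – |     (Eq. resistanceTensorTransformation); sketch only.""" | 
|  | – |     U = skew(r_OP) | 
|  | – |     Xi_tt_P = Xi_tt | 
|  | – |     Xi_tr_P = Xi_tr - U @ Xi_tt | 
|  | – |     Xi_rr_P = Xi_rr - U @ Xi_tt @ U + Xi_tr @ U - U @ Xi_tr.T | 
|  | – |     return Xi_tt_P, Xi_tr_P, Xi_rr_P | 
|  | – | \end{verbatim} | 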
| 2021 | – | Using Equations \ref{introEquation:definitionCR} and | 
| 2022 | – | \ref{introEquation:resistanceTensorTransformation}, one can locate | 
| 2023 | – | the position of the center of resistance, | 
| 2024 | – | \[ | 
| 2025 | – | \left( \begin{array}{l} | 
| 2026 | – | x_{OR}  \\ | 
| 2027 | – | y_{OR}  \\ | 
| 2028 | – | z_{OR}  \\ | 
| 2029 | – | \end{array} \right) = \left( {\begin{array}{*{20}c} | 
| 2030 | – | {(\Xi _O^{rr} )_{yy}  + (\Xi _O^{rr} )_{zz} } & { - (\Xi _O^{rr} )_{xy} } & { - (\Xi _O^{rr} )_{xz} }  \\ | 
| 2031 | – | { - (\Xi _O^{rr} )_{xy} } & {(\Xi _O^{rr} )_{zz}  + (\Xi _O^{rr} )_{xx} } & { - (\Xi _O^{rr} )_{yz} }  \\ | 
| 2032 | – | { - (\Xi _O^{rr} )_{xz} } & { - (\Xi _O^{rr} )_{yz} } & {(\Xi _O^{rr} )_{xx}  + (\Xi _O^{rr} )_{yy} }  \\ | 
| 2033 | – | \end{array}} \right)^{ - 1} \left( \begin{array}{l} | 
| 2034 | – | (\Xi _O^{tr} )_{yz}  - (\Xi _O^{tr} )_{zy}  \\ | 
| 2035 | – | (\Xi _O^{tr} )_{zx}  - (\Xi _O^{tr} )_{xz}  \\ | 
| 2036 | – | (\Xi _O^{tr} )_{xy}  - (\Xi _O^{tr} )_{yx}  \\ | 
| 2037 | – | \end{array} \right). | 
| 2038 | – | \] | 
| 2039 | – | where $x_{OR}$, $y_{OR}$, $z_{OR}$ are the components of the vector | 
| 2040 | – | joining the center of resistance $R$ and the origin $O$. | 
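|  | – | The $3 \times 3$ linear system above can be solved directly for | 
|  | – | $r_{OR}$; a final illustrative sketch (with the same caveats as | 
|  | – | before) is | 
|  | – | \begin{verbatim} | 
|  | – | import numpy as np | 
|  | – |  | 
|  | – | def center_of_resistance(Xi_tr, Xi_rr): | 
|  | – |     """Solve for the vector r_OR from the origin O to the center | 
|  | – |     of resistance R, given Xi^tr and Xi^rr at O (sketch only).""" | 
|  | – |     A = np.array([ | 
|  | – |         [Xi_rr[1, 1] + Xi_rr[2, 2], -Xi_rr[0, 1], -Xi_rr[0, 2]], | 
|  | – |         [-Xi_rr[0, 1], Xi_rr[2, 2] + Xi_rr[0, 0], -Xi_rr[1, 2]], | 
|  | – |         [-Xi_rr[0, 2], -Xi_rr[1, 2], Xi_rr[0, 0] + Xi_rr[1, 1]], | 
|  | – |     ]) | 
|  | – |     b = np.array([ | 
|  | – |         Xi_tr[1, 2] - Xi_tr[2, 1], | 
|  | – |         Xi_tr[2, 0] - Xi_tr[0, 2], | 
|  | – |         Xi_tr[0, 1] - Xi_tr[1, 0], | 
|  | – |     ]) | 
|  | – |     return np.linalg.solve(A, b) | 
|  | – | \end{verbatim} | 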