| 1 | \chapter{\label{chapt:introduction}INTRODUCTION AND THEORETICAL BACKGROUND} | 
| 2 |  | 
| 3 | \section{\label{introSection:classicalMechanics}Classical | 
| 4 | Mechanics} | 
| 5 |  | 
Molecular Dynamics simulations are closely related to Classical
Mechanics: they are carried out by integrating the equations of
motion for a given system of particles. There are three fundamental
ideas behind classical mechanics. First, one can determine the state
of a mechanical system at any time of interest; second, all the
mechanical properties of the system at that time can be determined
by combining the knowledge of the properties of the system with the
specification of this state; finally, the specification of the
state, when further combined with the laws of mechanics, is
sufficient to predict the future behavior of the system.
| 16 |  | 
| 17 | \subsection{\label{introSection:newtonian}Newtonian Mechanics} | 
Newton's three laws of motion, which govern the motion of particles,
are the foundation of classical mechanics. Newton's first law
defines a class of inertial frames: reference frames in which a
particle not interacting with other bodies moves with constant
velocity. With respect to an inertial frame, Newton's second law has
the form
| 24 | \begin{equation} | 
F = \frac {dp}{dt} = \frac {d(mv)}{dt}
| 26 | \label{introEquation:newtonSecondLaw} | 
| 27 | \end{equation} | 
A point mass interacting with other bodies accelerates along the
direction of the net force acting on it. Let $F_{ij}$ be the force
that particle $i$ exerts on particle $j$, and $F_{ji}$ be the force
that particle $j$ exerts on particle $i$. Newton's third law states
that
| 33 | \begin{equation} | 
| 34 | F_{ij} = -F_{ji} | 
| 35 | \label{introEquation:newtonThirdLaw} | 
| 36 | \end{equation} | 
| 37 |  | 
The conservation laws of Newtonian Mechanics play very important
roles in solving mechanics problems. The linear momentum of a
particle is conserved if it is free, \textit{i.e.}, if it
experiences no net force. The second conservation theorem concerns
the angular momentum of a particle. The angular momentum $L$ of a
particle with respect to an origin from which $r$ is measured is
defined to be
| 44 | \begin{equation} | 
| 45 | L \equiv r \times p \label{introEquation:angularMomentumDefinition} | 
| 46 | \end{equation} | 
The torque $N$ with respect to the same origin is defined to be
| 48 | \begin{equation} | 
| 49 | N \equiv r \times F \label{introEquation:torqueDefinition} | 
| 50 | \end{equation} | 
| 51 | Differentiating Eq.~\ref{introEquation:angularMomentumDefinition}, | 
| 52 | \[ | 
| 53 | \dot L = \frac{d}{{dt}}(r \times p) = (\dot r \times p) + (r \times | 
| 54 | \dot p) | 
| 55 | \] | 
| 56 | since | 
| 57 | \[ | 
| 58 | \dot r \times p = \dot r \times mv = m\dot r \times \dot r \equiv 0 | 
| 59 | \] | 
| 60 | thus, | 
| 61 | \begin{equation} | 
| 62 | \dot L = r \times \dot p = N | 
| 63 | \end{equation} | 
If there are no external torques acting on a body, its angular
momentum is conserved. The last conservation theorem states that if
all forces are conservative, the total energy
| 67 | \begin{equation}E = T + V \label{introEquation:energyConservation} | 
| 68 | \end{equation} | 
is conserved. All of these conserved quantities are important
factors in determining the quality of a numerical integration scheme
for rigid bodies \cite{Dullweber1997}.
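
As a simple numerical illustration of these conservation laws (a
minimal Python sketch, not part of the derivation above; the
central-force model, units and parameter values are arbitrary), the
following code propagates a particle under a torque-free central
force and verifies that $L = r \times p$ remains essentially
constant:
\begin{verbatim}
import numpy as np

def central_force(r, k=1.0):
    # attractive central force F = -k r / |r|^3; exerts no torque about the origin
    return -k * r / np.linalg.norm(r)**3

m, dt = 1.0, 1.0e-3
r = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

L_initial = np.cross(r, m * v)
for _ in range(10000):
    v += 0.5 * dt * central_force(r) / m   # half kick
    r += dt * v                            # drift
    v += 0.5 * dt * central_force(r) / m   # half kick
L_final = np.cross(r, m * v)

print("initial L:", L_initial)
print("final   L:", L_final)
\end{verbatim}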
| 72 |  | 
| 73 | \subsection{\label{introSection:lagrangian}Lagrangian Mechanics} | 
| 74 |  | 
Newtonian Mechanics suffers from two important limitations. First,
it describes the motion of particles in special Cartesian coordinate
systems, which makes it awkward to apply to systems described by
generalized or constrained coordinates. The second limitation
becomes obvious when we try to describe systems with large numbers
of particles. It becomes very difficult to predict the properties of
the system by carrying out calculations involving each individual
interaction between all the particles, even if we know all of the
details of the interaction. In order to overcome some of the
practical difficulties which arise in attempts to apply Newton's
equations to complex systems, alternative procedures may be
developed.
| 85 |  | 
| 86 | \subsubsection{\label{introSection:halmiltonPrinciple}Hamilton's | 
| 87 | Principle} | 
| 88 |  | 
| 89 | Hamilton introduced the dynamical principle upon which it is | 
| 90 | possible to base all of mechanics and, indeed, most of classical | 
physics. Hamilton's Principle may be stated as follows:
| 92 |  | 
| 93 | The actual trajectory, along which a dynamical system may move from | 
| 94 | one point to another within a specified time, is derived by finding | 
| 95 | the path which minimizes the time integral of the difference between | 
| 96 | the kinetic, $K$, and potential energies, $U$ \cite{Tolman1979}. | 
| 97 | \begin{equation} | 
| 98 | \delta \int_{t_1 }^{t_2 } {(K - U)dt = 0} , | 
| 99 | \label{introEquation:halmitonianPrinciple1} | 
| 100 | \end{equation} | 
| 101 |  | 
For simple mechanical systems, where the forces acting on the
different parts are derivable from a potential and the velocities
are small compared with the speed of light, the Lagrangian function
$L$ can be defined as the difference between the kinetic energy of
the system and its potential energy,
| 107 | \begin{equation} | 
| 108 | L \equiv K - U = L(q_i ,\dot q_i ) , | 
| 109 | \label{introEquation:lagrangianDef} | 
| 110 | \end{equation} | 
| 111 | then Eq.~\ref{introEquation:halmitonianPrinciple1} becomes | 
| 112 | \begin{equation} | 
| 113 | \delta \int_{t_1 }^{t_2 } {L dt = 0} , | 
| 114 | \label{introEquation:halmitonianPrinciple2} | 
| 115 | \end{equation} | 
| 116 |  | 
| 117 | \subsubsection{\label{introSection:equationOfMotionLagrangian}The | 
| 118 | Equations of Motion in Lagrangian Mechanics} | 
| 119 |  | 
For a holonomic system with $f$ degrees of freedom, the equations of
motion in Lagrangian form are
| 122 | \begin{equation} | 
| 123 | \frac{d}{{dt}}\frac{{\partial L}}{{\partial \dot q_i }} - | 
| 124 | \frac{{\partial L}}{{\partial q_i }} = 0,{\rm{ }}i = 1, \ldots,f | 
| 125 | \label{introEquation:eqMotionLagrangian} | 
| 126 | \end{equation} | 
where $q_{i}$ is a generalized coordinate and $\dot{q}_{i}$ is the
corresponding generalized velocity.
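
As an illustrative sketch (assuming, purely for the example, a
one-dimensional harmonic oscillator with $L = \frac{1}{2}m\dot q^{2}
- \frac{1}{2}kq^{2}$), Eq.~\ref{introEquation:eqMotionLagrangian}
can be applied symbolically in Python to recover the familiar
equation of motion $m\ddot q + kq = 0$:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)
qdot = sp.diff(q, t)

# Lagrangian L = K - U for a 1-D harmonic oscillator
L = sp.Rational(1, 2)*m*qdot**2 - sp.Rational(1, 2)*k*q**2

# Euler-Lagrange equation: d/dt(dL/dqdot) - dL/dq = 0
eom = sp.diff(sp.diff(L, qdot), t) - sp.diff(L, q)
print(sp.simplify(eom))   # k*q(t) + m*Derivative(q(t), (t, 2))
\end{verbatim}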
| 129 |  | 
| 130 | \subsection{\label{introSection:hamiltonian}Hamiltonian Mechanics} | 
| 131 |  | 
| 132 | Arising from Lagrangian Mechanics, Hamiltonian Mechanics was | 
| 133 | introduced by William Rowan Hamilton in 1833 as a re-formulation of | 
| 134 | classical mechanics. If the potential energy of a system is | 
| 135 | independent of generalized velocities, the generalized momenta can | 
| 136 | be defined as | 
| 137 | \begin{equation} | 
| 138 | p_i = \frac{\partial L}{\partial \dot q_i} | 
| 139 | \label{introEquation:generalizedMomenta} | 
| 140 | \end{equation} | 
| 141 | The Lagrange equations of motion are then expressed by | 
| 142 | \begin{equation} | 
\dot p_i  = \frac{{\partial L}}{{\partial q_i }}
| 144 | \label{introEquation:generalizedMomentaDot} | 
| 145 | \end{equation} | 
| 146 |  | 
| 147 | With the help of the generalized momenta, we may now define a new | 
| 148 | quantity $H$ by the equation | 
| 149 | \begin{equation} | 
| 150 | H = \sum\limits_k {p_k \dot q_k }  - L , | 
| 151 | \label{introEquation:hamiltonianDefByLagrangian} | 
| 152 | \end{equation} | 
| 153 | where $ \dot q_1  \ldots \dot q_f $ are generalized velocities and | 
| 154 | $L$ is the Lagrangian function for the system. | 
| 155 |  | 
| 156 | Differentiating Eq.~\ref{introEquation:hamiltonianDefByLagrangian}, | 
| 157 | one can obtain | 
| 158 | \begin{equation} | 
| 159 | dH = \sum\limits_k {\left( {p_k d\dot q_k  + \dot q_k dp_k  - | 
| 160 | \frac{{\partial L}}{{\partial q_k }}dq_k  - \frac{{\partial | 
| 161 | L}}{{\partial \dot q_k }}d\dot q_k } \right)}  - \frac{{\partial | 
| 162 | L}}{{\partial t}}dt \label{introEquation:diffHamiltonian1} | 
| 163 | \end{equation} | 
Making use of Eq.~\ref{introEquation:generalizedMomenta}, the first
and fourth terms in the parentheses cancel. Therefore,
| 166 | Eq.~\ref{introEquation:diffHamiltonian1} can be rewritten as | 
| 167 | \begin{equation} | 
| 168 | dH = \sum\limits_k {\left( {\dot q_k dp_k  - \dot p_k dq_k } | 
| 169 | \right)}  - \frac{{\partial L}}{{\partial t}}dt | 
| 170 | \label{introEquation:diffHamiltonian2} | 
| 171 | \end{equation} | 
By identifying the coefficients of $dq_k$, $dp_k$ and $dt$, we can
find
| 174 | \begin{equation} | 
\frac{{\partial H}}{{\partial p_k }} = \dot q_k
| 176 | \label{introEquation:motionHamiltonianCoordinate} | 
| 177 | \end{equation} | 
| 178 | \begin{equation} | 
\frac{{\partial H}}{{\partial q_k }} =  - \dot p_k
| 180 | \label{introEquation:motionHamiltonianMomentum} | 
| 181 | \end{equation} | 
| 182 | and | 
| 183 | \begin{equation} | 
| 184 | \frac{{\partial H}}{{\partial t}} =  - \frac{{\partial L}}{{\partial | 
| 185 | t}} | 
| 186 | \label{introEquation:motionHamiltonianTime} | 
| 187 | \end{equation} | 
| 188 |  | 
Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
equations of motion. Due to their symmetric form, they are also
known as the canonical equations of motion \cite{Goldstein2001}.
| 193 |  | 
An important difference between the Lagrangian approach and the
Hamiltonian approach is that the Lagrangian is considered to be a
function of the generalized velocities $\dot q_i$ and the
generalized coordinates $q_i$, while the Hamiltonian is considered
to be a function of the generalized momenta $p_i$ and the conjugate
generalized coordinates $q_i$. Hamiltonian Mechanics is more
appropriate for application to statistical mechanics and quantum
mechanics, since it treats the coordinates and momenta as
independent variables and works only with first-order differential
equations\cite{Marion1990}.
| 204 |  | 
In Newtonian Mechanics, a system described by conservative forces
conserves the total energy
(Eq.~\ref{introEquation:energyConservation}). Similarly, Hamilton's
equations of motion conserve the total Hamiltonian,
| 209 | \begin{equation} | 
| 210 | \frac{{dH}}{{dt}} = \sum\limits_i {\left( {\frac{{\partial | 
| 211 | H}}{{\partial q_i }}\dot q_i  + \frac{{\partial H}}{{\partial p_i | 
| 212 | }}\dot p_i } \right)}  = \sum\limits_i {\left( {\frac{{\partial | 
| 213 | H}}{{\partial q_i }}\frac{{\partial H}}{{\partial p_i }} - | 
| 214 | \frac{{\partial H}}{{\partial p_i }}\frac{{\partial H}}{{\partial | 
| 215 | q_i }}} \right) = 0} \label{introEquation:conserveHalmitonian} | 
| 216 | \end{equation} | 
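
To make this concrete, a minimal Python sketch (assuming a
one-dimensional harmonic oscillator, $H = p^2/2m + kq^2/2$, with
arbitrary parameter values and tolerances) integrates Hamilton's
equations numerically and confirms that the Hamiltonian stays
essentially constant:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 1.0

def hamiltonian(q, p):
    return p**2 / (2*m) + 0.5*k*q**2

def hamiltons_equations(t, y):
    q, p = y
    return [p / m,        # dq/dt =  dH/dp
            -k * q]       # dp/dt = -dH/dq

sol = solve_ivp(hamiltons_equations, (0.0, 50.0), [1.0, 0.0],
                rtol=1e-10, atol=1e-12)
print("H(start) =", hamiltonian(*sol.y[:, 0]),
      " H(end) =", hamiltonian(*sol.y[:, -1]))
\end{verbatim}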
| 217 |  | 
| 218 | \section{\label{introSection:statisticalMechanics}Statistical | 
| 219 | Mechanics} | 
| 220 |  | 
The thermodynamic behavior and properties of a Molecular Dynamics
simulation are governed by the principles of Statistical Mechanics.
The following section gives a brief introduction to some of the
Statistical Mechanics concepts and theorems used in this
dissertation.
| 226 |  | 
| 227 | \subsection{\label{introSection:ensemble}Phase Space and Ensemble} | 
| 228 |  | 
Mathematically, phase space is the space which represents all
possible states of a system. Each possible state of the system
corresponds to one unique point in the phase space. For mechanical
systems, the phase space usually consists of all possible values of
the position and momentum variables. Consider a dynamical system
with $f$ degrees of freedom, where each of the $f$ coordinates and
$f$ momenta is assigned to one of $2f$ mutually orthogonal axes; the
phase space of this system is then a $2f$-dimensional space. A
point, $x = (q_1 , \ldots ,q_f ,p_1 , \ldots ,p_f )$, with a unique
set of values of the $2f$ coordinates and momenta is a phase space
vector.
| 239 |  | 
A microscopic state, or microstate, of a classical system is a
specification of the complete phase space vector of the system at
any instant in time. An ensemble is defined as a collection of
systems sharing one or more macroscopic characteristics but each
being in a unique microstate. The complete ensemble is specified by
giving all systems or microstates consistent with the common
macroscopic characteristics of the ensemble. Although the state of
each individual system in the ensemble could be precisely described
at any instant in time by a suitable phase space vector, when using
ensembles for statistical purposes there is no need to maintain
distinctions between individual systems; the numbers of systems at
any time in the different states, which correspond to different
regions of the phase space, are of more interest. Moreover, from the
point of view of statistical mechanics, one would prefer to use
ensembles containing a large enough population of separate members
so that the numbers of systems in such different states can be
regarded as changing continuously as we traverse different regions
of the phase space. The condition of an ensemble at any time can be
regarded as appropriately specified by the density $\rho$ with which
representative points are distributed over the phase space. The
density of distribution for an ensemble with $f$ degrees of freedom
is defined as,
| 262 | \begin{equation} | 
| 263 | \rho  = \rho (q_1 , \ldots ,q_f ,p_1 , \ldots ,p_f ,t). | 
| 264 | \label{introEquation:densityDistribution} | 
| 265 | \end{equation} | 
Governed by the principles of mechanics, the phase points move
through phase space, changing the density at any given point.
Hence, the density of distribution must also be taken as a function
of time.
| 270 |  | 
The number of systems $\delta N$ in a volume element of phase space
at time $t$ is given by,
| 272 | \begin{equation} | 
| 273 | \delta N = \rho (q,p,t)dq_1  \ldots dq_f dp_1  \ldots dp_f. | 
| 274 | \label{introEquation:deltaN} | 
| 275 | \end{equation} | 
Assuming a large enough population of systems, $\delta N$ can be
treated as changing continuously as we go from one region of phase
space to another. By integrating over the whole phase space,
| 280 | \begin{equation} | 
| 281 | N = \int { \ldots \int {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f | 
| 282 | \label{introEquation:totalNumberSystem} | 
| 283 | \end{equation} | 
gives us an expression for the total number of systems. Hence, the
probability per unit volume in phase space can be obtained by,
| 286 | \begin{equation} | 
| 287 | \frac{{\rho (q,p,t)}}{N} = \frac{{\rho (q,p,t)}}{{\int { \ldots \int | 
| 288 | {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}. | 
| 289 | \label{introEquation:unitProbability} | 
| 290 | \end{equation} | 
With the help of Eq.~\ref{introEquation:unitProbability} and
knowledge of the system, it is possible to calculate the average
| 293 | value of any desired quantity which depends on the coordinates and | 
| 294 | momenta of the system. Even when the dynamics of the real system is | 
| 295 | complex, or stochastic, or even discontinuous, the average | 
| 296 | properties of the ensemble of possibilities as a whole may still | 
| 297 | remain well defined. For a classical system in thermal equilibrium | 
| 298 | with its environment, the ensemble average of a mechanical quantity, | 
| 299 | $\langle A(q , p) \rangle_t$, takes the form of an integral over the | 
| 300 | phase space of the system, | 
| 301 | \begin{equation} | 
| 302 | \langle  A(q , p) \rangle_t = \frac{{\int { \ldots \int {A(q,p)\rho | 
| 303 | (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}{{\int { \ldots \int {\rho | 
| 304 | (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }} | 
| 305 | \label{introEquation:ensembelAverage} | 
| 306 | \end{equation} | 
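
As a purely illustrative example of
Eq.~\ref{introEquation:ensembelAverage}, the following sketch
evaluates the ensemble average $\langle q^2 \rangle$ for a
one-dimensional harmonic oscillator on a phase space grid, using,
for the density, an equilibrium Boltzmann weight $\rho \propto
e^{-\beta H}$; the parameter values are arbitrary and the analytic
answer is $k_B T/k$:
\begin{verbatim}
import numpy as np

kB_T, m, k = 1.0, 1.0, 1.0          # arbitrary illustrative values
beta = 1.0 / kB_T

q = np.linspace(-10, 10, 801)
p = np.linspace(-10, 10, 801)
Q, P = np.meshgrid(q, p)

H = P**2 / (2*m) + 0.5*k*Q**2       # Hamiltonian on a phase-space grid
rho = np.exp(-beta * H)             # unnormalized equilibrium density

A = Q**2                            # observable A(q, p) = q^2
average = np.sum(A * rho) / np.sum(rho)
print(average, "vs analytic", kB_T / k)   # both ~ 1.0
\end{verbatim}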
| 307 |  | 
There are several different types of ensembles with different
statistical characteristics. As a function of macroscopic
parameters, such as temperature and volume, the partition function
can be used to describe the statistical properties of a system in
thermodynamic equilibrium.
| 313 |  | 
The microcanonical ensemble (NVE) is an ensemble of systems, each of
which is thermally isolated and conserves its energy. Its partition
function is,
| 317 | \begin{equation} | 
| 318 | \Omega (N,V,E) = e^{\beta TS} \label{introEquation:NVEPartition}. | 
| 319 | \end{equation} | 
The canonical ensemble (NVT) is an ensemble of systems, each of which
| 321 | can share its energy with a large heat reservoir. The distribution | 
| 322 | of the total energy amongst the possible dynamical states is given | 
| 323 | by the partition function, | 
| 324 | \begin{equation} | 
| 325 | \Omega (N,V,T) = e^{ - \beta A} | 
| 326 | \label{introEquation:NVTPartition} | 
| 327 | \end{equation} | 
Here, $A$ is the Helmholtz free energy, which is defined as $ A = U -
TS$. Since most experiments are carried out under constant pressure
conditions, the isothermal-isobaric ensemble (NPT) plays a very
important role in molecular simulation. The isothermal-isobaric
ensemble allows the system to exchange energy with a heat bath of
temperature $T$ and to change the volume as well. Its partition
function is given as
| 334 | \begin{equation} | 
\Delta (N,P,T) =  e^{ - \beta G}.
| 336 | \label{introEquation:NPTPartition} | 
| 337 | \end{equation} | 
Here, $G = U - TS + PV$ is the Gibbs free energy.
| 339 |  | 
| 340 | \subsection{\label{introSection:liouville}Liouville's theorem} | 
| 341 |  | 
Liouville's theorem is the foundation on which statistical mechanics
rests. It describes the time evolution of the phase space
distribution function. In order to calculate the rate of change of
$\rho$, we begin from Eq.~\ref{introEquation:deltaN}. If we consider
the two faces of a phase space volume element perpendicular to the
$q_1$ axis, located at $q_1$ and $q_1 + \delta q_1$, the number of
phase points per unit time leaving through the face at $q_1 + \delta
q_1$ is given by the expression,
| 349 | \begin{equation} | 
| 350 | \left( {\rho  + \frac{{\partial \rho }}{{\partial q_1 }}\delta q_1 } | 
| 351 | \right)\left( {\dot q_1  + \frac{{\partial \dot q_1 }}{{\partial q_1 | 
| 352 | }}\delta q_1 } \right)\delta q_2  \ldots \delta q_f \delta p_1 | 
| 353 | \ldots \delta p_f . | 
| 354 | \end{equation} | 
Combining the contributions from all pairs of faces in phase space, we obtain
| 356 | \begin{equation} | 
| 357 | \frac{{d(\delta N)}}{{dt}} =  - \sum\limits_{i = 1}^f {\left[ {\rho | 
| 358 | \left( {\frac{{\partial \dot q_i }}{{\partial q_i }} + | 
| 359 | \frac{{\partial \dot p_i }}{{\partial p_i }}} \right) + \left( | 
| 360 | {\frac{{\partial \rho }}{{\partial q_i }}\dot q_i  + \frac{{\partial | 
| 361 | \rho }}{{\partial p_i }}\dot p_i } \right)} \right]} \delta q_1 | 
| 362 | \ldots \delta q_f \delta p_1  \ldots \delta p_f . | 
| 363 | \end{equation} | 
| 364 | Differentiating the equations of motion in Hamiltonian formalism | 
| 365 | (\ref{introEquation:motionHamiltonianCoordinate}, | 
| 366 | \ref{introEquation:motionHamiltonianMomentum}), we can show, | 
| 367 | \begin{equation} | 
| 368 | \sum\limits_i {\left( {\frac{{\partial \dot q_i }}{{\partial q_i }} | 
| 369 | + \frac{{\partial \dot p_i }}{{\partial p_i }}} \right)}  = 0 , | 
| 370 | \end{equation} | 
which cancels the first term on the right hand side. Furthermore,
dividing both sides by $ \delta q_1  \ldots \delta q_f \delta p_1
\ldots \delta p_f $, we can write out Liouville's theorem in a
simple form,
| 375 | \begin{equation} | 
| 376 | \frac{{\partial \rho }}{{\partial t}} + \sum\limits_{i = 1}^f | 
| 377 | {\left( {\frac{{\partial \rho }}{{\partial q_i }}\dot q_i  + | 
| 378 | \frac{{\partial \rho }}{{\partial p_i }}\dot p_i } \right)}  = 0 . | 
| 379 | \label{introEquation:liouvilleTheorem} | 
| 380 | \end{equation} | 
| 381 |  | 
Liouville's theorem states that the distribution function is
constant along any trajectory in phase space. In classical
statistical mechanics, since the number of particles in the system
is huge, we may assume that the ensemble is stationary,
| 386 | \begin{equation} | 
| 387 | \frac{{\partial \rho }}{{\partial t}} = 0. | 
| 388 | \label{introEquation:stationary} | 
| 389 | \end{equation} | 
In such a stationary system, the density of distribution $\rho$ can
be connected to the Hamiltonian $H$ through the Maxwell-Boltzmann
distribution,
| 393 | \begin{equation} | 
| 394 | \rho  \propto e^{ - \beta H} | 
| 395 | \label{introEquation:densityAndHamiltonian} | 
| 396 | \end{equation} | 
| 397 |  | 
| 398 | \subsubsection{\label{introSection:phaseSpaceConservation}Conservation of Phase Space} | 
Let us consider a small region of phase space,
| 400 | \begin{equation} | 
| 401 | \delta v = \int { \ldots \int {dq_1 } ...dq_f dp_1 } ..dp_f . | 
| 402 | \end{equation} | 
If this region is small enough, the density $\rho$ can be regarded
as uniform over it. Thus, the number of phase points inside this
region is given by,
| 406 | \begin{equation} | 
| 407 | \delta N = \rho \delta v = \rho \int { \ldots \int {dq_1 } ...dq_f | 
| 408 | dp_1 } ..dp_f. | 
| 409 | \end{equation} | 
Since phase points are neither created nor destroyed, the number of
phase points $\delta N$ inside this co-moving region does not change
with time, so
| 411 | \begin{equation} | 
| 412 | \frac{{d(\delta N)}}{{dt}} = \frac{{d\rho }}{{dt}}\delta v + \rho | 
| 413 | \frac{d}{{dt}}(\delta v) = 0. | 
| 414 | \end{equation} | 
With the help of Liouville's theorem
(\ref{introEquation:liouvilleTheorem}), which implies $d\rho/dt = 0$
along a trajectory, we obtain the principle of the
\emph{conservation of extension in phase space},
| 418 | \begin{equation} | 
| 419 | \frac{d}{{dt}}(\delta v) = \frac{d}{{dt}}\int { \ldots \int {dq_1 } | 
| 420 | ...dq_f dp_1 } ..dp_f  = 0. | 
| 421 | \label{introEquation:volumePreserving} | 
| 422 | \end{equation} | 
| 423 |  | 
| 424 | \subsubsection{\label{introSection:liouvilleInOtherForms}Liouville's Theorem in Other Forms} | 
| 425 |  | 
Liouville's theorem can be expressed in a variety of different forms
which are convenient within different contexts. For any two
functions $F$ and $G$ of the coordinates and momenta of a system,
the Poisson bracket $\{F, G\}$ is defined as
| 430 | \begin{equation} | 
| 431 | \left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial | 
| 432 | F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} - | 
| 433 | \frac{{\partial F}}{{\partial p_i }}\frac{{\partial G}}{{\partial | 
| 434 | q_i }}} \right)}. | 
| 435 | \label{introEquation:poissonBracket} | 
| 436 | \end{equation} | 
Substituting the equations of motion in Hamiltonian formalism
(\ref{introEquation:motionHamiltonianCoordinate},
\ref{introEquation:motionHamiltonianMomentum}) into
(\ref{introEquation:liouvilleTheorem}), we can rewrite Liouville's
theorem using Poisson bracket notation,
| 442 | \begin{equation} | 
| 443 | \left( {\frac{{\partial \rho }}{{\partial t}}} \right) =  - \left\{ | 
| 444 | {\rho ,H} \right\}. | 
| 445 | \label{introEquation:liouvilleTheromInPoissin} | 
| 446 | \end{equation} | 
| 447 | Moreover, the Liouville operator is defined as | 
| 448 | \begin{equation} | 
| 449 | iL = \sum\limits_{i = 1}^f {\left( {\frac{{\partial H}}{{\partial | 
| 450 | p_i }}\frac{\partial }{{\partial q_i }} - \frac{{\partial | 
| 451 | H}}{{\partial q_i }}\frac{\partial }{{\partial p_i }}} \right)} | 
| 452 | \label{introEquation:liouvilleOperator} | 
| 453 | \end{equation} | 
In terms of the Liouville operator, Liouville's equation can also be
expressed as
| 456 | \begin{equation} | 
| 457 | \left( {\frac{{\partial \rho }}{{\partial t}}} \right) =  - iL\rho | 
| 458 | \label{introEquation:liouvilleTheoremInOperator} | 
| 459 | \end{equation} | 
| 460 |  | 
| 461 | \subsection{\label{introSection:ergodic}The Ergodic Hypothesis} | 
| 462 |  | 
Various thermodynamic properties can be calculated from a Molecular
Dynamics simulation. By comparing experimental values with the
calculated properties, one can determine the accuracy of the
simulation and the quality of the underlying model. However, both
experiment and computer simulation are usually performed over a
certain time interval, and the measurements are averaged over that
period, which differs from the ensemble average of a many-body
system in Statistical Mechanics. Fortunately, the Ergodic Hypothesis
makes a connection between the time average and the ensemble
average. It states that the time average and the average over the
statistical ensemble are identical \cite{Frenkel1996, Leach2001}:
| 474 | \begin{equation} | 
\langle A(q , p) \rangle_t = \mathop {\lim }\limits_{T \to \infty }
\frac{1}{T}\int\limits_0^T {A(q(t),p(t))\,dt}  = \int\limits_\Gamma
{A(q,p)\,\rho (q,p)\,dq\,dp}
| 478 | \end{equation} | 
where $\langle  A(q , p) \rangle_t$ is an equilibrium value of a
physical quantity and $\rho (q, p)$ is the equilibrium distribution
function. If an observation is averaged over a sufficiently long
time (longer than the relaxation time), all accessible microstates
in phase space are assumed to be equally probed, giving a properly
weighted statistical average. This allows the researcher freedom of
choice when deciding how best to measure a given observable. If an
ensemble-averaged approach is most appropriate, Monte Carlo
techniques\cite{Metropolis1949} can be utilized. Or, if the system
lends itself to a time-averaging approach, the Molecular Dynamics
techniques in Sec.~\ref{introSection:molecularDynamics} will be the
best choice\cite{Frenkel1996}.
| 492 |  | 
| 493 | \section{\label{introSection:geometricIntegratos}Geometric Integrators} | 
A variety of numerical integrators have been proposed to simulate
the motions of such systems. They usually begin with initial
conditions and move the objects in the direction governed by the
differential equations. However, most of them ignore the hidden
physical laws contained within the equations. Since 1990, geometric
integrators, which preserve various phase-flow invariants such as
symplectic structure, volume and time reversal symmetry, have been
developed to address this issue\cite{Dullweber1997, McLachlan1998,
Leimkuhler1999}. The velocity Verlet method, which happens to be a
simple example of a symplectic integrator, continues to gain
popularity in the molecular dynamics community. This fact can be
partly explained by its geometric nature.
| 506 |  | 
| 507 | \subsection{\label{introSection:symplecticManifold}Symplectic Manifold} | 
A \emph{manifold} is an abstract mathematical space. It locally
looks like Euclidean space, but when viewed globally, it may have a
more complicated structure. A good example of a manifold is the
surface of the Earth. It seems to be flat locally, but it is round
if viewed as a whole. A \emph{differentiable manifold} (also known
as a \emph{smooth manifold}) is a manifold with an open cover in
which the covering neighborhoods are all smoothly isomorphic to one
another. In other words, it is possible to apply calculus on a
\emph{differentiable manifold}. A \emph{symplectic manifold} is
defined as a pair $(M, \omega)$ consisting of a
\emph{differentiable manifold} $M$ and a closed, non-degenerate,
bilinear symplectic form, $\omega$. A symplectic form on a vector
space $V$ is a function $\omega(x, y)$ which satisfies
$\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+
\lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and
$\omega(x, x) = 0$. The cross product operation on vectors is an
example of a symplectic form.
| 525 |  | 
One of the motivations to study \emph{symplectic manifolds} in
Hamiltonian Mechanics is that a manifold can represent all possible
configurations of the system, and the phase space of the system can
then be described by its cotangent bundle. Every symplectic manifold
is even dimensional. For instance, in Hamilton's equations,
coordinates and momenta always appear in pairs.
| 532 |  | 
| 533 | Let  $(M,\omega)$ and $(N, \eta)$ be symplectic manifolds. A map | 
| 534 | \[ | 
| 535 | f : M \rightarrow N | 
| 536 | \] | 
is a \emph{symplectomorphism} if it is a \emph{diffeomorphism} and
the \emph{pullback} of $\eta$ under $f$ is equal to $\omega$. A
canonical transformation is an example of a symplectomorphism in
classical mechanics.
| 541 |  | 
| 542 | \subsection{\label{introSection:ODE}Ordinary Differential Equations} | 
| 543 |  | 
For an ordinary differential equation system defined as
| 545 | \begin{equation} | 
| 546 | \dot x = f(x) | 
| 547 | \end{equation} | 
where $x = (q ,p)^T$, this system is a canonical Hamiltonian system if
| 549 | \begin{equation} | 
f(x) = J\nabla _x H(x).
| 551 | \end{equation} | 
Here $H = H (q, p)$ is the Hamiltonian function and $J$ is the
skew-symmetric matrix
| 554 | \begin{equation} | 
| 555 | J = \left( {\begin{array}{*{20}c} | 
| 556 | 0 & I  \\ | 
| 557 | { - I} & 0  \\ | 
| 558 | \end{array}} \right) | 
| 559 | \label{introEquation:canonicalMatrix} | 
| 560 | \end{equation} | 
where $I$ is the identity matrix. Using this notation, the
Hamiltonian system can be rewritten as,
| 563 | \begin{equation} | 
| 564 | \frac{d}{{dt}}x = J\nabla _x H(x) | 
| 565 | \label{introEquation:compactHamiltonian} | 
| 566 | \end{equation}In this case, $f$ is | 
| 567 | called a \emph{Hamiltonian vector field}. | 
| 568 |  | 
| 569 | Another generalization of Hamiltonian dynamics is Poisson | 
| 570 | Dynamics\cite{Olver1986}, | 
| 571 | \begin{equation} | 
| 572 | \dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian} | 
| 573 | \end{equation} | 
The most obvious change is that the matrix $J$ now depends on $x$.
| 575 |  | 
| 576 | \subsection{\label{introSection:exactFlow}Exact Flow} | 
| 577 |  | 
| 578 | Let $x(t)$ be the exact solution of the ODE system, | 
| 579 | \begin{equation} | 
| 580 | \frac{{dx}}{{dt}} = f(x) \label{introEquation:ODE} | 
| 581 | \end{equation} | 
The exact flow (or solution) $\varphi_\tau$ is defined by
| 583 | \[ | 
| 584 | x(t+\tau) =\varphi_\tau(x(t)) | 
| 585 | \] | 
| 586 | where $\tau$ is a fixed time step and $\varphi$ is a map from phase | 
| 587 | space to itself. The flow has the continuous group property, | 
| 588 | \begin{equation} | 
| 589 | \varphi _{\tau _1 }  \circ \varphi _{\tau _2 }  = \varphi _{\tau _1 | 
| 590 | + \tau _2 } . | 
| 591 | \end{equation} | 
| 592 | In particular, | 
| 593 | \begin{equation} | 
| 594 | \varphi _\tau   \circ \varphi _{ - \tau }  = I | 
| 595 | \end{equation} | 
| 596 | Therefore, the exact flow is self-adjoint, | 
| 597 | \begin{equation} | 
| 598 | \varphi _\tau   = \varphi _{ - \tau }^{ - 1}. | 
| 599 | \end{equation} | 
The exact flow can also be written in terms of an operator,
| 601 | \begin{equation} | 
| 602 | \varphi _\tau  (x) = e^{\tau \sum\limits_i {f_i (x)\frac{\partial | 
| 603 | }{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x). | 
| 604 | \label{introEquation:exponentialOperator} | 
| 605 | \end{equation} | 
| 606 |  | 
In most cases, it is not easy to find the exact flow $\varphi_\tau$.
Instead, we use an approximate map, $\psi_\tau$, which is usually
called an integrator. The integrator $\psi_\tau$ is of order $p$ if
its Taylor series agrees with that of the exact flow up to order
$p$,
\begin{equation}
\psi_\tau (x) = \varphi_\tau (x) + O(\tau^{p+1}) .
\end{equation}
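
A quick numerical check of this definition (an illustrative sketch,
assuming a harmonic oscillator whose exact flow is a rotation in
phase space) compares the one-step error of the explicit Euler map,
which is of order one, with that of the velocity Verlet map, which
is of order two, as the step size is halved; the error ratios
approach $2^{2}$ and $2^{3}$ respectively, consistent with local
errors of $O(\tau^{p+1})$:
\begin{verbatim}
import numpy as np

def exact_flow(q, p, tau):           # harmonic oscillator, m = k = 1
    return (q*np.cos(tau) + p*np.sin(tau),
            -q*np.sin(tau) + p*np.cos(tau))

def euler(q, p, tau):                # order-1 integrator
    return q + tau*p, p - tau*q

def verlet(q, p, tau):               # order-2 (velocity Verlet) integrator
    p_half = p - 0.5*tau*q
    q_new = q + tau*p_half
    return q_new, p_half - 0.5*tau*q_new

q0, p0 = 1.0, 0.0
for method in (euler, verlet):
    errs = []
    for tau in (0.1, 0.05):
        ex = np.array(exact_flow(q0, p0, tau))
        ap = np.array(method(q0, p0, tau))
        errs.append(np.linalg.norm(ex - ap))
    print(method.__name__, "error ratio:", errs[0] / errs[1])
\end{verbatim}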
| 614 |  | 
| 615 | \subsection{\label{introSection:geometricProperties}Geometric Properties} | 
| 616 |  | 
The hidden geometric properties\cite{Budd1999, Marsden1998} of an
ODE and its flow play important roles in numerical studies. Many of
them can be found in systems which occur naturally in applications.
| 620 |  | 
Let $\varphi$ be the flow of a Hamiltonian vector field; $\varphi$
is a \emph{symplectic} flow if it satisfies,
| 623 | \begin{equation} | 
| 624 | {\varphi '}^T J \varphi ' = J. | 
| 625 | \end{equation} | 
| 626 | According to Liouville's theorem, the symplectic volume is invariant | 
| 627 | under a Hamiltonian flow, which is the basis for classical | 
| 628 | statistical mechanics. Furthermore, the flow of a Hamiltonian vector | 
| 629 | field on a symplectic manifold can be shown to be a | 
symplectomorphism. For a Poisson system,
| 631 | \begin{equation} | 
| 632 | {\varphi '}^T J \varphi ' = J \circ \varphi | 
| 633 | \end{equation} | 
is the property that must be preserved by the integrator.
| 635 |  | 
It is possible to construct a \emph{volume-preserving} flow for a
source-free ($ \nabla \cdot f = 0 $) ODE; such a flow satisfies $
\det d\varphi  = 1$. One can easily show that a symplectic flow is
volume-preserving.
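
These conditions can also be checked numerically. The following
sketch (illustrative only; it assumes a one-dimensional harmonic
oscillator and the velocity Verlet map discussed later in this
chapter) estimates the Jacobian of a single integration step by
finite differences and tests both ${\varphi'}^T J \varphi' = J$ and
$\det d\varphi = 1$:
\begin{verbatim}
import numpy as np

tau = 0.1

def verlet_step(x, tau=tau):          # x = (q, p); harmonic force F = -q, m = 1
    q, p = x
    p_half = p - 0.5*tau*q
    q_new = q + tau*p_half
    p_new = p_half - 0.5*tau*q_new
    return np.array([q_new, p_new])

# Jacobian of the map by central finite differences at an arbitrary point
x0, h = np.array([0.3, -0.7]), 1e-6
Jac = np.zeros((2, 2))
for j in range(2):
    dx = np.zeros(2); dx[j] = h
    Jac[:, j] = (verlet_step(x0 + dx) - verlet_step(x0 - dx)) / (2*h)

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
print("phi'^T J phi' =\n", Jac.T @ J @ Jac)   # should reproduce J
print("det(dphi) =", np.linalg.det(Jac))      # should be 1
\end{verbatim}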
| 640 |  | 
Changing the variables $y = h(x)$ in the ODE
(Eq.~\ref{introEquation:ODE}) will result in a new system,
| 643 | \[ | 
| 644 | \dot y = \tilde f(y) = ((dh \cdot f)h^{ - 1} )(y). | 
| 645 | \] | 
The vector field $f$ has the reversing symmetry $h$ if $f = - \tilde f$.
| 647 | In other words, the flow of this vector field is reversible if and | 
| 648 | only if $ h \circ \varphi ^{ - 1}  = \varphi  \circ h $. | 
| 649 |  | 
A \emph{first integral}, or conserved quantity, of a general
differential equation is a function $ G:R^{2d}  \to R $ which is
constant for all solutions of the ODE $\frac{{dx}}{{dt}} = f(x)$ ,
| 653 | \[ | 
| 654 | \frac{{dG(x(t))}}{{dt}} = 0. | 
| 655 | \] | 
Using the chain rule, one may obtain,
\[
\frac{{dG(x(t))}}{{dt}} = \sum\limits_i {\frac{{\partial G}}{{\partial x_i }}} f_i (x) = f \cdot \nabla G = 0,
\]
which is the condition for conserving a \emph{first integral}. For a
| 661 | canonical Hamiltonian system, the time evolution of an arbitrary | 
| 662 | smooth function $G$ is given by, | 
| 663 |  | 
\begin{eqnarray}
\frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \nonumber \\
& = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)).
\label{introEquation:firstIntegral1}
\end{eqnarray}
| 669 |  | 
| 670 |  | 
Using Poisson bracket notation, Eq.~\ref{introEquation:firstIntegral1}
can be rewritten as
| 673 | \[ | 
| 674 | \frac{d}{{dt}}G(x(t)) = \left\{ {G,H} \right\}(x(t)). | 
| 675 | \] | 
Therefore, a sufficient condition for $G$ to be a \emph{first
integral} of a Hamiltonian system is
| 678 | \[ | 
| 679 | \left\{ {G,H} \right\} = 0. | 
| 680 | \] | 
As is well known, the Hamiltonian (or energy) $H$ of a Hamiltonian
system is a \emph{first integral}, which is due to the fact that $\{
H,H\}  = 0$.
| 684 |  | 
| 685 | When designing any numerical methods, one should always try to | 
| 686 | preserve the structural properties of the original ODE and its flow. | 
| 687 |  | 
| 688 | \subsection{\label{introSection:constructionSymplectic}Construction of Symplectic Methods} | 
A number of well established and very effective numerical methods
have been successful precisely because of their symplecticity, even
though this fact was not recognized when they were first
constructed. The most famous example is the leapfrog method in
molecular dynamics. In general, symplectic integrators can be
constructed using one of four different approaches:
| 695 | \begin{enumerate} | 
| 696 | \item Generating functions | 
| 697 | \item Variational methods | 
| 698 | \item Runge-Kutta methods | 
| 699 | \item Splitting methods | 
| 700 | \end{enumerate} | 
| 701 |  | 
Generating functions\cite{Channell1990} tend to lead to methods
which are cumbersome and difficult to use. In dissipative systems,
variational methods can capture the decay of energy
accurately\cite{Kane2000}. Because of their geometric instability
against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
methods are not suitable for Hamiltonian systems. Recently, various
high-order explicit Runge-Kutta methods
\cite{Owren1992,Chen2003} have been developed to overcome this
instability. However, due to the computational penalty involved in
implementing the Runge-Kutta methods, they have not attracted much
attention from the Molecular Dynamics community. Instead, splitting
methods have been widely accepted since they exploit natural
decompositions of the system\cite{Tuckerman1992, McLachlan1998}.
| 715 |  | 
| 716 | \subsubsection{\label{introSection:splittingMethod}Splitting Method} | 
| 717 |  | 
The main idea behind splitting methods is to decompose the discrete
flow map $\varphi_h$ into a composition of simpler flows,
| 720 | \begin{equation} | 
| 721 | \varphi _h  = \varphi _{h_1 }  \circ \varphi _{h_2 }  \ldots  \circ | 
| 722 | \varphi _{h_n } | 
| 723 | \label{introEquation:FlowDecomposition} | 
| 724 | \end{equation} | 
where each of the sub-flows is chosen such that it represents a
simpler integration of the system.
| 727 |  | 
| 728 | Suppose that a Hamiltonian system takes the form, | 
| 729 | \[ | 
| 730 | H = H_1 + H_2. | 
| 731 | \] | 
| 732 | Here, $H_1$ and $H_2$ may represent different physical processes of | 
| 733 | the system. For instance, they may relate to kinetic and potential | 
| 734 | energy respectively, which is a natural decomposition of the | 
| 735 | problem. If $H_1$ and $H_2$ can be integrated using exact flows | 
$\varphi_1(t)$ and $\varphi_2(t)$, respectively, a simple
first-order method is then given by the Lie-Trotter formula
| 738 | \begin{equation} | 
| 739 | \varphi _h  = \varphi _{1,h}  \circ \varphi _{2,h}, | 
| 740 | \label{introEquation:firstOrderSplitting} | 
| 741 | \end{equation} | 
| 742 | where $\varphi _h$ is the result of applying the corresponding | 
| 743 | continuous $\varphi _i$ over a time $h$. By definition, as | 
| 744 | $\varphi_i(t)$ is the exact solution of a Hamiltonian system, it | 
| 745 | must follow that each operator $\varphi_i(t)$ is a symplectic map. | 
| 746 | It is easy to show that any composition of symplectic flows yields a | 
| 747 | symplectic map, | 
| 748 | \begin{equation} | 
| 749 | (\varphi '\phi ')^T J\varphi '\phi ' = \phi '^T \varphi '^T J\varphi | 
| 750 | '\phi ' = \phi '^T J\phi ' = J, | 
| 751 | \label{introEquation:SymplecticFlowComposition} | 
| 752 | \end{equation} | 
where $\varphi$ and $\phi$ are both symplectic maps. Thus operator
splitting in this context automatically generates a symplectic map.
| 755 |  | 
The Lie-Trotter splitting (\ref{introEquation:firstOrderSplitting})
introduces local errors proportional to $h^2$, while the Strang
splitting gives a second-order decomposition,
| 759 | \begin{equation} | 
| 760 | \varphi _h  = \varphi _{1,h/2}  \circ \varphi _{2,h}  \circ \varphi | 
| 761 | _{1,h/2} , \label{introEquation:secondOrderSplitting} | 
| 762 | \end{equation} | 
which has a local error proportional to $h^3$. The Strang
splitting's popularity in the molecular simulation community can be
attributed to its symmetric property,
| 766 | \begin{equation} | 
| 767 | \varphi _h^{ - 1} = \varphi _{ - h}. | 
| 768 | \label{introEquation:timeReversible} | 
| 769 | \end{equation} | 
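
The symmetry property in Eq.~\ref{introEquation:timeReversible} can
be verified directly. In the following sketch (illustrative only,
for a one-dimensional harmonic oscillator split into kinetic and
potential parts), one Strang step of size $h$ followed by one of
size $-h$ returns the starting phase point to machine precision,
while the same test with the Lie-Trotter splitting does not:
\begin{verbatim}
def kick(q, p, h):            # exact flow of H2 = V(q) = q^2/2  ->  p -= h*q
    return q, p - h*q

def drift(q, p, h):           # exact flow of H1 = T(p) = p^2/2  ->  q += h*p
    return q + h*p, p

def lie_trotter(q, p, h):     # first-order splitting
    q, p = kick(q, p, h)
    return drift(q, p, h)

def strang(q, p, h):          # symmetric second-order splitting
    q, p = kick(q, p, 0.5*h)
    q, p = drift(q, p, h)
    return kick(q, p, 0.5*h)

x0, h = (0.8, -0.3), 0.2
for step in (lie_trotter, strang):
    q, p = step(*x0, h)       # forward step ...
    q, p = step(q, p, -h)     # ... followed by a backward step
    print(step.__name__, "returns to", (q, p))
\end{verbatim}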
| 770 |  | 
| 771 | \subsubsection{\label{introSection:exampleSplittingMethod}Example of Splitting Method} | 
The classical Hamiltonian for a system consisting of interacting
particles can be written as,
| 774 | \[ | 
| 775 | H = T + V | 
| 776 | \] | 
| 777 | where $T$ is the kinetic energy and $V$ is the potential energy. | 
| 778 | Setting $H_1 = T, H_2 = V$ and applying Strang splitting, one | 
| 779 | obtains the following: | 
| 780 | \begin{align} | 
| 781 | q(\Delta t) &= q(0) + \dot{q}(0)\Delta t + | 
| 782 | \frac{F[q(0)]}{m}\frac{\Delta t^2}{2}, % | 
| 783 | \label{introEquation:Lp10a} \\% | 
| 784 | % | 
| 785 | \dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m} | 
| 786 | \biggl [F[q(0)] + F[q(\Delta t)] \biggr]. % | 
| 787 | \label{introEquation:Lp10b} | 
| 788 | \end{align} | 
where $F[q(t)]$ is the force evaluated at the configuration at time
$t$. This integration scheme is known as \emph{velocity Verlet},
which is symplectic (\ref{introEquation:SymplecticFlowComposition}),
time-reversible (\ref{introEquation:timeReversible}) and
volume-preserving (\ref{introEquation:volumePreserving}). These
geometric properties account for its long-time stability and its
popularity in the community. In practice, the velocity Verlet
integration scheme is most commonly written as below,
| 797 | \begin{align} | 
| 798 | \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &= | 
| 799 | \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)], \label{introEquation:Lp9a}\\% | 
| 800 | % | 
| 801 | q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr ),% | 
| 802 | \label{introEquation:Lp9b}\\% | 
| 803 | % | 
\dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
\frac{\Delta t}{2m}\, F[q(\Delta t)]. \label{introEquation:Lp9c}
| 806 | \end{align} | 
From the preceding splitting, one can see that the integration of
the equations of motion proceeds as follows (a minimal sketch in
code is given after the list):
| 809 | \begin{enumerate} | 
\item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
| 811 |  | 
| 812 | \item Use the half step velocities to move positions one whole step, $\Delta t$. | 
| 813 |  | 
\item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.
| 815 |  | 
| 816 | \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values. | 
| 817 | \end{enumerate} | 
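
A minimal sketch of this loop is given below (illustrative only; it
assumes a generic user-supplied \texttt{force(q)} function, here a
simple harmonic force, and arbitrary parameter values rather than
any particular simulation package):
\begin{verbatim}
import numpy as np

def force(q):                      # placeholder force; here a harmonic spring
    return -q

def velocity_verlet(q, v, m, dt, nsteps):
    f = force(q)
    for _ in range(nsteps):
        v = v + 0.5*dt*f/m         # step 1: half-step velocity update
        q = q + dt*v               # step 2: full-step position update
        f = force(q)               # step 3: forces at the new positions
        v = v + 0.5*dt*f/m         #         complete the velocity move
    return q, v

q, v = np.array([1.0]), np.array([0.0])
q, v = velocity_verlet(q, v, m=1.0, dt=0.01, nsteps=1000)
print(q, v)
\end{verbatim}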
| 818 |  | 
By simply switching the order of the splitting, a new integrator,
the \emph{position Verlet} integrator, can be generated,
| 821 | \begin{align} | 
\dot q(\Delta t) &= \dot q(0) + \frac{\Delta t}{m} F\left[ {q(0) +
\frac{{\Delta t}}{{2}}\dot q(0)} \right], %
| 824 | \label{introEquation:positionVerlet1} \\% | 
| 825 | % | 
| 826 | q(\Delta t) &= q(0) + \frac{{\Delta t}}{2}\left[ {\dot q(0) + \dot | 
| 827 | q(\Delta t)} \right]. % | 
| 828 | \label{introEquation:positionVerlet2} | 
| 829 | \end{align} | 
| 830 |  | 
| 831 | \subsubsection{\label{introSection:errorAnalysis}Error Analysis and Higher Order Methods} | 
| 832 |  | 
The Baker-Campbell-Hausdorff formula can be used to determine the
local error of a splitting method in terms of commutators of the
operators (\ref{introEquation:exponentialOperator}) associated with
the sub-flows. For operators $hX$ and $hY$ which are associated with
$\varphi_1(t)$ and $\varphi_2(t)$ respectively, we have
| 838 | \begin{equation} | 
| 839 | \exp (hX + hY) = \exp (hZ) | 
| 840 | \end{equation} | 
| 841 | where | 
| 842 | \begin{equation} | 
hZ = hX + hY + \frac{{h^2 }}{2}[X,Y] + \frac{{h^3 }}{{12}}\left(
{[X,[X,Y]] + [Y,[Y,X]]} \right) +  \ldots .
| 845 | \end{equation} | 
Here, $[X,Y]$ is the commutator of the operators $X$ and $Y$, given by
| 847 | \[ | 
| 848 | [X,Y] = XY - YX . | 
| 849 | \] | 
Applying the Baker-Campbell-Hausdorff formula\cite{Varadarajan1974}
to the Strang splitting, we can obtain
| 852 | \begin{eqnarray*} | 
| 853 | \exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 [X,Y]/4 + h^2 [Y,X]/4 \\ | 
| 854 | &   & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\ | 
| 855 | &   & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots ) | 
| 856 | \end{eqnarray*} | 
Since $[X,Y] + [Y,X] = 0$ and $[X,X] = 0$, the dominant local
error of the Strang splitting is proportional to $h^3$. The same
procedure can be applied to a general splitting of the form
| 860 | \begin{equation} | 
| 861 | \varphi _{b_m h}^2  \circ \varphi _{a_m h}^1  \circ \varphi _{b_{m - | 
| 862 | 1} h}^2  \circ  \ldots  \circ \varphi _{a_1 h}^1 . | 
| 863 | \end{equation} | 
Careful choice of the coefficients $a_1, \ldots, b_m$ will lead to
higher order methods. Yoshida proposed an elegant way to compose higher
| 866 | order methods based on symmetric splitting\cite{Yoshida1990}. Given | 
| 867 | a symmetric second order base method $ \varphi _h^{(2)} $, a | 
| 868 | fourth-order symmetric method can be constructed by composing, | 
| 869 | \[ | 
| 870 | \varphi _h^{(4)}  = \varphi _{\alpha h}^{(2)}  \circ \varphi _{\beta | 
| 871 | h}^{(2)}  \circ \varphi _{\alpha h}^{(2)} | 
| 872 | \] | 
where $ \alpha  = \frac{1}{{2 - 2^{1/3} }}$ and $ \beta
= - \frac{{2^{1/3} }}{{2 - 2^{1/3} }}$. Moreover, a symmetric
| 875 | integrator $ \varphi _h^{(2n + 2)}$ can be composed by | 
| 876 | \begin{equation} | 
| 877 | \varphi _h^{(2n + 2)}  = \varphi _{\alpha h}^{(2n)}  \circ \varphi | 
| 878 | _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)} | 
| 879 | \end{equation} | 
if the weights are chosen as
\[
\alpha  = \frac{1}{{2 - 2^{1/(2n + 1)} }},\qquad \beta =
 - \frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }} .
\]
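
The following sketch (illustrative only) evaluates these composition
weights for a few values of $n$ and verifies the consistency
condition $2\alpha + \beta = 1$, which guarantees that the composed
method advances the solution by exactly one step $h$:
\begin{verbatim}
def yoshida_weights(n):
    # weights for composing a symmetric order-2n method into order 2n+2
    s = 2.0 ** (1.0 / (2*n + 1))
    alpha = 1.0 / (2.0 - s)
    beta = -s / (2.0 - s)
    return alpha, beta

for n in (1, 2, 3):
    alpha, beta = yoshida_weights(n)
    print(n, alpha, beta, "2*alpha + beta =", 2*alpha + beta)
\end{verbatim}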
| 885 |  | 
| 886 | \section{\label{introSection:molecularDynamics}Molecular Dynamics} | 
| 887 |  | 
As one of the principal tools of molecular modeling, Molecular
Dynamics has proven to be a powerful tool for studying the functions
of biological systems, providing structural, thermodynamic and
dynamical information. The basic idea of molecular dynamics is that
macroscopic properties are related to microscopic behavior, and that
microscopic behavior can be calculated from the trajectories in
simulations. For instance, the instantaneous temperature of a
Hamiltonian system of $N$ particles can be measured by
| 896 | \[ | 
| 897 | T = \sum\limits_{i = 1}^N {\frac{{m_i v_i^2 }}{{fk_B }}} | 
| 898 | \] | 
where $m_i$ and $v_i$ are the mass and velocity of the $i$th
particle respectively, $f$ is the number of degrees of freedom, and
$k_B$ is the Boltzmann constant.
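
As a small illustration (in reduced units, with arbitrary values,
and assuming $f = 3N$ unconstrained degrees of freedom), the
instantaneous temperature can be computed from a set of masses and
velocities as follows:
\begin{verbatim}
import numpy as np

kB = 1.0                                   # Boltzmann constant (reduced units)

def instantaneous_temperature(masses, velocities):
    # T = sum_i m_i v_i^2 / (f kB), with f = 3N degrees of freedom assumed
    kinetic2 = np.sum(masses * np.sum(velocities**2, axis=1))
    f = 3 * len(masses)
    return kinetic2 / (f * kB)

rng = np.random.default_rng(0)
masses = np.ones(100)
velocities = rng.normal(0.0, 1.0, size=(100, 3))   # ~ unit temperature
print(instantaneous_temperature(masses, velocities))
\end{verbatim}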
| 902 |  | 
| 903 | A typical molecular dynamics run consists of three essential steps: | 
| 904 | \begin{enumerate} | 
| 905 | \item Initialization | 
| 906 | \begin{enumerate} | 
| 907 | \item Preliminary preparation | 
| 908 | \item Minimization | 
| 909 | \item Heating | 
| 910 | \item Equilibration | 
| 911 | \end{enumerate} | 
| 912 | \item Production | 
| 913 | \item Analysis | 
| 914 | \end{enumerate} | 
These three individual steps will be covered in the following
sections. Sec.~\ref{introSec:initialSystemSettings} deals with the
initialization of a simulation. Sec.~\ref{introSection:production}
discusses issues in the production run. Sec.~\ref{introSection:Analysis}
provides the theoretical tools for trajectory analysis.
| 920 |  | 
| 921 | \subsection{\label{introSec:initialSystemSettings}Initialization} | 
| 922 |  | 
| 923 | \subsubsection{Preliminary preparation} | 
| 924 |  | 
When selecting the starting structure of a molecule for molecular
simulation, one may retrieve its Cartesian coordinates from public
databases, such as the RCSB Protein Data Bank. Although thousands of
crystal structures of molecules are discovered every year, many more
remain unknown due to the difficulties of purification and
crystallization. Even for a molecule with a known structure, some
important information may be missing. For example, missing hydrogen
atoms which act as donors in hydrogen bonding must be added.
Moreover, in order to include electrostatic interactions, one may
need to specify the partial charges for individual atoms. Under some
circumstances, we may even need to prepare the system in a special
setup. For instance, when studying transport phenomena in membrane
systems, we may prepare the lipids in a bilayer structure instead of
placing them randomly in solvent, since we are not interested in
self-aggregation, which takes a long time to occur.
| 940 |  | 
| 941 | \subsubsection{Minimization} | 
| 942 |  | 
It is quite possible that some of the molecules in the system
resulting from the preliminary preparation overlap with one another.
This close proximity leads to high potential energies, which
consequently jeopardizes any molecular dynamics simulation. To
remove these steric overlaps, one typically performs energy
minimization to find a more reasonable conformation. Several energy
minimization methods have been developed to explore the energy
surface and to locate a local minimum. While converging slowly near
the minimum, the steepest descent method is extremely robust when
systems are far from harmonic. Thus, it is often used to refine
structures from crystallographic data. Relying on the gradient or
Hessian, advanced methods like conjugate gradient and Newton-Raphson
converge rapidly to a local minimum, but become unstable if the
energy surface is far from quadratic. Another factor that must be
taken into account when choosing an energy minimization method is
the size of the system. Steepest descent and conjugate gradient can
deal with models of any size. Because of the computational cost of
calculating the Hessian matrix and the storage capacity required to
hold it, most Newton-Raphson methods cannot be used with very large
models.
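
A bare-bones steepest descent sketch is shown below (illustrative
only; the \texttt{potential} and \texttt{gradient} functions are
placeholders, here a simple quadratic surface, and the fixed step
size is chosen arbitrarily):
\begin{verbatim}
import numpy as np

def potential(x):                 # placeholder energy surface
    return 0.5 * np.sum(x**2)

def gradient(x):                  # its analytic gradient
    return x

def steepest_descent(x, step=0.1, tol=1e-8, max_iter=10000):
    for _ in range(max_iter):
        g = gradient(x)
        if np.linalg.norm(g) < tol:      # converged: gradient ~ 0
            break
        x = x - step * g                 # move downhill along -gradient
    return x

x0 = np.array([3.0, -4.0, 1.5])
xmin = steepest_descent(x0)
print(xmin, potential(xmin))
\end{verbatim}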
| 962 |  | 
| 963 | \subsubsection{Heating} | 
| 964 |  | 
Typically, heating is performed by assigning random velocities
according to a Gaussian distribution corresponding to a given
temperature. Beginning at a lower temperature and gradually
increasing the temperature by assigning larger random velocities, we
eventually set the temperature of the system to the final
temperature at which the simulation will be conducted. During the
heating phase, we should also keep the system from drifting or
rotating as a whole; equivalently, the net linear momentum and
angular momentum of the system should be shifted to zero.
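
A minimal sketch of this initialization is given below (illustrative
only; reduced units, equal masses and a simple Gaussian draw are
assumed, and only the net linear momentum is removed):
\begin{verbatim}
import numpy as np

def initialize_velocities(n_atoms, mass, target_T, kB=1.0, seed=0):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(kB * target_T / mass)          # Gaussian width for target T
    v = rng.normal(0.0, sigma, size=(n_atoms, 3))
    v -= v.mean(axis=0)                            # remove net linear momentum
    return v

v = initialize_velocities(n_atoms=256, mass=1.0, target_T=2.0)
print("net momentum:", v.sum(axis=0))
print("temperature :", (v**2).sum() / (3 * 256))   # m = kB = 1
\end{verbatim}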
| 974 |  | 
| 975 | \subsubsection{Equilibration} | 
| 976 |  | 
| 977 | The purpose of equilibration is to allow the system to evolve | 
| 978 | spontaneously for a period of time and reach equilibrium. The | 
| 979 | procedure is continued until various statistical properties, such as | 
| 980 | temperature, pressure, energy, volume and other structural | 
| 981 | properties \textit{etc}, become independent of time. Strictly | 
| 982 | speaking, minimization and heating are not necessary, provided the | 
| 983 | equilibration process is long enough. However, these steps can serve | 
| 984 | as a means to arrive at an equilibrated structure in an effective | 
| 985 | way. | 
| 986 |  | 
| 987 | \subsection{\label{introSection:production}Production} | 
| 988 |  | 
The production run is the most important step of the simulation, in
which the equilibrated structure is used as a starting point and the
motions of the molecules are collected for later analysis. In order
to capture the macroscopic properties of the system, the molecular
dynamics simulation must be performed in a correct and efficient
way.

The most expensive part of a molecular dynamics simulation is the
calculation of the non-bonded forces, such as van der Waals and
Coulombic forces. For a system of $N$ particles, the complexity of
the algorithm for pair-wise interactions is $O(N^2 )$, which makes
large simulations prohibitive in the absence of any
computation-saving techniques.
| 1001 |  | 
A natural approach to avoid the system size issue is to represent
the bulk behavior with a finite number of particles. However, this
approach suffers from surface effects. To offset these,
\textit{periodic boundary conditions} (see Fig.~\ref{introFig:pbc})
are used to simulate bulk properties with a relatively small
number of particles. In this method, the simulation box is
replicated throughout space to form an infinite lattice. During the
simulation, when a particle moves in the primary cell, its images in
the other cells move in exactly the same direction with exactly the
same orientation. Thus, as a particle leaves the primary cell, one
of its images will enter through the opposite face.
| 1013 | \begin{figure} | 
| 1014 | \centering | 
| 1015 | \includegraphics[width=\linewidth]{pbc.eps} | 
| 1016 | \caption[An illustration of periodic boundary conditions]{A 2-D | 
| 1017 | illustration of periodic boundary conditions. As one particle leaves | 
| 1018 | the left of the simulation box, an image of it enters the right.} | 
| 1019 | \label{introFig:pbc} | 
| 1020 | \end{figure} | 
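
For a cubic box, wrapping coordinates back into the primary cell and
applying the minimum image convention to a pair separation can be
sketched as follows (illustrative only; a cubic box of edge length
\texttt{box} is assumed):
\begin{verbatim}
import numpy as np

box = 10.0                                   # cubic box edge length

def wrap(positions):
    # map coordinates back into the primary cell [0, box)
    return positions - box * np.floor(positions / box)

def minimum_image(r_ij):
    # nearest periodic image of a pair separation vector
    return r_ij - box * np.round(r_ij / box)

r1 = np.array([9.8, 0.2, 5.0])
r2 = np.array([0.1, 9.9, 5.0])
print(wrap(r1 + np.array([10.0, -10.0, 0.0])))   # images map back to r1
print(minimum_image(r1 - r2))                    # short vector across the boundary
\end{verbatim}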
| 1021 |  | 
| 1022 | %cutoff and minimum image convention | 
Another important technique to improve the efficiency of the force
evaluation is to apply a cutoff, where particles farther apart than
a predetermined distance are not included in the calculation
\cite{Frenkel1996}. The use of a cutoff radius will cause a
discontinuity in the potential energy curve. Fortunately, one can
shift the potential to ensure that the potential curve goes smoothly
to zero at the cutoff radius. The cutoff strategy works quite well
for Lennard-Jones interactions because of their short-range nature.
However, simply truncating the electrostatic interaction with the
use of a cutoff has been shown to lead to severe artifacts in
simulations. Ewald summation, in which the slowly and conditionally
convergent Coulomb potential is transformed into direct and
reciprocal sums with rapid and absolute convergence, has proved to
minimize the periodicity artifacts in liquid simulations. Taking
advantage of the fast Fourier transform (FFT) for calculating
discrete Fourier transforms, the particle mesh-based
methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
$O(N^{3/2})$ to $O(N \log N)$. An alternative approach is the
\emph{fast multipole method}\cite{Greengard1987, Greengard1994},
which treats the Coulombic interaction exactly at short range and
approximates the potential at long range through a multipole
expansion. In spite of their wide acceptance in the molecular
simulation community, these two methods are hard to implement
correctly and efficiently. Instead, we use a damped and
charge-neutralized Coulomb potential method developed by Wolf and
his coworkers\cite{Wolf1999}. The shifted Coulomb potential for
particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
| 1050 | \begin{equation} | 
| 1051 | V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha | 
| 1052 | r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow | 
| 1053 | R_\textrm{c}}\left\{\frac{q_iq_j \textrm{erfc}(\alpha | 
| 1054 | r_{ij})}{r_{ij}}\right\}. \label{introEquation:shiftedCoulomb} | 
| 1055 | \end{equation} | 
where $\alpha$ is the convergence parameter. Due to the lack of
inherent periodicity and its rapid convergence, this method is
extremely efficient and easy to implement.
| 1059 | \begin{figure} | 
| 1060 | \centering | 
| 1061 | \includegraphics[width=\linewidth]{shifted_coulomb.eps} | 
| 1062 | \caption[An illustration of shifted Coulomb potential]{An | 
| 1063 | illustration of shifted Coulomb potential.} | 
| 1064 | \label{introFigure:shiftedCoulomb} | 
| 1065 | \end{figure} | 
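
As an illustration, Eq.~\ref{introEquation:shiftedCoulomb} translates
directly into a few lines of Python; the unit system and the function
name below are illustrative only.
\begin{verbatim}
from math import erfc

def shifted_coulomb(qi, qj, rij, alpha, r_cut):
    """Damped, shifted pair potential of Eq. (shiftedCoulomb).

    The constant shift makes the potential vanish at the cutoff
    radius r_cut; charges and distances carry the caller's units.
    """
    if rij >= r_cut:
        return 0.0
    v_r = qi * qj * erfc(alpha * rij) / rij
    v_cut = qi * qj * erfc(alpha * r_cut) / r_cut
    return v_r - v_cut
\end{verbatim}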
| 1066 |  | 
| 1067 | %multiple time step | 
| 1068 |  | 
| 1069 | \subsection{\label{introSection:Analysis} Analysis} | 
| 1070 |  | 
Recently, advanced visualization techniques have been widely applied
to monitor the motions of molecules. Although the dynamics of the
system can be described qualitatively from animations, quantitative
trajectory analyses are more useful. According to the principles of
Statistical Mechanics (Sec.~\ref{introSection:statisticalMechanics}),
one can compute thermodynamic properties, analyze fluctuations of
structural parameters, and investigate time-dependent processes of
the molecule from the trajectories.
| 1080 |  | 
\subsubsection{\label{introSection:thermodynamicsProperties}Thermodynamic Properties}

Thermodynamic properties, which can be expressed in terms of some
function of the coordinates and momenta of all particles in the
system, can be computed directly from molecular dynamics. The usual
way to measure the pressure is based on the virial theorem of
Clausius, which states that the virial is equal to $-3Nk_BT$. For a
system with forces between particles, the total virial, $W$,
contains contributions from the external pressure and from the
interactions between the particles:
| 1091 | \[ | 
| 1092 | W =  - 3PV + \left\langle {\sum\limits_{i < j} {r{}_{ij} \cdot | 
| 1093 | f_{ij} } } \right\rangle | 
| 1094 | \] | 
where $f_{ij}$ is the force between particles $i$ and $j$ at a
distance $r_{ij}$. Thus, the expression for the pressure is given
by:
\begin{equation}
P = \frac{{Nk_B T}}{V} + \frac{1}{{3V}}\left\langle {\sum\limits_{i
< j} {r_{ij} \cdot f_{ij} } } \right\rangle.
\end{equation}
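
A minimal sketch of evaluating this expression for a single
configuration is given below; the pair vectors and forces, as well
as the ensemble average over configurations, are assumed to be
supplied by the caller.
\begin{verbatim}
import numpy as np

def instantaneous_pressure(n_particles, kB_T, volume, pairs):
    """Pressure of one configuration from the virial expression.

    pairs is an iterable of (r_ij, f_ij) vectors over all i < j.
    """
    virial = sum(np.dot(r_ij, f_ij) for r_ij, f_ij in pairs)
    return n_particles * kB_T / volume + virial / (3.0 * volume)
\end{verbatim}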
| 1102 |  | 
| 1103 | \subsubsection{\label{introSection:structuralProperties}Structural Properties} | 
| 1104 |  | 
Structural properties of a simple fluid can be described by a set of
distribution functions. Among these functions, the \emph{pair
distribution function}, also known as the \emph{radial distribution
function}, is of the most fundamental importance to liquid-state
theory. The pair distribution function can be obtained by Fourier
transforming the structure factor extracted from a series of neutron
diffraction experiments \cite{Powles1973}. The experimental result
can serve as a criterion for testing the correctness of the theory.
Moreover, various equilibrium thermodynamic and structural
properties can also be expressed in terms of the radial distribution
function \cite{Allen1987}.

The pair distribution function $g(r)$ gives the probability of
finding a particle $j$ at a distance $r$ from another particle $i$
in the system,
| 1120 | \[ | 
| 1121 | g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j | 
| 1122 | \ne i} {\delta (r - r_{ij} )} } } \right\rangle. | 
| 1123 | \] | 
| 1124 | Note that the delta function can be replaced by a histogram in | 
| 1125 | computer simulation. Figure | 
| 1126 | \ref{introFigure:pairDistributionFunction} shows a typical pair | 
| 1127 | distribution function for the liquid argon system. The occurrence of | 
| 1128 | several peaks in the plot of $g(r)$ suggests that it is more likely | 
| 1129 | to find particles at certain radial values than at others. This is a | 
| 1130 | result of the attractive interaction at such distances. Because of | 
| 1131 | the strong repulsive forces at short distance, the probability of | 
| 1132 | locating particles at distances less than about 2.5{\AA} from each | 
| 1133 | other is essentially zero. | 
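
As an illustration of the histogram approach, a minimal Python
sketch for a single configuration in a cubic periodic box might look
as follows; the array shapes and normalization conventions are
assumptions of this sketch.
\begin{verbatim}
import numpy as np

def pair_distribution(positions, box_length, n_bins=200):
    """Histogram estimate of g(r) for one configuration.

    positions is an (N, 3) array in a cubic box of side box_length;
    the delta function in the definition of g(r) is replaced by
    binning minimum-image pair separations into spherical shells.
    """
    n = len(positions)
    volume = box_length ** 3
    r_max = 0.5 * box_length
    dr = r_max / n_bins
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        rij = positions[i + 1:] - positions[i]
        rij -= box_length * np.round(rij / box_length)   # minimum image
        dist = np.sqrt((rij ** 2).sum(axis=1))
        dist = dist[dist < r_max]
        np.add.at(hist, (dist / dr).astype(int), 2.0)    # ordered pairs
    r = (np.arange(n_bins) + 0.5) * dr
    shell_volumes = 4.0 * np.pi * r ** 2 * dr
    return r, volume * hist / (n ** 2 * shell_volumes)
\end{verbatim}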
| 1134 |  | 
| 1135 | %\begin{figure} | 
| 1136 | %\centering | 
| 1137 | %\includegraphics[width=\linewidth]{pdf.eps} | 
| 1138 | %\caption[Pair distribution function for the liquid argon | 
| 1139 | %]{Pair distribution function for the liquid argon} | 
| 1140 | %\label{introFigure:pairDistributionFunction} | 
| 1141 | %\end{figure} | 
| 1142 |  | 
| 1143 | \subsubsection{\label{introSection:timeDependentProperties}Time-dependent | 
| 1144 | Properties} | 
| 1145 |  | 
Time-dependent properties are usually calculated using \emph{time
correlation functions}, which correlate random variables $A$ and $B$
at two different times,
\begin{equation}
C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle.
\label{introEquation:timeCorrelationFunction}
\end{equation}
If $A$ and $B$ refer to the same variable, this kind of correlation
function is called an \emph{autocorrelation function}. One example
is the velocity autocorrelation function, which is directly related
to transport properties of molecular liquids:
| 1158 | \[ | 
| 1159 | D = \frac{1}{3}\int\limits_0^\infty  {\left\langle {v(t) \cdot v(0)} | 
| 1160 | \right\rangle } dt | 
| 1161 | \] | 
where $D$ is the diffusion constant. Unlike the velocity
autocorrelation function, which is averaged over time origins and
over all atoms, the dipole autocorrelation function is calculated
for the entire system. It is given by:
\[
c_{dipole} (t) = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
\right\rangle
\]
Here $u_{tot}$ is the net dipole of the entire system and is given
by
\[
u_{tot} (t) = \sum\limits_i {u_i (t)}
\]
In principle, many time correlation functions can be related to the
Fourier transforms of the infrared, Raman, and inelastic neutron
scattering spectra of molecular liquids. In practice, one can
extract the IR spectrum from the intensity of the dipole
fluctuations at each frequency using the following relationship:
| 1180 | \[ | 
| 1181 | \hat c_{dipole} (v) = \int_{ - \infty }^\infty  {c_{dipole} (t)e^{ - | 
| 1182 | i2\pi vt} dt} | 
| 1183 | \] | 
| 1184 |  | 
| 1185 | \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies} | 
| 1186 |  | 
Rigid bodies are frequently involved in the modeling of different
areas, from engineering and physics to chemistry. For example,
missiles and vehicles are usually modeled as rigid bodies. The
movement of objects in a 3D gaming engine or other physics
simulators is governed by rigid body dynamics. In molecular
simulations, rigid bodies are used to simplify the model in
protein-protein docking studies\cite{Gray2003}.
| 1194 |  | 
It is very important to develop stable and efficient methods to
integrate the equations of motion of orientational degrees of
freedom. Euler angles are the natural choice to describe the
rotational degrees of freedom. However, due to their singularities,
the numerical integration of the corresponding equations of motion
is very inefficient and inaccurate. Although an alternative
integrator using different sets of Euler angles can overcome this
difficulty\cite{Barojas1973}, the computational penalty and the loss
of angular momentum conservation still remain. A singularity-free
representation utilizing quaternions was developed by Evans in
1977\cite{Evans1977}. Unfortunately, this approach suffers from the
nonseparable Hamiltonian resulting from the quaternion
representation, which prevents symplectic algorithms from being
utilized. Another approach is to apply holonomic constraints to the
atoms belonging to the rigid body. Each atom moves independently
under the normal forces deriving from the potential energy, plus
constraint forces which are used to guarantee rigidity. However, due
to their iterative nature, the SHAKE and RATTLE algorithms converge
very slowly when the number of constraints
increases\cite{Ryckaert1977, Andersen1983}.
| 1215 |  | 
The breakthrough in the geometric integration literature suggests
that, in order to develop a long-term stable integration scheme, one
should preserve the symplectic structure of the flow. By introducing
a conjugate momentum to the rotation matrix $Q$ and reformulating
Hamilton's equations, a symplectic integrator,
RSHAKE\cite{Kol1997}, was proposed to evolve the Hamiltonian system
on a constraint manifold by iteratively satisfying the orthogonality
constraint $Q^T Q = 1$. An alternative method using the quaternion
representation was developed by Omelyan\cite{Omelyan1998}. However,
both of these methods are iterative and inefficient. In this
section, we will present in depth a symplectic Lie-Poisson
integrator for rigid bodies developed by Dullweber and his
coworkers\cite{Dullweber1997}.
| 1228 |  | 
| 1229 | \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Body} | 
| 1230 | The motion of the rigid body is Hamiltonian with the Hamiltonian | 
| 1231 | function | 
| 1232 | \begin{equation} | 
| 1233 | H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) + | 
| 1234 | V(q,Q) + \frac{1}{2}tr[(QQ^T  - 1)\Lambda ]. | 
| 1235 | \label{introEquation:RBHamiltonian} | 
| 1236 | \end{equation} | 
Here, $q$ and $Q$ are the position and rotation matrix of the rigid
body, $p$ and $P$ are the conjugate momenta to $q$ and $Q$, and
$J$, a diagonal matrix, is defined by
| 1240 | \[ | 
| 1241 | I_{ii}^{ - 1}  = \frac{1}{2}\sum\limits_{i \ne j} {J_{jj}^{ - 1} } | 
| 1242 | \] | 
where $I_{ii}$ is the diagonal element of the inertia tensor. This
constrained Hamiltonian is subject to a holonomic constraint,
\begin{equation}
Q^T Q = 1, \label{introEquation:orthogonalConstraint}
\end{equation}
which is used to ensure the orthogonality of the rotation matrix.
Differentiating Eq.~\ref{introEquation:orthogonalConstraint} and
using Eq.~\ref{introEquation:RBMotionRotation}, one may obtain,
| 1251 | \begin{equation} | 
Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0 .
| 1253 | \label{introEquation:RBFirstOrderConstraint} | 
| 1254 | \end{equation} | 
| 1255 |  | 
Using Equations \ref{introEquation:motionHamiltonianCoordinate} and
\ref{introEquation:motionHamiltonianMomentum}, one can write down
the equations of motion,
\begin{eqnarray}
\frac{{dq}}{{dt}} & = & \frac{p}{m}, \label{introEquation:RBMotionPosition}\\
\frac{{dp}}{{dt}} & = &  - \nabla _q V(q,Q), \label{introEquation:RBMotionMomentum}\\
\frac{{dQ}}{{dt}} & = & PJ^{ - 1},  \label{introEquation:RBMotionRotation}\\
\frac{{dP}}{{dt}} & = &  - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}
\end{eqnarray}
| 1267 |  | 
In general, there are two ways to satisfy the holonomic constraints.
We can either use the constraint force provided by a Lagrange
multiplier on the normal manifold to keep the motion on the
constraint space, or we can simply evolve the system on the
constraint manifold. These two methods have been proved to be
equivalent. The holonomic constraint and the equations of motion
define a constraint manifold for the rigid body,
| 1274 | \[ | 
| 1275 | M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0} | 
| 1276 | \right\}. | 
| 1277 | \] | 
| 1278 |  | 
Unfortunately, this constraint manifold is not the cotangent bundle
$T^* SO(3)$. However, it turns out that under a symplectic
transformation the cotangent space and the phase space are
diffeomorphic. Introducing
\[
\tilde Q = Q,\quad \tilde P = \frac{1}{2}\left( {P - QP^T Q} \right),
\]
| 1286 | the mechanical system subject to a holonomic constraint manifold $M$ | 
| 1287 | can be re-formulated as a Hamiltonian system on the cotangent space | 
| 1288 | \[ | 
T^* SO(3) = \left\{ {(\tilde Q,\tilde P):\tilde Q^T \tilde Q =
1,\tilde Q^T \tilde PJ^{ - 1}  + J^{ - 1} \tilde P^T \tilde Q = 0} \right\}.
| 1291 | \] | 
| 1292 |  | 
For a body-fixed vector $X_i$ with respect to the center of mass of
the rigid body, the corresponding lab-fixed vector $X_i^{lab}$ is
given by
| 1296 | \begin{equation} | 
| 1297 | X_i^{lab} = Q X_i + q. | 
| 1298 | \end{equation} | 
| 1299 | Therefore, potential energy $V(q,Q)$ is defined by | 
| 1300 | \[ | 
| 1301 | V(q,Q) = V(Q X_0 + q). | 
| 1302 | \] | 
Hence, the force and torque are given by
\[
\nabla _q V(q,Q) = F(q,Q) = \sum\limits_i {F_i (q,Q)},
\]
and
\[
\nabla _Q V(q,Q) = \sum\limits_i {F_i (q,Q)X_i^T },
\]
| 1311 | respectively. | 
| 1312 |  | 
As a common choice to describe the rotational dynamics of the rigid
body, the angular momentum in the body frame, $\Pi  = Q^T P$, is
introduced to rewrite the equations of motion,
\begin{equation}
\begin{array}{l}
 \dot \Pi  = J^{ - 1} \Pi ^T \Pi  + Q^T \sum\limits_i {F_i (q,Q)X_i^T }  - \Lambda,  \\
 \dot Q  = Q\Pi J^{ - 1},  \\
\end{array}
\label{introEqaution:RBMotionPI}
\end{equation}
as well as the holonomic constraints,
| 1324 | \[ | 
| 1325 | \begin{array}{l} | 
| 1326 | \Pi J^{ - 1}  + J^{ - 1} \Pi ^t  = 0 \\ | 
| 1327 | Q^T Q = 1 \\ | 
| 1328 | \end{array} | 
| 1329 | \] | 
| 1330 |  | 
For a vector $v = (v_1 ,v_2 ,v_3 ) \in R^3$ and a matrix $\hat v \in
so(3)$, the hat-map isomorphism,
| 1333 | \begin{equation} | 
| 1334 | v(v_1 ,v_2 ,v_3 ) \Leftrightarrow \hat v = \left( | 
| 1335 | {\begin{array}{*{20}c} | 
| 1336 | 0 & { - v_3 } & {v_2 }  \\ | 
| 1337 | {v_3 } & 0 & { - v_1 }  \\ | 
| 1338 | { - v_2 } & {v_1 } & 0  \\ | 
| 1339 | \end{array}} \right), | 
| 1340 | \label{introEquation:hatmapIsomorphism} | 
| 1341 | \end{equation} | 
lets us associate matrix products with traditional vector
operations,
\[
\hat vu = v \times u.
\]
| 1347 |  | 
Using Eq.~\ref{introEqaution:RBMotionPI}, one can construct a
skew-symmetric matrix,
\begin{equation}
(\dot \Pi   - \dot \Pi ^T ){\rm{ }} = {\rm{ }}(\Pi  - \Pi ^T ){\rm{
}}(J^{ - 1} \Pi  + \Pi J^{ - 1} ) + \sum\limits_i {[Q^T F_i (r,Q)X_i^T  - X_i F_i (r,Q)^T Q]} -
(\Lambda  - \Lambda ^T ) . \label{introEquation:skewMatrixPI}
\end{equation}
Since $\Lambda$ is symmetric, the last term of
Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies that the
Lagrange multiplier $\Lambda$ is absent from the equations of
motion. This unique property eliminates the requirement of
iterations, which cannot be avoided in other methods\cite{Kol1997,
Omelyan1998}.
| 1361 |  | 
Applying the hat-map isomorphism, we obtain the equation of motion
for the angular momentum in the body frame,
\begin{equation}
\dot \pi  = \pi  \times I^{ - 1} \pi  + \sum\limits_i {\left( {Q^T
F_i (r,Q)} \right) \times X_i }.
\label{introEquation:bodyAngularMotion}
\end{equation}
In the same manner, the equation of motion for the rotation matrix
is given by
\[
\dot Q = Q{\rm skew}(I^{ - 1} \pi ).
\]
| 1374 |  | 
| 1375 | \subsection{\label{introSection:SymplecticFreeRB}Symplectic | 
| 1376 | Lie-Poisson Integrator for Free Rigid Body} | 
| 1377 |  | 
If there are no external forces exerted on the rigid body, the only
contribution to the rotational motion is from the kinetic energy
(the first term of Eq.~\ref{introEquation:bodyAngularMotion}). The
free rigid body is an example of a Lie-Poisson system with
Hamiltonian function
\begin{equation}
T^r (\pi ) = T_1 ^r (\pi _1 ) + T_2^r (\pi _2 ) + T_3^r (\pi _3 )
\label{introEquation:rotationalKineticRB}
\end{equation}
where $T_i^r (\pi _i ) = \frac{{\pi _i ^2 }}{{2I_i }}$, and with the
Lie-Poisson structure matrix,
| 1389 | \begin{equation} | 
| 1390 | J(\pi ) = \left( {\begin{array}{*{20}c} | 
| 1391 | 0 & {\pi _3 } & { - \pi _2 }  \\ | 
| 1392 | { - \pi _3 } & 0 & {\pi _1 }  \\ | 
| 1393 | {\pi _2 } & { - \pi _1 } & 0  \\ | 
| 1394 | \end{array}} \right) | 
| 1395 | \end{equation} | 
Thus, the dynamics of the free rigid body is governed by
| 1397 | \begin{equation} | 
| 1398 | \frac{d}{{dt}}\pi  = J(\pi )\nabla _\pi  T^r (\pi ) | 
| 1399 | \end{equation} | 
| 1400 |  | 
One may notice that the flow due to each $T_i^r$ in
Eq.~\ref{introEquation:rotationalKineticRB} can be solved exactly.
For instance, the equations of motion due to $T_1^r$ are given by
| 1404 | \begin{equation} | 
| 1405 | \frac{d}{{dt}}\pi  = R_1 \pi ,\frac{d}{{dt}}Q = QR_1 | 
| 1406 | \label{introEqaution:RBMotionSingleTerm} | 
| 1407 | \end{equation} | 
| 1408 | where | 
| 1409 | \[ R_1  = \left( {\begin{array}{*{20}c} | 
| 1410 | 0 & 0 & 0  \\ | 
| 1411 | 0 & 0 & {\pi _1 }  \\ | 
| 1412 | 0 & { - \pi _1 } & 0  \\ | 
| 1413 | \end{array}} \right). | 
| 1414 | \] | 
The solution of Eq.~\ref{introEqaution:RBMotionSingleTerm} is
\[
\pi (\Delta t) = e^{\Delta tR_1 } \pi (0),\quad Q(\Delta t) =
Q(0)e^{\Delta tR_1 }
\]
with
\[
e^{\Delta tR_1 }  = \left( {\begin{array}{*{20}c}
   1 & 0 & 0  \\
   0 & {\cos \theta _1 } & {\sin \theta _1 }  \\
   0 & { - \sin \theta _1 } & {\cos \theta _1 }  \\
\end{array}} \right),\quad \theta _1  = \frac{{\pi _1 }}{{I_1 }}\Delta t.
\]
To reduce the cost of computing the trigonometric functions in
$e^{\Delta tR_1 }$, we can use the Cayley transformation,
\[
e^{\Delta tR_1 }  \approx \left( {1 - \frac{{\Delta t}}{2}R_1 }
\right)^{ - 1} \left( {1 + \frac{{\Delta t}}{2}R_1 } \right).
\]
| 1434 | The flow maps for $T_2^r$ and $T_3^r$ can be found in the same | 
| 1435 | manner. | 
| 1436 |  | 
In order to construct a second-order symplectic method, we can split
the angular kinetic Hamiltonian function into five terms,
\[
T^r (\pi ) = \frac{1}{2}T_1 ^r (\pi _1 ) + \frac{1}{2}T_2^r (\pi _2
) + T_3^r (\pi _3 ) + \frac{1}{2}T_2^r (\pi _2 ) + \frac{1}{2}T_1 ^r
(\pi _1 ).
\]
By concatenating the flows corresponding to these five terms, we
obtain a symplectic integrator,
| 1446 | \[ | 
| 1447 | \varphi _{\Delta t,T^r }  = \varphi _{\Delta t/2,\pi _1 }  \circ | 
| 1448 | \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 } | 
| 1449 | \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi | 
| 1450 | _1 }. | 
| 1451 | \] | 
| 1452 |  | 
The non-canonical Lie-Poisson bracket $\{ F,G\}$ of two functions
$F(\pi )$ and $G(\pi )$ is defined by
\[
\{ F,G\} (\pi ) = [\nabla _\pi  F(\pi )]^T J(\pi )\nabla _\pi  G(\pi
).
\]
If the Poisson bracket of a function $F$ with an arbitrary smooth
function $G$ is zero, $F$ is a \emph{Casimir}, which is a conserved
quantity of the Poisson system. We can easily verify that the norm
of the angular momentum, $\parallel \pi
\parallel$, is a \emph{Casimir}. Let $F(\pi ) = S(\frac{{\parallel
\pi \parallel ^2 }}{2})$ for an arbitrary function $S:R \to R$; then
by the chain rule,
| 1466 | \[ | 
| 1467 | \nabla _\pi  F(\pi ) = S'(\frac{{\parallel \pi \parallel ^2 | 
| 1468 | }}{2})\pi | 
| 1469 | \] | 
Thus $[\nabla _\pi  F(\pi )]^T J(\pi ) =  - S'(\frac{{\parallel \pi
\parallel ^2 }}{2})\pi  \times \pi  = 0$. This explicit Lie-Poisson
integrator is found to be extremely efficient and stable, which can
be explained by the fact that a small-angle approximation is used
and the norm of the angular momentum is conserved.
| 1475 |  | 
| 1476 | \subsection{\label{introSection:RBHamiltonianSplitting} Hamiltonian | 
| 1477 | Splitting for Rigid Body} | 
| 1478 |  | 
The Hamiltonian of a rigid body can be separated into kinetic and
potential energy terms,
\[
H = T(p,\pi ) + V(q,Q).
\]
The equations of motion corresponding to the potential and kinetic
energies are listed in the table below.
\begin{table}
\caption{Equations of motion due to Potential and Kinetic Energies}
\begin{center}
\begin{tabular}{|l|l|}
\hline
Potential & Kinetic \\
\hline
$\frac{d}{{dt}}q = 0$ & $\frac{d}{{dt}}q = \frac{p}{m}$ \\
$\frac{d}{{dt}}p =  - \frac{{\partial V}}{{\partial q}}$ & $ \frac{d}{{dt}}p = 0$ \\
$\frac{d}{{dt}}Q = 0$ & $ \frac{d}{{dt}}Q = Q{\rm skew}(I^{ - 1} \pi )$ \\
$ \frac{d}{{dt}}\pi  = \sum\limits_i {\left( {Q^T F_i (r,Q)} \right) \times X_i }$ & $\frac{d}{{dt}}\pi  = \pi  \times I^{ - 1} \pi$\\
\hline
\end{tabular}
\end{center}
\end{table}
| 1501 | A second-order symplectic method is now obtained by the | 
| 1502 | composition of the flow maps, | 
| 1503 | \[ | 
| 1504 | \varphi _{\Delta t}  = \varphi _{\Delta t/2,V}  \circ \varphi | 
| 1505 | _{\Delta t,T}  \circ \varphi _{\Delta t/2,V}. | 
| 1506 | \] | 
Moreover, $\varphi _{\Delta t/2,V}$ can be divided into two
sub-flows which correspond to the force and torque, respectively,
\[
\varphi _{\Delta t/2,V}  = \varphi _{\Delta t/2,F}  \circ \varphi
_{\Delta t/2,\tau }.
\]
Since the operators associated with $\varphi _{\Delta t/2,F} $ and
$\varphi _{\Delta t/2,\tau }$ commute, the order of the composition
inside $\varphi _{\Delta t/2,V}$ does not matter.
| 1516 |  | 
Furthermore, the kinetic energy can be separated into a
translational kinetic term, $T^t (p)$, and a rotational kinetic
term, $T^r (\pi )$,
\begin{equation}
T(p,\pi ) =T^t (p) + T^r (\pi ),
\end{equation}
where $ T^t (p) = \frac{1}{2}p^T m^{ - 1} p $ and $T^r (\pi )$ is
defined by Eq.~\ref{introEquation:rotationalKineticRB}. Therefore,
the corresponding flow maps are given by
| 1525 | \[ | 
| 1526 | \varphi _{\Delta t,T}  = \varphi _{\Delta t,T^t }  \circ \varphi | 
| 1527 | _{\Delta t,T^r }. | 
| 1528 | \] | 
Finally, we obtain the overall symplectic flow map for the freely
moving rigid body,
| 1531 | \begin{equation} | 
| 1532 | \begin{array}{c} | 
| 1533 | \varphi _{\Delta t}  = \varphi _{\Delta t/2,F}  \circ \varphi _{\Delta t/2,\tau }  \\ | 
| 1534 | \circ \varphi _{\Delta t,T^t }  \circ \varphi _{\Delta t/2,\pi _1 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi _1 }  \\ | 
| 1535 | \circ \varphi _{\Delta t/2,\tau }  \circ \varphi _{\Delta t/2,F}  .\\ | 
| 1536 | \end{array} | 
| 1537 | \label{introEquation:overallRBFlowMaps} | 
| 1538 | \end{equation} | 
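
Building on the \texttt{free\_rotor\_step} sketch of the previous
subsection, one application of
Eq.~\ref{introEquation:overallRBFlowMaps} might be coded as follows,
reading the composition from right to left; the
\texttt{force\_torque} callback, which returns the force and the
body-frame torque, is a hypothetical user-supplied function.
\begin{verbatim}
def rigid_body_step(q, p, Q, pi, mass, inertia, force_torque, dt):
    """One application of the overall flow map (rightmost map first).

    force_torque(q, Q) -> (F, tau) is a user-supplied callback;
    free_rotor_step is the sketch from the previous subsection.
    """
    F, tau = force_torque(q, Q)
    p = p + 0.5 * dt * F                           # phi_{dt/2, F}
    pi = pi + 0.5 * dt * tau                       # phi_{dt/2, tau}
    pi, Q = free_rotor_step(pi, Q, inertia, dt)    # phi_{dt, T^r}
    q = q + dt * p / mass                          # phi_{dt, T^t}
    F, tau = force_torque(q, Q)
    pi = pi + 0.5 * dt * tau                       # phi_{dt/2, tau}
    p = p + 0.5 * dt * F                           # phi_{dt/2, F}
    return q, p, Q, pi
\end{verbatim}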
| 1539 |  | 
| 1540 | \section{\label{introSection:langevinDynamics}Langevin Dynamics} | 
As an alternative to Newtonian dynamics, Langevin dynamics, which
mimics a simple heat bath with stochastic and dissipative forces,
has been applied in a variety of studies. This section will review
the theory of Langevin dynamics simulations. A brief derivation of
the generalized Langevin equation will be given first. Following
that, we will discuss the physical meaning of the terms appearing in
the equation, as well as the calculation of the friction tensor from
hydrodynamic theory.
| 1549 |  | 
| 1550 | \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation} | 
| 1551 |  | 
The harmonic bath model, in which an effective set of harmonic
oscillators is used to mimic the effect of a linearly responding
environment, has been widely used in quantum chemistry and
statistical mechanics. One of its successful applications is the
derivation of the generalized Langevin equation. Let us consider a
system in which the degree of freedom $x$ is assumed to couple to
the bath linearly, giving a Hamiltonian of the form
| 1560 | \begin{equation} | 
| 1561 | H = \frac{{p^2 }}{{2m}} + U(x) + H_B  + \Delta U(x,x_1 , \ldots x_N) | 
| 1562 | \label{introEquation:bathGLE}. | 
| 1563 | \end{equation} | 
Here $p$ is the momentum conjugate to $x$, $m$ is the mass
associated with this degree of freedom, and $H_B$ is the harmonic
bath Hamiltonian,
\[
H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2
}}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 x_\alpha ^2 }
\right\}}
\]
where the index $\alpha$ runs over all the bath degrees of freedom,
$\omega _\alpha$ are the harmonic bath frequencies, $m_\alpha$ are
the harmonic bath masses, and $\Delta U$ is the bilinear system-bath
coupling,
| 1575 | \[ | 
| 1576 | \Delta U =  - \sum\limits_{\alpha  = 1}^N {g_\alpha  x_\alpha  x} | 
| 1577 | \] | 
| 1578 | where $g_\alpha$ are the coupling constants between the bath and the | 
coordinate $x$. Introducing
\[
W(x) = U(x) - \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2
}}{{2m_\alpha  \omega _\alpha ^2 }}} x^2
\]
and combining the last two terms in
Eq.~\ref{introEquation:bathGLE}, we may rewrite the total
Hamiltonian as
\[
H = \frac{{p^2 }}{{2m}} + W(x) + \sum\limits_{\alpha  = 1}^N
{\left\{ {\frac{{p_\alpha ^2 }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha
\omega _\alpha ^2 \left( {x_\alpha   - \frac{{g_\alpha  }}{{m_\alpha
\omega _\alpha ^2 }}x} \right)^2 } \right\}}.
\]
Since the first two terms of the new Hamiltonian depend only on the
system coordinates, we can get the equations of motion for the
generalized Langevin dynamics from Hamilton's equations,
\ref{introEquation:motionHamiltonianCoordinate} and
\ref{introEquation:motionHamiltonianMomentum},
\begin{equation}
m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} +
\sum\limits_{\alpha  = 1}^N {g_\alpha  \left( {x_\alpha   -
\frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha ^2 }}x} \right)},
\label{introEquation:coorMotionGLE}
\end{equation}
and
\begin{equation}
m_\alpha \ddot x_\alpha   =  - m_\alpha  \omega _\alpha ^2 \left( {x_\alpha   -
\frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha ^2 }}x} \right).
\label{introEquation:bathMotionGLE}
\end{equation}
| 1609 |  | 
In order to derive an equation for $x$, the dynamics of the bath
variables $x_\alpha$ must be solved exactly first. As an integral
transform which is particularly useful for solving linear ordinary
differential equations, the Laplace transform is the appropriate
tool for this problem. The basic idea is to transform the difficult
differential equations into simple algebraic problems which can be
solved easily. Then, applying the inverse Laplace transform, also
known as the Bromwich integral, we can retrieve the solutions of the
original problems.

Let $f(t)$ be a function defined on $ [0,\infty ) $. The Laplace
transform of $f(t)$ is a new function defined as
\[
L(f(t)) \equiv F(p) = \int_0^\infty  {f(t)e^{ - pt} dt}
\]
where $p$ is real and $L$ is called the Laplace transform operator.
Below are some important properties of the Laplace transform:
| 1627 |  | 
\begin{eqnarray*}
L(x + y)  & = & L(x) + L(y) \\
L(ax)     & = & aL(x) \\
L(\dot x) & = & pL(x) - x(0) \\
L(\ddot x)& = & p^2 L(x) - px(0) - \dot x(0) \\
L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right)& = & G(p)H(p) \\
\end{eqnarray*}
| 1635 |  | 
| 1636 |  | 
Applying the Laplace transform to the equations of motion for the
bath coordinates, we obtain
\begin{eqnarray*}
p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = & - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{m_\alpha  }}L(x), \\
L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{m_\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }}. \\
\end{eqnarray*}
| 1642 |  | 
In the same way, the Laplace transform of the equation of motion for
the system coordinate becomes
\begin{eqnarray*}
mL(\ddot x) & = &  - L\left( {\frac{{\partial W(x)}}{{\partial x}}} \right) - \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x)}  \\
 & & \mbox{} + \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0) + \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}.  \\
\end{eqnarray*}
| 1648 |  | 
With the help of some important inverse Laplace transforms,
\[
\begin{array}{c}
 L(\cos at) = \frac{p}{{p^2  + a^2 }}, \\
 L(\sin at) = \frac{a}{{p^2  + a^2 }}, \\
 L(1) = \frac{1}{p}, \\
\end{array}
\]
we obtain
\begin{eqnarray*}
m\ddot x & = &  - \frac{{\partial W(x)}}{{\partial x}} - \int_0^t {\sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\cos (\omega _\alpha  \tau )\dot x(t - \tau )d\tau } }  \\
 & & \mbox{} + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha  x_\alpha  (0) - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}x(0)} \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha  (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}}  \\
\end{eqnarray*}
Introducing a \emph{dynamic friction kernel},
\begin{equation}
\xi (t) = \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2
}}{{m_\alpha  \omega _\alpha ^2 }}\cos (\omega _\alpha  t)},
\label{introEquation:dynamicFrictionKernelDefinition}
\end{equation}
and a \emph{random force},
\begin{equation}
R(t) = \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha  x_\alpha  (0)
- \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}x(0)}
\right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha
(0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}},
\label{introEquation:randomForceDefinition}
\end{equation}
the equation of motion can be rewritten as
\begin{equation}
m\ddot x =  - \frac{{\partial W}}{{\partial x}} - \int_0^t {\xi
(\tau )\dot x(t - \tau )d\tau }  + R(t),
\label{introEuqation:GeneralizedLangevinDynamics}
\end{equation}
| 1698 | which is known as the \emph{generalized Langevin equation}. | 
| 1699 |  | 
| 1700 | \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}Random Force and Dynamic Friction Kernel} | 
| 1701 |  | 
One may notice that $R(t)$ depends only on the initial conditions,
which implies that it is completely deterministic within the context
of a harmonic bath. However, it is easy to verify that $R(t)$ is
totally uncorrelated with $x$ and $\dot x$,
\[
\begin{array}{l}
 \left\langle {x(t)R(t)} \right\rangle  = 0, \\
 \left\langle {\dot x(t)R(t)} \right\rangle  = 0, \\
\end{array}
\]
which is what we expect from a truly random process. As long as the
model chosen for $R(t)$, which is a Gaussian distribution in
general, behaves as a truly random process, the stochastic nature of
the GLE remains.
| 1716 |  | 
| 1717 | %dynamic friction kernel | 
The convolution integral
\[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }
\]
depends on the entire history of the evolution of $x$, which implies
that the bath retains memory of previous motions. In other words,
the bath requires a finite time to respond to changes in the motion
of the system. For a sluggish bath which responds slowly to changes
in the system coordinate, we may regard $\xi(t)$ as a constant,
$\xi(t) = \xi_0$. Hence, the convolution integral becomes
\[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }  = \xi _0 (x(t) - x(0)),
\]
and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
\[
m\ddot x =  - \frac{\partial }{{\partial x}}\left( {W(x) +
\frac{1}{2}\xi _0 (x - x_0 )^2 } \right) + R(t),
\]
which can be used to describe the dynamic caging effect. The other
extreme is a bath that responds infinitely quickly to motions in the
system. In this case, $\xi (t)$ can be taken as a delta function in
time:
\[
\xi (t) = 2\xi _0 \delta (t).
\]
Hence, the convolution integral becomes
\[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }  = 2\xi _0 \int_0^t
{\delta (\tau )\dot x(t - \tau )d\tau }  = \xi _0 \dot x(t),
\]
| 1747 | \] | 
| 1748 | and Equation \ref{introEuqation:GeneralizedLangevinDynamics} becomes | 
| 1749 | \begin{equation} | 
| 1750 | m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} - \xi _0 \dot | 
| 1751 | x(t) + R(t) \label{introEquation:LangevinEquation} | 
| 1752 | \end{equation} | 
which is known as the Langevin equation. The static friction
coefficient $\xi _0$ can either be calculated from the spectral
density or be determined by Stokes' law for regularly shaped
particles. A brief review of the calculation of friction tensors for
arbitrarily shaped particles is given in
Sec.~\ref{introSection:frictionTensor}.
| 1758 |  | 
| 1759 | \subsubsection{\label{introSection:secondFluctuationDissipation}The Second Fluctuation Dissipation Theorem} | 
| 1760 |  | 
Defining a new set of coordinates,
\[
q_\alpha  (t) = x_\alpha  (t) - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha
^2 }}x(0),
\]
we can rewrite $R(t)$ as
\[
R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}.
\]
Since the $q_\alpha$ coordinates are harmonic oscillators,
| 1771 |  | 
| 1772 | \begin{eqnarray*} | 
| 1773 | \left\langle {q_\alpha ^2 } \right\rangle  & = & \frac{{kT}}{{m_\alpha  \omega _\alpha ^2 }} \\ | 
| 1774 | \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle & = & \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t) \\ | 
| 1775 | \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle & = &\delta _{\alpha \beta } \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle  \\ | 
| 1776 | \left\langle {R(t)R(0)} \right\rangle & = & \sum\limits_\alpha  {\sum\limits_\beta  {g_\alpha  g_\beta  \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle } }  \\ | 
| 1777 | & = &\sum\limits_\alpha  {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t)}  \\ | 
| 1778 | & = &kT\xi (t) \\ | 
| 1779 | \end{eqnarray*} | 
| 1780 |  | 
Thus, we recover the \emph{second fluctuation dissipation theorem},
\begin{equation}
\left\langle {R(t)R(0)} \right\rangle  = kT\xi (t).
\label{introEquation:secondFluctuationDissipation}
\end{equation}
| 1786 | In effect, it acts as a constraint on the possible ways in which one | 
| 1787 | can model the random force and friction kernel. | 
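
As a simple illustration of how these results are used in practice,
a minimal (Euler-type) integration step for
Eq.~\ref{introEquation:LangevinEquation} is sketched below; the
Gaussian random force with variance $2\xi_0 k_B T/\Delta t$ per step
is the delta-correlated limit consistent with the second fluctuation
dissipation theorem, and the names are illustrative only.
\begin{verbatim}
import numpy as np

def langevin_step(x, v, mass, force, xi0, kB_T, dt, rng):
    """One simple (Euler-type) step of the Langevin equation.

    The random force is Gaussian with zero mean and variance
    2*xi0*kB_T/dt per component, the delta-correlated limit of the
    second fluctuation dissipation theorem.
    """
    R = rng.normal(0.0, np.sqrt(2.0 * xi0 * kB_T / dt), np.shape(x))
    a = (force(x) - xi0 * v + R) / mass
    v = v + a * dt
    x = x + v * dt
    return x, v

# e.g. for W(x) = x^2/2 (so force(x) = -x):
# rng = np.random.default_rng(0)
# x, v = 1.0, 0.0
# for _ in range(1000):
#     x, v = langevin_step(x, v, 1.0, lambda y: -y, 0.5, 1.0, 0.01, rng)
\end{verbatim}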
| 1788 |  | 
| 1789 | \subsection{\label{introSection:frictionTensor} Friction Tensor} | 
Theoretically, the friction kernel can be determined from the
velocity autocorrelation function. However, this approach becomes
impractical as the system becomes more and more complicated.
Instead, various approaches based on hydrodynamics have been
developed to calculate the friction coefficients. If the friction
effect is isotropic, the friction coefficient $\zeta$ can be taken
as a scalar, as in Eq.~\ref{introEquation:LangevinEquation}. In
general, however, the friction tensor $\Xi$ is a $6\times 6$ matrix
given by
| 1797 | \[ | 
| 1798 | \Xi  = \left( {\begin{array}{*{20}c} | 
| 1799 | {\Xi _{}^{tt} } & {\Xi _{}^{rt} }  \\ | 
| 1800 | {\Xi _{}^{tr} } & {\Xi _{}^{rr} }  \\ | 
| 1801 | \end{array}} \right). | 
| 1802 | \] | 
Here, ${\Xi^{tt} }$ and ${\Xi^{rr} }$ are the translational friction
tensor and the rotational resistance (friction) tensor,
respectively, while ${\Xi^{tr} }$ is the translation-rotation
coupling tensor and ${\Xi^{rt} }$ is the rotation-translation
coupling tensor. When a particle moves in a fluid, it experiences a
friction force and torque opposing its velocity and angular
velocity,
| 1810 | \[ | 
| 1811 | \left( \begin{array}{l} | 
| 1812 | F_R  \\ | 
| 1813 | \tau _R  \\ | 
| 1814 | \end{array} \right) =  - \left( {\begin{array}{*{20}c} | 
| 1815 | {\Xi ^{tt} } & {\Xi ^{rt} }  \\ | 
| 1816 | {\Xi ^{tr} } & {\Xi ^{rr} }  \\ | 
| 1817 | \end{array}} \right)\left( \begin{array}{l} | 
| 1818 | v \\ | 
| 1819 | w \\ | 
| 1820 | \end{array} \right) | 
| 1821 | \] | 
where $F_R$ is the friction force and $\tau _R$ is the friction
torque.
| 1824 |  | 
| 1825 | \subsubsection{\label{introSection:resistanceTensorRegular}The Resistance Tensor for Regular Shape} | 
| 1826 |  | 
For a spherical particle, the translational and rotational friction
constants can be calculated from Stokes' law,
| 1829 | \[ | 
| 1830 | \Xi ^{tt}  = \left( {\begin{array}{*{20}c} | 
| 1831 | {6\pi \eta R} & 0 & 0  \\ | 
| 1832 | 0 & {6\pi \eta R} & 0  \\ | 
| 1833 | 0 & 0 & {6\pi \eta R}  \\ | 
| 1834 | \end{array}} \right) | 
| 1835 | \] | 
| 1836 | and | 
| 1837 | \[ | 
| 1838 | \Xi ^{rr}  = \left( {\begin{array}{*{20}c} | 
| 1839 | {8\pi \eta R^3 } & 0 & 0  \\ | 
| 1840 | 0 & {8\pi \eta R^3 } & 0  \\ | 
| 1841 | 0 & 0 & {8\pi \eta R^3 }  \\ | 
| 1842 | \end{array}} \right) | 
| 1843 | \] | 
where $\eta$ is the viscosity of the solvent and $R$ is the
hydrodynamic radius.
| 1846 |  | 
Other non-spherical shapes, such as cylinders and ellipsoids, are
widely used as references for developing new hydrodynamic theories,
because their properties can be calculated exactly. In 1936, Perrin
extended Stokes' law to the general ellipsoid, also called a
triaxial ellipsoid, which is given in Cartesian coordinates
by\cite{Perrin1934, Perrin1936}
\[
\frac{{x^2 }}{{a^2 }} + \frac{{y^2 }}{{b^2 }} + \frac{{z^2 }}{{c^2
}} = 1
\]
where the semi-axes are of lengths $a$, $b$, and $c$. Unfortunately,
due to the complexity of the elliptic integral, only ellipsoids with
the restriction that two of the axes are equal, \textit{i.e.}
prolate ($ a \ge b = c$) and oblate ($ a < b = c $), can be solved
exactly. Introducing an elliptic integral parameter $S$ for the
prolate case,
| 1862 | \[ | 
| 1863 | S = \frac{2}{{\sqrt {a^2  - b^2 } }}\ln \frac{{a + \sqrt {a^2  - b^2 | 
| 1864 | } }}{b}, | 
| 1865 | \] | 
and for the oblate case,
\[
S = \frac{2}{{\sqrt {b^2  - a^2 } }}\arctan \frac{{\sqrt {b^2  - a^2 }
}}{a},
\]
one can write down the translational and rotational resistance
tensors:
| 1873 | \[ | 
| 1874 | \begin{array}{l} | 
| 1875 | \Xi _a^{tt}  = 16\pi \eta \frac{{a^2  - b^2 }}{{(2a^2  - b^2 )S - 2a}} \\ | 
| 1876 | \Xi _b^{tt}  = \Xi _c^{tt}  = 32\pi \eta \frac{{a^2  - b^2 }}{{(2a^2  - 3b^2 )S + 2a}} \\ | 
| 1877 | \end{array}, | 
| 1878 | \] | 
| 1879 | and | 
| 1880 | \[ | 
| 1881 | \begin{array}{l} | 
| 1882 | \Xi _a^{rr}  = \frac{{32\pi }}{3}\eta \frac{{(a^2  - b^2 )b^2 }}{{2a - b^2 S}} \\ | 
| 1883 | \Xi _b^{rr}  = \Xi _c^{rr}  = \frac{{32\pi }}{3}\eta \frac{{(a^4  - b^4 )}}{{(2a^2  - b^2 )S - 2a}} \\ | 
| 1884 | \end{array}. | 
| 1885 | \] | 
| 1886 |  | 
| 1887 | \subsubsection{\label{introSection:resistanceTensorRegularArbitrary}The Resistance Tensor for Arbitrary Shape} | 
| 1888 |  | 
Unlike spherical and other regularly shaped molecules, there is no
analytical solution for the friction tensor of an arbitrarily shaped
rigid molecule. The ellipsoid of revolution model and the general
triaxial ellipsoid model have been used to approximate the
hydrodynamic properties of rigid bodies. However, since the mapping
from all possible ellipsoidal spaces ($r$-space) to all possible
combinations of rotational diffusion coefficients ($D$-space) is not
unique\cite{Wegener1979}, and because of the intrinsic coupling
between the translational and rotational motion of a rigid body, a
general ellipsoid is not always suitable for modeling an arbitrarily
shaped rigid molecule. A number of studies have been devoted to
determining the friction tensor for irregularly shaped rigid bodies
using a more advanced method in which the molecule of interest is
modeled by a combination of spheres (beads)\cite{Carrasco1999}, and
the hydrodynamic properties of the molecule are calculated using the
hydrodynamic interaction tensor. Let us consider a rigid assembly of
$N$ beads immersed in a continuous medium. Due to the hydrodynamic
interaction, the ``net'' velocity of the $i$th bead, $v'_i$, is
different from its unperturbed velocity $v_i$,
| 1908 | \[ | 
| 1909 | v'_i  = v_i  - \sum\limits_{j \ne i} {T_{ij} F_j } | 
| 1910 | \] | 
where $F_j$ is the frictional force on bead $j$, and $T_{ij}$ is the
hydrodynamic interaction tensor. The friction force on the $i$th
bead is proportional to its ``net'' velocity,
| 1914 | \begin{equation} | 
| 1915 | F_i  = \zeta _i v_i  - \zeta _i \sum\limits_{j \ne i} {T_{ij} F_j }. | 
| 1916 | \label{introEquation:tensorExpression} | 
| 1917 | \end{equation} | 
This equation is the basis for deriving the hydrodynamic tensor. In
1930, Oseen and Burgers gave a simple solution to
Eq.~\ref{introEquation:tensorExpression},
\begin{equation}
T_{ij}  = \frac{1}{{8\pi \eta R_{ij} }}\left( {I + \frac{{R_{ij}
R_{ij}^T }}{{R_{ij}^2 }}} \right).
\label{introEquation:oseenTensor}
\end{equation}
Here $R_{ij}$ is the distance vector between bead $i$ and bead $j$.
A second-order expression for beads of different sizes was
introduced by Rotne and Prager\cite{Rotne1969} and improved by
Garc\'{i}a de la Torre and Bloomfield\cite{Torre1977},
\begin{equation}
T_{ij}  = \frac{1}{{8\pi \eta R_{ij} }}\left[ {\left( {I +
\frac{{R_{ij} R_{ij}^T }}{{R_{ij}^2 }}} \right) + \frac{{\sigma
_i^2  + \sigma _j^2 }}{{R_{ij}^2 }}\left( {\frac{I}{3} -
\frac{{R_{ij} R_{ij}^T }}{{R_{ij}^2 }}} \right)} \right],
\label{introEquation:RPTensorNonOverlapped}
\end{equation}
where $\sigma_i$ and $\sigma_j$ are the radii of beads $i$ and $j$.
Both Eq.~\ref{introEquation:oseenTensor} and
Eq.~\ref{introEquation:RPTensorNonOverlapped} assume that $R_{ij}
\ge \sigma _i  + \sigma _j$, \textit{i.e.} that the beads do not
overlap. An alternative expression for overlapping beads with the
same radius, $\sigma$, is given by
\begin{equation}
T_{ij}  = \frac{1}{{6\pi \eta \sigma }}\left[ {\left( {1 -
\frac{9}{{32}}\frac{{R_{ij} }}{\sigma }} \right)I +
\frac{3}{{32}}\frac{{R_{ij} R_{ij}^T }}{{R_{ij} \sigma }}} \right].
\label{introEquation:RPTensorOverlapped}
\end{equation}
| 1947 |  | 
To calculate the resistance tensor at an arbitrary origin $O$, we
construct a $3N \times 3N$ matrix consisting of $N \times N$ blocks
$B_{ij}$, each of size $3 \times 3$:
| 1951 | \begin{equation} | 
| 1952 | B = \left( {\begin{array}{*{20}c} | 
| 1953 | {B_{11} } &  \ldots  & {B_{1N} }  \\ | 
| 1954 | \vdots  &  \ddots  &  \vdots   \\ | 
| 1955 | {B_{N1} } &  \cdots  & {B_{NN} }  \\ | 
| 1956 | \end{array}} \right), | 
| 1957 | \end{equation} | 
| 1958 | where $B_{ij}$ is given by | 
| 1959 | \[ | 
| 1960 | B_{ij}  = \delta _{ij} \frac{I}{{6\pi \eta R}} + (1 - \delta _{ij} | 
| 1961 | )T_{ij} | 
| 1962 | \] | 
where $\delta _{ij}$ is the Kronecker delta. Inverting the matrix
$B$, we obtain
| 1965 |  | 
| 1966 | \[ | 
| 1967 | C = B^{ - 1}  = \left( {\begin{array}{*{20}c} | 
| 1968 | {C_{11} } &  \ldots  & {C_{1N} }  \\ | 
| 1969 | \vdots  &  \ddots  &  \vdots   \\ | 
| 1970 | {C_{N1} } &  \cdots  & {C_{NN} }  \\ | 
| 1971 | \end{array}} \right) | 
| 1972 | \] | 
which can be partitioned into $N \times N$ blocks $C_{ij}$, each of
size $3 \times 3$. With the help of $C_{ij}$ and the skew-symmetric
matrix $U_i$,
| 1975 | \[ | 
| 1976 | U_i  = \left( {\begin{array}{*{20}c} | 
| 1977 | 0 & { - z_i } & {y_i }  \\ | 
| 1978 | {z_i } & 0 & { - x_i }  \\ | 
| 1979 | { - y_i } & {x_i } & 0  \\ | 
| 1980 | \end{array}} \right) | 
| 1981 | \] | 
where $x_i$, $y_i$, $z_i$ are the components of the vector joining
bead $i$ and the origin $O$. Hence, the elements of the resistance
tensor at an arbitrary origin $O$ can be written as
| 1985 | \begin{equation} | 
| 1986 | \begin{array}{l} | 
| 1987 | \Xi _{}^{tt}  = \sum\limits_i {\sum\limits_j {C_{ij} } } , \\ | 
| 1988 | \Xi _{}^{tr}  = \Xi _{}^{rt}  = \sum\limits_i {\sum\limits_j {U_i C_{ij} } } , \\ | 
| 1989 | \Xi _{}^{rr}  =  - \sum\limits_i {\sum\limits_j {U_i C_{ij} } } U_j  \\ | 
| 1990 | \end{array} | 
| 1991 | \label{introEquation:ResistanceTensorArbitraryOrigin} | 
| 1992 | \end{equation} | 
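
These relations translate directly into code. The sketch below
builds $B$ from the Oseen tensor of
Eq.~\ref{introEquation:oseenTensor} for identical, non-overlapping
beads, inverts it, and accumulates the blocks of
Eq.~\ref{introEquation:ResistanceTensorArbitraryOrigin}; the
single-radius assumption and the function names are illustrative
only.
\begin{verbatim}
import numpy as np

def skew(v):
    """Skew-symmetric matrix U such that U @ u = v x u."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def bead_resistance_tensor(coords, radius, eta):
    """Resistance tensor of a rigid bead assembly at the origin O.

    coords is an (N, 3) array of bead centers relative to O; all
    beads share one radius, and the Oseen tensor is used off-diagonal.
    """
    n = len(coords)
    B = np.zeros((3 * n, 3 * n))
    for i in range(n):
        B[3*i:3*i+3, 3*i:3*i+3] = np.eye(3) / (6.0 * np.pi * eta * radius)
        for j in range(n):
            if i == j:
                continue
            rij = coords[i] - coords[j]
            r = np.linalg.norm(rij)
            B[3*i:3*i+3, 3*j:3*j+3] = (np.eye(3)
                + np.outer(rij, rij) / r**2) / (8.0 * np.pi * eta * r)
    C = np.linalg.inv(B)
    xi_tt = np.zeros((3, 3))
    xi_tr = np.zeros((3, 3))
    xi_rr = np.zeros((3, 3))
    for i in range(n):
        Ui = skew(coords[i])
        for j in range(n):
            Cij = C[3*i:3*i+3, 3*j:3*j+3]
            xi_tt += Cij
            xi_tr += Ui @ Cij
            xi_rr -= Ui @ Cij @ skew(coords[j])
    return xi_tt, xi_tr, xi_rr
\end{verbatim}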
| 1993 |  | 
The resistance tensor depends on the origin to which it refers. The
proper location for applying the friction force is the center of
resistance (or center of reaction), at which the trace of the
rotational resistance tensor, $\Xi ^{rr}$, reaches its minimum.
Mathematically, the center of resistance is defined as the unique
point of the rigid body at which the translation-rotation coupling
tensor is symmetric,
\begin{equation}
\Xi^{tr}  = \left( {\Xi^{tr} } \right)^T
\label{introEquation:definitionCR}
\end{equation}
From Eq.~\ref{introEquation:ResistanceTensorArbitraryOrigin}, one
can easily see that the translational resistance tensor is origin
independent, while the rotational resistance tensor and the
translation-rotation coupling resistance tensor depend on the
origin. Given the resistance tensor at an arbitrary origin $O$ and a
vector $r_{OP}=(x_{OP}, y_{OP}, z_{OP})$ from $O$ to $P$, we can
obtain the resistance tensor at $P$ by
| 2011 | \begin{equation} | 
\begin{array}{l}
 \Xi _P^{tt}  = \Xi _O^{tt}  \\
 \Xi _P^{tr}  = \Xi _P^{rt}  = \Xi _O^{tr}  - U_{OP} \Xi _O^{tt}  \\
 \Xi _P^{rr}  = \Xi _O^{rr}  - U_{OP} \Xi _O^{tt} U_{OP}  + \Xi _O^{tr} U_{OP}  - U_{OP} \left( {\Xi _O^{tr} } \right)^T  \\
\end{array}
| 2017 | \label{introEquation:resistanceTensorTransformation} | 
| 2018 | \end{equation} | 
| 2019 | where | 
| 2020 | \[ | 
U_{OP}  = \left( {\begin{array}{*{20}c}
   0 & { - z_{OP} } & {y_{OP} }  \\
   {z_{OP} } & 0 & { - x_{OP} }  \\
   { - y_{OP} } & {x_{OP} } & 0  \\
\end{array}} \right)
| 2026 | \] | 
Using Equations \ref{introEquation:definitionCR} and
\ref{introEquation:resistanceTensorTransformation}, one can locate
the position of the center of resistance,
| 2030 | \begin{eqnarray*} | 
| 2031 | \left( \begin{array}{l} | 
| 2032 | x_{OR}  \\ | 
| 2033 | y_{OR}  \\ | 
| 2034 | z_{OR}  \\ | 
| 2035 | \end{array} \right) & = &\left( {\begin{array}{*{20}c} | 
| 2036 | {(\Xi _O^{rr} )_{yy}  + (\Xi _O^{rr} )_{zz} } & { - (\Xi _O^{rr} )_{xy} } & { - (\Xi _O^{rr} )_{xz} }  \\ | 
| 2037 | { - (\Xi _O^{rr} )_{xy} } & {(\Xi _O^{rr} )_{zz}  + (\Xi _O^{rr} )_{xx} } & { - (\Xi _O^{rr} )_{yz} }  \\ | 
| 2038 | { - (\Xi _O^{rr} )_{xz} } & { - (\Xi _O^{rr} )_{yz} } & {(\Xi _O^{rr} )_{xx}  + (\Xi _O^{rr} )_{yy} }  \\ | 
| 2039 | \end{array}} \right)^{ - 1}  \\ | 
| 2040 | & & \left( \begin{array}{l} | 
| 2041 | (\Xi _O^{tr} )_{yz}  - (\Xi _O^{tr} )_{zy}  \\ | 
| 2042 | (\Xi _O^{tr} )_{zx}  - (\Xi _O^{tr} )_{xz}  \\ | 
| 2043 | (\Xi _O^{tr} )_{xy}  - (\Xi _O^{tr} )_{yx}  \\ | 
| 2044 | \end{array} \right) \\ | 
| 2045 | \end{eqnarray*} | 
| 2046 |  | 
| 2047 |  | 
| 2048 |  | 
where $x_{OR}$, $y_{OR}$, and $z_{OR}$ are the components of the
vector joining the center of resistance $R$ and the origin $O$.
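
The $3 \times 3$ linear system above can be solved directly; a
minimal sketch (with illustrative names only) is:
\begin{verbatim}
import numpy as np

def center_of_resistance(xi_tr, xi_rr):
    """Solve the 3x3 linear system above for the vector from O to
    the center of resistance, given the tensors evaluated at O."""
    A = np.array([
        [xi_rr[1, 1] + xi_rr[2, 2], -xi_rr[0, 1], -xi_rr[0, 2]],
        [-xi_rr[0, 1], xi_rr[2, 2] + xi_rr[0, 0], -xi_rr[1, 2]],
        [-xi_rr[0, 2], -xi_rr[1, 2], xi_rr[0, 0] + xi_rr[1, 1]]])
    b = np.array([xi_tr[1, 2] - xi_tr[2, 1],
                  xi_tr[2, 0] - xi_tr[0, 2],
                  xi_tr[0, 1] - xi_tr[1, 0]])
    return np.linalg.solve(A, b)
\end{verbatim}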