--- trunk/xDissertation/Introduction.tex	2008/02/29 15:20:55	3347
+++ trunk/xDissertation/Introduction.tex	2008/03/10 21:52:50	3365
@@ -1,65 +1,77 @@
 \chapter{\label{chap:intro}INTRODUCTION AND BACKGROUND}
 \section{Background on the Problem\label{In:sec:pro}}
-Phospholipid molecules are chosen to be studied in this dissertation
-because of their critical role as a foundation of the bio-membrane
-construction. The self assembled bilayer of the lipids when dispersed
-in water is the micro structure of the membrane. The phase behavior of
-lipid bilayer is explored experimentally~\cite{Cevc87}, however, fully
-understanding on the mechanism is far beyond accomplished.
+Phospholipid molecules are the primary topic of this dissertation
+because of their critical role as the foundation of biological
+membranes. Lipids, when dispersed in water, self assemble into bilayer
+structures. The phase behavior of lipid bilayers has been explored
+experimentally~\cite{Cevc87}; however, a complete understanding of the
+mechanism for self-assembly has not been achieved.
 \subsection{Ripple Phase\label{In:ssec:ripple}}
-The {\it ripple phase} $P_{\beta'}$ of lipid bilayers, named from the
+The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named for the
 periodic buckling of the membrane, is an intermediate phase which is
 developed either from heating the gel phase $L_{\beta'}$ or cooling
-the fluid phase $L_\alpha$. Although the ripple phase is observed in
-different
+the fluid phase $L_\alpha$. Although the ripple phase has been
+observed in several
 experiments~\cite{Sun96,Katsaras00,Copeland80,Meyer96,Kaasgaard03},
-the mechanism of the formation of the ripple phase has never been
+the physical mechanism for the formation of the ripple phase has never been
 explained and the microscopic structure of the ripple phase has never
 been elucidated by experiments. Computational simulation is a perfect
-tool to study the microscopic properties for a system, however, the
-long range dimension of the ripple structure and the long time scale
+tool to study the microscopic properties of a system. However, the
+large length scale of the ripple structure and the long time scale
 of the formation of the ripples are crucial obstacles to performing
-the actual work. The idea to break through this dilemma forks into:
+the actual work. The principal ideas explored in this dissertation are
+attempts to break up the computational task by
 \begin{itemize}
-\item Simplify the lipid model.
-\item Improve the integrating algorithm.
+\item Simplifying the lipid model.
+\item Improving the algorithms for integrating the equations of motion.
 \end{itemize}
-In Ch.~\ref{chap:mc} and~\ref{chap:md}, we use a simple point dipole
-model and a coarse-grained model to perform the Monte Carlo and
-Molecular Dynamics simulations respectively, and in Ch.~\ref{chap:ld},
-we implement a Langevin Dynamics algorithm to exclude the explicit
-solvent to improve the efficiency of the simulations.
+In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
+dipole model and a coarse-grained model to perform the Monte Carlo and
+Molecular Dynamics simulations, respectively, and in
+chapter~\ref{chap:ld}, we develop a Langevin Dynamics algorithm which
+excludes the explicit solvent to improve the efficiency of the
+simulations.
 \subsection{Lattice Model\label{In:ssec:model}}
-The gel like characteristic of the ripple phase ensures the feasiblity
-of applying the lattice model to study the system. It is claimed that
-the packing of the lipid molecules in ripple phase is
+The gel-like characteristic of the ripple phase makes it feasible to
+apply a lattice model to study the system. It is claimed that the
+packing of the lipid molecules in the ripple phase is
 hexagonal~\cite{Cevc87}. The popular two-dimensional lattice models,
-{\it i.e.}, Ising model, Heisenberg model and $X-Y$ model, show
-{\it frustration} on triangular lattice.
+{\it i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it
+frustration} on triangular lattices. The Hamiltonians of these systems
+are given by
+\begin{equation}
+H =
+  \begin{cases}
+    -J \sum_n \sum_{n'} s_n s_{n'} & \text{Ising}, \\
+    -J \sum_n \sum_{n'} \vec s_n \cdot \vec s_{n'} & \text{$X-Y$ and Heisenberg},
+  \end{cases}
+\end{equation}
+where $J$ is nonzero only when the spins $s_n$ ($\vec s_n$) and
+$s_{n'}$ ($\vec s_{n'}$) are nearest neighbours.
 \begin{figure}
 \centering
-\includegraphics[width=\linewidth]{./figures/frustration.pdf}
-\caption{Sketch to illustrate the frustration on triangular
-lattice. Spins are represented by arrows, no matter which direction
-the spin on the top of triangle points to, the Hamiltonian of the
-system is the same, hence there are infinite possibilities for the
-packing of the spins.}
-\label{fig:frustration}
+\includegraphics[width=\linewidth]{./figures/inFrustration.pdf}
+\caption{Frustration on a triangular lattice; the spins are
+represented by arrows. No matter which direction the spin at the top
+of the triangle points, the energy of the system is raised by the same
+amount.}
+\label{Infig:frustration}
 \end{figure}
-Figure~\ref{fig:frustration} shows an illustration of the frustration
-on a triangular lattice. The direction of the spin on top of the
-triangle has no effects on the Hamiltonian of the system, therefore
-infinite possibilities for the packing of spins induce the frustration
-of the lattice.
+Figure~\ref{Infig:frustration} illustrates frustration on a triangular
+lattice. When $J < 0$, neighbouring spins prefer to be anti-aligned,
+but the spin at the top of the triangle cannot anti-align with both of
+its neighbours: the energy is raised by the same amount no matter
+which direction it picks. This situation, in which an infinite number
+of equivalent spin packings share the lowest energy, is known as
+``complete regular frustration,'' and it leads to disordered
+low-temperature phases.
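To make the degeneracy caused by frustration concrete, the following
minimal sketch (in Python; it is an illustration only and is not part
of the simulation codes described in later chapters) enumerates all
$2^3$ states of three Ising spins on a single triangle with an
antiferromagnetic coupling $J < 0$ and counts the lowest-energy
configurations:
\begin{verbatim}
# Three Ising spins on a triangle with antiferromagnetic coupling (J < 0).
# Each bond is counted once.  No state can satisfy all three bonds at
# the same time, so the minimum energy is shared by six spin packings.
from itertools import product

J = -1.0                          # antiferromagnetic exchange constant
bonds = [(0, 1), (1, 2), (2, 0)]  # the three edges of the triangle

energies = {}
for spins in product([-1, 1], repeat=3):
    E = -J * sum(spins[i] * spins[j] for i, j in bonds)
    energies[spins] = E

E_min = min(energies.values())
ground_states = [s for s, E in energies.items() if E == E_min]
print("minimum energy:", E_min)                        # -1.0: one bond is frustrated
print("ground-state degeneracy:", len(ground_states))  # 6
\end{verbatim}
On a macroscopic triangular lattice this local degeneracy proliferates,
which is the origin of the disordered low-temperature phases mentioned
above.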
 The lack of translational degrees of freedom in lattice models prevents
-their utilization on investigating the emergence of the surface
-buckling which is the imposition of the ripple formation. In this
-dissertation, a modified lattice model is introduced to this specific
-situation in Ch.~\ref{chap:mc}.
+their use in modeling the surface buckling that corresponds to ripple
+formation. In chapter~\ref{chap:mc}, a modified lattice model is
+introduced to tackle this specific situation.
 \section{Overview of Classical Statistical Mechanics\label{In:sec:SM}}
 Statistical mechanics provides a way to calculate the macroscopic
@@ -95,7 +107,7 @@ normalize $\rho$ to unity,
 normalize $\rho$ to unity,
 \begin{equation}
 1 = \int d \vec q~^N \int d \vec p~^N \rho,
-\label{normalized}
+\label{Ineq:normalized}
 \end{equation}
 then the value of $\rho$ gives the probability of finding the system
 in a unit volume in the phase space.
@@ -104,7 +116,7 @@ space at any instant can be written as:
 time.
The number of representive points at a given volume in the phase space at any instant can be written as: \begin{equation} -\label{eq:deltaN} +\label{Ineq:deltaN} \delta N = \rho~\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N. \end{equation} To calculate the change in the number of representive points in this @@ -112,29 +124,29 @@ representive points entering the volume at $q_1$ per u of representive points in $q_1$ axis. The rate of the number of the representive points entering the volume at $q_1$ per unit time is: \begin{equation} -\label{eq:deltaNatq1} +\label{Ineq:deltaNatq1} \rho~\dot q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N, \end{equation} and the rate of the number of representive points leaving the volume at another position $q_1 + \delta q_1$ is: \begin{equation} -\label{eq:deltaNatq1plusdeltaq1} +\label{Ineq:deltaNatq1plusdeltaq1} \left( \rho + \frac{\partial \rho}{\partial q_1} \delta q_1 \right)\left(\dot q_1 + \frac{\partial \dot q_1}{\partial q_1} \delta q_1 \right)\delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N. \end{equation} Here the higher order differentials are neglected. So the change of the number of the representive points is the difference of -eq.~\ref{eq:deltaNatq1} and eq.~\ref{eq:deltaNatq1plusdeltaq1}, which +eq.~\ref{Ineq:deltaNatq1} and eq.~\ref{Ineq:deltaNatq1plusdeltaq1}, which gives us: \begin{equation} -\label{eq:deltaNatq1axis} +\label{Ineq:deltaNatq1axis} -\left(\rho \frac{\partial {\dot q_1}}{\partial q_1} + \frac{\partial {\rho}}{\partial q_1} \dot q_1 \right)\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N, \end{equation} where, higher order differetials are neglected. If we sum over all the axes in the phase space, we can get the change of the number of representive points in a given volume with time: \begin{equation} -\label{eq:deltaNatGivenVolume} +\label{Ineq:deltaNatGivenVolume} \frac{d(\delta N)}{dt} = -\sum_{i=1}^N \left[\rho \left(\frac{\partial {\dot q_i}}{\partial q_i} + \frac{\partial {\dot p_i}}{\partial p_i}\right) + \left( \frac{\partial {\rho}}{\partial @@ -144,35 +156,35 @@ From Hamilton's equation of motion, \begin{equation} \frac{\partial {\dot q_i}}{\partial q_i} = - \frac{\partial {\dot p_i}}{\partial p_i}, -\label{eq:canonicalFormOfEquationOfMotion} +\label{Ineq:canonicalFormOfEquationOfMotion} \end{equation} this cancels out the first term on the right side of -eq.~\ref{eq:deltaNatGivenVolume}. If both sides of -eq.~\ref{eq:deltaNatGivenVolume} are divided by $\delta q_1 \delta q_2 +eq.~\ref{Ineq:deltaNatGivenVolume}. If both sides of +eq.~\ref{Ineq:deltaNatGivenVolume} are divided by $\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N$, then we can derive Liouville's theorem: \begin{equation} \left( \frac{\partial \rho}{\partial t} \right)_{q, p} = -\sum_{i} \left( \frac{\partial {\rho}}{\partial q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right). -\label{eq:simpleFormofLiouville} +\label{Ineq:simpleFormofLiouville} \end{equation} This is the basis of statistical mechanics. If we move the right -side of equation~\ref{eq:simpleFormofLiouville} to the left, we +side of equation~\ref{Ineq:simpleFormofLiouville} to the left, we will obtain \begin{equation} \left( \frac{\partial \rho}{\partial t} \right)_{q, p} + \sum_{i} \left( \frac{\partial {\rho}}{\partial q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right) = 0. 
-\label{eq:anotherFormofLiouville} +\label{Ineq:anotherFormofLiouville} \end{equation} It is easy to note that the left side of -equation~\ref{eq:anotherFormofLiouville} is the total derivative of +equation~\ref{Ineq:anotherFormofLiouville} is the total derivative of $\rho$ with respect of $t$, which means \begin{equation} \frac{d \rho}{dt} = 0, -\label{eq:conservationofRho} +\label{Ineq:conservationofRho} \end{equation} and the rate of density change is zero in the neighborhood of any selected moving representive points in the phase space. @@ -184,7 +196,7 @@ phase space is zero, phase space is zero, \begin{equation} \left( \frac{\partial \rho}{\partial t} \right)_{q, p} = 0. -\label{eq:statEquilibrium} +\label{Ineq:statEquilibrium} \end{equation} We may conclude the ensemble is in {\it statistical equilibrium}. An ensemble in statistical equilibrium often means the system is also in @@ -195,7 +207,7 @@ q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \ \frac{\partial {\rho}}{\partial q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i \right) = 0. -\label{constantofMotion} +\label{Ineq:constantofMotion} \end{equation} If $\rho$ is a function only of some constant of the motion, $\rho$ is independent of time. For a conservative system, the energy of the @@ -204,13 +216,13 @@ phase space, phase space, \begin{equation} \rho = \mathrm{const.} -\label{eq:uniformEnsemble} +\label{Ineq:uniformEnsemble} \end{equation} the ensemble is called {\it uniform ensemble}. Another useful ensemble is called {\it microcanonical ensemble}, for which: \begin{equation} \rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)} -\label{eq:microcanonicalEnsemble} +\label{Ineq:microcanonicalEnsemble} \end{equation} where $\Sigma(N, V, E)$ is a normalization constant parameterized by $N$, the total number of particles, $V$, the total physical volume and @@ -220,7 +232,7 @@ S = - k_B \int d \vec q~^N \int d \vec p~^N \rho \ln [ Hamiltonian of the system. The Gibbs entropy is defined as \begin{equation} S = - k_B \int d \vec q~^N \int d \vec p~^N \rho \ln [C^N \rho], -\label{eq:gibbsEntropy} +\label{Ineq:gibbsEntropy} \end{equation} where $k_B$ is the Boltzmann constant and $C^N$ is a number which makes the argument of $\ln$ dimensionless, in this case, it is the @@ -228,17 +240,17 @@ S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right). ensemble is given by \begin{equation} S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right). -\label{eq:entropy} +\label{Ineq:entropy} \end{equation} If the density distribution $\rho$ is given by \begin{equation} \rho = \frac{1}{Z_N}e^{-H(q^N, p^N) / k_B T}, -\label{eq:canonicalEnsemble} +\label{Ineq:canonicalEnsemble} \end{equation} the ensemble is known as the {\it canonical ensemble}. Here, \begin{equation} Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N e^{-H(q^N, p^N) / k_B T}, -\label{eq:partitionFunction} +\label{Ineq:partitionFunction} \end{equation} which is also known as {\it partition function}. $\Gamma$ indicates that the integral is over all the phase space. In the canonical @@ -250,17 +262,17 @@ the thermodynamics maximizes the entropy $S$, \begin{array}{ccc} \delta S = 0 & \mathrm{and} & \delta^2 S < 0. 
\end{array} -\label{eq:maximumPrinciple} +\label{Ineq:maximumPrinciple} \end{equation} -From Eq.~\ref{eq:maximumPrinciple} and two constrains of the canonical +From Eq.~\ref{Ineq:maximumPrinciple} and two constrains of the canonical ensemble, {\it i.e.}, total probability and average energy conserved, the partition function is calculated as \begin{equation} Z_N = e^{-A/k_B T}, -\label{eq:partitionFunctionWithFreeEnergy} +\label{Ineq:partitionFunctionWithFreeEnergy} \end{equation} where $A$ is the Helmholtz free energy. The significance of -Eq.~\ref{eq:entropy} and~\ref{eq:partitionFunctionWithFreeEnergy} is +Eq.~\ref{Ineq:entropy} and~\ref{Ineq:partitionFunctionWithFreeEnergy} is that they serve as a connection between macroscopic properties of the system and the distribution of the microscopic states. @@ -276,32 +288,32 @@ can be calculated based on the definition shown by of any quantity ($F(q^N, p^N$)) which depends on the coordinates ($q^N$) and the momenta ($p^N$) for all the systems in the ensemble can be calculated based on the definition shown by -Eq.~\ref{eq:statAverage1} +Eq.~\ref{Ineq:statAverage1} \begin{equation} \langle F(q^N, p^N, t) \rangle = \frac{\int d \vec q~^N \int d \vec p~^N F(q^N, p^N, t) \rho}{\int d \vec q~^N \int d \vec p~^N \rho}. -\label{eq:statAverage1} +\label{Ineq:statAverage1} \end{equation} Since the density distribution $\rho$ is normalized to unity, the mean value of $F(q^N, p^N)$ is simplified to \begin{equation} \langle F(q^N, p^N, t) \rangle = \int d \vec q~^N \int d \vec p~^N F(q^N, p^N, t) \rho, -\label{eq:statAverage2} +\label{Ineq:statAverage2} \end{equation} called {\it ensemble average}. However, the quantity is often averaged for a finite time in real experiments, \begin{equation} \langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty} \frac{1}{T} \int_{t_0}^{t_0+T} F(q^N, p^N, t) dt. -\label{eq:timeAverage1} +\label{Ineq:timeAverage1} \end{equation} Usually this time average is independent of $t_0$ in statistical -mechanics, so Eq.~\ref{eq:timeAverage1} becomes +mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes \begin{equation} \langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty} \frac{1}{T} \int_{0}^{T} F(q^N, p^N, t) dt -\label{eq:timeAverage2} +\label{Ineq:timeAverage2} \end{equation} for an infinite time interval. @@ -311,9 +323,9 @@ phase space. Mathematically, this leads to phase space. Mathematically, this leads to \begin{equation} \langle F(q^N, p^N, t) \rangle = \langle F(q^N, p^N) \rangle_t. -\label{eq:ergodicity} +\label{Ineq:ergodicity} \end{equation} -Eq.~\ref{eq:ergodicity} validates the Monte Carlo method which we will +Eq.~\ref{Ineq:ergodicity} validates the Monte Carlo method which we will discuss in section~\ref{In:ssec:mc}. An ensemble average of a quantity can be related to the time average measured in the experiments. @@ -332,31 +344,31 @@ C(t) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarr \begin{equation} C(t) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarrow \infty} \frac{1}{T} \int_{0}^{T} dt A(t) A(t + \tau). -\label{eq:autocorrelationFunction} +\label{Ineq:autocorrelationFunction} \end{equation} -Eq.~\ref{eq:autocorrelationFunction} is the correlation function of a +Eq.~\ref{Ineq:autocorrelationFunction} is the correlation function of a single variable, called {\it autocorrelation function}. 
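Before turning to correlations between two different variables, a
short numerical sketch may help make this definition concrete. The
sketch below (in Python; the signal and all parameter values are
illustrative assumptions, not data from this work) estimates the
autocorrelation function of a discretely sampled series:
\begin{verbatim}
import math
import random

def autocorrelation(a, max_lag):
    # Estimate C(lag), the average of A(t)*A(t + lag) over the series.
    n = len(a)
    return [sum(a[t] * a[t + lag] for t in range(n - lag)) / (n - lag)
            for lag in range(max_lag)]

# Illustrative signal: an exponentially correlated random sequence whose
# autocorrelation decays as exp(-lag / tau_r).
tau_r, n_samples = 20.0, 100000
phi = math.exp(-1.0 / tau_r)
a, x = [], 0.0
for _ in range(n_samples):
    x = phi * x + math.sqrt(1.0 - phi * phi) * random.gauss(0.0, 1.0)
    a.append(x)

C = autocorrelation(a, 60)
print(C[20] / C[0])   # roughly exp(-1), i.e. about 0.37
\end{verbatim}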
 The definition of the correlation function for two different
 variables is similar to that of the autocorrelation function:
 \begin{equation}
 C(t) = \langle A(0)B(\tau) \rangle = \lim_{T \rightarrow \infty}
 \frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
-\label{eq:crosscorrelationFunction}
+\label{Ineq:crosscorrelationFunction}
 \end{equation}
 which is called the {\it cross correlation function}.
-In section~\ref{In:ssec:average} we know from Eq.~\ref{eq:ergodicity}
+In section~\ref{In:ssec:average} we know from Eq.~\ref{Ineq:ergodicity}
 the relationship between time average and ensemble average. We can
 put the correlation function in a classical mechanics form,
 \begin{equation}
 C(t) = \langle A(0)A(\tau) \rangle = \int d \vec q~^N \int d \vec p~^N
 A(t) A(t + \tau) \rho(q, p)
-\label{eq:autocorrelationFunctionCM}
+\label{Ineq:autocorrelationFunctionCM}
 \end{equation}
 and
 \begin{equation}
 C(t) = \langle A(0)B(\tau) \rangle = \int d \vec q~^N \int d \vec p~^N
 A(t) B(t + \tau) \rho(q, p)
-\label{eq:crosscorrelationFunctionCM}
+\label{Ineq:crosscorrelationFunctionCM}
 \end{equation}
 as the autocorrelation function and cross correlation function,
 respectively. $\rho(q, p)$ is the density distribution at equilibrium
@@ -364,7 +376,7 @@ C(t) \sim e^{-t / \tau_r},
 single exponential
 \begin{equation}
 C(t) \sim e^{-t / \tau_r},
-\label{eq:relaxation}
+\label{Ineq:relaxation}
 \end{equation}
 where $\tau_r$ is known as the relaxation time, which describes the
 rate of the decay.
@@ -390,31 +402,31 @@ temperature are constants. The average energy is given
 applied to the canonical ensemble, a Boltzmann-weighted ensemble, in
 which $N$, the total number of particles, $V$, the total volume, and
 $T$, the temperature, are constants. The average energy is given by substituting
-Eq.~\ref{eq:canonicalEnsemble} into Eq.~\ref{eq:statAverage2},
+Eq.~\ref{Ineq:canonicalEnsemble} into Eq.~\ref{Ineq:statAverage2},
 \begin{equation}
 \langle E \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N E
 e^{-H(q^N, p^N) / k_B T}.
-\label{eq:energyofCanonicalEnsemble}
+\label{Ineq:energyofCanonicalEnsemble}
 \end{equation}
 Other properties of the system are obtained in the same way. The
 Hamiltonian is the sum of the kinetic energy $K(p^N)$, a function of
 the momenta, and the potential energy $U(q^N)$, a function of the
 positions,
 \begin{equation}
 H(q^N, p^N) = K(p^N) + U(q^N).
-\label{eq:hamiltonian}
+\label{Ineq:hamiltonian}
 \end{equation}
 If the property $A$ is only a function of position ($A = A(q^N)$),
 the mean value of $A$ reduces to
 \begin{equation}
 \langle A \rangle = \frac{\int d \vec q~^N \int d \vec p~^N A e^{-U(q^N)
 / k_B T}}{\int d \vec q~^N \int d \vec p~^N e^{-U(q^N) / k_B T}}.
-\label{eq:configurationIntegral}
+\label{Ineq:configurationIntegral}
 \end{equation}
 The kinetic energy $K(p^N)$ is factored out in
-Eq.~\ref{eq:configurationIntegral}. $\langle A
+Eq.~\ref{Ineq:configurationIntegral}. $\langle A
 \rangle$ is now a configuration integral, and
-Eq.~\ref{eq:configurationIntegral} is equivalent to
+Eq.~\ref{Ineq:configurationIntegral} is equivalent to
 \begin{equation}
 \langle A \rangle = \int d \vec q~^N A \rho(q^N).
-\label{eq:configurationAve}
+\label{Ineq:configurationAve}
 \end{equation}
 In a Monte Carlo simulation of the canonical ensemble, the probability of
@@ -422,52 +434,52 @@ probability with time is given by
 probability with time is given by
 \begin{equation}
 \frac{d \rho_s}{dt} = \sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ],
-\label{eq:timeChangeofProb}
+\label{Ineq:timeChangeofProb}
 \end{equation}
 where $w_{ss'}$ is the transition probability of going from state $s$
 to state $s'$. Since $\rho_s$ is independent of time at equilibrium,
 \begin{equation}
 \frac{d \rho_{s}^{equilibrium}}{dt} = 0,
-\label{eq:equiProb}
+\label{Ineq:equiProb}
 \end{equation}
 which is guaranteed if every term in the sum vanishes separately,
 {\it i.e.}, $w_{ss'}\rho_s = w_{s's}\rho_{s'}$ for every pair of states
 (detailed balance). So
 \begin{equation}
 \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
-\label{eq:timeChangeofProb}
+\label{Ineq:relationshipofRhoandW}
 \end{equation}
 If
 \begin{equation}
 \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T},
-\label{eq:conditionforBoltzmannStatistics}
+\label{Ineq:conditionforBoltzmannStatistics}
 \end{equation}
 then
 \begin{equation}
 \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T}.
-\label{eq:satisfyofBoltzmannStatistics}
+\label{Ineq:satisfyofBoltzmannStatistics}
 \end{equation}
-Eq.~\ref{eq:satisfyofBoltzmannStatistics} implies that
+Eq.~\ref{Ineq:satisfyofBoltzmannStatistics} implies that
 $\rho^{equilibrium}$ satisfies Boltzmann statistics. An algorithm
 showing how a Monte Carlo simulation generates transition probabilities
-governed by \ref{eq:conditionforBoltzmannStatistics}, is schemed as
+governed by Eq.~\ref{Ineq:conditionforBoltzmannStatistics} is outlined
+below:
 \begin{enumerate}
-\item\label{itm:oldEnergy} Choose an particle randomly, calculate the energy.
-\item\label{itm:newEnergy} Make a random displacement for particle,
+\item\label{Initm:oldEnergy} Choose a particle at random and calculate
+its energy.
+\item\label{Initm:newEnergy} Make a random displacement of the particle and
 calculate the new energy.
 \begin{itemize}
-  \item Keep the new configuration and return to step~\ref{itm:oldEnergy} if energy
+  \item Keep the new configuration and return to step~\ref{Initm:oldEnergy} if the energy
 goes down.
   \item Pick a random number in $[0,1]$ if the energy goes up.
     \begin{itemize}
       \item Keep the new configuration and return to
-step~\ref{itm:oldEnergy} if the random number smaller than
+step~\ref{Initm:oldEnergy} if the random number is smaller than
 $e^{-(U_{new} - U_{old}) / k_B T}$.
       \item Keep the old configuration and return to
-step~\ref{itm:oldEnergy} if the random number larger than
+step~\ref{Initm:oldEnergy} if the random number is larger than
 $e^{-(U_{new} - U_{old}) / k_B T}$.
     \end{itemize}
   \end{itemize}
-\item\label{itm:accumulateAvg} Accumulate the average after it converges.
+\item\label{Initm:accumulateAvg} Accumulate the averages after the
+system has equilibrated.
 \end{enumerate}
 It is important to notice that the old configuration has to be sampled
 again if it is kept.
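A minimal sketch of this acceptance rule is given below. It is written
in Python for a single particle moving in a hypothetical
one-dimensional potential; the function names and parameter values are
illustrative assumptions and are not part of the simulation codes
described in later chapters.
\begin{verbatim}
import math
import random

def potential(q):
    # Illustrative one-dimensional double-well potential U(q).
    return q**4 - 2.0 * q**2

def metropolis_step(q, beta, max_disp=0.5):
    # One Metropolis move: trial displacement, then accept or reject.
    q_trial = q + random.uniform(-max_disp, max_disp)
    dU = potential(q_trial) - potential(q)
    if dU <= 0.0 or random.random() < math.exp(-beta * dU):
        return q_trial            # keep the new configuration
    return q                      # keep (and re-sample) the old one

beta = 2.0                        # 1 / (k_B T) in reduced units
q, samples = 0.0, []
for step in range(200000):
    q = metropolis_step(q, beta)
    if step > 20000:              # accumulate averages after equilibration
        samples.append(q * q)
print("average of q^2 =", sum(samples) / len(samples))
\end{verbatim}
Note that, exactly as remarked above, a rejected move keeps the old
configuration and counts it again in the averages.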
@@ -489,47 +501,47 @@ defined by
 defined by
 \begin{equation}
 \frac{1}{2} k_B T = \langle \frac{1}{2} m v_\alpha^2 \rangle,
-\label{eq:temperature}
+\label{Ineq:temperature}
 \end{equation}
 where $m$ is the mass of the particle and $v_\alpha$ is the $\alpha$
 component of the velocity of the particle. The right side of
-Eq.~\ref{eq:temperature} is the average kinetic energy of the
+Eq.~\ref{Ineq:temperature} is the average kinetic energy per degree of freedom of the
 system.
 A simple Molecular Dynamics simulation scheme is:~\cite{Frenkel1996}
 \begin{enumerate}
-\item\label{itm:initialize} Assign the initial positions and momenta
+\item\label{Initm:initialize} Assign the initial positions and momenta
 for the particles in the system.
-\item\label{itm:calcForce} Calculate the forces.
-\item\label{itm:equationofMotion} Integrate the equation of motion.
+\item\label{Initm:calcForce} Calculate the forces.
+\item\label{Initm:equationofMotion} Integrate the equations of motion.
 \begin{itemize}
-  \item Return to step~\ref{itm:calcForce} if the equillibrium is
+  \item Return to step~\ref{Initm:calcForce} if equilibrium is
 not achieved.
-  \item Go to step~\ref{itm:calcAvg} if the equillibrium is
+  \item Go to step~\ref{Initm:calcAvg} if equilibrium is
 achieved.
 \end{itemize}
-\item\label{itm:calcAvg} Compute the quantities we are interested in.
+\item\label{Initm:calcAvg} Compute the quantities we are interested in.
 \end{enumerate}
 The initial positions of the particles are chosen such that there is no
 overlap between the particles. The initial velocities are first
 distributed randomly to the particles, then shifted to make the total
 momentum of the system $0$, and finally scaled to the desired temperature
-of the simulation according Eq.~\ref{eq:temperature}.
+of the simulation according to Eq.~\ref{Ineq:temperature}.
-The core of Molecular Dynamics simulations is step~\ref{itm:calcForce}
-and~\ref{itm:equationofMotion}. The calculation of the forces are
+The core of a Molecular Dynamics simulation is steps~\ref{Initm:calcForce}
+and~\ref{Initm:equationofMotion}. The calculation of the forces often
 involves considerable effort; it is the most time-consuming step
 in the Molecular Dynamics scheme. The forces are evaluated from
 \begin{equation}
 f(q) = - \frac{\partial U(q)}{\partial q},
-\label{eq:force}
+\label{Ineq:force}
 \end{equation}
 where $U(q)$ is the potential energy of the system. Once the forces are
 computed, the positions and velocities are updated by integrating
 Newton's equation of motion,
 \begin{equation}
 f(q) = \frac{dp}{dt} = m \frac{dv}{dt}.
-\label{eq:newton}
+\label{Ineq:newton}
 \end{equation}
 One example of an integration algorithm is the Verlet algorithm, which
 is one of the best algorithms for integrating Newton's equation of
@@ -538,27 +550,27 @@ q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\
 q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 +
 \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
 \mathcal{O}(\Delta t^4)
-\label{eq:verletFuture}
+\label{Ineq:verletFuture}
 \end{equation}
 for a later time $t+\Delta t$, and
 \begin{equation}
 q(t-\Delta t)= q(t) - v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 -
 \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
 \mathcal{O}(\Delta t^4) ,
-\label{eq:verletPrevious}
+\label{Ineq:verletPrevious}
 \end{equation}
 for a previous time $t-\Delta t$. The sum of
-Eq.~\ref{eq:verletFuture} and~\ref{eq:verletPrevious} gives
+Eqs.~\ref{Ineq:verletFuture} and~\ref{Ineq:verletPrevious} gives
 \begin{equation}
 q(t+\Delta t)+q(t-\Delta t) = 2q(t) + \frac{f(t)}{m}\Delta t^2 +
 \mathcal{O}(\Delta t^4),
-\label{eq:verletSum}
+\label{Ineq:verletSum}
 \end{equation}
 so the new position can be expressed as
 \begin{equation}
 q(t+\Delta t) \approx 2q(t) - q(t-\Delta t) + \frac{f(t)}{m}\Delta t^2.
-\label{eq:newPosition}
+\label{Ineq:newPosition}
 \end{equation}
 Terms of higher order in $\Delta t$ are omitted.
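As an illustration of Eq.~\ref{Ineq:newPosition}, the short sketch
below (in Python) integrates a single particle in a hypothetical
harmonic potential with the position form of the Verlet algorithm; all
parameter values are illustrative.
\begin{verbatim}
# Position-Verlet integration of m q'' = f(q) for a single particle in
# an illustrative harmonic potential U(q) = 0.5 k q^2.  For k = m = 1
# and a particle started at rest at q = 1, the exact answer is cos(t).
k, m, dt = 1.0, 1.0, 0.01         # force constant, mass, time step

def force(q):
    return -k * q                 # f(q) = -dU/dq

q_prev, q = 1.0, 1.0              # q(t - dt) and q(t): starts at rest
for step in range(1000):          # advance to t = 10
    q_next = 2.0 * q - q_prev + force(q) / m * dt**2
    q_prev, q = q, q_next

print(q)                          # close to cos(10), about -0.84
\end{verbatim}
Because the update uses only positions and forces, the velocities,
when they are needed, are usually reconstructed from the central
difference $v(t) \approx [q(t+\Delta t) - q(t-\Delta t)]/(2 \Delta t)$.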
@@ -586,65 +598,65 @@ H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
 Hamiltonian of such a system is written as
 \begin{equation}
 H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
-\label{eq:hamiltonianofCoupling}
+\label{Ineq:hamiltonianofCoupling}
 \end{equation}
 where $H_B$ is the Hamiltonian of the bath, which is given by
 \begin{equation}
 H_B = \sum_{\alpha = 1}^{N} \left\{ \frac{p_\alpha^2}{2m_\alpha} +
 \frac{1}{2} m_\alpha \omega_\alpha^2 q_\alpha^2\right\},
-\label{eq:hamiltonianofBath}
+\label{Ineq:hamiltonianofBath}
 \end{equation}
 where $\alpha$ runs over all degrees of freedom of the bath, the
 $\omega_\alpha$ are the bath frequencies, and $\Delta U(q)$ is the
 bilinear coupling given by
 \begin{equation}
 \Delta U = -\sum_{\alpha = 1}^{N} g_\alpha q_\alpha q,
-\label{eq:systemBathCoupling}
+\label{Ineq:systemBathCoupling}
 \end{equation}
 where the $g_\alpha$ are coupling constants. By solving Hamilton's
 equations of motion for the bath coordinates, the {\it Generalized
 Langevin Equation} for this system is obtained,
 \begin{equation}
 m \ddot q = -\frac{\partial W(q)}{\partial q} - \int_0^t \xi(t') \dot q(t-t')dt' + R(t),
-\label{eq:gle}
+\label{Ineq:gle}
 \end{equation}
 with the potential of mean force,
 \begin{equation}
 W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m_\alpha
 \omega_\alpha^2}q^2,
-\label{eq:meanForce}
+\label{Ineq:meanForce}
 \end{equation}
 which depends only on the coordinates of the solute particle, the {\it
 friction kernel},
 \begin{equation}
 \xi(t) = \sum_{\alpha = 1}^N \frac{g_\alpha^2}{m_\alpha \omega_\alpha^2}
 \cos(\omega_\alpha t),
-\label{eq:xiforGLE}
+\label{Ineq:xiforGLE}
 \end{equation}
 and the random force,
 \begin{equation}
 R(t) = \sum_{\alpha = 1}^N g_\alpha \left\{ \left(
 q_\alpha(0)-\frac{g_\alpha}{m_\alpha \omega_\alpha^2}q(0)\right)
 \cos(\omega_\alpha t) + \frac{\dot q_\alpha(0)}{\omega_\alpha}
 \sin(\omega_\alpha t) \right\},
-\label{eq:randomForceforGLE}
+\label{Ineq:randomForceforGLE}
 \end{equation}
 which depends only on the initial conditions. The relationship between
 the friction kernel $\xi(t)$ and the random force $R(t)$,
 \begin{equation}
 \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle,
-\label{eq:relationshipofXiandR}
+\label{Ineq:relationshipofXiandR}
 \end{equation}
 follows from their definitions. In the Langevin limit, the friction is
 treated as static (memoryless), which means
 \begin{equation}
 \xi(t) = 2 \xi_0 \delta(t).
-\label{eq:xiofStaticFriction}
+\label{Ineq:xiofStaticFriction}
 \end{equation}
-After substitude $\xi(t)$ into Eq.~\ref{eq:gle} with
-Eq.~\ref{eq:xiofStaticFriction}, {\it Langevin Equation} is extracted
+After substituting Eq.~\ref{Ineq:xiofStaticFriction} into
+Eq.~\ref{Ineq:gle}, the {\it Langevin Equation} is obtained,
 \begin{equation}
 m \ddot q = -\frac{\partial U(q)}{\partial q} - \xi_0 \dot q(t) + R(t).
-\label{eq:langevinEquation}
+\label{Ineq:langevinEquation}
 \end{equation}
 The application of the Langevin Equation to dynamic simulations is
 discussed in Ch.~\ref{chap:ld}.
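As a purely illustrative sketch of how Eq.~\ref{Ineq:langevinEquation}
can be integrated numerically (the algorithm actually used in this
work is developed in Ch.~\ref{chap:ld}; the simple discretization,
potential, and parameter values below are assumptions made only for
this example), one step can be written in Python as:
\begin{verbatim}
import math
import random

# Simple discretization of m dv = [f(q) - xi0 v + R] dt, where the
# random force R is drawn so that its discrete-time variance matches
# the static friction kernel: var(R) = 2 xi0 kB_T / dt.
kB_T, m, xi0, dt = 1.0, 1.0, 0.5, 0.01     # illustrative reduced units

def force(q):
    return -q                               # harmonic potential U(q) = q^2 / 2

def langevin_step(q, v):
    R = random.gauss(0.0, math.sqrt(2.0 * xi0 * kB_T / dt))
    v = v + (force(q) - xi0 * v + R) / m * dt
    q = q + v * dt
    return q, v

q, v, v2_sum, n = 0.0, 0.0, 0.0, 0
for step in range(200000):
    q, v = langevin_step(q, v)
    if step > 20000:
        v2_sum += v * v
        n += 1
print("average of m v^2 =", m * v2_sum / n)  # should approach kB_T
\end{verbatim}
The balance between the dissipative term $-\xi_0 \dot q$ and the
random force $R(t)$ is what holds the sampled velocities at the target
temperature, which is why the explicit solvent can be left out.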