--- trunk/xDissertation/Introduction.tex	2008/03/19 21:04:47	3372
+++ trunk/xDissertation/Introduction.tex	2008/03/24 20:12:52	3375
@@ -3,29 +3,72 @@ because of their critical role as the foundation of bi
 \section{Background on the Problem\label{In:sec:pro}}
 Phospholipid molecules are the primary topic of this dissertation
 because of their critical role as the foundation of biological
-membranes. Lipids, when dispersed in water, self assemble into a
-mumber of topologically distinct bilayer structures. The phase
-behavior of lipid bilayers has been explored
-experimentally~\cite{Cevc87}, however, a complete understanding of the
-mechanism and driving forces behind the various phases has not been
-achieved.
+membranes. The chemical structure of phospholipids includes a head
+group with a large dipole moment, arising from the charge separation
+between the phosphate and amino alcohol, and a nonpolar tail that
+contains fatty acid chains. Depending on the specific alcohol to
+which the phosphate and fatty acid chains are esterified,
+phospholipids are divided into glycerophospholipids and
+sphingophospholipids.~\cite{Cevc80} The chemical structures are shown
+in figure~\ref{Infig:lipid}.
+\begin{figure}
+\centering
+\includegraphics[width=\linewidth]{./figures/inLipid.pdf}
+\caption{The chemical structure of glycerophospholipids (left) and
+sphingophospholipids (right).\cite{Cevc80}}
+\label{Infig:lipid}
+\end{figure}
+Glycerophospholipids are the dominant phospholipids in biological
+membranes. The type of glycerophospholipid depends on the identity of
+the X group and the chains. For example, if X is choline
+[(CH$_3$)$_3$N$^+$CH$_2$CH$_2$OH], the lipids are known as
+phosphatidylcholine (PC), and if X is ethanolamine
+[H$_3$N$^+$CH$_2$CH$_2$OH], the lipids are known as
+phosphatidylethanolamine (PE). Table~\ref{Intab:pc} lists several
+types of phosphatidylcholine with different fatty acids as the lipid
+chains.
+\begin{table*}
+\begin{minipage}{\linewidth}
+\begin{center}
+\caption{Several types of phosphatidylcholine.}
+\begin{tabular}{lll}
+\hline
+ & Fatty acid & Generic Name \\
+\hline
+\textcolor{red}{DMPC} & Myristic: CH$_3$(CH$_2$)$_{12}$COOH &
+\textcolor{red}{D}i\textcolor{red}{M}yristoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
+\textcolor{red}{DPPC} & Palmitic: CH$_3$(CH$_2$)$_{14}$COOH & \textcolor{red}{D}i\textcolor{red}{P}almitoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine
+\\
+\textcolor{red}{DSPC} & Stearic: CH$_3$(CH$_2$)$_{16}$COOH & \textcolor{red}{D}i\textcolor{red}{S}tearoyl\textcolor{red}{P}hosphatidyl\textcolor{red}{C}holine \\
+\end{tabular}
+\label{Intab:pc}
+\end{center}
+\end{minipage}
+\end{table*}
+When dispersed in water, lipids self-assemble into a number of
+topologically distinct bilayer structures. The phase behavior of lipid
+bilayers has been explored experimentally~\cite{Cevc80}; however, a
+complete understanding of the mechanism and driving forces behind the
+various phases has not been achieved.
 
-\subsection{Ripple Phase\label{In:ssec:ripple}}
+\subsection{The Ripple Phase\label{In:ssec:ripple}}
 The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named from the
 periodic buckling of the membrane, is an intermediate phase which is
 developed either from heating the gel phase $L_{\beta'}$ or cooling
-the fluid phase $L_\alpha$. A Sketch is shown in
-figure~\ref{Infig:phaseDiagram}.~\cite{Cevc87}
+the fluid phase $L_\alpha$. A sketch of the phases is shown in
+figure~\ref{Infig:phaseDiagram}.~\cite{Cevc80}
 \begin{figure}
 \centering
 \includegraphics[width=\linewidth]{./figures/inPhaseDiagram.pdf}
-\caption{A phase diagram of lipid bilayer. With increasing the
-temperature, the bilayer can go through a gel ($L_{\beta'}$), ripple
-($P_{\beta'}$) to fluid ($L_\alpha$) phase transition.}
+\caption{Phases of PC lipid bilayers. 
With increasing
+temperature, phosphatidylcholine (PC) bilayers can go through
+$L_{\beta'} \rightarrow P_{\beta'}$ (gel $\rightarrow$ ripple) and
+$P_{\beta'} \rightarrow L_\alpha$ (ripple $\rightarrow$ fluid) phase
+transitions.~\cite{Cevc80}}
 \label{Infig:phaseDiagram}
 \end{figure}
-Most structural information of the ripple phase has been obtained by
-the X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
+Most structural information about the ripple phase has been obtained
+by X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
 electron microscopy (FFEM).~\cite{Copeland80,Meyer96} The X-ray
 diffraction work by Katsaras {\it et al.} showed that a rich phase
 diagram exhibiting both {\it asymmetric} and {\it symmetric} ripples
@@ -36,39 +79,43 @@ mica.~\cite{Kaasgaard03}
 \begin{figure}
 \centering
 \includegraphics[width=\linewidth]{./figures/inRipple.pdf}
-\caption{The experimental observed ripple phase. The top image is
-obtained by X-ray diffraction~\cite{Sun96}, and the bottom one is
-observed by AFM.~\cite{Kaasgaard03}}
+\caption{Experimental observations of the ripple phase. The top image
+is an electron density map obtained by Sun {\it et al.} using
+X-ray diffraction.~\cite{Sun96} The lower figures show the surface
+topology of various ripple domains in bilayers supported on mica. The
+AFM images were obtained by Kaasgaard {\it et
+al.}~\cite{Kaasgaard03}}
 \label{Infig:ripple}
 \end{figure}
-Figure~\ref{Infig:ripple} shows the ripple phase oberved by X-ray
+Figure~\ref{Infig:ripple} shows the ripple phase observed by both X-ray
 diffraction and AFM. The experimental results provide strong support
 for a 2-dimensional triangular packing lattice of the lipid molecules
 within the ripple phase. 
This is a notable change from the observed
-lipid packing within the gel phase,~\cite{Cevc87} although Tenchov
-{\it et al.} have recently observed near-hexagonal packing in some
+lipid packing within the gel phase,~\cite{Cevc80} although Tenchov
+{\it et al.} have recently observed near-triangular packing in some
 phosphatidylcholine (PC) gel phases.~\cite{Tenchov2001} However, the
 physical mechanism for the formation of the ripple phase has never
 been explained and the microscopic structure of the ripple phase has
 never been elucidated by experiments. Computational simulation is a
 powerful tool for studying the microscopic properties of a
-system. However, the large length scale the ripple structure and the
-long time scale of the formation of the ripples are crucial obstacles
-to performing the actual work. The principal ideas explored in this
-dissertation are attempts to break the computational task up by
+system. However, the large length scale of the ripple structures and
+the long time required for the formation of the ripples are crucial
+obstacles to performing such simulations. The principal ideas explored
+in this dissertation are attempts to break the computational task up
+by
 \begin{itemize}
 \item Simplifying the lipid model.
-\item Improving algorithm for integrating the equations of motion.
+\item Improving the algorithm for integrating the equations of motion.
 \end{itemize}
 In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
-dipole spin model and a coarse-grained molecualr scale model to
+dipole spin model and a coarse-grained molecular scale model to
 perform the Monte Carlo and Molecular Dynamics simulations,
 respectively, and in chapter~\ref{chap:ld}, we develop a Langevin
 Dynamics algorithm which excludes the explicit solvent to improve the
 efficiency of the simulations. 
-\subsection{Lattice Model\label{In:ssec:model}}
-The gel-like characteristic (relatively immobile molecules) exhibited
+\subsection{Lattice Models\label{In:ssec:model}}
+The gel-like characteristic (laterally immobile molecules) exhibited
 by the ripple phase makes it feasible to apply a lattice model to
 study the system. The popular $2$-dimensional lattice models, {\it
 i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it frustration}
@@ -91,22 +138,22 @@ represented by arrows. The multiple local minima of en
 \includegraphics[width=3in]{./figures/inFrustration.pdf}
 \caption{Frustration on a triangular lattice; the spins and dipoles are
 represented by arrows. The multiple local minima of energy states
-induce the frustration for spins and dipoles picking the directions.}
+induce frustration for spins and dipoles, resulting in disordered
+low-temperature phases.}
 \label{Infig:frustration}
 \end{figure}
-The spins in figure~\ref{Infig:frustration} shows an illustration of
-the frustration for $J < 0$ on a triangular lattice. There are
-multiple local minima energy states which are independent of the
-direction of the spin on top of the triangle, therefore infinite
-possibilities for the packing of spins which induces what is known as
-``complete regular frustration'' which leads to disordered low
-temperature phases. The similarity goes to the dipoles on a hexagonal
-lattice, which are shown by the dipoles in
-figure~\ref{Infig:frustration}. In this circumstance, the dipoles want
-to be aligned, however, due to the long wave fluctuation, at low
-temperature, the aligned state becomes unstable, vortex is formed and
-results in multiple local minima of energy states. The dipole on the
-center of the hexagonal lattice is frustrated.
+The spins in figure~\ref{Infig:frustration} illustrate frustration for
+$J < 0$ on a triangular lattice. 
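The degeneracy underlying this frustration can be made concrete with a short numerical check (an illustrative sketch only, not part of the simulations performed in this dissertation): for three antiferromagnetically coupled Ising spins on a single triangular plaquette, direct enumeration shows that six of the eight possible configurations share the minimum energy, since no configuration can satisfy all three antiferromagnetic bonds simultaneously.

```python
from itertools import product

# Frustration on one triangular plaquette: three Ising spins with
# antiferromagnetic coupling J < 0 and E = -J (s1 s2 + s2 s3 + s3 s1).
J = -1.0

def energy(spins):
    s1, s2, s3 = spins
    return -J * (s1 * s2 + s2 * s3 + s3 * s1)

configs = list(product([-1, 1], repeat=3))
e_min = min(energy(s) for s in configs)
ground_states = [s for s in configs if energy(s) == e_min]

# Six of the eight configurations are degenerate minima.
print(e_min, len(ground_states), len(configs))
```
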
There are multiple local minimum
+energy states which are independent of the direction of the spin on
+top of the triangle, and therefore infinite possibilities for
+orienting large numbers of spins. This induces what is known as
+``complete regular frustration'' which leads to disordered low
+temperature phases. This behavior extends to dipoles on a triangular
+lattice, which are shown in the lower portion of
+figure~\ref{Infig:frustration}. In this case, dipole-aligned
+structures are energetically favorable; however, at low temperature,
+vortices are easily formed, and this results in multiple local minima
+of energy states for a central dipole. The dipole at the center of
+the hexagonal arrangement is therefore frustrated.
 
 The lack of translational degrees of freedom in lattice models
 prevents their utilization in models for surface buckling. In
@@ -120,7 +167,7 @@ principles of statistical mechanics.~\cite{Tolman1979}
 to key concepts of classical statistical mechanics that we used in
 this dissertation. Tolman gives an excellent introduction to the
 principles of statistical mechanics.~\cite{Tolman1979} A large part of
-section~\ref{In:sec:SM} will follow Tolman's notation.
+section~\ref{In:sec:SM} follows Tolman's notation.
If we consider the density distribution as only a
 function of $q$ and $p$, which means the rate of change of the phase
 space density in the neighborhood of all representative points in the
@@ -259,11 +306,13 @@ constant everywhere in the phase space,
 \rho = \mathrm{const.}
 \label{Ineq:uniformEnsemble}
 \end{equation}
-the ensemble is called {\it uniform ensemble}.
+the ensemble is called the {\it uniform ensemble}, but this ensemble
+has little relevance for physical chemistry; it is an ensemble with an
+essentially infinite temperature.
 
 \subsubsection{The Microcanonical Ensemble\label{In:sssec:microcanonical}}
-Another useful ensemble is the {\it microcanonical ensemble}, for
-which:
+The most useful ensemble for Molecular Dynamics is the {\it
+microcanonical ensemble}, for which:
 \begin{equation}
 \rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)}
 \label{Ineq:microcanonicalEnsemble}
 \end{equation}
@@ -280,8 +329,9 @@ makes the argument of $\ln$ dimensionless. In this cas
 \end{equation}
 where $k_B$ is the Boltzmann constant and $C^N$ is a number which
 makes the argument of $\ln$ dimensionless. In this case, $C^N$ is the
-total phase space volume of one state. The entropy of a microcanonical
-ensemble is given by
+total phase space volume of one state, which has the same units as
+$\Sigma(N, V, E)$. The entropy of a microcanonical ensemble is given
+by
 \begin{equation}
 S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
 \label{Ineq:entropy}
 \end{equation}
@@ -298,12 +348,12 @@ Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N e^{-H(
 Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N e^{-H(q^N, p^N) / k_B T},
 \label{Ineq:partitionFunction}
 \end{equation}
-which is also known as the canonical{\it partition function}. $\Gamma$
-indicates that the integral is over all phase space. In the canonical
-ensemble, $N$, the total number of particles, $V$, total volume, and
-$T$, the temperature, are constants. The systems with the lowest
-energies hold the largest population. 
According to maximum principle,
-thermodynamics maximizes the entropy $S$, implying that
+which is also known as the canonical {\it partition
+function}. $\Gamma$ indicates that the integral is over all phase
+space. In the canonical ensemble, $N$, the total number of particles,
+$V$, total volume, and $T$, the temperature, are constants. The
+systems with the lowest energies hold the largest
+population. Thermodynamics maximizes the entropy, $S$, implying that
 \begin{equation}
 \begin{array}{ccc}
 \delta S = 0 & \mathrm{and} & \delta^2 S < 0.
@@ -324,11 +374,11 @@ There is an implicit assumption that our arguments are
 system and the distribution of microscopic states.
 
 There is an implicit assumption that our arguments are based on so
-far. A representative point in the phase space is equally likely to be
-found in any energetically allowed region. In other words, all
-energetically accessible states are represented equally, the
-probabilities to find the system in any of the accessible states is
-equal. This is called the principle of equal a {\it priori}
+far. Two representative points in phase space are equally likely to be
+found if they have the same energy. In other words, all energetically
+accessible states are represented, and the probability of finding the
+system in any of the accessible states is proportional to that state's
+Boltzmann weight. This is called the principle of equal {\it a priori}
 probabilities.
 
 \subsection{Statistical Averages\label{In:ssec:average}}
@@ -363,7 +413,7 @@ mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes
 \frac{1}{T} \int_{0}^{T} F[q^N(t), p^N(t)] dt
 \label{Ineq:timeAverage2}
 \end{equation}
-for an infinite time interval.
+for a finite time interval, $T$. 
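As a simple self-contained illustration of the time average in Eq.~\ref{Ineq:timeAverage2} (a sketch using an assumed analytic trajectory, not data from the simulations described later): for a harmonic trajectory $q(t) = A\cos(\omega t)$, the time average of the observable $F = q(t)^2$ approaches $A^2/2$ as the averaging interval $T$ grows.

```python
import math

# Hypothetical one-dimensional trajectory q(t) = A cos(w t); the time
# average of q(t)^2 over a long interval T approaches A^2 / 2.
A, w = 2.0, 1.5          # amplitude and angular frequency (arbitrary)
T, n = 1000.0, 200000    # averaging interval and number of samples
dt = T / n

# Discretized version of (1/T) * integral_0^T q(t)^2 dt
avg = sum((A * math.cos(w * i * dt)) ** 2 for i in range(n)) * dt / T
print(avg)               # close to A**2 / 2 = 2.0
```
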
\subsubsection{Ergodicity\label{In:sssec:ergodicity}}
 The {\it ergodic hypothesis}, an important hypothesis governing modern
@@ -406,84 +456,94 @@ C_{AB}(\tau) = \langle A(0)B(\tau) \rangle = \lim_{T \
 \frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
 \label{Ineq:crosscorrelationFunction}
 \end{equation}
-and called {\it cross correlation function}.
+and is called a {\it cross correlation function}.
 
 We know from the ergodic hypothesis that there is a relationship
 between time average and ensemble average. We can put the correlation
-function in a classical mechanics form,
+function in a classical mechanical form,
 \begin{equation}
-C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[(q^N(t), p^N(t)]
-A[q^N(t+\tau), q^N(t+\tau)] \rho(q, p)
+C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N, p^N]
+A[q^N(\tau), p^N(\tau)] \rho(q^N, p^N)
 \label{Ineq:autocorrelationFunctionCM}
 \end{equation}
-and
+where $q^N(\tau)$, $p^N(\tau)$ is the phase space point that follows
+classical evolution of the point $q^N$, $p^N$ after a time $\tau$ has
+elapsed, and
 \begin{equation}
-C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[(q^N(t), p^N(t)]
-B[q^N(t+\tau), q^N(t+\tau)] \rho(q, p)
+C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N, p^N]
+B[q^N(\tau), p^N(\tau)] \rho(q^N, p^N)
 \label{Ineq:crosscorrelationFunctionCM}
 \end{equation}
-as autocorrelation function and cross correlation function
+as the autocorrelation and cross correlation functions,
 respectively. $\rho(q, p)$ is the density distribution at equilibrium
-in phase space. In many cases, the correlation function decay is a
-single exponential
+in phase space. In many cases, correlation functions decay as a
+single exponential in time
 \begin{equation}
 C(t) \sim e^{-t / \tau_r},
 \label{Ineq:relaxation}
 \end{equation}
-where $\tau_r$ is known as relaxation time which discribes the rate of
+where $\tau_r$ is known as the relaxation time, which describes the
+rate of
 the decay. 
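The exponential decay of Eq.~\ref{Ineq:relaxation} can be sketched numerically (an assumed toy stochastic process, not one of the systems studied in this dissertation): the discrete process $x_{n+1} = a x_n + \eta_n$ with Gaussian noise $\eta_n$ has an autocorrelation function $C(k) \sim a^k$, so its relaxation time is $\tau_r = -1/\ln a$ steps, which can be recovered from the sampled correlation function.

```python
import math
import random

random.seed(42)

# Toy process x_{n+1} = a x_n + noise; C(k) ~ a^k, so tau_r = -1/ln(a).
a, n = 0.9, 200000
x, traj = 0.0, []
for _ in range(n):
    x = a * x + random.gauss(0.0, 1.0)
    traj.append(x)

mean = sum(traj) / n

def autocorr(k):
    """Time-averaged correlation C(k) = <x(0) x(k)>, mean subtracted."""
    return sum((traj[i] - mean) * (traj[i + k] - mean)
               for i in range(n - k)) / (n - k)

c0 = autocorr(0)
tau_est = -1.0 / math.log(autocorr(1) / c0)   # relaxation time, in steps
print(tau_est)   # near the exact value -1/ln(0.9), about 9.5
```
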
-\section{Methodolody\label{In:sec:method}}
-The simulations performed in this dissertation are branched into two
-main catalog, Monte Carlo and Molecular Dynamics. There are two main
-difference between Monte Carlo and Molecular Dynamics simulations. One
-is that the Monte Carlo simulation is time independent, and Molecular
-Dynamics simulation is time involved. Another dissimilar is that the
-Monte Carlo is a stochastic process, the configuration of the system
-is not determinated by its past, however, using Moleuclar Dynamics,
-the system is propagated by Newton's equation of motion, the
-trajectory of the system evolved in the phase space is determined. A
-brief introduction of the two algorithms are given in
-section~\ref{In:ssec:mc} and~\ref{In:ssec:md}. An extension of the
-Molecular Dynamics, Langevin Dynamics, is introduced by
+\section{Methodology\label{In:sec:method}}
+The simulations performed in this dissertation branch into two main
+categories, Monte Carlo and Molecular Dynamics. There are two main
+differences between Monte Carlo and Molecular Dynamics
+simulations. One is that Monte Carlo simulations are time-independent
+methods of sampling structural features of an ensemble,
+while Molecular Dynamics simulations provide dynamic
+information. Additionally, Monte Carlo methods are stochastic
+processes; the future configurations of the system are not determined
+by its past. However, in Molecular Dynamics, the system is propagated
+by Hamilton's equations of motion, and the trajectory of the system
+evolving in phase space is deterministic. Brief introductions of the
+two algorithms are given in sections~\ref{In:ssec:mc}
+and~\ref{In:ssec:md}. Langevin Dynamics, an extension of Molecular
+Dynamics that includes implicit solvent effects, is introduced in
+section~\ref{In:ssec:ld}. 
\subsection{Monte Carlo\label{In:ssec:mc}}
-Monte Carlo algorithm was first introduced by Metropolis {\it et
-al.}.~\cite{Metropolis53} Basic Monte Carlo algorithm is usually
-applied to the canonical ensemble, a Boltzmann-weighted ensemble, in
-which the $N$, the total number of particles, $V$, total volume, $T$,
-temperature are constants. The average energy is given by substituding
-Eq.~\ref{Ineq:canonicalEnsemble} into Eq.~\ref{Ineq:statAverage2},
+A Monte Carlo integration algorithm was first introduced by Metropolis
+{\it et al.}~\cite{Metropolis53} The basic Metropolis Monte Carlo
+algorithm is usually applied to the canonical ensemble, a
+Boltzmann-weighted ensemble, in which $N$, the total number of
+particles, $V$, the total volume, and $T$, the temperature, are
+constants. An average in this ensemble is given by
 \begin{equation}
-\langle E \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N E e^{-H(q^N, p^N) / k_B T}.
+\langle A \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N
+A(q^N, p^N) e^{-H(q^N, p^N) / k_B T}.
 \label{Ineq:energyofCanonicalEnsemble}
 \end{equation}
-So are the other properties of the system. The Hamiltonian is the
-summation of Kinetic energy $K(p^N)$ as a function of momenta and
-Potential energy $U(q^N)$ as a function of positions,
+The Hamiltonian is the sum of the kinetic energy, $K(p^N)$, a function
+of momenta, and the potential energy, $U(q^N)$, a function of
+positions,
 \begin{equation}
 H(q^N, p^N) = K(p^N) + U(q^N). 
\label{Ineq:hamiltonian}
 \end{equation}
-If the property $A$ is only a function of position ($ A = A(q^N)$),
-the mean value of $A$ is reduced to
+If the property $A$ is a function only of position ($ A = A(q^N)$),
+the mean value of $A$ can be reduced to
 \begin{equation}
-\langle A \rangle = \frac{\int d \vec q~^N \int d \vec p~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N \int d \vec p~^N e^{-U(q^N) / k_B T}},
+\langle A \rangle = \frac{\int d \vec q~^N A e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}}.
 \label{Ineq:configurationIntegral}
 \end{equation}
 The kinetic energy $K(p^N)$ is factored out in
 Eq.~\ref{Ineq:configurationIntegral}. $\langle A
-\rangle$ is a configuration integral now, and the
+\rangle$ is now a configuration integral, and
 Eq.~\ref{Ineq:configurationIntegral} is equivalent to
 \begin{equation}
-\langle A \rangle = \int d \vec q~^N A \rho(q^N).
+\langle A \rangle = \int d \vec q~^N A \rho(q^N),
 \label{Ineq:configurationAve}
 \end{equation}
+where $\rho(q^N)$ is the configurational probability distribution,
+\begin{equation}
+\rho(q^N) = \frac{e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}}.
+\label{Ineq:configurationProb}
+\end{equation}
 
-In a Monte Carlo simulation of canonical ensemble, the probability of
-the system being in a state $s$ is $\rho_s$, the change of this
-probability with time is given by
+In a Monte Carlo simulation of a system in the canonical ensemble, the
+probability of the system being in a state $s$ is $\rho_s$; the change
+of this probability with time is given by
 \begin{equation}
 \frac{d \rho_s}{dt} = \sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ],
 \label{Ineq:timeChangeofProb}
@@ -494,125 +554,114 @@ to state $s'$. Since $\rho_s$ is independent of time a
 \frac{d \rho_{s}^{equilibrium}}{dt} = 0,
 \label{Ineq:equiProb}
 \end{equation}
-which means $\sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ]$ is $0$
-for all $s'$. 
So
+the sum of transition probabilities $\sum_{s'} [ -w_{ss'}\rho_s +
+w_{s's}\rho_{s'} ]$ is $0$ for all $s'$. So
 \begin{equation}
 \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
 \label{Ineq:relationshipofRhoandW}
 \end{equation}
-If
+If the ratio of state populations satisfies
 \begin{equation}
-\frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T},
-\label{Ineq:conditionforBoltzmannStatistics}
-\end{equation}
-then
-\begin{equation}
 \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = e^{-(U_s - U_{s'}) / k_B T},
 \label{Ineq:satisfyofBoltzmannStatistics}
 \end{equation}
-Eq.~\ref{Ineq:satisfyofBoltzmannStatistics} implies that
-$\rho^{equilibrium}$ satisfies Boltzmann statistics. An algorithm,
-shows how Monte Carlo simulation generates a transition probability
-governed by \ref{Ineq:conditionforBoltzmannStatistics}, is schemed as
+then the transition probabilities must satisfy
+\begin{equation}
+\frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T}.
+\label{Ineq:conditionforBoltzmannStatistics}
+\end{equation}
+An algorithm showing how a Monte Carlo simulation generates
+transition probabilities governed by
+Eq.~\ref{Ineq:conditionforBoltzmannStatistics} is given schematically
+as:
 \begin{enumerate}
-\item\label{Initm:oldEnergy} Choose an particle randomly, calculate the energy.
-\item\label{Initm:newEnergy} Make a random displacement for particle,
-calculate the new energy.
+\item\label{Initm:oldEnergy} Choose a particle randomly, and calculate
+the energy of the rest of the system due to the current location of
+the particle.
+\item\label{Initm:newEnergy} Make a random displacement of the particle,
+and calculate the new energy taking the movement of the particle into
+account.
 \begin{itemize}
-  \item Keep the new configuration and return to step~\ref{Initm:oldEnergy} if energy
-goes down.
-  \item Pick a random number between $[0,1]$ if energy goes up.
+  \item If the energy goes down, keep the new configuration. 
+  \item If the energy goes up, pick a random number between $[0,1]$.
 \begin{itemize}
-  \item Keep the new configuration and return to
-step~\ref{Initm:oldEnergy} if the random number smaller than
-$e^{-(U_{new} - U_{old})} / k_B T$.
-  \item Keep the old configuration and return to
-step~\ref{Initm:oldEnergy} if the random number larger than
-$e^{-(U_{new} - U_{old})} / k_B T$.
+  \item If the random number is smaller than
+$e^{-(U_{new} - U_{old}) / k_B T}$, keep the new configuration.
+  \item If the random number is larger than
+$e^{-(U_{new} - U_{old}) / k_B T}$, keep the old configuration.
 \end{itemize}
 \end{itemize}
-\item\label{Initm:accumulateAvg} Accumulate the average after it converges.
+\item\label{Initm:accumulateAvg} Accumulate the averages based on the
+current configuration.
+\item Go to step~\ref{Initm:oldEnergy}.
 \end{enumerate}
-It is important to notice that the old configuration has to be sampled
-again if it is kept.
+It is important for sampling accuracy that the old configuration is
+sampled again if it is kept.
 
 \subsection{Molecular Dynamics\label{In:ssec:md}}
 Although some of the properties of the system can be calculated from the
-ensemble average in Monte Carlo simulations, due to the nature of
-lacking in the time dependence, it is impossible to gain information
-of those dynamic properties from Monte Carlo simulations. Molecular
-Dynamics is a measurement of the evolution of the positions and
-momenta of the particles in the system. The evolution of the system
-obeys laws of classical mechanics, in most situations, there is no
-need for the count of the quantum effects. For a real experiment, the
-instantaneous positions and momenta of the particles in the system are
-neither important nor measurable, the observable quantities are
-usually a average value for a finite time interval. These quantities
-are expressed as a function of positions and momenta in Melecular
-Dynamics simulations. 
Like the thermal temperature of the system is
-defined by
+ensemble average in Monte Carlo simulations, due to the absence of
+time dependence, it is impossible to gain information on dynamic
+properties from Monte Carlo simulations. Molecular Dynamics evolves
+the positions and momenta of the particles in the system. The
+evolution of the system obeys the laws of classical mechanics, and in
+most situations, there is no need to account for quantum effects. In a
+real experiment, the instantaneous positions and momenta of the
+particles in the system are often neither important nor measurable;
+the observable quantities are usually an average value over a finite
+time interval. These quantities are expressed as a function of
+positions and momenta in Molecular Dynamics simulations. For example,
+the temperature of the system is defined by
 \begin{equation}
-\frac{1}{2} k_B T = \langle \frac{1}{2} m v_\alpha \rangle,
+\frac{3}{2} N k_B T = \left\langle \sum_{i=1}^N \frac{1}{2} m_i v_i^2 \right\rangle,
 \label{Ineq:temperature}
 \end{equation}
-here $m$ is the mass of the particle and $v_\alpha$ is the $\alpha$
-component of the velocity of the particle. The right side of
-Eq.~\ref{Ineq:temperature} is the average kinetic energy of the
-system. A simple Molecular Dynamics simulation scheme
-is:~\cite{Frenkel1996}
-\begin{enumerate}
-\item\label{Initm:initialize} Assign the initial positions and momenta
-for the particles in the system.
-\item\label{Initm:calcForce} Calculate the forces.
-\item\label{Initm:equationofMotion} Integrate the equation of motion.
-  \begin{itemize}
-  \item Return to step~\ref{Initm:calcForce} if the equillibrium is
-not achieved.
-  \item Go to step~\ref{Initm:calcAvg} if the equillibrium is
-achieved.
-  \end{itemize}
-\item\label{Initm:calcAvg} Compute the quantities we are interested in.
-\end{enumerate}
-The initial positions of the particles are chosen as that there is no
-overlap for the particles. 
The initial velocities at first are
-distributed randomly to the particles, and then shifted to make the
-momentum of the system $0$, at last scaled to the desired temperature
-of the simulation according Eq.~\ref{Ineq:temperature}.
+here $m_i$ is the mass of particle $i$ and $v_i$ is the velocity of
+particle $i$. The right side of Eq.~\ref{Ineq:temperature} is the
+average kinetic energy of the system.
 
-The core of Molecular Dynamics simulations is step~\ref{Initm:calcForce}
-and~\ref{Initm:equationofMotion}. The calculation of the forces are
-often involved numerous effort, this is the most time consuming step
-in the Molecular Dynamics scheme. The evaluation of the forces is
-followed by
+The initial positions of the particles are chosen so that there is no
+overlap of the particles. The initial velocities are first
+distributed randomly to the particles using a Maxwell-Boltzmann
+distribution, and then shifted to make the total linear momentum of
+the system $0$.
+
+The core of Molecular Dynamics simulations is the calculation of
+forces and the integration algorithm. Calculation of the forces often
+involves enormous effort. This is the most time-consuming step in the
+Molecular Dynamics scheme. Evaluation of the forces is mathematically
+simple,
 \begin{equation}
 f(q) = - \frac{\partial U(q)}{\partial q},
 \label{Ineq:force}
 \end{equation}
-$U(q)$ is the potential of the system. Once the forces computed, are
-the positions and velocities updated by integrating Newton's equation
-of motion,
-\begin{equation}
-f(q) = \frac{dp}{dt} = \frac{m dv}{dt}.
+where $U(q)$ is the potential of the system. However, the numerical
+details of this computation are often quite complex. 
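Eq.~\ref{Ineq:force} can be illustrated with a toy potential (a hypothetical harmonic potential $U(q) = kq^2/2$, chosen only for this sketch): the analytic force $f(q) = -kq$ agrees with a central finite-difference estimate of $-\partial U/\partial q$.

```python
def U(q, k=1.0):
    """Illustrative harmonic potential U(q) = k q^2 / 2."""
    return 0.5 * k * q * q

def force_analytic(q, k=1.0):
    """f(q) = -dU/dq = -k q for the harmonic potential."""
    return -k * q

def force_numeric(q, h=1e-6):
    """Central finite-difference approximation of f(q) = -dU/dq."""
    return -(U(q + h) - U(q - h)) / (2.0 * h)

q = 1.3
print(force_analytic(q), force_numeric(q))  # both close to -1.3
```
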
Once the forces
+are computed, the positions and velocities are updated by integrating
+Hamilton's equations of motion,
+\begin{eqnarray}
+\dot p_i & = & -\frac{\partial H}{\partial q_i} = -\frac{\partial
+U(q_i)}{\partial q_i} = f(q_i) \\
+\dot q_i & = & \frac{\partial H}{\partial p_i} = \frac{p_i}{m_i}
+\label{Ineq:newton}
\end{eqnarray}
+The classic example of an integrating algorithm is the Verlet
+algorithm, which is one of the simplest algorithms for integrating the
+equations of motion. The Taylor expansion of the position at time $t$
+is
 \begin{equation}
-q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 +
+q(t+\Delta t)= q(t) + \frac{p(t)}{m} \Delta t + \frac{f(t)}{2m}\Delta t^2 +
 \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
 \mathcal{O}(\Delta t^4)
 \label{Ineq:verletFuture}
 \end{equation}
 for a later time $t+\Delta t$, and
 \begin{equation}
-q(t-\Delta t)= q(t) - v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 -
+q(t-\Delta t)= q(t) - \frac{p(t)}{m} \Delta t + \frac{f(t)}{2m}\Delta t^2 -
 \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
 \mathcal{O}(\Delta t^4) ,
 \label{Ineq:verletPrevious}
 \end{equation}
-for a previous time $t-\Delta t$. The summation of the
-Eq.~\ref{Ineq:verletFuture} and~\ref{Ineq:verletPrevious} gives
+for a previous time $t-\Delta t$. Adding Eq.~\ref{Ineq:verletFuture}
+and~\ref{Ineq:verletPrevious} gives
 \begin{equation}
 q(t+\Delta t)+q(t-\Delta t) = 2q(t) + \frac{f(t)}{m}\Delta t^2 +
 \mathcal{O}(\Delta t^4),
@@ -624,49 +673,53 @@ q(t+\Delta t)
 \approx 2q(t) - q(t-\Delta t) +
 \frac{f(t)}{m}\Delta t^2.
 \label{Ineq:newPosition}
 \end{equation}
-The higher order of the $\Delta t$ is omitted.
+The higher order terms in $\Delta t$ are omitted. 
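A minimal sketch of the Verlet update in Eq.~\ref{Ineq:newPosition} (for an assumed harmonic oscillator with $m = k = 1$, not one of the lipid models used later): starting from $q(0) = 1$ at rest, the integrated trajectory closely tracks the exact solution $\cos(t)$.

```python
import math

# Verlet integration of a harmonic oscillator (m = k = 1), following
# q(t+dt) = 2 q(t) - q(t-dt) + f(t)/m * dt^2.
m, k, dt, nsteps = 1.0, 1.0, 0.01, 1000

def force(q):
    return -k * q          # harmonic force, f = -dU/dq

q = 1.0                    # q(0) = 1, started from rest (v(0) = 0)
q_prev = q + 0.5 * force(q) / m * dt * dt   # q(-dt) from a Taylor step
for _ in range(nsteps):
    q, q_prev = 2.0 * q - q_prev + force(q) / m * dt * dt, q

t = nsteps * dt
print(q, math.cos(t))      # Verlet position vs the exact solution cos(t)
```

The starting value $q(-\Delta t)$ comes from the Taylor expansion in Eq.~\ref{Ineq:verletPrevious}, since the two-step recursion needs two previous positions to begin.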
-Numerous technics and tricks are applied to Molecular Dynamics
-simulation to gain more efficiency or more accuracy. The simulation
-engine used in this dissertation for the Molecular Dynamics
-simulations is {\sc oopse}, more details of the algorithms and
-technics can be found in~\cite{Meineke2005}.
+Numerous techniques and tricks have been applied to Molecular Dynamics
+simulations to gain greater efficiency or accuracy. The engine used in
+this dissertation for the Molecular Dynamics simulations is {\sc
+oopse}. More details of the algorithms and techniques used in this
+code can be found in Ref.~\cite{Meineke2005}.
 
 \subsection{Langevin Dynamics\label{In:ssec:ld}}
-In many cases, the properites of the solvent in a system, like the
-lipid-water system studied in this dissertation, are less important to
-the researchers. However, the major computational expense is spent on
-the solvent in the Molecular Dynamics simulations because of the large
-number of the solvent molecules compared to that of solute molecules,
-the ratio of the number of lipid molecules to the number of water
-molecules is $1:25$ in our lipid-water system. The efficiency of the
-Molecular Dynamics simulations is greatly reduced.
+In many cases, the properties of the solvent (like the water in the
+lipid-water system studied in this dissertation) are less interesting
+to the researchers than the behavior of the solute. However, the major
+computational expense is often the solvent-solvent interactions; this
+is due to the large number of solvent molecules when compared to
+the number of solute molecules. The ratio of the number of lipid
+molecules to the number of water molecules is $1:25$ in our
+lipid-water system. The efficiency of the Molecular Dynamics
+simulations is greatly reduced, with up to 85\% of CPU time spent
+calculating only water-water interactions. 
As an extension of the Molecular Dynamics methodologies, Langevin
Dynamics seeks a way to avoid integrating the equations of motion for
solvent particles without losing the solvent effects on the solute
particles. One common approximation is to express the coupling of the
solute and solvent as a set of harmonic oscillators. The Hamiltonian
of such a system is written as
\begin{equation}
H = \frac{p^2}{2m} + U(q) + H_B + \Delta U(q),
\label{Ineq:hamiltonianofCoupling}
\end{equation}
where $H_B$ is the Hamiltonian of the bath, which is a set of $N$
harmonic oscillators,
\begin{equation}
H_B = \sum_{\alpha = 1}^{N} \left\{ \frac{p_\alpha^2}{2m_\alpha} +
\frac{1}{2} m_\alpha \omega_\alpha^2 q_\alpha^2\right\},
\label{Ineq:hamiltonianofBath}
\end{equation}
$\alpha$ runs over all the degrees of freedom of the bath,
$\omega_\alpha$ is the frequency of bath oscillator $\alpha$, and
$\Delta U(q)$ is the bilinear coupling given by
\begin{equation}
\Delta U = -\sum_{\alpha = 1}^{N} g_\alpha q_\alpha q,
\label{Ineq:systemBathCoupling}
\end{equation}
where $g_\alpha$ is the coupling constant for oscillator $\alpha$.
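As a concreteness check, the terms of
Eqs.~\ref{Ineq:hamiltonianofCoupling}--\ref{Ineq:systemBathCoupling}
can be evaluated numerically. The sketch below uses made-up bath
parameters ($m_\alpha$, $\omega_\alpha$, $g_\alpha$) purely for
illustration:

```python
import numpy as np

def bath_energy(p_b, q_b, m_b, w_b):
    # H_B = sum_a [ p_a^2/(2 m_a) + (1/2) m_a w_a^2 q_a^2 ]
    return np.sum(p_b**2 / (2.0 * m_b) + 0.5 * m_b * w_b**2 * q_b**2)

def coupling_energy(q, q_b, g):
    # Delta U = - sum_a g_a q_a q   (bilinear system-bath coupling)
    return -np.sum(g * q_b) * q

def total_hamiltonian(p, q, m, U, p_b, q_b, m_b, w_b, g):
    # H = p^2/(2m) + U(q) + H_B + Delta U(q)
    return (p**2 / (2.0 * m) + U(q)
            + bath_energy(p_b, q_b, m_b, w_b)
            + coupling_energy(q, q_b, g))

# Three hypothetical bath oscillators coupled to one solute coordinate.
m_b = np.array([1.0, 2.0, 1.5])
w_b = np.array([0.5, 1.0, 2.0])
g = np.array([0.1, 0.2, 0.3])
H = total_hamiltonian(p=1.0, q=0.5, m=1.0, U=lambda x: 0.5 * x**2,
                      p_b=np.zeros(3), q_b=np.ones(3),
                      m_b=m_b, w_b=w_b, g=g)
```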
By solving Hamilton's equations of motion, the {\it Generalized
Langevin Equation} for this system is derived as
\begin{equation}
m \ddot q = -\frac{\partial W(q)}{\partial q} - \int_0^t \xi(t') \dot
q(t-t')dt' + R(t),
\label{Ineq:gle}
\end{equation}
with mean force,
\begin{equation}
W(q) = U(q) - \sum_{\alpha = 1}^N \frac{g_\alpha^2}{2m_\alpha
\omega_\alpha^2}q^2,
\label{Ineq:meanForce}
\end{equation}
which depends only on the coordinates of the solute particles. The
{\it friction kernel},
\begin{equation}
\xi(t) = \sum_{\alpha = 1}^N \frac{g_\alpha^2}{m_\alpha
\omega_\alpha^2} \cos(\omega_\alpha t),
\label{Ineq:xiforGLE}
\end{equation}
depends only on the bath parameters, and a ``random'' force,
\begin{equation}
R(t) = \sum_{\alpha = 1}^N g_\alpha \left[ \left(
q_\alpha(0)-\frac{g_\alpha}{m_\alpha \omega_\alpha^2}q(0)\right)
\cos(\omega_\alpha t) + \frac{\dot q_\alpha(0)}{\omega_\alpha}
\sin(\omega_\alpha t) \right],
\label{Ineq:randomForceforGLE}
\end{equation}
depends only on the initial conditions. The relationship of the
friction kernel $\xi(t)$ and the random force $R(t)$ is given by the
second fluctuation dissipation theorem,
\begin{equation}
\xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle.
\label{Ineq:relationshipofXiandR}
\end{equation}
For the harmonic bath this relation is exact and provable from the
definitions of these quantities. In the limit of static friction,
\begin{equation}
\xi(t) = 2 \xi_0 \delta(t).
\label{Ineq:xiofStaticFriction}
\end{equation}
After substituting Eq.~\ref{Ineq:xiofStaticFriction} into
Eq.~\ref{Ineq:gle}, the {\it Langevin Equation} is extracted,
\begin{equation}
m \ddot q = -\frac{\partial U(q)}{\partial q} - \xi_0 \dot q(t) + R(t).
\label{Ineq:langevinEquation}
\end{equation}
Application of the Langevin Equation to dynamic simulations is
discussed in Ch.~\ref{chap:ld}.
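A minimal numerical sketch of the Langevin Equation follows. It uses a
simple Euler-Maruyama discretization in which the delta-correlated
random force is drawn as a Gaussian kick of variance
$2\xi_0 k_B T/\Delta t$ per step; the function names and parameters
are illustrative, and this is not the integrator used in
Ch.~\ref{chap:ld}:

```python
import numpy as np

def langevin_step(q, p, f, m, xi0, kT, dt, rng):
    # Euler-Maruyama step for  m qddot = -dU/dq - xi0 qdot + R(t),
    # with <R(t)R(0)> = 2 xi0 kT delta(t) discretized as a Gaussian
    # random force of variance 2 xi0 kT / dt on each step.
    R = rng.normal(0.0, np.sqrt(2.0 * xi0 * kT / dt))
    p = p + (f - xi0 * p / m + R) * dt
    q = q + (p / m) * dt
    return q, p

# Equilibrate a harmonic oscillator (U = k q^2 / 2) against the
# implicit bath; equipartition predicts <q^2> = kT / k.
rng = np.random.default_rng(0)
m, k, xi0, kT, dt = 1.0, 1.0, 1.0, 1.0, 0.01
q, p, q2_sum, nsteps = 0.0, 0.0, 0.0, 200_000
for _ in range(nsteps):
    q, p = langevin_step(q, p, -k * q, m, xi0, kT, dt, rng)
    q2_sum += q * q
mean_q2 = q2_sum / nsteps   # should come out close to kT/k
```

The check on $\langle q^2 \rangle$ is a standard sanity test for a
Langevin thermostat: the damping and the random force must satisfy the
fluctuation dissipation relation for the solute to sample the correct
canonical distribution.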