
Comparing trunk/xDissertation/Introduction.tex (file contents):
Revision 3374 by xsun, Thu Mar 20 22:21:43 2008 UTC vs.
Revision 3383 by xsun, Wed Apr 16 21:56:34 2008 UTC

# Line 3 | Line 3 | because of their critical role as the foundation of bi
3   \section{Background on the Problem\label{In:sec:pro}}
4   Phospholipid molecules are the primary topic of this dissertation
5   because of their critical role as the foundation of biological
6 < membranes. The chemical structure of phospholipids includes the polar
7 < head group which is due to a large charge separation between phosphate
8 < and amino alcohol, and the nonpolar tails that contains fatty acid
9 < chains. Depending on the alcohol which phosphate and fatty acid chains
10 < are esterified to, the phospholipids are divided into
11 < glycerophospholipids and sphingophospholipids.~\cite{Cevc80} The
12 < chemical structures are shown in figure~\ref{Infig:lipid}.
6 > membranes. The chemical structure of phospholipids includes a head
7 > group with a large dipole moment arising from the large charge
8 > separation between the phosphate and the amino alcohol, and a nonpolar
9 > tail that contains fatty acid chains. Depending on the specific alcohol
10 > to which the phosphate and fatty acid chains are esterified, the
11 > phospholipids are divided into glycerophospholipids and
12 > sphingophospholipids.~\cite{Cevc80} The chemical structures are shown
13 > in figure~\ref{Infig:lipid}.
14   \begin{figure}
15   \centering
16   \includegraphics[width=\linewidth]{./figures/inLipid.pdf}
17 < \caption{The chemical structure of glycerophospholipids (left) and
18 < sphingophospholipids (right).\cite{Cevc80}}
17 > \caption[The chemical structure of lipids]{The chemical structure of
18 > glycerophospholipids (left) and sphingophospholipids
19 > (right).\cite{Cevc80}}
20   \label{Infig:lipid}
21   \end{figure}
22 < The glycerophospholipid is the dominant phospholipid in membranes. The
23 < types of glycerophospholipids depend on the X group, and the
24 < chains. For example, if X is choline, the lipids are known as
25 < phosphatidylcholine (PC), or if X is ethanolamine, the lipids are
26 < known as phosphatidyethanolamine (PE). Table~\ref{Intab:pc} listed a
27 < number types of phosphatidycholine with different fatty acids as the
28 < lipid chains.
22 > Glycerophospholipids are the dominant phospholipids in biological
23 > membranes. The type of glycerophospholipid depends on the identity of
24 > the X group and on the fatty acid chains. For example, if X is choline
25 > [(CH$_3$)$_3$N$^+$CH$_2$CH$_2$OH], the lipids are known as
26 > phosphatidylcholine (PC), while if X is ethanolamine
27 > [H$_3$N$^+$CH$_2$CH$_2$OH], the lipids are known as
28 > phosphatidylethanolamine (PE). Table~\ref{Intab:pc} lists a number of
29 > types of phosphatidylcholine with different fatty acids as the lipid
30 > chains.
31   \begin{table*}
32   \begin{minipage}{\linewidth}
33   \begin{center}
34 < \caption{A number types of phosphatidycholine.}
34 > \caption{A NUMBER OF TYPES OF PHOSPHATIDYLCHOLINE}
35   \begin{tabular}{lll}
36   \hline
37    & Fatty acid & Generic Name \\
# Line 42 | Line 46 | lipid chains.
46   \end{center}
47   \end{minipage}
48   \end{table*}
49 < When dispersed in water, lipids self assemble into a mumber of
49 > When dispersed in water, lipids self-assemble into a number of
50   topologically distinct bilayer structures. The phase behavior of lipid
51   bilayers has been explored experimentally,~\cite{Cevc80} but a
52   complete understanding of the mechanism and driving forces behind the
53   various phases has not been achieved.
54  
55 < \subsection{Ripple Phase\label{In:ssec:ripple}}
55 > \subsection{The Ripple Phase\label{In:ssec:ripple}}
56   The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named for the
57   periodic buckling of the membrane, is an intermediate phase which is
58   formed either by heating the gel phase $L_{\beta'}$ or by cooling
59 < the fluid phase $L_\alpha$. A Sketch is shown in
59 > the fluid phase $L_\alpha$. A sketch of the phases is shown in
60   figure~\ref{Infig:phaseDiagram}.~\cite{Cevc80}
61   \begin{figure}
62   \centering
63   \includegraphics[width=\linewidth]{./figures/inPhaseDiagram.pdf}
64 < \caption{A phase diagram of lipid bilayer. With increasing the
65 < temperature, the bilayer can go through a gel ($L_{\beta'}$), ripple
66 < ($P_{\beta'}$) to fluid ($L_\alpha$) phase transition.~\cite{Cevc80}}
64 > \caption[Phases of PC lipid bilayers]{Phases of PC lipid
65 > bilayers. With increasing temperature, phosphatidylcholine (PC)
66 > bilayers can go through $L_{\beta'} \rightarrow P_{\beta'}$ (gel
67 > $\rightarrow$ ripple) and $P_{\beta'} \rightarrow L_\alpha$ (ripple
68 > $\rightarrow$ fluid) phase transitions.~\cite{Cevc80}}
69   \label{Infig:phaseDiagram}
70   \end{figure}
71 < Most structural information of the ripple phase has been obtained by
72 < the X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
71 > Most structural information about the ripple phase has been obtained
72 > by X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
73   electron microscopy (FFEM).~\cite{Copeland80,Meyer96} The X-ray
74   diffraction work by Katsaras {\it et al.} showed that a rich phase
75   diagram exhibiting both {\it asymmetric} and {\it symmetric} ripples
# Line 74 | Line 80 | mica.~\cite{Kaasgaard03}
80   \begin{figure}
81   \centering
82   \includegraphics[width=\linewidth]{./figures/inRipple.pdf}
83 < \caption{The experimental observed ripple phase. The top image is
84 < obtained by Sun {\it et al.} using X-ray diffraction~\cite{Sun96},
85 < and the bottom one is observed by Kaasgaard {\it et al.} using
86 < AFM.~\cite{Kaasgaard03}}
83 > \caption[Experimental observations of the ripple phase]{Experimental
84 > observations of the ripple phase. The top image is an electron
85 > density map obtained by Sun {\it et al.} using X-ray
86 > diffraction~\cite{Sun96}.  The lower figures show the surface topology
87 > of various ripple domains in bilayers supported on mica. The AFM
88 > images were obtained by Kaasgaard {\it et al.}~\cite{Kaasgaard03}}
89   \label{Infig:ripple}
90   \end{figure}
91 < Figure~\ref{Infig:ripple} shows the ripple phase oberved by X-ray
91 > Figure~\ref{Infig:ripple} shows the ripple phase observed by both X-ray
92   diffraction and AFM. The experimental results provide strong support
93   for a 2-dimensional triangular packing lattice of the lipid molecules
94   within the ripple phase.  This is a notable change from the observed
95   lipid packing within the gel phase,~\cite{Cevc80} although Tenchov
96 < {\it et al.} have recently observed near-hexagonal packing in some
96 > {\it et al.} have recently observed near-triangular packing in some
97   phosphatidylcholine (PC) gel phases.~\cite{Tenchov2001} However, the
98   physical mechanism for the formation of the ripple phase has never
99   been explained and the microscopic structure of the ripple phase has
100   never been elucidated by experiments. Computational simulation is a
101 < perfect tool to study the microscopic properties for a
102 < system. However, the large length scale the ripple structure and the
103 < long time scale of the formation of the ripples are crucial obstacles
104 < to performing the actual work. The principal ideas explored in this
105 < dissertation are attempts to break the computational task up by
101 > very good tool for studying the microscopic properties of a
102 > system. However, the large length scale of the ripple structures and
103 > the long time required for the formation of the ripples are significant
104 > obstacles to performing such simulations. The principal ideas explored
105 > in this dissertation are attempts to break the computational task up
106 > by
107   \begin{itemize}
108   \item Simplifying the lipid model.
109 < \item Improving algorithm for integrating the equations of motion.
109 > \item Improving the algorithm for integrating the equations of motion.
110   \end{itemize}
111   In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
112 < dipole spin model and a coarse-grained molecualr scale model to
112 > dipole spin model and a coarse-grained molecular scale model to
113   perform the Monte Carlo and Molecular Dynamics simulations
114   respectively, and in chapter~\ref{chap:ld}, we develop a Langevin
115   Dynamics algorithm which excludes the explicit solvent to improve the
116   efficiency of the simulations.
117  
118 < \subsection{Lattice Model\label{In:ssec:model}}
119 < The gel-like characteristic (relatively immobile molecules) exhibited
118 > \subsection{Lattice Models\label{In:ssec:model}}
119 > The gel-like characteristic (laterally immobile molecules) exhibited
120   by the ripple phase makes it feasible to apply a lattice model to
121   study the system. The popular $2$-dimensional lattice models, {\it
122   i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it frustration}
# Line 128 | Line 137 | anti-aligned structure.
137   \begin{figure}
138   \centering
139   \includegraphics[width=3in]{./figures/inFrustration.pdf}
140 < \caption{Frustration on triangular lattice, the spins and dipoles are
141 < represented by arrows. The multiple local minima of energy states
142 < induce the frustration for spins and dipoles picking the directions.}
140 > \caption[Frustration on triangular lattice]{Frustration on a triangular
141 > lattice. The spins and dipoles are represented by arrows. The multiple
142 > local minima of energy states induce frustration for spins and dipoles,
143 > resulting in disordered low-temperature phases.}
144   \label{Infig:frustration}
145   \end{figure}
146 < The spins in figure~\ref{Infig:frustration} shows an illustration of
147 < the frustration for $J < 0$ on a triangular lattice. There are
148 < multiple local minima energy states which are independent of the
149 < direction of the spin on top of the triangle, therefore infinite
150 < possibilities for the packing of spins which induces what is known as
151 < ``complete regular frustration'' which leads to disordered low
152 < temperature phases. The similarity goes to the dipoles on a hexagonal
153 < lattice, which are shown by the dipoles in
154 < figure~\ref{Infig:frustration}. In this circumstance, the dipoles want
155 < to be aligned, however, due to the long wave fluctuation, at low
156 < temperature, the aligned state becomes unstable, vortex is formed and
157 < results in multiple local minima of energy states. The dipole on the
148 < center of the hexagonal lattice is frustrated.
146 > The spins in figure~\ref{Infig:frustration} illustrate frustration for
147 > $J < 0$ on a triangular lattice. There are multiple local energy
148 > minima which are independent of the direction of the spin at the top
149 > of the triangle, so there are infinitely many possibilities for
150 > orienting large numbers of spins. This induces what is known as
151 > ``complete regular frustration'' which leads to disordered low
152 > temperature phases. This behavior extends to dipoles on a triangular
153 > lattice, shown in the lower portion of figure~\ref{Infig:frustration}.
154 > In this case, dipole-aligned structures are energetically favorable;
155 > however, at low temperature, vortices are easily formed, and this
156 > results in multiple local energy minima for a central dipole. The
157 > dipole at the center of the hexagonal arrangement is therefore frustrated.
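To make the degeneracy behind this frustration concrete, the short Python
sketch below (an illustration added here, not part of the lattice
simulations in this dissertation) enumerates the eight Ising configurations
of a single triangle with an antiferromagnetic coupling ($J < 0$ in the
convention $H = -J\sum_{\langle ij \rangle} s_i s_j$) and counts the
degenerate ground states; one of the three bonds is always left unsatisfied.
\begin{verbatim}
# Illustrative sketch: frustration of three antiferromagnetically
# coupled Ising spins on a triangle (J < 0, H = -J * sum s_i s_j).
from itertools import product

J = -1.0                                  # antiferromagnetic coupling
bonds = [(0, 1), (1, 2), (0, 2)]          # the three edges of the triangle

energies = {}
for spins in product([-1, 1], repeat=3):  # all 2^3 spin configurations
    energies[spins] = -J * sum(spins[i] * spins[j] for i, j in bonds)

E_min = min(energies.values())
ground_states = [s for s, E in energies.items() if E == E_min]
print(E_min)                # -1, not -3: one bond is always unsatisfied
print(len(ground_states))   # 6 degenerate ground states
\end{verbatim}
The same enumeration with a ferromagnetic coupling ($J > 0$) gives only the
two fully aligned ground states, which is the contrast the figure is meant
to convey.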
158  
159   The lack of translational degrees of freedom in lattice models
160   prevents their utilization in models for surface buckling. In
# Line 159 | Line 168 | principles of statistical mechanics.~\cite{Tolman1979}
168   to key concepts of classical statistical mechanics that we used in
169   this dissertation. Tolman gives an excellent introduction to the
170   principles of statistical mechanics.~\cite{Tolman1979} A large part of
171 < section~\ref{In:sec:SM} will follow Tolman's notation.
171 > section~\ref{In:sec:SM} follows Tolman's notation.
172  
173   \subsection{Ensembles\label{In:ssec:ensemble}}
174   In classical mechanics, the state of the system is completely
# Line 269 | Line 278 | selected moving representative points in the phase spa
278   and the rate of density change is zero in the neighborhood of any
279   selected moving representative points in the phase space.
280  
281 < The condition of the ensemble is determined by the density
281 > The type of thermodynamic ensemble is determined by the density
282   distribution. If we consider the density distribution as only a
283   function of $q$ and $p$, which means the rate of change of the phase
284   space density in the neighborhood of all representative points in the
# Line 298 | Line 307 | constant everywhere in the phase space,
307   \rho = \mathrm{const.}
308   \label{Ineq:uniformEnsemble}
309   \end{equation}
310 < the ensemble is called {\it uniform ensemble}.
310 > the ensemble is called the {\it uniform ensemble}, but this ensemble has
311 > little relevance for physical chemistry. It is an ensemble with
312 > essentially infinite temperature.
313  
314   \subsubsection{The Microcanonical Ensemble\label{In:sssec:microcanonical}}
315 < Another useful ensemble is the {\it microcanonical ensemble}, for
316 < which:
315 > The most useful ensemble for Molecular Dynamics is the {\it
316 > microcanonical ensemble}, for which:
317   \begin{equation}
318   \rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)}
319   \label{Ineq:microcanonicalEnsemble}
# Line 319 | Line 330 | makes the argument of $\ln$ dimensionless. In this cas
330   \end{equation}
331   where $k_B$ is the Boltzmann constant and $C^N$ is a number which
332   makes the argument of $\ln$ dimensionless. In this case, $C^N$ is the
333 < total phase space volume of one state. The entropy of a microcanonical
334 < ensemble is given by
333 > total phase space volume of one state which has the same units as
334 > $\Sigma(N, V, E)$. The entropy of a microcanonical ensemble is given
335 > by
336   \begin{equation}
337   S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
338   \label{Ineq:entropy}
# Line 337 | Line 349 | Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N  e^{-H(
349   Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N  e^{-H(q^N, p^N) / k_B T},
350   \label{Ineq:partitionFunction}
351   \end{equation}
352 < which is also known as the canonical{\it partition function}. $\Gamma$
353 < indicates that the integral is over all phase space. In the canonical
354 < ensemble, $N$, the total number of particles, $V$, total volume, and
355 < $T$, the temperature, are constants. The systems with the lowest
356 < energies hold the largest population. According to maximum principle,
357 < thermodynamics maximizes the entropy $S$, implying that
352 > which is also known as the canonical {\it partition
353 > function}. $\Gamma$ indicates that the integral is over all phase
354 > space. In the canonical ensemble, $N$, the total number of particles,
355 > $V$, the total volume, and $T$, the temperature, are constants. The
356 > states with the lowest energies hold the largest
357 > populations. At equilibrium, the entropy, $S$, is maximized, implying that
358   \begin{equation}
359   \begin{array}{ccc}
360   \delta S = 0 & \mathrm{and} & \delta^2 S < 0.
# Line 363 | Line 375 | There is an implicit assumption that our arguments are
375   system and the distribution of microscopic states.
376  
377   There is an implicit assumption upon which our arguments are based so
378 < far. A representative point in the phase space is equally likely to be
379 < found in any energetically allowed region. In other words, all
380 < energetically accessible states are represented equally, the
381 < probabilities to find the system in any of the accessible states is
382 < equal. This is called the principle of equal a {\it priori}
378 > far. Two representative points in phase space are equally likely to be
379 > found if they have the same energy. In other words, all energetically
380 > accessible states are represented, and the probability of finding the
381 > system in any of the accessible states is proportional to that state's
382 > Boltzmann weight. This is called the principle of equal {\it a priori}
383   probabilities.
384  
385   \subsection{Statistical Averages\label{In:ssec:average}}
# Line 402 | Line 414 | mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes
414   \frac{1}{T} \int_{0}^{T} F[q^N(t), p^N(t)] dt
415   \label{Ineq:timeAverage2}
416   \end{equation}
417 < for an infinite time interval.
417 > for a finite time interval, $T$.
418  
419   \subsubsection{Ergodicity\label{In:sssec:ergodicity}}
420   The {\it ergodic hypothesis}, an important hypothesis governing modern
# Line 445 | Line 457 | C_{AB}(\tau) = \langle A(0)B(\tau) \rangle = \lim_{T \
457   \frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
458   \label{Ineq:crosscorrelationFunction}
459   \end{equation}
460 < and called {\it cross correlation function}.
460 > and is called a {\it cross correlation function}.
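In practice, correlation functions of this kind are estimated from
discretely sampled trajectory data. The following numpy sketch (an
illustration with synthetic data, not code from this work) approximates
$C_{AB}(\tau)$ at integer lags by averaging the product $A(t)B(t+\tau)$
over a finite trajectory.
\begin{verbatim}
# Illustrative sketch: estimating C_AB(tau) = <A(0) B(tau)> from two
# discretely sampled time series of equal length.
import numpy as np

def cross_correlation(A, B, max_lag):
    n = len(A)
    C = np.empty(max_lag + 1)
    for lag in range(max_lag + 1):
        # finite-time analogue of (1/T) * int_0^T A(t) B(t + tau) dt
        C[lag] = np.mean(A[:n - lag] * B[lag:])
    return C

# synthetic data standing in for two observables; B lags A by 5 steps
rng = np.random.default_rng(0)
A = rng.normal(size=10000)
B = np.roll(A, 5) + 0.1 * rng.normal(size=10000)
print(cross_correlation(A, B, max_lag=20).argmax())   # peaks at lag 5
\end{verbatim}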
461  
462   We know from the ergodic hypothesis that there is a relationship
463   between time average and ensemble average. We can put the correlation
464 < function in a classical mechanics form,
464 > function in a classical mechanical form,
465   \begin{equation}
466 < C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[(q^N(t), p^N(t)]
467 < A[q^N(t+\tau), p^N(t+\tau)] \rho(q, p)
466 > C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N, p^N]
467 > A[q^N(\tau), p^N(\tau)] \rho(q^N, p^N)
468   \label{Ineq:autocorrelationFunctionCM}
469   \end{equation}
470 < and
470 > where $q^N(\tau)$, $p^N(\tau)$ is the phase space point reached by
471 > classical evolution from the point $q^N$, $p^N$ after a time $\tau$ has
472 > elapsed, and
473   \begin{equation}
474 < C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[(q^N(t), p^N(t)]
475 < B[q^N(t+\tau), p^N(t+\tau)] \rho(q, p)
474 > C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N, p^N]
475 > B[q^N(\tau), p^N(\tau)] \rho(q^N, p^N)
476   \label{Ineq:crosscorrelationFunctionCM}
477   \end{equation}
478   as the autocorrelation function and cross correlation functions
# Line 479 | Line 493 | while Molecular Dynamics simulations provide dynamic
493   simulations. One is that the Monte Carlo simulations are time
494   independent methods of sampling structural features of an ensemble,
495   while Molecular Dynamics simulations provide dynamic
496 < information. Additionally, Monte Carlo methods are a stochastic
497 < processes, the future configurations of the system are not determined
496 > information. Additionally, Monte Carlo methods are stochastic
497 > processes; the future configurations of the system are not determined
498   by its past. However, in Molecular Dynamics, the system is propagated
499 < by Newton's equation of motion, and the trajectory of the system
499 > by Hamilton's equations of motion, and the trajectory of the system
500   evolving in phase space is deterministic. Brief introductions of the
501   two algorithms are given in sections~\ref{In:ssec:mc}
502   and~\ref{In:ssec:md}. Langevin Dynamics, an extension of the Molecular
# Line 493 | Line 507 | algorithm is usually applied to the canonical ensemble
507   A Monte Carlo integration algorithm was first introduced by Metropolis
508   {\it et al.}~\cite{Metropolis53} The basic Metropolis Monte Carlo
509   algorithm is usually applied to the canonical ensemble, a
510 < Boltzmann-weighted ensemble, in which the $N$, the total number of
511 < particles, $V$, total volume, $T$, temperature are constants. An
512 < average in this ensemble is given
510 > Boltzmann-weighted ensemble, in which $N$, the total number of
511 > particles, $V$, the total volume, and $T$, the temperature, are
512 > constants. An average in this ensemble is given by
513   \begin{equation}
514   \langle A \rangle = \frac{1}{Z_N} \int d \vec q~^N \int d \vec p~^N
515   A(q^N, p^N) e^{-H(q^N, p^N) / k_B T}.
# Line 519 | Line 533 | Eq.~\ref{Ineq:configurationIntegral} is equivalent to
533   \rangle$ is now a configuration integral, and
534   Eq.~\ref{Ineq:configurationIntegral} is equivalent to
535   \begin{equation}
536 < \langle A \rangle = \int d \vec q~^N A \rho(q^N).
536 > \langle A \rangle = \int d \vec q~^N A \rho(q^N),
537   \label{Ineq:configurationAve}
538   \end{equation}
539 + where $\rho(q^N)$ is a configurational probability
540 + \begin{equation}
541 + \rho(q^N) = \frac{e^{-U(q^N) / k_B T}}{\int d \vec q~^N e^{-U(q^N) / k_B T}}.
542 + \label{Ineq:configurationProb}
543 + \end{equation}
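For a single coordinate this configurational average can be evaluated
directly by quadrature; the sketch below (an illustration only, with an
assumed one-dimensional model potential and units where $k_B T = 1$)
computes $\langle A \rangle = \int dq\, A(q) \rho(q)$ for $A = q^2$.
\begin{verbatim}
# Illustrative sketch: a Boltzmann-weighted configurational average
# <A> = int dq A(q) rho(q) for one coordinate, by direct quadrature.
import numpy as np

kBT = 1.0                               # units where k_B T = 1
q = np.linspace(-5.0, 5.0, 2001)        # configuration-space grid
U = 0.5 * q**2 + 0.1 * q**4             # an assumed model potential

boltzmann = np.exp(-U / kBT)
rho = boltzmann / np.trapz(boltzmann, q)    # configurational probability
print(np.trapz(q**2 * rho, q))              # <q^2> for this potential
\end{verbatim}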
544  
545   In a Monte Carlo simulation of a system in the canonical ensemble, the
546   probability of the system being in a state $s$ is $\rho_s$; the change
# Line 536 | Line 555 | to state $s'$. Since $\rho_s$ is independent of time a
555   \frac{d \rho_{s}^{equilibrium}}{dt} = 0,
556   \label{Ineq:equiProb}
557   \end{equation}
558 < which means $\sum_{s'} [ -w_{ss'}\rho_s + w_{s's}\rho_{s'} ]$ is $0$
559 < for all $s'$. So
558 > the sum $\sum_{s'} [ -w_{ss'}\rho_s +
559 > w_{s's}\rho_{s'} ]$ vanishes for every state $s$. Imposing detailed balance (each term vanishes) gives
560   \begin{equation}
561   \frac{\rho_s^{equilibrium}}{\rho_{s'}^{equilibrium}} = \frac{w_{s's}}{w_{ss'}}.
562   \label{Ineq:relationshipofRhoandW}
# Line 552 | Line 571 | then the ratio of transition probabilities,
571   \frac{w_{s's}}{w_{ss'}} = e^{-(U_s - U_{s'}) / k_B T}.
572   \label{Ineq:conditionforBoltzmannStatistics}
573   \end{equation}
574 < An algorithm that shows how Monte Carlo simulation generates a
574 > An algorithm that indicates how a Monte Carlo simulation generates a
575   transition probability governed by
576   Eq.~\ref{Ineq:conditionforBoltzmannStatistics} is given schematically as:
577   \begin{enumerate}
578   \item\label{Initm:oldEnergy} Choose a particle randomly, and calculate
579 < the energy of the rest of the system due to the location of the particle.
579 > its interaction energy with the rest of the system at its current
580 > location.
581   \item\label{Initm:newEnergy} Make a random displacement of the particle,
582   and calculate the new energy taking the displacement into account.
583    \begin{itemize}
584 <     \item If the energy goes down, keep the new configuration and return to
565 < step~\ref{Initm:oldEnergy}.
584 >     \item If the energy goes down, keep the new configuration.
585      \item If the energy goes up, pick a random number uniformly in $[0,1]$.
586          \begin{itemize}
587            \item If the random number is smaller than
588 < $e^{-(U_{new} - U_{old})} / k_B T$, keep the new configuration and return to
570 < step~\ref{Initm:oldEnergy}.
588 > $e^{-(U_{new} - U_{old}) / k_B T}$, keep the new configuration.
589             \item If the random number is larger than
590 < $e^{-(U_{new} - U_{old})} / k_B T$, keep the old configuration and return to
573 < step~\ref{Initm:oldEnergy}.
590 > $e^{-(U_{new} - U_{old}) / k_B T}$, keep the old configuration.
591          \end{itemize}
592    \end{itemize}
593   \item\label{Initm:accumulateAvg} Accumulate the averages based on the
594 < current configuartion.
594 > current configuration.
595   \item Go to step~\ref{Initm:oldEnergy}.
596   \end{enumerate}
597 < It is important for sampling accurately that the old configuration is
597 > It is important for sampling accuracy that the old configuration is
598   sampled again if it is kept.
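A minimal Python sketch of this loop is given below; the one-dimensional
coordinates, harmonic pair energy, and maximum displacement are assumptions
made only for illustration and are not the models used in this dissertation.
\begin{verbatim}
# Illustrative Metropolis Monte Carlo sketch (units where k_B T = 1).
import numpy as np

rng = np.random.default_rng(0)

def energy_with_rest(q, i):
    # energy of particle i with the rest of the system (assumed harmonic pairs)
    dq = np.delete(q - q[i], i)
    return 0.5 * np.sum(dq**2)

def metropolis_step(q, beta=1.0, max_disp=0.1):
    i = rng.integers(len(q))                        # choose a particle randomly
    U_old = energy_with_rest(q, i)
    q_new = q.copy()
    q_new[i] += max_disp * rng.uniform(-1.0, 1.0)   # random displacement
    U_new = energy_with_rest(q_new, i)
    # keep downhill moves; keep uphill moves with probability
    # exp(-(U_new - U_old) / k_B T); otherwise keep the old configuration
    if U_new <= U_old or rng.uniform() < np.exp(-beta * (U_new - U_old)):
        return q_new
    return q

q = rng.uniform(-1.0, 1.0, size=10)                 # toy 1D configuration
samples = []
for _ in range(20000):
    q = metropolis_step(q)
    samples.append(np.mean(q**2))   # accumulate based on the current configuration
print(np.mean(samples))
\end{verbatim}
When a trial move is rejected, the old configuration is counted again in the
running average, which is exactly the point made in the preceding sentence.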
599  
600   \subsection{Molecular Dynamics\label{In:ssec:md}}
# Line 589 | Line 606 | real experiment, the instantaneous positions and momen
606   evolution of the system obeys the laws of classical mechanics, and in
607   most situations, there is no need to account for quantum effects. In a
608   real experiment, the instantaneous positions and momenta of the
609 < particles in the system are neither important nor measurable, the
610 < observable quantities are usually an average value for a finite time
611 < interval. These quantities are expressed as a function of positions
612 < and momenta in Molecular Dynamics simulations. For example,
609 > particles in the system are often neither important nor measurable;
610 > the observable quantities are usually an average value for a finite
611 > time interval. These quantities are expressed as a function of
612 > positions and momenta in Molecular Dynamics simulations. For example,
613   the temperature of the system is defined by
614   \begin{equation}
615   \frac{3}{2} N k_B T = \langle \sum_{i=1}^N \frac{1}{2} m_i v_i^2 \rangle,
# Line 605 | Line 622 | distributed randomly to the particles using a Maxwell-
622   The initial positions of the particles are chosen so that there is no
623   overlap of the particles. The initial velocities are
624   distributed randomly to the particles using a Maxwell-Boltzmann
625 < ditribution, and then shifted to make the total linear momentum of the
626 < system $0$.
625 > distribution, and then shifted to make the total linear momentum of
626 > the system $0$.
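A short sketch of this initialization is given below (an illustration only;
the particle number, masses, and target temperature are placeholders). It
draws velocities from a Maxwell-Boltzmann distribution, removes the net
linear momentum, and checks the resulting temperature against the
definition given above.
\begin{verbatim}
# Illustrative sketch: Maxwell-Boltzmann velocity initialization with the
# total linear momentum shifted to zero (units where k_B = 1).
import numpy as np

rng = np.random.default_rng(0)
N, T = 1000, 2.5                         # placeholder particle number, temperature
m = np.ones(N)                           # placeholder masses

# each velocity component is Gaussian with variance k_B T / m_i
v = rng.normal(0.0, np.sqrt(T / m)[:, None], size=(N, 3))

# shift so that the total linear momentum sum_i m_i v_i is zero
v -= np.sum(m[:, None] * v, axis=0) / np.sum(m)

kinetic = 0.5 * np.sum(m[:, None] * v**2)
print(2.0 * kinetic / (3.0 * N))         # instantaneous temperature, close to T
\end{verbatim}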
627  
628   The core of Molecular Dynamics simulations is the calculation of
629   forces and the integration algorithm. Calculation of the forces often
# Line 632 | Line 649 | is
649   equations of motion. The Taylor expansion of the position at time $t$
650   is
651   \begin{equation}
652 < q(t+\Delta t)= q(t) + v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 +
652 > q(t+\Delta t)= q(t) + \frac{p(t)}{m} \Delta t + \frac{f(t)}{2m}\Delta t^2 +
653          \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
654          \mathcal{O}(\Delta t^4)
655   \label{Ineq:verletFuture}
656   \end{equation}
657   for a later time $t+\Delta t$, and
658   \begin{equation}
659 < q(t-\Delta t)= q(t) - v(t) \Delta t + \frac{f(t)}{2m}\Delta t^2 -
659 > q(t-\Delta t)= q(t) - \frac{p(t)}{m} \Delta t + \frac{f(t)}{2m}\Delta t^2 -
660          \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
661          \mathcal{O}(\Delta t^4) ,
662   \label{Ineq:verletPrevious}
# Line 666 | Line 683 | code can be found in Ref.~\cite{Meineke2005}.
683   code can be found in Ref.~\cite{Meineke2005}.
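Summing the two expansions cancels the odd-order terms and leaves the
familiar Verlet update $q(t+\Delta t) = 2q(t) - q(t-\Delta t) +
\frac{f(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)$. A minimal sketch of
that update for a one-dimensional harmonic oscillator (an illustration
only, not the code of Ref.~\cite{Meineke2005}) is:
\begin{verbatim}
# Illustrative position-Verlet integration of a 1D harmonic oscillator,
# using q(t + dt) = 2 q(t) - q(t - dt) + (f(t)/m) dt^2.
m, k, dt = 1.0, 1.0, 0.01

def force(q):
    return -k * q            # harmonic force, assumed for illustration

q_prev, q = 1.0, 1.0         # start from rest: q(-dt) = q(0)
for step in range(10000):
    q_next = 2.0 * q - q_prev + (force(q) / m) * dt**2
    q_prev, q = q, q_next

print(q)                     # remains bounded; the oscillation is stable
\end{verbatim}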
684  
685   \subsection{Langevin Dynamics\label{In:ssec:ld}}
686 < In many cases, the properites of the solvent in a system, like the
687 < water in the lipid-water system studied in this dissertation, are less
688 < interesting to the researchers than the behavior of the
689 < solute. However, the major computational expense is ofter the
690 < solvent-solvent interation, this is due to the large number of the
691 < solvent molecules when compared to the number of solute molecules, the
692 < ratio of the number of lipid molecules to the number of water
693 < molecules is $1:25$ in our lipid-water system. The efficiency of the
694 < Molecular Dynamics simulations is greatly reduced, with up to 85\% of
695 < CPU time spent calculating only water-water interactions.
686 > In many cases, the properties of the solvent (like the water in the
687 > lipid-water system studied in this dissertation) are less interesting
688 > to the researchers than the behavior of the solute. However, the major
689 > computational expense is often the solvent-solvent interactions; this
690 > is due to the large number of the solvent molecules when compared to
691 > the number of solute molecules. The ratio of the number of lipid
692 > molecules to the number of water molecules is $1:25$ in our
693 > lipid-water system. The efficiency of the Molecular Dynamics
694 > simulations is greatly reduced, with up to 85\% of CPU time spent
695 > calculating only water-water interactions.
696  
697   As an extension of the Molecular Dynamics methodologies, Langevin
698   Dynamics seeks a way to avoid integrating the equations of motion for
# Line 728 | Line 745 | q_\alpha(0)}{\omega_\alpha} \sin(\omega_\alpha t),
745   q_\alpha(0)}{\omega_\alpha} \sin(\omega_\alpha t),
746   \label{Ineq:randomForceforGLE}
747   \end{equation}
748 < depends only on the initial conditions. The relationship of friction
749 < kernel $\xi(t)$ and random force $R(t)$ is given by the second
750 < fluctuation dissipation theorem,
748 > that depends only on the initial conditions. The relationship between
749 > the friction kernel $\xi(t)$ and the random force $R(t)$ is given by
750 > the second fluctuation-dissipation theorem,
751   \begin{equation}
752   \xi(t) = \frac{1}{k_B T} \langle R(t)R(0) \rangle.
753   \label{Ineq:relationshipofXiandR}
