
Comparing trunk/mattDisertation/Introduction.tex (file contents):
Revision 1001 by mmeineke, Sat Jan 31 22:10:21 2004 UTC vs.
Revision 1003 by mmeineke, Mon Feb 2 21:56:16 2004 UTC

# Line 3 | Line 3
3   \chapter{\label{chapt:intro}Introduction and Theoretical Background}
4  
5  
6 +
7 + \section{\label{introSec:theory}Theoretical Background}
8 +
6   The techniques used in the course of this research fall under the two
7   main classes of molecular simulation: Molecular Dynamics and Monte
8   Carlo. Molecular Dynamics simulations integrate the equations of motion
# Line 30 | Line 27 | which method best suits that objective.
27   thermodynamic properties of the system are being probed, then choose
28   which method best suits that objective.
29  
30 < \subsection{\label{introSec:statThermo}Statistical Mechanics}
30 > \section{\label{introSec:statThermo}Statistical Mechanics}
31  
32   The following section serves as a brief introduction to some of the
33   Statistical Mechanics concepts present in this dissertation.  What
# Line 43 | Line 40 | Consider a system, $\gamma$, with some total energy,,
40   \subsection{\label{introSec:boltzman}Boltzmann weighted statistics}
41  
42   Consider a system, $\gamma$, with some total energy, $E_{\gamma}$.
43 < Let $\Omega(E_{gamma})$ represent the number of degenerate ways
43 > Let $\Omega(E_{\gamma})$ represent the number of degenerate ways
44   $\boldsymbol{\Gamma}$, the collection of positions and conjugate
45   momenta of system $\gamma$, can be configured to give
46   $E_{\gamma}$. Further, if $\gamma$ is in contact with a bath system
47   where energy is exchanged between the two systems, $\Omega(E)$, where
48   $E$ is the total energy of both systems, can be represented as
49   \begin{equation}
50 < eq here
50 > \Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma})
51   \label{introEq:SM1}
52   \end{equation}
53   Or additively as
54   \begin{equation}
55 < eq here
55 > \ln \Omega(E) = \ln \Omega(E_{\gamma}) + \ln \Omega(E - E_{\gamma})
56   \label{introEq:SM2}
57   \end{equation}
58  
59   The solution to Eq.~\ref{introEq:SM2} maximizes the number of
60 < degenerative configurations in $E$. \cite{fix}
60 > degenerate configurations in $E$.\cite{Frenkel1996}
61   This gives
62   \begin{equation}
63 < eq here
63 > \frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 =
64 >        \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
65 >         +
66 >        \frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
67 >        \frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}
68   \label{introEq:SM3}
69   \end{equation}
70   Where $E_{\text{bath}}$ is $E-E_{\gamma}$, and
71 < $\frac{partialE_{\text{bath}}}{\partial E_{\gamma}}$ is
71 > $\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}$ is
72   $-1$. Eq.~\ref{introEq:SM3} becomes
73   \begin{equation}
74 < eq here
74 > \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} =
75 > \frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
76   \label{introEq:SM4}
77   \end{equation}
78  
79   At this point, one can draw a relationship between the maximization of
80   degeneracy in Eq.~\ref{introEq:SM3} and the second law of
81 < thermodynamics.  Namely, that for a closed system, entropy wil
82 < increase for an irreversible process.\cite{fix} Here the
81 > thermodynamics.  Namely, that for a closed system, entropy will
82 > increase for an irreversible process.\cite{chandler:1987} Here the
83   process is the partitioning of energy between the two systems.  This
84   allows the following definition of entropy:
85   \begin{equation}
86 < eq here
86 > S = k_B \ln \Omega(E)
87   \label{introEq:SM5}
88   \end{equation}
89   Where $k_B$ is the Boltzmann constant.  Having defined entropy, one can
90   also define the temperature of the system using the relation
91   \begin{equation}
92 < eq here
92 > \frac{1}{T} = \biggl ( \frac{\partial S}{\partial E} \biggr )_{N,V}
93   \label{introEq:SM6}
94   \end{equation}
95   The temperature of the system $\gamma$ is then
96   \begin{equation}
97 < eq here
97 > \beta( E_{\gamma} ) = \frac{1}{k_B T} =
98 >        \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
99   \label{introEq:SM7}
100   \end{equation}
101   Applying this to Eq.~\ref{introEq:SM4} gives the following
102   \begin{equation}
103 < eq here
103 > \beta( E_{\gamma} ) = \beta( E_{\text{bath}} )
104   \label{introEq:SM8}
105   \end{equation}
106   This shows that the partitioning of energy between the two systems is
107 < actually a process of thermal equilibration. \cite{fix}
107 > actually a process of thermal equilibration.\cite{Frenkel1996}
108  
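To make the maximization argument concrete, the following minimal sketch assumes a toy Einstein-solid degeneracy, $\Omega(E,N) = \binom{E+N-1}{E}$ (an illustrative model, not one used in this work), and checks numerically that $\ln \Omega(E)$ in Eq.~\ref{introEq:SM2} peaks at the partitioning where the two $\beta$ values of Eq.~\ref{introEq:SM7} agree:

\begin{verbatim}
# Toy check of Eqs. SM1-SM8: for two Einstein solids sharing E quanta,
# ln Omega(E) = ln Omega(E_gamma) + ln Omega(E - E_gamma) peaks where
# d ln Omega / dE (i.e. beta) is the same in both systems.
from math import comb, log

N_gamma, N_bath, E_total = 50, 100, 300

def omega(E, N):
    """Degeneracy of an N-oscillator Einstein solid holding E quanta."""
    return comb(E + N - 1, E)

ln_omega = [log(omega(Eg, N_gamma)) + log(omega(E_total - Eg, N_bath))
            for Eg in range(E_total + 1)]
E_star = max(range(E_total + 1), key=lambda Eg: ln_omega[Eg])

# Finite-difference estimates of beta = d ln Omega / dE at the maximum
beta_gamma = log(omega(E_star + 1, N_gamma)) - log(omega(E_star, N_gamma))
beta_bath = (log(omega(E_total - E_star + 1, N_bath))
             - log(omega(E_total - E_star, N_bath)))
print(E_star, beta_gamma, beta_bath)   # the two betas nearly coincide
\end{verbatim}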
109   An application of these results is to derive the form of an
110   expectation value of an observable, $A$, in the canonical ensemble. In
# Line 109 | Line 112 | system is now an infinitly large thermal bath, whose e
112   and the temperature, $T$, are all held constant while the energy, $E$,
113   is allowed to fluctuate. Returning to the previous example, the bath
114   system is now an infinitely large thermal bath, whose exchange of
115 < energy with the system $\gamma$ holds teh temperature constant.  The
115 > energy with the system $\gamma$ holds the temperature constant.  The
116   partitioning of energy in the bath system is then related to the total
117 < energy of both systems and the fluctuations in $E_{\gamma}}$:
117 > energy of both systems and the fluctuations in $E_{\gamma}$:
118   \begin{equation}
119 < eq here
119 > \Omega( E_{\gamma} ) = \Omega( E - E_{\gamma} )
120   \label{introEq:SM9}
121   \end{equation}
122   As for the expectation value, it can be defined as
123   \begin{equation}
124 < eq here
124 > \langle A \rangle =
125 >        \int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
126 >        P_{\gamma} A(\boldsymbol{\Gamma})
127   \label{introEq:SM10}
128 < \end{eequation}
129 < Where $\int_{\boldsymbol{\Gamma}} d\Boldsymbol{\Gamma}$ denotes an
130 < integration over all accessable phase space, $P_{\gamma}$ is the
128 > \end{equation}
129 > Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes
130 > an integration over all accessible phase space, $P_{\gamma}$ is the
131   probability of being in a given phase state and
132   $A(\boldsymbol{\Gamma})$ is some observable that is a function of the
133   phase state.
# Line 132 | Line 137 | states the coupled system is able to assume. Namely,
137   $\gamma$ will be directly proportional to the number of allowable
138   states the coupled system is able to assume. Namely,
139   \begin{equation}
140 < eq here
140 > P_{\gamma} \propto \Omega( E_{\text{bath}} ) =
141 >        e^{\ln \Omega( E - E_{\gamma})}
142   \label{introEq:SM11}
143   \end{equation}
144 < With $E_{\gamma} \lE$, $\ln \Omega$ can be expanded in a Taylor series:
144 > With $E_{\gamma} \ll E$, $\ln \Omega$ can be expanded in a Taylor series:
145   \begin{equation}
146 < eq here
146 > \ln \Omega ( E - E_{\gamma}) =
147 >        \ln \Omega (E) -
148 >        E_{\gamma}  \frac{\partial \ln \Omega }{\partial E}
149 >        + \ldots
150   \label{introEq:SM12}
151   \end{equation}
152   Higher order terms are omitted because the thermal bath is infinitely
153   large. Further, using Eq.~\ref{introEq:SM7}, Eq.~\ref{introEq:SM11} can
154   be rewritten:
155   \begin{equation}
156 < eq here
156 > P_{\gamma} \propto e^{-\beta E_{\gamma}}
157   \label{introEq:SM13}
158   \end{equation}
159   Where $\ln \Omega(E)$ has been factored out of the proportionality as a
160 < constant.  Normalizing the probability ($\int_{\boldsymbol{\Gamma}}
161 < d\boldsymbol{\Gamma} P_{\gamma} =1$ gives
160 > constant.  Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
161 > d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
162   \begin{equation}
163 < eq here
163 > P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
164 > {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
165   \label{introEq:SM14}
166   \end{equation}
167   This result is the standard Boltzmann statistical distribution.
168   Applying it to Eq.~\ref{introEq:SM10}, one obtains the following relation for ensemble averages:
169   \begin{equation}
170 < eq here
170 > \langle A \rangle =
171 > \frac{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
172 >        A( \boldsymbol{\Gamma} ) e^{-\beta E_{\gamma}}}
173 > {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
174   \label{introEq:SM15}
175   \end{equation}
176  
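As a minimal discrete illustration of Eqs.~\ref{introEq:SM14} and \ref{introEq:SM15} (the state energies and observable values below are arbitrary choices for the example, with energies in units of $k_B T$):

\begin{verbatim}
import math

energies = [0.0, 0.5, 1.3, 2.0]    # E_gamma for each state (beta = 1)
observable = [1.0, 2.0, 3.0, 4.0]  # A evaluated at each state

weights = [math.exp(-E) for E in energies]   # e^{-beta E_gamma}
Z = sum(weights)                             # normalizing denominator
probs = [w / Z for w in weights]             # Eq. SM14
avg_A = sum(p * A for p, A in zip(probs, observable))   # Eq. SM15
print(probs, avg_A)
\end{verbatim}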
# Line 165 | Line 178 | the assumption that given an infinite amount of time,
178  
179   One last important consideration is that of ergodicity. Ergodicity is
180   the assumption that given an infinite amount of time, a system will
181 < visit every available point in phase space.\cite{fix} For most
181 > visit every available point in phase space.\cite{Frenkel1996} For most
182   systems, this is a valid assumption, except in cases where the system
183 < may be trapped in a local feature (\emph{i.~e.~glasses}). When valid,
183 > may be trapped in a local feature (\emph{e.g.}~glasses). When valid,
184   ergodicity allows the unification of a time averaged observation and
185   an ensemble averaged one. If an observation is averaged over a
186   sufficiently long time, the system is assumed to visit all
# Line 175 | Line 188 | ensemble averaged approach seems most logical, the Mon
188   weighted statistical average. This allows the researcher freedom of
189   choice when deciding how best to measure a given observable.  When an
190   ensemble averaged approach seems most logical, the Monte Carlo
191 < techniques described in Sec.~\ref{introSec:MC} can be utilized.
191 > techniques described in Sec.~\ref{introSec:monteCarlo} can be utilized.
192   Conversely, if a problem lends itself to a time averaging approach,
193   the Molecular Dynamics techniques in Sec.~\ref{introSec:MD} can be
194   employed.
195  
196 < \subsection{\label{introSec:monteCarlo}Monte Carlo Simulations}
196 > \section{\label{introSec:monteCarlo}Monte Carlo Simulations}
197  
198   The Monte Carlo method was developed by Metropolis and Ulam for their
199   work on fissionable material.\cite{metropolis:1949} The method is so
# Line 216 | Line 229 | Where $\mathbf{r}^N$ stands for the coordinates of all
229   \label{eq:mcEnsAvg}
230   \end{equation}
231   Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
232 < and $A$ is some observable that is only dependent on
233 < position. $\langle A \rangle$ is the ensemble average of $A$ as
234 < presented in Sec.~\ref{introSec:statThermo}. Because $A$ is
235 < independent of momentum, the momenta contribution of the integral can
236 < be factored out, leaving the configurational integral. Application of
237 < the brute force method to this system would yield highly inefficient
232 > and $A$ is some observable that is only dependent on position. This is
233 > the ensemble average of $A$ as presented in
234 > Sec.~\ref{introSec:statThermo}, except here $A$ is independent of
235 > momentum. Therefore the momentum contribution to the integral can be
236 > factored out, leaving the configurational integral. Application of the
237 > brute force method to this system would yield highly inefficient
238   results. Due to the Boltzmann weighting of this integral, most random
239   configurations will have a near-zero contribution to the ensemble
240 < average. This is where a importance sampling comes into
240 > average. This is where importance sampling comes into
241   play.\cite{allen87:csl}
242  
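A toy one-dimensional sketch makes this concrete. Assuming $U(x) = x^2$ and $\beta = 1$ (illustrative choices only), uniform samples over a wide interval mostly land where $e^{-\beta U}$ is negligible, while samples drawn from the Boltzmann distribution itself all contribute:

\begin{verbatim}
import math, random

random.seed(1)
beta, n = 1.0, 100000

# Brute force: uniform samples, Boltzmann-weighted average of A(x) = x^2
xs = [random.uniform(-10.0, 10.0) for _ in range(n)]
w = [math.exp(-beta * x * x) for x in xs]
brute = sum(wi * x * x for wi, x in zip(w, xs)) / sum(w)

# Importance sampling: draw x from e^{-beta x^2} itself (a Gaussian
# with sigma^2 = 1/(2 beta)); the weights cancel, leaving a plain mean.
sigma = math.sqrt(1.0 / (2.0 * beta))
ys = [random.gauss(0.0, sigma) for _ in range(n)]
importance = sum(y * y for y in ys) / n

print(brute, importance)   # both approach 1/(2 beta) = 0.5
\end{verbatim}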
243   Importance sampling is a method in which one selects a distribution from
# Line 276 | Line 289 | $\rho_{kT}(\mathbf{r}^N)$.
289   the use of a Markov chain whose limiting distribution was
290   $\rho_{kT}(\mathbf{r}^N)$.
291  
292 < \subsubsection{\label{introSec:markovChains}Markov Chains}
292 > \subsection{\label{introSec:markovChains}Markov Chains}
293  
294   A Markov chain is a chain of states satisfying the following
295   conditions:\cite{leach01:mm}
# Line 326 | Line 339 | will only yield the limiting distribution again.
339   \label{introEq:MCmarkovEquil}
340   \end{equation}
341  
342 < \subsubsection{\label{introSec:metropolisMethod}The Metropolis Method}
342 > \subsection{\label{introSec:metropolisMethod}The Metropolis Method}
343  
344   In the Metropolis method\cite{metropolis:1953}
345   Eq.~\ref{introEq:MCmarkovEquil} is solved such that
# Line 355 | Line 368 | This allows for the following set of acceptance rules
368   \end{equation}
369   This allows the following set of acceptance rules to be defined:
370   \begin{equation}
371 < EQ Here
371 > \accMe( m \rightarrow n ) =
372 >        \begin{cases}
373 >        1& \text{if $\Delta \mathcal{U} \leq 0$,} \\
374 >        e^{-\beta \Delta \mathcal{U}}& \text{if $\Delta \mathcal{U} > 0$.}
375 >        \end{cases}
376 > \label{introEq:accRules}
377   \end{equation}
378  
379 < Using the acceptance criteria from Eq.~\ref{fix} the Metropolis method
380 < proceeds as follows
381 < \begin{itemize}
382 < \item Generate an initial configuration $fix$ which has some finite probability in $fix$.
383 < \item Modify $fix$, to generate configuratioon $fix$.
384 < \item If configuration $n$ lowers the energy of the system, accept the move with unity ($fix$ becomes $fix$).  Otherwise accept with probability $fix$.
379 > Using the acceptance criteria from Eq.~\ref{introEq:accRules} the
380 > Metropolis method proceeds as follows:
381 > \begin{enumerate}
382 > \item Generate an initial configuration $\mathbf{r}^N$ which has some finite probability in $\rho_{kT}$.
383 > \item Modify $\mathbf{r}^N$ to generate configuration $\mathbf{r^{\prime}}^N$.
384 > \item If the new configuration lowers the energy of the system, accept the move with unity ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$).  Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
385   \item Accumulate the average for the configurational observable of interest.
386 < \item Repeat from step 2 until average converges.
387 < \end{itemize}
386 > \item Repeat from step 2 until the average converges.
387 > \end{enumerate}
388   One important note is that the average is accumulated whether the move
389   is accepted or not; this ensures proper weighting of the average.
390 < Using Eq.~\ref{fix} it becomes clear that the accumulated averages are
391 < the ensemble averages, as this method ensures that the limiting
392 < distribution is the Boltzman distribution.
390 > Using Eq.~\ref{introEq:Importance4} it becomes clear that the
391 > accumulated averages are the ensemble averages, as this method ensures
392 > that the limiting distribution is the Boltzmann distribution.
393  
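A minimal sketch of the enumerated steps, for a single degree of freedom in a harmonic well $\mathcal{U}(x) = x^2/2$ with $\beta = 1$ (the potential, step size, and observable are illustrative assumptions, not taken from this work):

\begin{verbatim}
import math, random

random.seed(42)
beta, step_size, n_steps = 1.0, 1.0, 200000

def potential(x):
    return 0.5 * x * x

x = 1.0                       # step 1: initial configuration
u_old, acc_A = potential(x), 0.0
for _ in range(n_steps):
    x_new = x + random.uniform(-step_size, step_size)  # step 2: trial move
    du = potential(x_new) - u_old
    # step 3: accept with unity if dU <= 0, else with prob e^{-beta dU}
    if du <= 0.0 or random.random() < math.exp(-beta * du):
        x, u_old = x_new, u_old + du
    acc_A += x * x            # step 4: accumulate, accepted or not
print(acc_A / n_steps)        # converges toward <x^2> = 1/beta
\end{verbatim}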
394 < \subsection{\label{introSec:MD}Molecular Dynamics Simulations}
394 > \section{\label{introSec:MD}Molecular Dynamics Simulations}
395  
396   The main simulation tool used in this research is Molecular Dynamics.
397   Molecular Dynamics is the technique in which the equations of motion for a system are
# Line 381 | Line 399 | diffusion constants, velocity auto correlations, foldi
399   momentum of a system, allowing the calculation of not only
400   configurational observables, but momentum-dependent ones as well:
401   diffusion constants, velocity autocorrelations, folding/unfolding
402 < events, etc.  Due to the principle of ergodicity, Eq.~\ref{fix}, the
403 < average of these observables over the time period of the simulation
404 < are taken to be the ensemble averages for the system.
402 > events, etc.  Due to the principle of ergodicity,
403 > Sec.~\ref{introSec:ergodic}, the averages of these observables over the
404 > time period of the simulation are taken to be the ensemble averages
405 > for the system.
406  
407   The choice of when to use molecular dynamics over Monte Carlo
408   techniques is normally decided by the observables in which the
# Line 396 | Line 415 | making molecular dynamics key in the simulation of tho
415   centered around the dynamic properties of phospholipid bilayers,
416   making molecular dynamics key in the simulation of those properties.
417  
418 < \subsubsection{Molecular dynamics Algorithm}
418 > \subsection{\label{introSec:mdAlgorithm}The Molecular Dynamics Algorithm}
419  
420   To illustrate how the molecular dynamics technique is applied, the
421   following sections will describe the sequence involved in a
422 < simulation.  Sec.~\ref{fix} deals with the initialization of a
423 < simulation.  Sec.~\ref{fix} discusses issues involved with the
424 < calculation of the forces.  Sec.~\ref{fix} concludes the algorithm
425 < discussion with the integration of the equations of motion. \cite{fix}
422 > simulation.  Sec.~\ref{introSec:mdInit} deals with the initialization
423 > of a simulation.  Sec.~\ref{introSec:mdForce} discusses issues
424 > involved with the calculation of the forces.
425 > Sec.~\ref{introSec:mdIntegrate} concludes the algorithm discussion
426 > with the integration of the equations of motion.\cite{Frenkel1996}
427  
428 < \subsubsection{initialization}
428 > \subsection{\label{introSec:mdInit}Initialization}
429  
430   When selecting the initial configuration for the simulation it is
431   important to consider what dynamics one is hoping to observe.
432 < Ch.~\ref{fix} deals with the formation and equilibrium dynamics of
432 > Ch.~\ref{chapt:lipid} deals with the formation and equilibrium dynamics of
433   phospholipid membranes.  Therefore, in these simulations, initial
434   positions were selected that in some cases dispersed the lipids in
435   water, and in other cases structured the lipids into preformed
# Line 428 | Line 448 | movement would be unphysical and an artifact of the si
448   whole.  This arises from the fact that most simulations are of systems
449   in equilibrium in the absence of outside forces.  Therefore any net
450   movement would be unphysical and an artifact of the simulation method
451 < used.  The final point addresses teh selection of the magnitude of the
451 > used.  The final point addresses the selection of the magnitude of the
452   initial velocities.  For many simulations it is convenient to use
453   this opportunity to scale the amount of kinetic energy to reflect the
454   desired thermal distribution of the system.  However, it must be noted
# Line 436 | Line 456 | kinetic energy from energy stored in potential degrees
456   first few initial simulation steps due to either loss or gain of
457   kinetic energy from energy stored in potential degrees of freedom.
458  
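A sketch of this initialization in reduced units ($k_B = 1$, unit masses; the atom count and target temperature below are arbitrary): velocities are drawn at random, the net momentum is removed, and the kinetic energy is rescaled to the desired temperature.

\begin{verbatim}
import random

random.seed(7)
n_atoms, target_T = 256, 1.0

# Random initial velocities in three dimensions
vel = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(n_atoms)]

# Remove the net momentum so the system has no aggregate drift
for k in range(3):
    drift = sum(v[k] for v in vel) / n_atoms
    for v in vel:
        v[k] -= drift

# Rescale so (1/2) sum v^2 = (3/2) N k_B T
ke = 0.5 * sum(v[0]**2 + v[1]**2 + v[2]**2 for v in vel)
scale = (1.5 * n_atoms * target_T / ke) ** 0.5
vel = [[scale * c for c in v] for v in vel]
\end{verbatim}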
459 < \subsubsection{Force Evaluation}
459 > \subsection{\label{introSec:mdForce}Force Evaluation}
460  
461   The evaluation of forces is the most computationally expensive portion
462   of a given molecular dynamics simulation.  This is due entirely to the
463   evaluation of long range forces in a simulation, typically pair-wise.
464   These forces are most commonly the van der Waals force, and sometimes
465 < Coulombic forces as well.  For a pair-wise force, there are $fix$
466 < pairs to be evaluated, where $n$ is the number of particles in the
467 < system.  This leads to the calculations scaling as $fix$, making large
465 > Coulombic forces as well.  For a pair-wise force, there are $N(N-1)/2$
466 > pairs to be evaluated, where $N$ is the number of particles in the
467 > system.  This leads to the calculations scaling as $N^2$, making large
468   simulations prohibitive in the absence of any computation saving
469   techniques.
470  
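The scaling is plain in the naive pair loop below, a sketch assuming a Lennard-Jones pair potential and a simple list of coordinates; for $N = 1000$ the inner body executes $1000 \cdot 999 / 2 = 499500$ times:

\begin{verbatim}
def pair_energy(pos, sigma=1.0, epsilon=1.0):
    """Total Lennard-Jones energy over all unique pairs: O(N^2) work."""
    total = 0.0
    n = len(pos)
    for i in range(n - 1):
        for j in range(i + 1, n):   # N(N-1)/2 iterations in all
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            dz = pos[i][2] - pos[j][2]
            sr6 = (sigma * sigma / (dx*dx + dy*dy + dz*dz)) ** 3
            total += 4.0 * epsilon * (sr6 * sr6 - sr6)
    return total
\end{verbatim}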
471   Another consideration one must resolve is that in a given simulation
472   a disproportionate number of the particles will feel the effects of
473 < the surface.\cite{fix} For a cubic system of 1000 particles arranged
474 < in a $10x10x10$ cube, 488 particles will be exposed to the surface.
475 < Unless one is simulating an isolated particle group in a vacuum, the
476 < behavior of the system will be far from the desired bulk
477 < charecteristics.  To offset this, simulations employ the use of
478 < periodic boundary images.\cite{fix}
473 > the surface.\cite{allen87:csl} For a cubic system of 1000 particles
474 > arranged in a $10 \times 10 \times 10$ cube, 488 particles will be
475 > exposed to the surface.  Unless one is simulating an isolated particle
476 > group in a vacuum, the behavior of the system will be far from the
477 > desired bulk characteristics.  To offset this, simulations employ the
478 > use of periodic boundary images.\cite{born:1912}
479  
480   The technique involves the use of an algorithm that replicates the
481   simulation box on an infinite lattice in Cartesian space.  Any given
482   particle leaving the simulation box on one side will have an image of
483 < itself enter on the opposite side (see Fig.~\ref{fix}).
484 < \begin{equation}
485 < EQ Here
486 < \end{equation}
487 < In addition, this sets that any given particle pair has an image, real
468 < or periodic, within $fix$ of each other.  A discussion of the method
469 < used to calculate the periodic image can be found in Sec.\ref{fix}.
483 > itself enter on the opposite side (see Fig.~\ref{introFig:pbc}).  In
484 > addition, this ensures that any given particle pair has an image, real
485 > or periodic, within $\text{box}/2$ of each other in each dimension.  A
486 > discussion of the method used to calculate the periodic image can be
487 > found in Sec.~\ref{oopseSec:pbc}.
488  
489 + \begin{figure}
490 + \centering
491 + \includegraphics[width=\linewidth]{pbcFig.eps}
492 + \caption[An illustration of periodic boundary conditions]{A 2-D illustration of periodic boundary conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
493 + \label{introFig:pbc}
494 + \end{figure}
495 +
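A sketch of the wrapping and nearest-image logic depicted in Fig.~\ref{introFig:pbc}, assuming a cubic box of side length \texttt{box} (the function names are illustrative):

\begin{verbatim}
def wrap(coord, box):
    """Fold a coordinate back into the central box, [0, box)."""
    return coord % box

def minimum_image(d, box):
    """Separation along one axis to the nearest periodic image;
    the result always lies within [-box/2, box/2]."""
    return d - box * round(d / box)
\end{verbatim}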
496   Returning to the topic of the computational scale of the force
497   evaluation, the use of periodic boundary conditions requires that a
498   cutoff radius be employed.  Using a cutoff radius improves the
499   efficiency of the force evaluation, as particles farther than a
500 < predetermined distance, $fix$, are not included in the
501 < calculation.\cite{fix} In a simultation with periodic images, $fix$
502 < has a maximum value of $fix$.  Fig.~\ref{fix} illustrates how using an
503 < $fix$ larger than this value, or in the extreme limit of no $fix$ at
504 < all, the corners of the simulation box are unequally weighted due to
505 < the lack of particle images in the $x$, $y$, or $z$ directions past a
506 < disance of $fix$.
500 > predetermined distance, $r_{\text{cut}}$, are not included in the
501 > calculation.\cite{Frenkel1996} In a simulation with periodic images,
502 > $r_{\text{cut}}$ has a maximum value of $\text{box}/2$.
503 > Fig.~\ref{introFig:rMax} illustrates how, when using an
504 > $r_{\text{cut}}$ larger than this value, or in the extreme limit of no
505 > $r_{\text{cut}}$ at all, the corners of the simulation box are
506 > unequally weighted due to the lack of particle images in the $x$, $y$,
507 > or $z$ directions past a distance of $\text{box} / 2$.
508 >
509 > \begin{figure}
510 > \centering
511 > \includegraphics[width=\linewidth]{rCutMaxFig.eps}
512 > \caption[The effect of $r_{\text{cut}}$ values greater than $\text{box}/2$]{An illustration of how an $r_{\text{cut}}$ larger than $\text{box}/2$ unequally weights the corners of the simulation box, as particle images are absent past $\text{box}/2$ in the $x$, $y$, or $z$ directions.}
513 > \label{introFig:rMax}
514 > \end{figure}
515  
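Combining the nearest-image logic above with the cutoff, a sketch of the pair distance test (again assuming a cubic box, and enforcing the $r_{\text{cut}} \le \text{box}/2$ limit just discussed):

\begin{verbatim}
def within_cutoff(pos_i, pos_j, box, r_cut):
    """True when the nearest images of particles i and j lie closer
    than r_cut; r_cut may not exceed box/2."""
    assert r_cut <= 0.5 * box
    r2 = 0.0
    for k in range(3):
        d = pos_i[k] - pos_j[k]
        d -= box * round(d / box)   # minimum-image separation
        r2 += d * d
    return r2 <= r_cut * r_cut
\end{verbatim}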
516   With the use of an $r_{\text{cut}}$, however, comes a discontinuity in the
517   potential energy curve (Fig.~\ref{fix}). To fix this discontinuity,
# Line 498 | Line 531 | neighbor list.
531   giving rise to the possibility that a particle has left or joined a
532   neighbor list.
533  
534 < \subsection{\label{introSec:MDintegrate} Integration of the equations of motion}
534 > \subsection{\label{introSec:mdIntegrate}Integration of the Equations of Motion}
535  
536   A starting point for the discussion of molecular dynamics integrators
537   is the Verlet algorithm.\cite{Frenkel1996} It begins with a Taylor

Diff Legend

- Removed lines
+ Added lines
< Changed lines (original revision)
> Changed lines (new revision)