--- trunk/mattDisertation/Introduction.tex	2004/01/22 17:34:20	976
+++ trunk/mattDisertation/Introduction.tex	2004/01/22 21:13:55	977
@@ -53,10 +53,10 @@ The equation can be recast as:
 \end{equation}
 The equation can be recast as:
 \begin{equation}
-I = (b-a)
+I = (b-a)\langle f(x) \rangle
 \label{eq:MCex2}
 \end{equation}
-Where $$ is the unweighted average over the interval
+Where $\langle f(x) \rangle$ is the unweighted average over the interval
 $[a,b]$. The calculation of the integral could then be solved by
 randomly choosing points along the interval $[a,b]$ and calculating
 the value of $f(x)$ at each point. The accumulated average would then
@@ -66,16 +66,18 @@ integrals of the form:
 However, in Statistical Mechanics, one is typically interested in
 integrals of the form:
 \begin{equation}
- = \frac{A}{exp^{-\beta}}
+\langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
+	e^{-\beta V(\mathbf{r}^N)}}%
+	{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
 \label{eq:mcEnsAvg}
 \end{equation}
-Where $r^N$ stands for the coordinates of all $N$ particles and $A$ is
-some observable that is only dependent on position. $$ is the
-ensemble average of $A$ as presented in
-Sec.~\ref{introSec:statThermo}. Because $A$ is independent of
-momentum, the momenta contribution of the integral can be factored
-out, leaving the configurational integral. Application of the brute
-force method to this system would yield highly inefficient
+Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
+and $A$ is some observable that is only dependent on
+position. $\langle A \rangle$ is the ensemble average of $A$ as
+presented in Sec.~\ref{introSec:statThermo}. Because $A$ is
+independent of momentum, the momenta contribution of the integral can
+be factored out, leaving the configurational integral. Application of
+the brute force method to this system would yield highly inefficient
 results.
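[Editor's aside, not part of the committed LaTeX: the brute-force scheme this hunk describes, $I = (b-a)\langle f(x) \rangle$ with $x$ drawn uniformly on $[a,b]$, can be sketched in a few lines of Python. The integrand, interval, and sample count below are arbitrary illustrative choices.]

```python
# Brute-force Monte Carlo estimate of I = (b - a) * <f(x)>:
# draw x uniformly on [a, b] and accumulate the unweighted average
# of f(x).  A sketch only; f, a, b, n_samples are arbitrary choices.
import random

def mc_integrate(f, a, b, n_samples=100_000):
    total = 0.0
    for _ in range(n_samples):
        x = random.uniform(a, b)      # unweighted sampling on [a, b]
        total += f(x)
    return (b - a) * total / n_samples

if __name__ == "__main__":
    random.seed(42)
    # Integral of x^2 over [0, 1]; exact value is 1/3.
    print(mc_integrate(lambda x: x * x, 0.0, 1.0))
```

The statistical error falls off only as $1/\sqrt{n}$, and for the Boltzmann-weighted configurational integral above almost every uniformly chosen configuration contributes $e^{-\beta V} \approx 0$, which is why the diff goes on to introduce importance sampling.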
 Due to the Boltzmann weighting of this integral, most random
 configurations will have a near zero contribution to the ensemble
 average. This is where importance sampling comes into
@@ -86,92 +88,126 @@ Eq.~\ref{eq:MCex1} rewritten to be:
 efficiently calculate the integral.\cite{Frenkel1996} Consider again
 Eq.~\ref{eq:MCex1} rewritten to be:
 \begin{equation}
-EQ Here
+I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
+\label{introEq:Importance1}
 \end{equation}
-Where $fix$ is an arbitrary probability distribution in $x$. If one
-conducts $fix$ trials selecting a random number, $fix$, from the
-distribution $fix$ on the interval $[a,b]$, then Eq.~\ref{fix} becomes
+Where $\rho(x)$ is an arbitrary probability distribution in $x$. If
+one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
+from the distribution $\rho(x)$ on the interval $[a,b]$, then
+Eq.~\ref{introEq:Importance1} becomes
 \begin{equation}
-EQ Here
+I= \biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}}
+\label{introEq:Importance2}
 \end{equation}
-Looking at Eq.~ref{fix}, and realizing
+Looking at Eq.~\ref{eq:mcEnsAvg}, and realizing
 \begin {equation}
-EQ Here
+\rho_{kT}(\mathbf{r}^N) =
+	\frac{e^{-\beta V(\mathbf{r}^N)}}
+	{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
+\label{introEq:MCboltzman}
 \end{equation}
-The ensemble average can be rewritten as
+Where $\rho_{kT}$ is the Boltzmann distribution.  The ensemble average
+can be rewritten as
 \begin{equation}
-EQ Here
+\langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
+	\rho_{kT}(\mathbf{r}^N)
+\label{introEq:Importance3}
 \end{equation}
-Appllying Eq.~ref{fix} one obtains
+Applying Eq.~\ref{introEq:Importance1} one obtains
 \begin{equation}
-EQ Here
+\langle A \rangle = \biggl \langle
+	\frac{ A \rho_{kT}(\mathbf{r}^N) }
+	{\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}
+\label{introEq:Importance4}
 \end{equation}
-By selecting $fix$ to be $fix$ Eq.~ref{fix} becomes
+By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$
+Eq.~\ref{introEq:Importance4} becomes
 \begin{equation}
-EQ Here
+\langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{\text{trials}}
+\label{introEq:Importance5}
 \end{equation}
-The difficulty is selecting points $fix$ such that they are sampled
-from the distribution $fix$. A solution was proposed by Metropolis et
-al.\cite{fix} which involved the use of a Markov chain whose limiting
-distribution was $fix$.
+The difficulty is selecting points $\mathbf{r}^N$ such that they are
+sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$. A solution
+was proposed by Metropolis et al.\cite{metropolis:1953} which involved
+the use of a Markov chain whose limiting distribution was
+$\rho_{kT}(\mathbf{r}^N)$.
 
-\subsection{Markov Chains}
+\subsubsection{\label{introSec:markovChains}Markov Chains}
 
 A Markov chain is a chain of states satisfying the following
-conditions:\cite{fix}
-\begin{itemize}
+conditions:\cite{leach01:mm}
+\begin{enumerate}
 \item The outcome of each trial depends only on the outcome of the
 previous trial.
 \item Each trial belongs to a finite set of outcomes called the state
 space.
-\end{itemize}
-If given two configuartions, $fix$ and $fix$, $fix$ and $fix$ are the
-probablilities of being in state $fix$ and $fix$ respectively.
-Further, the two states are linked by a transition probability, $fix$,
-which is the probability of going from state $m$ to state $n$.
+\end{enumerate}
+If given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
+$\rho_m$ and $\rho_n$ are the probabilities of being in state
+$\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively. Further, the two
+states are linked by a transition probability, $\pi_{mn}$, which is the
+probability of going from state $m$ to state $n$.
+\newcommand{\accMe}{\operatorname{acc}}
+
 The transition probability is given by the following:
 \begin{equation}
-EQ Here
-\end{equation}
-Where $fix$ is the probability of attempting the move $fix$, and $fix$
-is the probability of accepting the move $fix$. Defining a
-probability vector, $fix$, such that
+\pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n)
+\label{introEq:MCpi}
+\end{equation}
+Where $\alpha_{mn}$ is the probability of attempting the move $m
+\rightarrow n$, and $\accMe$ is the probability of accepting the move
+$m \rightarrow n$. Defining a probability vector,
+$\boldsymbol{\rho}$, such that
 \begin{equation}
-EQ Here
+\boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
+	\ldots \rho_N \}
+\label{introEq:MCrhoVector}
 \end{equation}
-a transition matrix $fix$ can be defined, whose elements are $fix$,
-for each given transition. The limiting distribution of the Markov
-chain can then be found by applying the transition matrix an infinite
-number of times to the distribution vector.
+a transition matrix $\boldsymbol{\Pi}$ can be defined,
+whose elements are $\pi_{mn}$, for each given transition. The
+limiting distribution of the Markov chain can then be found by
+applying the transition matrix an infinite number of times to the
+distribution vector.
 \begin{equation}
-EQ Here
+\boldsymbol{\rho}_{\text{limit}} =
+	\lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
+	\boldsymbol{\Pi}^N
+\label{introEq:MCmarkovLimit}
 \end{equation}
-
 The limiting distribution of the chain is independent of the starting
 distribution, and successive applications of the transition matrix
 will only yield the limiting distribution again.
 \begin{equation}
-EQ Here
+\boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{initial}}
+	\boldsymbol{\Pi}
+\label{introEq:MCmarkovEquil}
 \end{equation}
 
-\subsection{fix}
+\subsubsection{\label{introSec:metropolisMethod}The Metropolis Method}
 
-In the Metropolis method \cite{fix} Eq.~ref{fix} is solved such that
-$fix$ matches the Boltzman distribution of states. The method
-accomplishes this by imposing the strong condition of microscopic
-reversibility on the equilibrium distribution. Meaning, that at
-equilibrium the probability of going from $m$ to $n$ is the same as
-going from $n$ to $m$.
+In the Metropolis method\cite{metropolis:1953}
+Eq.~\ref{introEq:MCmarkovEquil} is solved such that
+$\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
+of states. The method accomplishes this by imposing the strong
+condition of microscopic reversibility on the equilibrium
+distribution. This means that at equilibrium the probability of going
+from $m$ to $n$ is the same as going from $n$ to $m$.
 \begin{equation}
-EQ Here
+\rho_m\pi_{mn} = \rho_n\pi_{nm}
+\label{introEq:MCmicroReverse}
 \end{equation}
-Further, $fix$ is chosen to be a symetric matrix in the Metropolis
-method. Using Eq.~\ref{fix}, Eq.~\ref{fix} becomes
+Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
+the Metropolis method.  Using Eq.~\ref{introEq:MCpi},
+Eq.~\ref{introEq:MCmicroReverse} becomes
 \begin{equation}
-EQ Here
+\frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
+	\frac{\rho_n}{\rho_m}
+\label{introEq:MCmicro2}
 \end{equation}
-For a Boltxman limiting distribution
+For a Boltzmann limiting distribution,
 \begin{equation}
-EQ Here
+\frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
+	= e^{-\beta \Delta \mathcal{U}}
+\label{introEq:MCmicro3}
 \end{equation}
 This allows for the following set of acceptance rules to be defined:
 \begin{equation}
@@ -193,7 +229,7 @@ distribution is the Boltzman distribution.
 the ensemble averages, as this method ensures that the limiting
 distribution is the Boltzmann distribution.
 
-\subsection{\label{introSec:md}Molecular Dynamics Simulations}
+\subsection{\label{introSec:MD}Molecular Dynamics Simulations}
 
 The main simulation tool used in this research is Molecular Dynamics.
 Molecular Dynamics is when the equations of motion for a system are
@@ -216,7 +252,7 @@ making molecular dynamics key in the simulation of tho
 centered around the dynamic properties of phospholipid bilayers,
 making molecular dynamics key in the simulation of those properties.
 
-\subsection{Molecular dynamics Algorithm}
+\subsubsection{Molecular Dynamics Algorithm}
 
 To illustrate how the molecular dynamics technique is applied, the
 following sections will describe the sequence involved in a
@@ -225,7 +261,7 @@ discussion with the integration of the equations of mo
 calculation of the forces. Sec.~\ref{fix} concludes the algorithm
 discussion with the integration of the equations of motion.
 \cite{fix}
-\subsection{initialization}
+\subsubsection{Initialization}
 
 When selecting the initial configuration for the simulation it is
 important to consider what dynamics one is hoping to observe.
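[Editor's aside, not part of the committed LaTeX: the Metropolis acceptance rule introduced in the hunks above, $\accMe(m \rightarrow n) = \min(1, e^{-\beta \Delta \mathcal{U}})$ with a symmetric attempt matrix $\boldsymbol{\alpha}$, can be sketched as follows. The one-dimensional harmonic potential $U(x) = x^2/2$ and the step size are arbitrary illustrative choices.]

```python
# Metropolis sampling sketch: a symmetric random-walk move is
# accepted with probability min(1, exp(-beta * dU)), so the chain's
# limiting distribution is the Boltzmann distribution rho_kT.
# The harmonic potential U(x) = x^2 / 2 is an arbitrary example.
import math
import random

def metropolis_chain(beta=1.0, n_steps=50_000, step=0.5):
    x = 0.0
    u = 0.5 * x * x
    samples = []
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)  # symmetric alpha_mn
        u_new = 0.5 * x_new * x_new
        # Accept with probability min(1, e^{-beta * dU}); moves that
        # lower the energy (dU <= 0) are always accepted.
        if random.random() < math.exp(-beta * (u_new - u)):
            x, u = x_new, u_new
        samples.append(x)
    return samples
```

For $\beta = 1$ the sampled variance approaches the exact Boltzmann average $\langle x^2 \rangle = 1/\beta$, so position-dependent observables can be averaged directly over the chain, exactly as the importance-sampling hunks above claim.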
@@ -256,7 +292,7 @@ kinetic energy from energy stored in potential degrees
 first few initial simulation steps due to either loss or gain of
 kinetic energy from energy stored in potential degrees of freedom.
 
-\subsection{Force Evaluation}
+\subsubsection{Force Evaluation}
 
 The evaluation of forces is the most computationally expensive portion
 of a given molecular dynamics simulation. This is due entirely to the