\end{equation}

The solution to Eq.~\ref{introEq:SM2} maximizes the number of |
degenerate configurations in $E$.\cite{Frenkel1996}
This gives
\begin{equation}
\frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 = |
\subsection{\label{introSec:ergodic}The Ergodic Hypothesis} |
In the case of a Molecular Dynamics simulation, rather than
calculating an ensemble average integral over phase space as in
Eq.~\ref{introEq:SM15}, it becomes easier to calculate the time
average of an observable. Namely,
\begin{equation}
\langle A \rangle_t = \frac{1}{\tau}
\int_0^{\tau} A[\boldsymbol{\Gamma}(t)]\,dt
\label{introEq:SM16}
\end{equation}
where the value of an observable is averaged over the length of time
that the simulation is run. This type of measurement mirrors the
experimental measurement of an observable: in an experiment, the
instrument analyzing the system must average its observation over the
finite time of the measurement. What is required, then, is a principle
relating the time average to the ensemble average. This is the
ergodic hypothesis.
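
As an illustration of Eq.~\ref{introEq:SM16} in discrete form, the
following Python fragment averages a hypothetical observable over a
stored trajectory; the random trajectory and the observable chosen
here are stand-ins for actual simulation output:
\begin{verbatim}
import numpy as np

# Hypothetical trajectory: 10000 snapshots of a single phase-space
# coordinate Gamma(t), sampled at uniform time intervals.
rng = np.random.default_rng(0)
trajectory = rng.standard_normal(10000)

def observable(gamma):
    # Stand-in for A[Gamma(t)]; any function of the configuration works.
    return gamma ** 2

# For uniform time sampling, the integral (1/tau) * int_0^tau A dt
# reduces to a simple mean over the stored snapshots.
time_average = np.mean(observable(trajectory))
print(time_average)
\end{verbatim}
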
Ergodicity is the assumption that, given an infinite amount of time, a
system will visit every available point in phase
space.\cite{Frenkel1996} For most systems, this is a valid assumption,
except in cases where the system may become trapped in a local feature
(\emph{e.g.}~glasses). When valid, ergodicity allows the unification
of a time averaged observation and an ensemble averaged one. If an
observation is averaged over a sufficiently long time, the system is
assumed to visit all appropriately available points in phase space,
giving a properly weighted statistical average. This allows the
researcher freedom of choice when deciding how best to measure a given
observable. When an ensemble averaged approach seems most logical,
the Monte Carlo techniques described in Sec.~\ref{introSec:monteCarlo}
can be utilized. Conversely, if a problem lends itself to a time
averaging approach, the Molecular Dynamics techniques in
Sec.~\ref{introSec:MD} can be employed.

\section{\label{introSec:monteCarlo}Monte Carlo Simulations}

The Monte Carlo method was developed by Metropolis and Ulam for their |
\end{equation} |
The equation can be recast as:
\begin{equation}
I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x)\,dx
\label{introEq:Importance1}
\end{equation}
where $\rho(x)$ is an arbitrary probability distribution in $x$. If
one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
from the distribution $\rho(x)$ on the interval $[a,b]$, then
Eq.~\ref{introEq:Importance1} becomes
\begin{equation}
I = \lim_{\tau \rightarrow \infty}\biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}[a,b]}
\label{introEq:Importance2}
\end{equation}
If $\rho(x)$ is uniformly distributed over the interval $[a,b]$,
\begin{equation}
\rho(x) = \frac{1}{b-a}
\label{introEq:importance2b}
\end{equation}
then the integral can be rewritten as
\begin{equation}
I = (b-a)\lim_{\tau \rightarrow \infty}
\langle f(x) \rangle_{\text{trials}[a,b]}
\label{eq:MCex2}
\end{equation}
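
For concreteness, Eq.~\ref{eq:MCex2} translates directly into a
brute-force sampling loop. The following Python sketch uses an
arbitrary illustrative integrand whose exact value is known:
\begin{verbatim}
import numpy as np

def f(x):
    # Arbitrary test integrand; the exact integral of x**2 on [0,2] is 8/3.
    return x ** 2

a, b = 0.0, 2.0
trials = 100000

# Draw points uniformly on [a,b], i.e. rho(x) = 1/(b-a).
rng = np.random.default_rng(0)
x = rng.uniform(a, b, trials)

# Eq. MCex2: I = (b-a) * <f(x)>_trials, exact as trials -> infinity.
estimate = (b - a) * np.mean(f(x))
print(estimate)   # approaches 8/3 as the number of trials grows
\end{verbatim}
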
However, in Statistical Mechanics, one is typically interested in |
integrals of the form: |
average. This is where importance sampling comes into |
play.\cite{allen87:csl} |

Importance sampling is a method where the distribution from which the
random configurations are chosen is selected in such a way as to
efficiently sample the integral in question. Looking at
Eq.~\ref{eq:mcEnsAvg}, and realizing
\begin{equation}
\rho_{kT}(\mathbf{r}^N) =
\frac{e^{-\beta V(\mathbf{r}^N)}}
By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$,
Eq.~\ref{introEq:Importance4} becomes
\begin{equation}
\langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{kT}
\label{introEq:Importance5}
\end{equation}
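
As a sketch of Eq.~\ref{introEq:Importance5} in practice, the Python
fragment below uses the Metropolis acceptance rule to generate
Boltzmann-weighted configurations of a single coordinate in a
hypothetical harmonic potential, then evaluates $\langle A \rangle$ as
an unweighted average over those samples; the potential, observable,
and step size are illustrative choices only:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0                 # 1/kT in reduced units

def V(r):
    # Hypothetical potential energy; a harmonic well keeps things simple.
    return 0.5 * r ** 2

def A(r):
    # Illustrative observable: the squared displacement.
    return r ** 2

r, samples = 0.0, []
for step in range(200000):
    r_trial = r + rng.uniform(-0.5, 0.5)     # random trial move
    # Metropolis rule: accept with probability min(1, exp(-beta*dV)),
    # so the visited configurations are Boltzmann distributed.
    if rng.random() < np.exp(-beta * (V(r_trial) - V(r))):
        r = r_trial
    samples.append(A(r))

# Eq. Importance5: <A> is a simple average over kT-weighted samples.
print(np.mean(samples))    # approaches kT = 1.0 for this potential
\end{verbatim}
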
The difficulty is selecting points $\mathbf{r}^N$ such that they are |
integrated in order to obtain information about both the positions and
momenta of a system, allowing the calculation of not only
configurational observables, but momentum-dependent ones as well:
diffusion constants, relaxation events, folding/unfolding
events, etc. With the time dependent information gained from a
Molecular Dynamics simulation, one can also calculate time correlation
functions of the form\cite{Hansen86}
\begin{equation}
\langle A(t)\,A(0)\rangle = \lim_{\tau\rightarrow\infty} \frac{1}{\tau}
\int_0^{\tau} A(t+t^{\prime})\,A(t^{\prime})\,dt^{\prime}
\label{introEq:timeCorr}
\end{equation}
These correlations can be used to measure fundamental time constants
of a system, such as diffusion constants from the velocity
autocorrelation or dipole relaxation times from the dipole
autocorrelation. Due to the principle of ergodicity
(Sec.~\ref{introSec:ergodic}), the averages of these observables over
the time period of the simulation are taken to be the ensemble
averages for the system.
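
As an illustration, a discrete estimate of Eq.~\ref{introEq:timeCorr}
can be accumulated directly from stored trajectory data. The
correlated random series below is a synthetic stand-in for a velocity
trajectory from an actual simulation:
\begin{verbatim}
import numpy as np

# Synthetic stand-in for a stored velocity trajectory v(t).
rng = np.random.default_rng(0)
n = 5000
v = np.empty(n)
v[0] = rng.standard_normal()
for i in range(1, n):
    # Simple correlated process, so the autocorrelation decays smoothly.
    v[i] = 0.95 * v[i - 1] + rng.standard_normal()

def time_correlation(a, max_lag):
    # Discrete Eq. timeCorr: average A(t'+t)A(t') over all time origins t'.
    n = len(a)
    return np.array([np.mean(a[lag:] * a[:n - lag])
                     for lag in range(max_lag)])

c = time_correlation(v, 50)
print(c[:5] / c[0])   # normalized velocity autocorrelation at short lags
\end{verbatim}
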
The choice of when to use Molecular Dynamics over Monte Carlo
techniques is normally decided by the observables in which the
researcher is interested. If the observables depend on time in
any fashion, then the only choice is Molecular Dynamics in some form.
However, when the observable is dependent only on the configuration,
then for most small systems, Monte Carlo techniques will be more
efficient.

The focus of research in the second half of this dissertation is
centered on the dynamic properties of phospholipid bilayers,