1 |
|
|
2 |
|
|
3 |
< |
\chapter{\label{chapt:intro}Introduction and Theoretical Background} |
3 |
> |
\chapter{\label{chapt:intro}INTRODUCTION AND THEORETICAL BACKGROUND} |
4 |
|
|
5 |
|
|
6 |
|
The techniques used in the course of this research fall under the two |
7 |
|
main classes of molecular simulation: Molecular Dynamics and Monte |
8 |
< |
Carlo. Molecular Dynamic simulations integrate the equations of motion |
9 |
< |
for a given system of particles, allowing the researcher to gain |
10 |
< |
insight into the time dependent evolution of a system. Diffusion |
8 |
> |
Carlo. Molecular Dynamics simulations integrate the equations of |
9 |
> |
motion for a given system of particles, allowing the researcher to |
10 |
> |
gain insight into the time dependent evolution of a system. Diffusion |
11 |
|
phenomena are readily studied with this simulation technique, making |
12 |
|
Molecular Dynamics the main simulation technique used in this |
13 |
|
research. Other aspects of the research fall under the Monte Carlo |
14 |
|
class of simulations. In Monte Carlo, the configuration space |
15 |
< |
available to the collection of particles is sampled stochastically, |
16 |
< |
or randomly. Each configuration is chosen with a given probability |
17 |
< |
based on the Maxwell Boltzmann distribution. These types of simulations |
18 |
< |
are best used to probe properties of a system that are only dependent |
19 |
< |
only on the state of the system. Structural information about a system |
20 |
< |
is most readily obtained through these types of methods. |
15 |
> |
available to the collection of particles is sampled stochastically, or |
16 |
> |
randomly. Each configuration is chosen with a given probability based |
17 |
> |
on the Maxwell-Boltzmann distribution. These types of simulations are
18 |
> |
best used to probe properties of a system that are dependent only on |
19 |
> |
the state of the system. Structural information about a system is most |
20 |
> |
readily obtained through these types of methods. |
21 |
|
|
22 |
|
Although the two techniques employed seem dissimilar, they are both |
23 |
|
linked by the overarching principles of Statistical |
24 |
< |
Thermodynamics. Statistical Thermodynamics governs the behavior of |
24 |
> |
Mechanics. Statistical Mechanics governs the behavior of
25 |
|
both classes of simulations and dictates what each method can and |
26 |
|
cannot do. When investigating a system, one must first analyze what
27 |
|
thermodynamic properties of the system are being probed, then choose
31 |
|
|
32 |
|
The following section serves as a brief introduction to some of the |
33 |
|
Statistical Mechanics concepts present in this dissertation. What |
34 |
< |
follows is a brief derivation of Boltzmann weighted statistics, and an |
34 |
> |
follows is a brief derivation of Boltzmann weighted statistics and an |
35 |
|
explanation of how one can use the information to calculate an |
36 |
|
observable for a system. This section then concludes with a brief |
37 |
|
discussion of the ergodic hypothesis and its relevance to this |
39 |
|
|
40 |
|
\subsection{\label{introSec:boltzman}Boltzmann weighted statistics} |
41 |
|
|
42 |
< |
Consider a system, $\gamma$, with some total energy,, $E_{\gamma}$. |
43 |
< |
Let $\Omega(E_{\gamma})$ represent the number of degenerate ways |
44 |
< |
$\boldsymbol{\Gamma}$, the collection of positions and conjugate |
45 |
< |
momenta of system $\gamma$, can be configured to give |
46 |
< |
$E_{\gamma}$. Further, if $\gamma$ is in contact with a bath system |
47 |
< |
where energy is exchanged between the two systems, $\Omega(E)$, where |
48 |
< |
$E$ is the total energy of both systems, can be represented as |
42 |
> |
Consider a system, $\gamma$, with total energy $E_{\gamma}$. Let |
43 |
> |
$\Omega(E_{\gamma})$ represent the number of degenerate ways |
44 |
> |
$\boldsymbol{\Gamma}\{r_1,r_2,\ldots r_n,p_1,p_2,\ldots p_n\}$, the |
45 |
> |
collection of positions and conjugate momenta of system $\gamma$, can |
46 |
> |
be configured to give $E_{\gamma}$. Further, if $\gamma$ is a subset |
47 |
> |
of a larger system, $\boldsymbol{\Lambda}\{E_1,E_2,\ldots |
48 |
> |
E_{\gamma},\ldots E_n\}$, the total degeneracy of the system can be |
49 |
> |
expressed as, |
50 |
|
\begin{equation} |
51 |
+ |
\Omega(\boldsymbol{\Lambda}) = \Omega(E_1) \times \Omega(E_2) \times \ldots |
52 |
+ |
\Omega(E_{\gamma}) \times \ldots \Omega(E_n) |
53 |
+ |
\label{introEq:SM0.1} |
54 |
+ |
\end{equation} |
55 |
+ |
This multiplicative combination of degeneracies is illustrated in |
56 |
+ |
Fig.~\ref{introFig:degenProd}. |
57 |
+ |
|
58 |
+ |
\begin{figure} |
59 |
+ |
\centering |
60 |
+ |
\includegraphics[width=\linewidth]{omegaFig.eps} |
61 |
+ |
\caption[An explanation of the combination of degeneracies]{Systems A and B both have three energy levels and two indistinguishable particles. When the total energy is 2, there are two ways for each system to disperse the energy. However, for system C, the superset of A and B, the total degeneracy is the product of the degeneracy of each system. In this case $\Omega(\text{C})$ is 4.} |
62 |
+ |
\label{introFig:degenProd} |
63 |
+ |
\end{figure} |
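The product rule of Eq.~\ref{introEq:SM0.1} can be checked numerically for the small systems described in the figure caption. The sketch below is illustrative only (not part of the simulation codes in this work); the three levels, two indistinguishable particles, and total energy of 2 are taken from the caption:

```python
from itertools import combinations_with_replacement

def degeneracy(levels, n_particles, total_energy):
    """Count microstates of n indistinguishable particles on the given
    energy levels whose energies sum to total_energy."""
    return sum(1 for occ in combinations_with_replacement(levels, n_particles)
               if sum(occ) == total_energy)

# Systems A and B: levels (0, 1, 2), two particles, total energy 2
omega_A = degeneracy([0, 1, 2], 2, 2)   # the two ways: {0,2} and {1,1}
omega_B = degeneracy([0, 1, 2], 2, 2)
omega_C = omega_A * omega_B             # superset degeneracy is the product
```

As in the caption, each subsystem has two microstates, and the superset C has their product, 4.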
64 |
+ |
|
65 |
+ |
Next, consider the specific case of $\gamma$ in contact with a |
66 |
+ |
bath. Exchange of energy is allowed between the bath and the system, |
67 |
+ |
subject to the constraint that the total energy |
68 |
+ |
($E_{\text{bath}}+E_{\gamma}$) remain constant. $\Omega(E)$, where $E$ |
69 |
+ |
is the total energy of both systems, can be represented as |
70 |
+ |
\begin{equation} |
71 |
|
\Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma}) |
72 |
|
\label{introEq:SM1} |
73 |
|
\end{equation} |
78 |
|
\end{equation} |
79 |
|
|
80 |
|
The solution to Eq.~\ref{introEq:SM2} maximizes the number of |
81 |
< |
degenerative configurations in $E$. \cite{Frenkel1996} |
81 |
> |
degenerate configurations in $E$. \cite{Frenkel1996} |
82 |
|
This gives |
83 |
|
\begin{equation} |
84 |
|
\frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 = |
197 |
|
|
198 |
|
\subsection{\label{introSec:ergodic}The Ergodic Hypothesis} |
199 |
|
|
200 |
< |
One last important consideration is that of ergodicity. Ergodicity is |
201 |
< |
the assumption that given an infinite amount of time, a system will |
202 |
< |
visit every available point in phase space.\cite{Frenkel1996} For most |
203 |
< |
systems, this is a valid assumption, except in cases where the system |
204 |
< |
may be trapped in a local feature (\emph{e.g.}~glasses). When valid, |
205 |
< |
ergodicity allows the unification of a time averaged observation and |
206 |
< |
an ensemble averaged one. If an observation is averaged over a |
207 |
< |
sufficiently long time, the system is assumed to visit all |
208 |
< |
appropriately available points in phase space, giving a properly |
209 |
< |
weighted statistical average. This allows the researcher freedom of |
210 |
< |
choice when deciding how best to measure a given observable. When an |
211 |
< |
ensemble averaged approach seems most logical, the Monte Carlo |
212 |
< |
techniques described in Sec.~\ref{introSec:monteCarlo} can be utilized. |
213 |
< |
Conversely, if a problem lends itself to a time averaging approach, |
214 |
< |
the Molecular Dynamics techniques in Sec.~\ref{introSec:MD} can be |
215 |
< |
employed. |
200 |
> |
In the case of a Molecular Dynamics simulation, rather than |
201 |
> |
calculating an ensemble average integral over phase space as in |
202 |
> |
Eq.~\ref{introEq:SM15}, it becomes easier to calculate the time
203 |
> |
average of an observable. Namely, |
204 |
> |
\begin{equation} |
205 |
> |
\langle A \rangle_t = \frac{1}{\tau} |
206 |
> |
\int_0^{\tau} A[\boldsymbol{\Gamma}(t)]\,dt |
207 |
> |
\label{introEq:SM16} |
208 |
> |
\end{equation} |
209 |
> |
Where the value of an observable is averaged over the length of time |
210 |
> |
that the simulation is run. This type of measurement mirrors the |
211 |
> |
experimental measurement of an observable. In an experiment, the |
212 |
> |
instrument analyzing the system must average its observation over the
213 |
> |
finite time of the measurement. What is required, then, is a principle
214 |
> |
to relate the time average to the ensemble average. This is the |
215 |
> |
ergodic hypothesis. |
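As a toy illustration of Eq.~\ref{introEq:SM16} (not part of the original derivation), the time average can be approximated by averaging the observable over a discretized trajectory; a unit harmonic oscillator trajectory and the observable $A = q^2$ are assumed here:

```python
import numpy as np

# Discretized version of <A>_t = (1/tau) * integral_0^tau A[Gamma(t)] dt
tau, n_steps = 200.0 * np.pi, 200_000
t = np.linspace(0.0, tau, n_steps)
q = np.cos(t)              # assumed trajectory of a unit harmonic oscillator
A = q ** 2                 # observable evaluated along the trajectory
time_average = A.mean()    # uniform time grid: the mean approximates the integral
# for q(t) = cos(t), the long-time average of q^2 approaches 1/2
```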
216 |
|
|
217 |
+ |
Ergodicity is the assumption that given an infinite amount of time, a |
218 |
+ |
system will visit every available point in phase |
219 |
+ |
space.\cite{Frenkel1996} For most systems, this is a valid assumption, |
220 |
+ |
except in cases where the system may be trapped in a local feature |
221 |
+ |
(\emph{e.g.}~glasses). When valid, ergodicity allows the unification |
222 |
+ |
of a time averaged observation and an ensemble averaged one. If an |
223 |
+ |
observation is averaged over a sufficiently long time, the system is |
224 |
+ |
assumed to visit all appropriately available points in phase space, |
225 |
+ |
giving a properly weighted statistical average. This allows the |
226 |
+ |
researcher freedom of choice when deciding how best to measure a given |
227 |
+ |
observable. When an ensemble averaged approach seems most logical, |
228 |
+ |
the Monte Carlo techniques described in Sec.~\ref{introSec:monteCarlo} |
229 |
+ |
can be utilized. Conversely, if a problem lends itself to a time |
230 |
+ |
averaging approach, the Molecular Dynamics techniques in |
231 |
+ |
Sec.~\ref{introSec:MD} can be employed. |
232 |
+ |
|
233 |
|
\section{\label{introSec:monteCarlo}Monte Carlo Simulations} |
234 |
|
|
235 |
|
The Monte Carlo method was developed by Metropolis and Ulam for their |
247 |
|
\end{equation} |
248 |
|
The equation can be recast as: |
249 |
|
\begin{equation} |
250 |
< |
I = (b-a)\langle f(x) \rangle |
250 |
> |
I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx |
251 |
> |
\label{introEq:Importance1} |
252 |
> |
\end{equation} |
253 |
> |
Where $\rho(x)$ is an arbitrary probability distribution in $x$. If |
254 |
> |
one conducts $\tau$ trials selecting a random number, $\zeta_\tau$, |
255 |
> |
from the distribution $\rho(x)$ on the interval $[a,b]$, then |
256 |
> |
Eq.~\ref{introEq:Importance1} becomes |
257 |
> |
\begin{equation} |
258 |
> |
I = \lim_{\tau \rightarrow \infty}\biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}[a,b]}
259 |
> |
\label{introEq:Importance2} |
260 |
> |
\end{equation} |
261 |
> |
If $\rho(x)$ is uniformly distributed over the interval $[a,b]$, |
262 |
> |
\begin{equation} |
263 |
> |
\rho(x) = \frac{1}{b-a} |
264 |
> |
\label{introEq:importance2b} |
265 |
> |
\end{equation} |
266 |
> |
then the integral can be rewritten as |
267 |
> |
\begin{equation} |
268 |
> |
I = (b-a)\lim_{\tau \rightarrow \infty} |
269 |
> |
\langle f(x) \rangle_{\text{trials}[a,b]} |
270 |
|
\label{eq:MCex2} |
271 |
|
\end{equation} |
216 |
– |
Where $\langle f(x) \rangle$ is the unweighted average over the interval |
217 |
– |
$[a,b]$. The calculation of the integral could then be solved by |
218 |
– |
randomly choosing points along the interval $[a,b]$ and calculating |
219 |
– |
the value of $f(x)$ at each point. The accumulated average would then |
220 |
– |
approach $I$ in the limit where the number of trials is infinitely |
221 |
– |
large. |
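A minimal numerical sketch of this brute-force estimator, assuming the test integrand $f(x) = x^2$ on $[0,1]$ (exact value $1/3$):

```python
import random

random.seed(42)

def mc_integrate(f, a, b, n_trials):
    """Brute-force Monte Carlo: I ~ (b - a) * <f(x)> with x drawn
    uniformly on [a, b]."""
    total = sum(f(random.uniform(a, b)) for _ in range(n_trials))
    return (b - a) * total / n_trials

estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
# the accumulated average approaches the exact value 1/3 as trials grow
```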
272 |
|
|
273 |
|
However, in Statistical Mechanics, one is typically interested in |
274 |
|
integrals of the form: |
290 |
|
average. This is where importance sampling comes into |
291 |
|
play.\cite{allen87:csl} |
292 |
|
|
293 |
< |
Importance Sampling is a method where one selects a distribution from |
294 |
< |
which the random configurations are chosen in order to more |
295 |
< |
efficiently calculate the integral.\cite{Frenkel1996} Consider again |
296 |
< |
Eq.~\ref{eq:MCex1} rewritten to be: |
247 |
< |
\begin{equation} |
248 |
< |
I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx |
249 |
< |
\label{introEq:Importance1} |
250 |
< |
\end{equation} |
251 |
< |
Where $\rho(x)$ is an arbitrary probability distribution in $x$. If |
252 |
< |
one conducts $\tau$ trials selecting a random number, $\zeta_\tau$, |
253 |
< |
from the distribution $\rho(x)$ on the interval $[a,b]$, then |
254 |
< |
Eq.~\ref{introEq:Importance1} becomes |
255 |
< |
\begin{equation} |
256 |
< |
I= \biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}} |
257 |
< |
\label{introEq:Importance2} |
258 |
< |
\end{equation} |
259 |
< |
Looking at Eq.~\ref{eq:mcEnsAvg}, and realizing |
293 |
> |
Importance sampling is a method where the distribution, from which the |
294 |
> |
random configurations are chosen, is selected in such a way as to |
295 |
> |
efficiently sample the integral in question. Looking at |
296 |
> |
Eq.~\ref{eq:mcEnsAvg}, and realizing |
297 |
|
\begin {equation} |
298 |
|
\rho_{kT}(\mathbf{r}^N) = |
299 |
|
\frac{e^{-\beta V(\mathbf{r}^N)}} |
317 |
|
By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$ |
318 |
|
Eq.~\ref{introEq:Importance4} becomes |
319 |
|
\begin{equation} |
320 |
< |
\langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{\text{trials}} |
320 |
> |
\langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{kT} |
321 |
|
\label{introEq:Importance5} |
322 |
|
\end{equation} |
323 |
|
The difficulty is selecting points $\mathbf{r}^N$ such that they are |
324 |
|
sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$. A solution |
325 |
< |
was proposed by Metropolis et al.\cite{metropolis:1953} which involved |
325 |
> |
was proposed by Metropolis \emph{et al}.\cite{metropolis:1953} which involved |
326 |
|
the use of a Markov chain whose limiting distribution was |
327 |
|
$\rho_{kT}(\mathbf{r}^N)$. |
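A schematic one-dimensional version of such a Markov chain is sketched below. The harmonic potential, step size, and temperature are illustrative choices, not taken from the simulations in this dissertation; the essential feature is that accepting trial moves with probability $\min(1, e^{-\beta \Delta V})$ gives a chain whose limiting distribution is proportional to $e^{-\beta V(x)}$:

```python
import math
import random

random.seed(1)

def metropolis_chain(V, beta, x0, n_steps, max_move=1.0):
    """Minimal Metropolis sampler: accept a trial displacement with
    probability min(1, exp(-beta * dV)); rejected moves recount the
    old configuration, as required for correct Boltzmann weighting."""
    x, samples = x0, []
    for _ in range(n_steps):
        trial = x + random.uniform(-max_move, max_move)
        dV = V(trial) - V(x)
        if dV <= 0.0 or random.random() < math.exp(-beta * dV):
            x = trial
        samples.append(x)
    return samples

xs = metropolis_chain(lambda x: 0.5 * x * x, beta=1.0, x0=0.0, n_steps=200_000)
mean_x2 = sum(x * x for x in xs) / len(xs)   # approaches kT = 1 for V = x^2/2
```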
328 |
|
|
435 |
|
integrated in order to obtain information about both the positions and |
436 |
|
momentum of a system, allowing the calculation of not only |
437 |
|
configurational observables, but momenta dependent ones as well: |
438 |
< |
diffusion constants, velocity auto correlations, folding/unfolding |
439 |
< |
events, etc. Due to the principle of ergodicity, |
438 |
> |
diffusion constants, relaxation phenomena, folding/unfolding
439 |
> |
events, etc. With the time dependent information gained from a |
440 |
> |
Molecular Dynamics simulation, one can also calculate time correlation |
441 |
> |
functions of the form\cite{Hansen86} |
442 |
> |
\begin{equation} |
443 |
> |
\langle A(t)\,A(0)\rangle = \lim_{\tau\rightarrow\infty} \frac{1}{\tau} |
444 |
> |
\int_0^{\tau} A(t+t^{\prime})\,A(t^{\prime})\,dt^{\prime} |
445 |
> |
\label{introEq:timeCorr} |
446 |
> |
\end{equation} |
447 |
> |
These correlations can be used to measure fundamental time constants |
448 |
> |
of a system, such as diffusion constants from the velocity |
449 |
> |
autocorrelation or dipole relaxation times from the dipole |
450 |
> |
autocorrelation. Due to the principle of ergodicity, |
451 |
|
Sec.~\ref{introSec:ergodic}, the average of these observables over the |
452 |
|
time period of the simulation are taken to be the ensemble averages |
453 |
|
for the system. |
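A direct estimator for a correlation function of the form of Eq.~\ref{introEq:timeCorr} averages the product over all available time origins of a discrete series. A toy cosine signal is assumed below in place of simulation data:

```python
import numpy as np

def time_correlation(a, dt, max_lag):
    """Estimate <A(t) A(0)> from a discrete time series by averaging
    A(t' + t) A(t') over all available time origins t'."""
    n = len(a)
    lags = np.arange(max_lag)
    corr = np.array([np.mean(a[: n - lag] * a[lag:]) for lag in lags])
    return lags * dt, corr

dt = 0.01
t = np.arange(0.0, 200.0, dt)
v = np.cos(t)                            # toy 'velocity' signal
tau, corr = time_correlation(v, dt, max_lag=500)
# for v(t) = cos(t), the zero-lag value <v^2> approaches 1/2
```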
454 |
|
|
455 |
|
The choice of when to use molecular dynamics over Monte Carlo |
456 |
|
techniques, is normally decided by the observables in which the |
457 |
< |
researcher is interested. If the observables depend on momenta in |
457 |
> |
researcher is interested. If the observables depend on time in |
458 |
|
any fashion, then the only choice is molecular dynamics in some form. |
459 |
|
However, when the observable is dependent only on the configuration, |
460 |
< |
then most of the time Monte Carlo techniques will be more efficient. |
460 |
> |
then for most small systems, Monte Carlo techniques will be more efficient. |
461 |
|
|
462 |
|
The focus of research in the second half of this dissertation is |
463 |
|
centered around the dynamic properties of phospholipid bilayers, |
529 |
|
simulation box on an infinite lattice in Cartesian space. Any given |
530 |
|
particle leaving the simulation box on one side will have an image of |
531 |
|
itself enter on the opposite side (see Fig.~\ref{introFig:pbc}). In |
532 |
< |
addition, this sets that any given particle pair has an image, real or |
533 |
< |
periodic, within $fix$ of each other. A discussion of the method used |
534 |
< |
to calculate the periodic image can be found in |
532 |
> |
addition, this ensures that any two particles have an image, real or
533 |
> |
periodic, within $\text{box}/2$ of each other. A discussion of the |
534 |
> |
method used to calculate the periodic image can be found in |
535 |
|
Sec.\ref{oopseSec:pbc}. |
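The wrapping and minimum-image operations described above can be sketched in one dimension (a hypothetical box length of 10 is assumed; the production code discussed in Sec.\ref{oopseSec:pbc} applies the same logic along each Cartesian direction):

```python
def wrap(x, box):
    """Wrap a coordinate back into the primary simulation box [0, box)."""
    return x % box

def minimum_image(dx, box):
    """Minimum-image convention: reduce a pair separation so that the
    nearest image, real or periodic, lies within box/2."""
    return dx - box * round(dx / box)

box = 10.0
wrapped = wrap(12.5, box)           # a particle leaving at 12.5 re-enters at 2.5
d = minimum_image(9.5 - 0.5, box)   # nearest-image separation is -1.0, not 9.0
```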
536 |
|
|
537 |
|
\begin{figure} |
605 |
|
\mathcal{O}(\Delta t^4) |
606 |
|
\label{introEq:verletBack} |
607 |
|
\end{equation} |
608 |
< |
Adding together Eq.~\ref{introEq:verletForward} and |
608 |
> |
Where $m$ is the mass of the particle, $q(t)$ is the position at time |
609 |
> |
$t$, $v(t)$ the velocity, and $F(t)$ the force acting on the |
610 |
> |
particle. Adding together Eq.~\ref{introEq:verletForward} and |
611 |
|
Eq.~\ref{introEq:verletBack} results in, |
612 |
|
\begin{equation} |
613 |
< |
eq here |
613 |
> |
q(t+\Delta t)+q(t-\Delta t) = |
614 |
> |
2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4) |
615 |
|
\label{introEq:verletSum} |
616 |
|
\end{equation} |
617 |
|
Or equivalently, |
618 |
|
\begin{equation} |
619 |
< |
eq here |
619 |
> |
q(t+\Delta t) \approx |
620 |
> |
2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2 |
621 |
|
\label{introEq:verletFinal} |
622 |
|
\end{equation} |
623 |
|
Which contains an error in the estimate of the new positions on the |
625 |
|
|
626 |
|
In practice, however, the simulations in this research were integrated |
627 |
|
with a velocity reformulation of the Verlet method.\cite{allen87:csl} |
628 |
< |
\begin{equation} |
629 |
< |
eq here |
630 |
< |
\label{introEq:MDvelVerletPos} |
631 |
< |
\end{equation} |
632 |
< |
\begin{equation} |
581 |
< |
eq here |
628 |
> |
\begin{align} |
629 |
> |
q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 % |
630 |
> |
\label{introEq:MDvelVerletPos} \\% |
631 |
> |
% |
632 |
> |
v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] % |
633 |
|
\label{introEq:MDvelVerletVel} |
634 |
< |
\end{equation} |
634 |
> |
\end{align} |
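The update pair above translates directly into code. The sketch below (a harmonic oscillator with unit mass and force constant is assumed as a test case) also illustrates the bounded energy error characteristic of the method:

```python
# Velocity Verlet integration of a 1D particle, following the
# position/velocity updates of Eqs. (MDvelVerletPos)/(MDvelVerletVel).
def velocity_verlet(q, v, force, m, dt, n_steps):
    F = force(q)
    for _ in range(n_steps):
        q += v * dt + F / (2.0 * m) * dt * dt      # position update
        F_new = force(q)                           # force at the new position
        v += dt / (2.0 * m) * (F + F_new)          # velocity update
        F = F_new
    return q, v

k, m, dt = 1.0, 1.0, 0.01
force = lambda q: -k * q
q0, v0 = 1.0, 0.0
qf, vf = velocity_verlet(q0, v0, force, m, dt, n_steps=10_000)
energy0 = 0.5 * m * v0**2 + 0.5 * k * q0**2
energyf = 0.5 * m * vf**2 + 0.5 * k * qf**2
# the energy drift stays bounded over long trajectories
```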
635 |
|
The original Verlet algorithm can be regained by substituting the |
636 |
|
velocity back into Eq.~\ref{introEq:MDvelVerletPos}. The Verlet |
637 |
|
formulations are chosen in this research because the algorithms have |
653 |
|
reversible. The fact that it shadows the true Hamiltonian in phase |
654 |
|
space is acceptable in actual simulations as one is interested in the |
655 |
|
ensemble average of the observable being measured. From the ergodic |
656 |
< |
hypothesis (Sec.~\ref{introSec:StatThermo}), it is known that the time |
656 |
> |
hypothesis (Sec.~\ref{introSec:statThermo}), it is known that the time |
657 |
|
average will match the ensemble average, therefore two similar |
658 |
|
trajectories in phase space should give matching statistical averages. |
659 |
|
|
660 |
|
\subsection{\label{introSec:MDfurther}Further Considerations} |
661 |
+ |
|
662 |
|
In the simulations presented in this research, a few additional |
663 |
|
parameters are needed to describe the motions. The simulations |
664 |
< |
involving water and phospholipids in Ch.~\ref{chaptLipids} are |
664 |
> |
involving water and phospholipids in Ch.~\ref{chapt:lipid} are |
665 |
|
required to integrate the equations of motions for dipoles on atoms. |
666 |
|
This involves an additional three parameters be specified for each |
667 |
|
dipole atom: $\phi$, $\theta$, and $\psi$. These three angles are |
668 |
|
taken to be the Euler angles, where $\phi$ is a rotation about the |
669 |
|
$z$-axis, and $\theta$ is a rotation about the new $x$-axis, and |
670 |
|
$\psi$ is a final rotation about the new $z$-axis (see |
671 |
< |
Fig.~\ref{introFig:euleerAngles}). This sequence of rotations can be |
672 |
< |
accumulated into a single $3 \times 3$ matrix $\mathbf{A}$ |
671 |
> |
Fig.~\ref{introFig:eulerAngles}). This sequence of rotations can be |
672 |
> |
accumulated into a single $3 \times 3$ matrix, $\mathbf{A}$, |
673 |
|
defined as follows: |
674 |
|
\begin{equation} |
675 |
< |
eq here |
675 |
> |
\mathbf{A} = |
676 |
> |
\begin{bmatrix} |
677 |
> |
\cos\phi\cos\psi-\sin\phi\cos\theta\sin\psi &% |
678 |
> |
\sin\phi\cos\psi+\cos\phi\cos\theta\sin\psi &% |
679 |
> |
\sin\theta\sin\psi \\% |
680 |
> |
% |
681 |
> |
-\cos\phi\sin\psi-\sin\phi\cos\theta\cos\psi &% |
682 |
> |
-\sin\phi\sin\psi+\cos\phi\cos\theta\cos\psi &% |
683 |
> |
\sin\theta\cos\psi \\% |
684 |
> |
% |
685 |
> |
\sin\phi\sin\theta &% |
686 |
> |
-\cos\phi\sin\theta &% |
687 |
> |
\cos\theta |
688 |
> |
\end{bmatrix} |
689 |
|
\label{introEq:EulerRotMat} |
690 |
|
\end{equation} |
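As a numerical check (not part of the original text), the matrix of Eq.~\ref{introEq:EulerRotMat} can be constructed for arbitrary angles and verified to be a proper rotation, satisfying $\mathbf{A}\mathbf{A}^T = \mathbf{1}$ and $\det\mathbf{A} = 1$:

```python
import numpy as np

def euler_matrix(phi, theta, psi):
    """Rotation matrix A: a z-rotation by phi, an x-rotation by theta,
    then a z-rotation by psi, accumulated as in Eq. (EulerRotMat)."""
    cphi, sphi = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    return np.array([
        [ cphi*cpsi - sphi*cth*spsi,  sphi*cpsi + cphi*cth*spsi,  sth*spsi],
        [-cphi*spsi - sphi*cth*cpsi, -sphi*spsi + cphi*cth*cpsi,  sth*cpsi],
        [ sphi*sth,                  -cphi*sth,                   cth     ]])

A = euler_matrix(0.3, 1.1, -0.7)            # arbitrary test angles
orthogonality_error = np.max(np.abs(A @ A.T - np.eye(3)))
det = np.linalg.det(A)                      # proper rotation: det(A) = +1
```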
691 |
|
|
692 |
< |
The equations of motion for Euler angles can be written down as |
693 |
< |
\cite{allen87:csl} |
694 |
< |
\begin{equation} |
695 |
< |
eq here |
696 |
< |
\label{introEq:MDeuleeerPsi} |
697 |
< |
\end{equation} |
692 |
> |
\begin{figure} |
693 |
> |
\centering |
694 |
> |
\includegraphics[width=\linewidth]{eulerRotFig.eps} |
695 |
> |
\caption[Euler rotation of Cartesian coordinates]{The rotation scheme for Euler angles. First is a rotation of $\phi$ about the $z$ axis (blue rotation). Next is a rotation of $\theta$ about the new $x$ axis (green rotation). Lastly is a final rotation of $\psi$ about the new $z$ axis (red rotation).} |
696 |
> |
\label{introFig:eulerAngles} |
697 |
> |
\end{figure} |
698 |
> |
|
699 |
> |
The equations of motion for Euler angles can be written down |
700 |
> |
as\cite{allen87:csl} |
701 |
> |
\begin{align} |
702 |
> |
\dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} + |
703 |
> |
\omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} + |
704 |
> |
\omega^s_z |
705 |
> |
\label{introEq:MDeulerPhi} \\% |
706 |
> |
% |
707 |
> |
\dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi |
708 |
> |
\label{introEq:MDeulerTheta} \\% |
709 |
> |
% |
710 |
> |
\dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} - |
711 |
> |
\omega^s_y \frac{\cos\phi}{\sin\theta} |
712 |
> |
\label{introEq:MDeulerPsi} |
713 |
> |
\end{align} |
714 |
|
Where $\omega^s_i$ is the angular velocity in the lab space frame |
715 |
|
along Cartesian coordinate $i$. However, a difficulty arises when |
716 |
|
attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and |
717 |
|
Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in |
718 |
|
both equations means there is a non-physical instability present when |
719 |
< |
$\theta$ is 0 or $\pi$. |
720 |
< |
|
721 |
< |
To correct for this, the simulations integrate the rotation matrix, |
722 |
< |
$\mathbf{A}$, directly, thus avoiding the instability. |
642 |
< |
This method was proposed by Dullwebber |
643 |
< |
\emph{et. al.}\cite{Dullwebber:1997}, and is presented in |
719 |
> |
$\theta$ is 0 or $\pi$. To correct for this, the simulations integrate |
720 |
> |
the rotation matrix, $\mathbf{A}$, directly, thus avoiding the |
721 |
> |
instability. This method was proposed by Dullweber |
722 |
> |
\emph{et al.}\cite{Dullweber1997}, and is presented in
723 |
|
Sec.~\ref{introSec:MDsymplecticRot}. |
724 |
|
|
725 |
< |
\subsubsection{\label{introSec:MDliouville}Liouville Propagator} |
725 |
> |
\subsection{\label{introSec:MDliouville}Liouville Propagator} |
726 |
|
|
727 |
|
Before discussing the integration of the rotation matrix, it is |
728 |
|
necessary to understand the construction of a ``good'' integration |
729 |
|
scheme. It has been previously |
730 |
< |
discussed(Sec.~\ref{introSec:MDintegrate}) how it is desirable for an |
730 |
> |
discussed (Sec.~\ref{introSec:mdIntegrate}) how it is desirable for an
731 |
|
integrator to be symplectic, or time reversible. The following is an |
732 |
|
outline of the Trotter factorization of the Liouville Propagator as a |
733 |
< |
scheme for generating symplectic integrators. \cite{Tuckerman:1992} |
733 |
> |
scheme for generating symplectic integrators.\cite{Tuckerman92} |
734 |
|
|
735 |
|
For a system with $f$ degrees of freedom the Liouville operator can be |
736 |
|
defined as, |
737 |
|
\begin{equation} |
738 |
< |
eq here |
738 |
> |
iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} + |
739 |
> |
F_j\frac{\partial}{\partial p_j} \biggr ] |
740 |
|
\label{introEq:LiouvilleOperator} |
741 |
|
\end{equation} |
742 |
< |
Here, $r_j$ and $p_j$ are the position and conjugate momenta of a |
743 |
< |
degree of freedom, and $f_j$ is the force on that degree of freedom. |
742 |
> |
Here, $q_j$ and $p_j$ are the position and conjugate momenta of a |
743 |
> |
degree of freedom, and $F_j$ is the force on that degree of freedom. |
744 |
|
$\Gamma$ is defined as the set of all positions and conjugate momenta, |
745 |
< |
$\{r_j,p_j\}$, and the propagator, $U(t)$, is defined |
745 |
> |
$\{q_j,p_j\}$, and the propagator, $U(t)$, is defined |
746 |
|
\begin {equation} |
747 |
< |
eq here |
747 |
> |
U(t) = e^{iLt} |
748 |
|
\label{introEq:Lpropagator} |
749 |
|
\end{equation} |
750 |
|
This allows the specification of $\Gamma$ at any time $t$ as |
751 |
|
\begin{equation} |
752 |
< |
eq here |
752 |
> |
\Gamma(t) = U(t)\Gamma(0) |
753 |
|
\label{introEq:Lp2} |
754 |
|
\end{equation} |
755 |
|
It is important to note, $U(t)$ is a unitary operator meaning |
760 |
|
|
761 |
|
Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the |
762 |
|
Trotter theorem to yield |
763 |
< |
\begin{equation} |
764 |
< |
eq here |
765 |
< |
\label{introEq:Lp4} |
766 |
< |
\end{equation} |
767 |
< |
Where $\Delta t = \frac{t}{P}$. |
763 |
> |
\begin{align} |
764 |
> |
e^{iLt} &= e^{i(L_1 + L_2)t} \notag \\% |
765 |
> |
% |
766 |
> |
&= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P \notag \\% |
767 |
> |
% |
768 |
> |
&= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\, |
769 |
> |
e^{iL_1\frac{\Delta t}{2}} \biggr ]^P + |
770 |
> |
\mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ) \label{introEq:Lp4} |
771 |
> |
\end{align} |
772 |
> |
Where $\Delta t = t/P$. |
773 |
|
With this, a discrete time operator $G(\Delta t)$ can be defined: |
774 |
< |
\begin{equation} |
775 |
< |
eq here |
774 |
> |
\begin{align} |
775 |
> |
G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\, |
776 |
> |
e^{iL_1\frac{\Delta t}{2}} \notag \\% |
777 |
> |
% |
778 |
> |
&= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\, |
779 |
> |
U_1 \biggl ( \frac{\Delta t}{2} \biggr ) |
780 |
|
\label{introEq:Lp5} |
781 |
< |
\end{equation} |
782 |
< |
Because $U_1(t)$ and $U_2(t)$ are unitary, $G|\Delta t)$ is also |
781 |
> |
\end{align} |
782 |
> |
Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also |
783 |
|
unitary. Meaning an integrator based on this factorization will be |
784 |
|
reversible in time. |
785 |
|
|
786 |
|
As an example, consider the following decomposition of $L$: |
787 |
+ |
\begin{align} |
788 |
+ |
iL_1 &= \dot{q}\frac{\partial}{\partial q}% |
789 |
+ |
\label{introEq:Lp6a} \\% |
790 |
+ |
% |
791 |
+ |
iL_2 &= F(q)\frac{\partial}{\partial p}% |
792 |
+ |
\label{introEq:Lp6b} |
793 |
+ |
\end{align} |
794 |
+ |
This leads to the propagator $G(\Delta t)$ as,
795 |
|
\begin{equation} |
796 |
< |
eq here |
797 |
< |
\label{introEq:Lp6} |
796 |
> |
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \, |
797 |
> |
e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \, |
798 |
> |
e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} |
799 |
> |
\label{introEq:Lp7} |
800 |
|
\end{equation} |
801 |
< |
Operating $G(\Delta t)$ on $\Gamma)0)$, and utilizing the operator property |
801 |
> |
Operating $G(\Delta t)$ on $\Gamma(0)$, and utilizing the operator property |
802 |
|
\begin{equation} |
803 |
< |
eq here |
803 |
> |
e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c) |
804 |
|
\label{introEq:Lp8} |
805 |
|
\end{equation} |
806 |
< |
Where $c$ is independent of $q$. One obtains the following: |
807 |
< |
\begin{equation} |
808 |
< |
eq here |
809 |
< |
\label{introEq:Lp8} |
810 |
< |
\end{equation} |
806 |
> |
Where $c$ is independent of $x$. One obtains the following: |
807 |
> |
\begin{align} |
808 |
> |
\dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &= |
809 |
> |
\dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9a}\\% |
810 |
> |
% |
811 |
> |
q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr )% |
812 |
> |
\label{introEq:Lp9b}\\% |
813 |
> |
% |
814 |
> |
\dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) + |
815 |
> |
\frac{\Delta t}{2m}\, F[q(\Delta t)] \label{introEq:Lp9c}
816 |
> |
\end{align} |
817 |
|
Or written another way, |
818 |
< |
\begin{equation} |
819 |
< |
eq here |
820 |
< |
\label{intorEq:Lp9} |
821 |
< |
\end{equation} |
818 |
> |
\begin{align} |
819 |
> |
q(t+\Delta t) &= q(0) + \dot{q}(0)\Delta t + |
820 |
> |
\frac{F[q(0)]}{m}\frac{\Delta t^2}{2} % |
821 |
> |
\label{introEq:Lp10a} \\% |
822 |
> |
% |
823 |
> |
\dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m} |
824 |
> |
\biggl [F[q(0)] + F[q(\Delta t)] \biggr] % |
825 |
> |
\label{introEq:Lp10b} |
826 |
> |
\end{align} |
827 |
|
This is the velocity Verlet formulation presented in |
828 |
< |
Sec.~\ref{introSec:MDintegrate}. Because this integration scheme is |
828 |
> |
Sec.~\ref{introSec:mdIntegrate}. Because this integration scheme is |
829 |
|
comprised of unitary propagators, it is symplectic, and therefore area |
830 |
|
preserving in phase space. From the preceding factorization, one can |
831 |
|
see that the integration of the equations of motion would follow: |
839 |
|
\item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values. |
840 |
|
\end{enumerate} |
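The time reversibility implied by this unitary factorization can be demonstrated directly: integrating forward with the kick-drift-kick propagator, negating the velocity, and integrating the same number of steps recovers the initial phase space point to round-off. A unit harmonic oscillator is assumed as the test system:

```python
def step(q, v, force, m, dt):
    """One application of G(dt) = U1(dt/2) U2(dt) U1(dt/2)."""
    v += 0.5 * dt * force(q) / m     # U1(dt/2): half kick
    q += dt * v                      # U2(dt): full drift
    v += 0.5 * dt * force(q) / m     # U1(dt/2): half kick
    return q, v

force = lambda q: -q                 # unit harmonic oscillator
m, dt = 1.0, 0.05
q, v = 1.0, 0.0
for _ in range(1000):
    q, v = step(q, v, force, m, dt)
v = -v                               # reverse the momentum
for _ in range(1000):
    q, v = step(q, v, force, m, dt)
# (q, v) returns to (1.0, 0.0) up to floating point error
```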
841 |
|
|
842 |
< |
\subsubsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix} |
842 |
> |
\subsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix} |
843 |
|
|
844 |
|
Based on the factorization from the previous section, |
845 |
< |
Dullweber\emph{et al.}\cite{Dullweber:1997}~ proposed a scheme for the |
845 |
> |
Dullweber \emph{et al.}\cite{Dullweber1997} proposed a scheme for the
846 |
|
symplectic propagation of the rotation matrix, $\mathbf{A}$, as an |
847 |
|
alternative method for the integration of orientational degrees of |
848 |
|
freedom. The method starts with a straightforward splitting of the |
849 |
|
Liouville operator: |
850 |
< |
\begin{equation} |
851 |
< |
eq here |
852 |
< |
\label{introEq:SR1} |
853 |
< |
\end{equation} |
854 |
< |
Where $\boldsymbol{\tau}(\mathbf{A})$ are the torques of the system |
855 |
< |
due to the configuration, and $\boldsymbol{/pi}$ are the conjugate |
850 |
> |
\begin{align} |
851 |
> |
iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} + |
852 |
> |
\mathbf{\dot{A}}\frac{\partial}{\partial \mathbf{A}} |
853 |
> |
\label{introEq:SR1a} \\% |
854 |
> |
% |
855 |
> |
iL_F &= F(q)\frac{\partial}{\partial p} |
856 |
> |
\label{introEq:SR1b} \\% |
857 |
> |
iL_{\tau} &= \tau(\mathbf{A})\frac{\partial}{\partial \pi} |
858 |
> |
\label{introEq:SR1c}
859 |
> |
\end{align} |
860 |
> |
Where $\tau(\mathbf{A})$ is the torque of the system |
861 |
> |
due to the configuration, and $\pi$ is the conjugate |
862 |
|
angular momentum of the system. The propagator, $G(\Delta t)$, becomes
863 |
|
\begin{equation} |
864 |
< |
eq here |
864 |
> |
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \, |
865 |
> |
e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \, |
866 |
> |
e^{\Delta t\,iL_{\text{pos}}} \, |
867 |
> |
e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \, |
868 |
> |
e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} |
869 |
|
\label{introEq:SR2} |
870 |
|
\end{equation} |
Propagation of the linear and angular momenta follows as in the Verlet
scheme. The propagation of positions also follows the Verlet scheme
with the addition of a further symplectic splitting of the rotation
matrix propagation, $\mathcal{U}_{\text{rot}}(\Delta t)$, within
$\mathcal{U}_{\text{pos}}(\Delta t)$.
\begin{equation} |
\mathcal{U}_{\text{rot}}(\Delta t) =
	\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
	\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
	\mathcal{U}_z (\Delta t)\,
	\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
	\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)
\label{introEq:SR3}
\end{equation}
Where $\mathcal{U}_j$ is a unitary rotation of $\mathbf{A}$ and
$\pi$ about each axis $j$. As all propagations are now unitary and
symplectic, the entire integration scheme is also symplectic and
time reversible.

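As a concrete sketch of this splitting, the fragment below propagates
$\mathbf{A}$ and the body-frame angular momentum $\pi$ through the
$x$--$y$--$z$--$y$--$x$ sequence of Eq.~\ref{introEq:SR3}. It assumes a
diagonal inertia tensor and a free-rotor substep angle of
$\theta_j = \Delta t\, \pi_j / I_j$; the function names, argument
conventions, and rotation sign are illustrative choices, not those of
any particular simulation package.

```python
import numpy as np

def axis_rotation(j, dt, A, pi, I):
    """One single-axis substep U_j(dt): rotate both the rotation
    matrix A and the angular momentum pi about body axis j by the
    free-rotor angle theta = dt * pi[j] / I[j]."""
    theta = dt * pi[j] / I[j]
    c, s = np.cos(theta), np.sin(theta)
    k, l = [(1, 2), (2, 0), (0, 1)][j]   # the two axes orthogonal to j
    R = np.eye(3)
    R[k, k], R[k, l] = c, s
    R[l, k], R[l, l] = -s, c
    return R @ A, R @ pi

def rotate_dlm(dt, A, pi, I):
    """Palindromic x-y-z-y-x splitting of the free rotation; the
    symmetric ordering is what makes the substep time reversible."""
    for j, h in [(0, dt/2), (1, dt/2), (2, dt), (1, dt/2), (0, dt/2)]:
        A, pi = axis_rotation(j, h, A, pi, I)
    return A, pi
```

Because every substep is an orthogonal rotation, $\mathbf{A}$ remains
orthogonal and $|\pi|$ is conserved to machine precision, which can be
checked directly on the output of \texttt{rotate\_dlm}.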
\section{\label{introSec:layout}Dissertation Layout} |

This dissertation is divided as follows: Ch.~\ref{chapt:RSA}
presents the random sequential adsorption simulations of related
phthalocyanines on a gold (111) surface. Ch.~\ref{chapt:OOPSE}
describes the writing of the molecular dynamics simulation package
{\sc oopse}. Ch.~\ref{chapt:lipid} covers the simulations of
phospholipid bilayers using a mesoscale model. Lastly,
Ch.~\ref{chapt:conclusion} concludes this dissertation with a
summary of all results. The chapters are arranged in chronological
order, and reflect the progression of techniques I employed during
my research.

The chapter concerning random sequential adsorption simulations is a
study in applying statistical mechanics simulation techniques in
order to obtain a simple model capable of explaining the results. My
advisor, Dr. Gezelter, and I were approached by a colleague,
Dr. Lieberman, about possible explanations for the partial coverage
of a gold surface by a particular compound of hers. We suggested it
might be due to the statistical packing fraction of disks on a
plane, and set about to simulate this system. As the events in our
model were not dynamic in nature, a Monte Carlo method was employed.
Here, if a molecule landed on the surface without overlapping
another, then its landing was accepted. However, if there was
overlap, the landing was rejected and a new random landing location
was chosen. This defined our acceptance rules and allowed us to
construct a Markov chain whose limiting distribution was the surface
coverage in which we were interested.
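The acceptance rule just described can be sketched in a few lines of
Python. This is a minimal illustration of hard-disk random sequential
adsorption on a square patch with hard boundaries; the disk radius,
box size, and function names here are hypothetical, not parameters
from the actual study.

```python
import math
import random

def rsa_coverage(radius, box, attempts, seed=42):
    """Random sequential adsorption of hard disks on a square surface.

    Each trial drops a disk center uniformly at random; the landing is
    accepted only if the new disk overlaps no previously adsorbed
    disk, otherwise it is rejected and a fresh location is tried."""
    rng = random.Random(seed)
    accepted = []
    contact2 = (2.0 * radius) ** 2  # squared center-to-center contact distance
    for _ in range(attempts):
        x, y = rng.uniform(0.0, box), rng.uniform(0.0, box)
        if all((x - a) ** 2 + (y - b) ** 2 >= contact2 for a, b in accepted):
            accepted.append((x, y))
    coverage = len(accepted) * math.pi * radius ** 2 / box ** 2
    return len(accepted), coverage
```

Because adsorbed disks are never removed, the coverage of such a
process saturates well below close packing (the two-dimensional RSA
jamming limit is roughly 0.55), which is exactly the kind of partial
surface coverage the model was meant to explain.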

The following chapter, about the simulation package {\sc oopse},
describes in detail the large body of scientific code that had to be
written in order to study phospholipid bilayers. Although there are
pre-existing molecular dynamics simulation packages available, none
were capable of implementing the models we were developing. {\sc
oopse} is a unique package capable of not only integrating the
equations of

This brings us to Ch.~\ref{chapt:lipid}. Using {\sc oopse}, I have
been able to parameterize a mesoscale model for phospholipid
simulations. This model retains information about solvent ordering
around the bilayer, as well as information regarding the interaction
of the phospholipid head groups' dipoles with each other and the
surrounding solvent. These simulations give us insight into the
dynamic events that lead to the formation of phospholipid bilayers,
as well as provide the foundation for future exploration of bilayer
phase