| 30 |   thermodynamic properties of the system are being probed, then choose
| 31 |   which method best suits that objective.
| 32 |
| 33 | < \subsection{\label{introSec:statThermo}Statistical Thermodynamics}
| 33 | > \subsection{\label{introSec:statThermo}Statistical Mechanics}
| 34 |
| 35 | < ergodic hypothesis
| 35 | > The following section serves as a brief introduction to some of the
| 36 | > Statistical Mechanics concepts present in this dissertation. What
| 37 | > follows is a brief derivation of Boltzmann-weighted statistics, and an
| 38 | > explanation of how one can use this information to calculate an
| 39 | > observable for a system. This section then concludes with a brief
| 40 | > discussion of the ergodic hypothesis and its relevance to this
| 41 | > research.
| 42 |
| 43 | < enesemble averages
| 43 | > \subsection{\label{introSec:boltzman}Boltzmann-Weighted Statistics}
| 44 | >
| 45 | > Consider a system, $\gamma$, with some total energy, $E_{\gamma}$.
| 46 | > Let $\Omega(E_{\gamma})$ represent the number of degenerate ways
| 47 | > $\boldsymbol{\Gamma}$, the collection of positions and conjugate
| 48 | > momenta of system $\gamma$, can be configured to give
| 49 | > $E_{\gamma}$. Further, if $\gamma$ is in contact with a bath system
| 50 | > where energy is exchanged between the two systems, $\Omega(E)$, where
| 51 | > $E$ is the total energy of both systems, can be represented as
| 52 | > \begin{equation}
| 53 | > \Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma})
| 54 | > \label{introEq:SM1}
| 55 | > \end{equation}
| 56 | > Or additively as
| 57 | > \begin{equation}
| 58 | > \ln \Omega(E) = \ln \Omega(E_{\gamma}) + \ln \Omega(E - E_{\gamma})
| 59 | > \label{introEq:SM2}
| 60 | > \end{equation}
| 61 | >
| 62 | > The equilibrium partitioning of energy between the two systems is the
| 63 | > one that maximizes the number of degenerate configurations, found by
| 64 | > maximizing Eq.~\ref{introEq:SM2} with respect to $E_{\gamma}$.\cite{fix} This gives
| 65 | > \begin{equation}
| 66 | > \frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} + \frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}} \frac{\partial E_{\text{bath}}}{\partial E_{\gamma}} = 0
| 67 | > \label{introEq:SM3}
| 68 | > \end{equation}
| 69 | > Where $E_{\text{bath}}$ is $E-E_{\gamma}$, and
| 70 | > $\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}$ is
| 71 | > $-1$. Eq.~\ref{introEq:SM3} becomes
| 72 | > \begin{equation}
| 73 | > \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} = \frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
| 74 | > \label{introEq:SM4}
| 75 | > \end{equation}
| 76 | >
| 77 | > At this point, one can draw a relationship between the maximization of
| 78 | > degeneracy in Eq.~\ref{introEq:SM3} and the second law of
| 79 | > thermodynamics. Namely, that for a closed system, entropy will
| 80 | > increase for an irreversible process.\cite{fix} Here the
| 81 | > process is the partitioning of energy between the two systems. This
| 82 | > allows the following definition of entropy:
| 83 | > \begin{equation}
| 84 | > S = k_B \ln \Omega(E)
| 85 | > \label{introEq:SM5}
| 86 | > \end{equation}
| 87 | > Where $k_B$ is the Boltzmann constant. Having defined entropy, one can
| 88 | > also define the temperature of the system using the relation
| 89 | > \begin{equation}
| 90 | > \frac{1}{T} = \left( \frac{\partial S}{\partial E} \right)_{N,V}
| 91 | > \label{introEq:SM6}
| 92 | > \end{equation}
| 93 | > The temperature in the system $\gamma$ is then
| 94 | > \begin{equation}
| 95 | > \frac{1}{T_{\gamma}} = k_B \frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
| 96 | > \label{introEq:SM7}
| 97 | > \end{equation}
| 98 | > Applying this to Eq.~\ref{introEq:SM4} gives the following
| 99 | > \begin{equation}
| 100 | > \frac{1}{T_{\gamma}} = \frac{1}{T_{\text{bath}}}
| 101 | > \label{introEq:SM8}
| 102 | > \end{equation}
| 103 | > This shows that the partitioning of energy between the two systems is
| 104 | > actually a process of thermal equilibration.\cite{fix}
| 105 | >
| 106 | > One application of these results is to derive the form of the
| 107 | > expectation value of an observable, $A$, in the canonical ensemble. In
| 108 | > the canonical ensemble, the number of particles, $N$, the volume, $V$,
| 109 | > and the temperature, $T$, are all held constant while the energy, $E$,
| 110 | > is allowed to fluctuate. Returning to the previous example, the bath
| 111 | > system is now an infinitely large thermal bath, whose exchange of
| 112 | > energy with the system $\gamma$ holds the temperature constant. The
| 113 | > partitioning of energy in the bath system is then related to the total
| 114 | > energy of both systems and the fluctuations in $E_{\gamma}$:
| 115 | > \begin{equation}
| 116 | > \Omega(E_{\text{bath}}) = \Omega(E - E_{\gamma})
| 117 | > \label{introEq:SM9}
| 118 | > \end{equation}
| 119 | > The expectation value itself can then be defined as
| 120 | > \begin{equation}
| 121 | > \langle A \rangle = \int_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}\, P_{\gamma} A(\boldsymbol{\Gamma})
| 122 | > \label{introEq:SM10}
| 123 | > \end{equation}
| 124 | > Where $\int_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes an
| 125 | > integration over all accessible phase space, $P_{\gamma}$ is the
| 126 | > probability of being in a given phase state and
| 127 | > $A(\boldsymbol{\Gamma})$ is some observable that is a function of the
| 128 | > phase state.
| 129 | >
|
| 130 |
> |
Because entropy seeks to maximize the number of degenerate states at a |
| 131 |
> |
given energy, the probability of being in a particular state in |
| 132 |
> |
$\gamma$ will be directly proportional to the number of allowable |
| 133 |
> |
states the coupled system is able to assume. Namely, |
| 134 |
> |
\begin{equation} |
| 135 |
> |
eq here |
| 136 |
> |
\label{introEq:SM11} |
| 137 |
> |
\end{equation} |
| 138 |
> |
With $E_{\gamma} \lE$, $\ln \Omega$ can be expanded in a Taylor series: |
| 139 |
> |
\begin{equation} |
| 140 |
> |
eq here |
| 141 |
> |
\label{introEq:SM12} |
| 142 |
> |
\end{equation} |
| 143 |
> |
Higher order terms are omitted as $E$ is an infinite thermal |
| 144 |
> |
bath. Further, using Eq.~\ref{introEq:SM7}, Eq.~\ref{introEq:SM11} can |
| 145 |
> |
be rewritten: |
| 146 |
> |
\begin{equation} |
| 147 |
> |
eq here |
| 148 |
> |
\label{introEq:SM13} |
| 149 |
> |
\end{equation} |
| 150 |
> |
Where $\ln \Omega(E)$ has been factored out of the porpotionality as a |
| 151 |
> |
constant. Normalizing the probability ($\int_{\boldsymbol{\Gamma}} |
| 152 |
> |
d\boldsymbol{\Gamma} P_{\gamma} =1$ gives |
| 153 |
> |
\begin{equation} |
| 154 |
> |
eq here |
| 155 |
> |
\label{introEq:SM14} |
| 156 |
> |
\end{equation} |
| 157 |
> |
This result is the standard Boltzman statistical distribution. |
| 158 |
> |
Applying it to Eq.~\ref{introEq:SM10} one can obtain the following relation for ensemble averages: |
| 159 |
> |
\begin{equation} |
| 160 |
> |
eq here |
| 161 |
> |
\label{introEq:SM15} |
| 162 |
> |
\end{equation} |
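As a purely illustrative aside (not taken from this work), the discrete analogue of the Boltzmann distribution and ensemble average above can be evaluated in a few lines of Python; the state energies, observable values, and the choice of reduced units (k_B T = 1) are assumed for the example only.

import math

# Hypothetical state energies E_i (in units of k_B*T) and observable values A_i.
energies = [0.0, 0.5, 1.3, 2.1]
observable = [1.0, 2.0, 3.0, 4.0]

weights = [math.exp(-e) for e in energies]        # unnormalized Boltzmann factors
Z = sum(weights)                                  # normalization (partition function)
probs = [w / Z for w in weights]                  # discrete analogue of the Boltzmann distribution

avg_A = sum(a * p for a, p in zip(observable, probs))  # discrete analogue of the ensemble average
print(f"Z = {Z:.4f}   <A> = {avg_A:.4f}")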
| 163 | >
| 164 | > \subsection{\label{introSec:ergodic}The Ergodic Hypothesis}
| 165 | >
| 166 | > One last important consideration is that of ergodicity. Ergodicity is
| 167 | > the assumption that given an infinite amount of time, a system will
| 168 | > visit every available point in phase space.\cite{fix} For most
| 169 | > systems, this is a valid assumption, except in cases where the system
| 170 | > may be trapped in a local feature (\emph{e.g.}, glasses). When valid,
| 171 | > ergodicity allows the unification of a time averaged observation and
| 172 | > an ensemble averaged one. If an observation is averaged over a
| 173 | > sufficiently long time, the system is assumed to visit all
| 174 | > appropriately available points in phase space, giving a properly
| 175 | > weighted statistical average. This allows the researcher freedom of
| 176 | > choice when deciding how best to measure a given observable. When an
| 177 | > ensemble averaged approach seems most logical, the Monte Carlo
| 178 | > techniques described in Sec.~\ref{introSec:MC} can be utilized.
| 179 | > Conversely, if a problem lends itself to a time averaging approach,
| 180 | > the Molecular Dynamics techniques in Sec.~\ref{introSec:MD} can be
| 181 | > employed.
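To make the equivalence concrete, the following sketch (illustrative only; every parameter is assumed) estimates the mean of x^2 for a particle in a one-dimensional harmonic well both as a long-time average over a stochastic trajectory and as a Boltzmann-weighted (Metropolis) ensemble average; when ergodicity holds, both estimates approach k_B T / k.

import math, random

random.seed(1)
k, kT, gamma, dt = 1.0, 1.0, 1.0, 1e-3   # assumed force constant, k_B*T, friction, timestep

# Time average: long overdamped Langevin (Brownian dynamics) trajectory in U(x) = k x^2 / 2.
x, acc, nsteps = 0.0, 0.0, 200_000
noise = math.sqrt(2.0 * kT * dt / gamma)
for _ in range(nsteps):
    x += -(k * x / gamma) * dt + random.gauss(0.0, noise)
    acc += x * x
time_avg = acc / nsteps

# Ensemble average: Metropolis sampling of the Boltzmann distribution exp(-U / k_B*T).
x, acc, nsamples = 0.0, 0.0, 200_000
for _ in range(nsamples):
    trial = x + random.uniform(-0.5, 0.5)
    if random.random() < math.exp(-0.5 * k * (trial * trial - x * x) / kT):
        x = trial
    acc += x * x
ensemble_avg = acc / nsamples

print(f"time average of x^2     ~ {time_avg:.3f}")
print(f"ensemble average of x^2 ~ {ensemble_avg:.3f}   (exact: {kT / k:.3f})")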
| 182 |
| 183 |   \subsection{\label{introSec:monteCarlo}Monte Carlo Simulations}
| 184 |
| 387 |
| 388 |   The choice of when to use molecular dynamics over Monte Carlo
| 389 |   techniques is normally decided by the observables in which the
| 390 | < researcher is interested. If the observabvles depend on momenta in
| 390 | > researcher is interested. If the observables depend on momenta in
| 391 |   any fashion, then the only choice is molecular dynamics in some form.
| 392 |   However, when the observable is dependent only on the configuration,
| 393 |   then most of the time Monte Carlo techniques will be more efficient.
| 450 |
| 451 |   Another consideration one must resolve is that in a given simulation
| 452 |   a disproportionate number of the particles will feel the effects of
| 453 | < the surface. \cite{fix} For a cubic system of 1000 particles arranged
| 453 | > the surface.\cite{fix} For a cubic system of 1000 particles arranged
| 454 |   in a $10 \times 10 \times 10$ cube, 488 particles will be exposed to the surface.
| 455 |   Unless one is simulating an isolated particle group in a vacuum, the
| 456 |   behavior of the system will be far from the desired bulk
| 457 |   characteristics. To offset this, simulations employ the use of
| 458 | < periodic boundary images. \cite{fix}
| 458 | > periodic boundary images.\cite{fix}
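For reference, the 488 surface particles quoted above are simply the sites that remain after the interior 8 x 8 x 8 core is removed from the 10 x 10 x 10 lattice:

    10^3 - 8^3 = 1000 - 512 = 488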
| 459 |
| 460 |   The technique involves the use of an algorithm that replicates the
| 461 |   simulation box on an infinite lattice in Cartesian space. Any given
| 473 |   cutoff radius be employed. Using a cutoff radius improves the
| 474 |   efficiency of the force evaluation, as particles farther than a
| 475 |   predetermined distance, $fix$, are not included in the
| 476 | < calculation. \cite{fix} In a simultation with periodic images, $fix$
| 476 | > calculation.\cite{fix} In a simulation with periodic images, $fix$
| 477 |   has a maximum value of $fix$. Fig.~\ref{fix} illustrates how using an
| 478 |   $fix$ larger than this value, or in the extreme limit of no $fix$ at
| 479 |   all, the corners of the simulation box are unequally weighted due to
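As an illustration of the periodic-image bookkeeping described above (not code from this work), the sketch below wraps coordinates back into the primary box and applies the minimum-image convention before testing a pair against a cutoff; the cutoff is capped at half the box length, which is presumably the maximum value referred to above. Box size and positions are assumed values.

box = 10.0          # cubic box edge length (assumed)
r_cut = 0.5 * box   # a cutoff radius no larger than half the box edge

def wrap(coord):
    """Map a coordinate back into the primary simulation box [0, box)."""
    return coord % box

def minimum_image(dx):
    """Shift a separation component to its nearest periodic image."""
    return dx - box * round(dx / box)

def within_cutoff(p1, p2):
    """True if the minimum-image distance between p1 and p2 is below r_cut."""
    d2 = sum(minimum_image(a - b) ** 2 for a, b in zip(p1, p2))
    return d2 < r_cut ** 2

# Example: two particles that look far apart are close through the boundary.
print(within_cutoff((0.2, 0.1, 0.3), (9.8, 0.2, 0.4)))   # True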
| 527 |   order of $\Delta t^4$.
| 528 |
| 529 |   In practice, however, the simulations in this research were integrated
| 530 | < with a velocity reformulation of teh Verlet method. \cite{allen87:csl}
| 530 | > with a velocity reformulation of the Verlet method.\cite{allen87:csl}
| 531 |   \begin{equation}
| 532 |   q(t + \Delta t) = q(t) + v(t) \Delta t + \frac{F(t)}{2m} \Delta t^2
| 533 |   \label{introEq:MDvelVerletPos}
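A minimal sketch of how a single velocity-Verlet step might look in code is given below; the force function, mass, and timestep are placeholders, and the sketch only illustrates the update order implied by the position and velocity equations (it is not the implementation used in this work).

def velocity_verlet_step(q, v, force, m, dt):
    """Advance position q and velocity v of a 1-D particle by one timestep dt."""
    f_old = force(q)
    q_new = q + v * dt + 0.5 * (f_old / m) * dt * dt   # position update
    f_new = force(q_new)
    v_new = v + 0.5 * ((f_old + f_new) / m) * dt        # velocity update
    return q_new, v_new

# Example: harmonic oscillator F(q) = -k q with k = m = 1 (assumed parameters).
q, v = 1.0, 0.0
for _ in range(1000):
    q, v = velocity_verlet_step(q, v, lambda x: -x, 1.0, 0.01)
print(q, v)   # the trajectory stays near the initial energy surface (q^2 + v^2 ~ 1)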
| 550 |   therefore increases, but does not guarantee the ``correctness'' of the
| 551 |   integrated trajectory.
| 552 |
| 553 | < It can be shown, \cite{Frenkel1996} that although the Verlet algorithm
| 553 | > It can be shown\cite{Frenkel1996} that although the Verlet algorithm
| 554 |   does not rigorously preserve the actual Hamiltonian, it does preserve
| 555 |   a pseudo-Hamiltonian which shadows the real one in phase space. This
| 556 |   pseudo-Hamiltonian is provably area-conserving as well as time
| 716 |   unitary and symplectic, the entire integration scheme is also
| 717 |   symplectic and time reversible.
| 718 |
| 719 | < \section{\label{introSec:chapterLayout}Chapter Layout}
| 719 | > \section{\label{introSec:layout}Dissertation Layout}
| 720 |
| 721 | < \subsection{\label{introSec:RSA}Random Sequential Adsorption}
| 721 | > This dissertation is divided as follows: Chapt.~\ref{chapt:RSA}
| 722 | > presents the random sequential adsorption simulations of related
| 723 | > phthalocyanines on a gold (111) surface. Chapt.~\ref{chapt:OOPSE}
| 724 | > describes the development of the molecular dynamics simulation package
| 725 | > {\sc oopse}, Chapt.~\ref{chapt:lipid} presents the simulations of
| 726 | > phospholipid bilayers using a mesoscale model, and lastly,
| 727 | > Chapt.~\ref{chapt:conclusion} concludes this dissertation with a
| 728 | > summary of all results. The chapters are arranged in chronological
| 729 | > order, and reflect the progression of techniques I employed during my
| 730 | > research.
| 731 |
| 732 | < \subsection{\label{introSec:OOPSE}The OOPSE Simulation Package}
| 732 | > The chapter concerning random sequential adsorption
| 733 | > simulations is a study in applying the principles of theoretical
| 734 | > research in order to obtain a simple model capable of explaining the
| 735 | > results. My advisor, Dr. Gezelter, and I were approached by a
| 736 | > colleague, Dr. Lieberman, about possible explanations for partial
| 737 | > coverage of a gold surface by a particular compound of hers. We
| 738 | > suggested it might be due to the statistical packing fraction of disks
| 739 | > on a plane, and set about to simulate this system. As the events in
| 740 | > our model were not dynamic in nature, a Monte Carlo method was
| 741 | > employed. Here, if a molecule landed on the surface without
| 742 | > overlapping another, then its landing was accepted. However, if there
| 743 | > was overlap, the landing was rejected and a new random landing location
| 744 | > was chosen. This defined our acceptance rules and allowed us to
| 745 | > construct a Markov chain whose limiting distribution was the surface
| 746 | > coverage in which we were interested.
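A minimal sketch of this accept/reject rule (illustrative only; the box size, disk radius, and number of attempts are arbitrary assumed values, and no periodic boundaries are used) might look like:

import random

random.seed(1)
L, radius, attempts = 20.0, 1.0, 50_000
min_sq = (2.0 * radius) ** 2          # squared center-to-center contact distance
disks = []

for _ in range(attempts):
    x, y = random.uniform(0, L), random.uniform(0, L)
    if all((x - px) ** 2 + (y - py) ** 2 >= min_sq for px, py in disks):
        disks.append((x, y))          # landing accepted: no overlap with adsorbed disks

coverage = len(disks) * 3.141592653589793 * radius ** 2 / L ** 2
print(f"adsorbed disks: {len(disks)}, fractional coverage ~ {coverage:.3f}")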
| 747 |
| 748 | < \subsection{\label{introSec:bilayers}A Mesoscale Model for
| 749 | < Phospholipid Bilayers}
| 748 | > The following chapter, about the simulation package {\sc oopse},
| 749 | > describes in detail the large body of scientific code that had to be
| 750 | > written in order to study phospholipid bilayers. Although there are
| 751 | > pre-existing molecular dynamics simulation packages available, none
| 752 | > were capable of implementing the models we were developing. {\sc oopse}
| 753 | > is a unique package capable not only of integrating the equations of
| 754 | > motion in Cartesian space, but also of integrating the
| 755 | > rotational motion of rigid bodies and dipoles. Add to this the
| 756 | > ability to perform calculations across parallel processors and a
| 757 | > flexible script syntax for creating systems, and {\sc oopse} becomes a
| 758 | > very powerful scientific instrument for the exploration of our model.
| 759 | >
| 760 | > This brings us to Chapt.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
| 761 | > able to parametrize a mesoscale model for phospholipid simulations.
| 762 | > This model retains information about solvent ordering around the
| 763 | > bilayer, as well as information regarding the interaction of the
| 764 | > phospholipid head groups' dipoles with each other and the surrounding
| 765 | > solvent. These simulations give us insight into the dynamic events
| 766 | > that lead to the formation of phospholipid bilayers, as well as
| 767 | > provide the foundation for future exploration of bilayer phase
| 768 | > behavior with this model.
| 769 | >
| 770 | > This leads into the last chapter, where I discuss future directions
| 771 | > for both {\sc oopse} and this mesoscale model. Additionally, I will
| 772 | > give a summary of results for this dissertation.
| 773 | >
| 774 | >