
Comparing trunk/chrisDissertation/Introduction.tex (file contents):
Revision 3019 by chrisfen, Fri Sep 22 13:45:24 2006 UTC vs.
Revision 3025 by chrisfen, Tue Sep 26 16:02:25 2006 UTC

# Line 1 | Line 1
1 < \chapter{\label{chap:intro}INTRODUCTION AND BACKGROUND}
1 > \chapter{\label{chap:intro}INTRODUCTION AND BACKGROUND}
2  
3 + The following dissertation presents the primary aspects of the
4 + research I have performed and been involved with over the last several
5 + years. Rather than being presented in a chronological fashion, the
6 + topics are arranged to form a series where the later topics apply and
7 + extend the findings of the former topics. This layout does lead to
8 + occasional situations where knowledge gleaned from earlier chapters
9 + (particularly chapter \ref{chap:electrostatics}) is not applied
10 + outright in the later work; however, I feel that this organization is
11 + more instructive and provides a more cohesive progression of research
12 + efforts.
13 +
14 + This introductory chapter gives a general overview of molecular
15 + simulations, with particular emphasis on considerations that need to
16 + be made in order to apply the technique properly. This leads quite
17 + naturally into chapter \ref{chap:electrostatics}, where we investigate
18 + correction techniques for proper handling of long-ranged electrostatic
19 + interactions in molecular simulations. In particular we develop and
20 + evaluate some new simple pairwise methods. These techniques make an
21 + appearance in the later chapters, as they are applied to specific
22 + systems of interest, showing how they can improve the quality of
23 + various molecular simulations.
24 +
25 + In chapter \ref{chap:water}, we focus on simple water models,
26 + specifically the single-point soft sticky dipole (SSD) family of water
27 + models. These single-point models are more efficient than the common
28 + multi-point partial charge models and, in many cases, better capture
29 + the dynamic properties of water. We discuss improvements to these
30 + models with regard to long-range electrostatic corrections and show
31 + that these models can work well with the techniques discussed in
32 + chapter \ref{chap:electrostatics}. By investigating and improving
33 + simple water models, we are extending the limits of computational
34 + efficiency for systems that depend heavily on water calculations.
35 +
36 + In chapter \ref{chap:ice}, we study a unique polymorph of ice that we
37 + discovered while performing water simulations with the fast simple
38 + water models discussed in the previous chapter. This form of ice,
39 + which we call ``imaginary ice'' (Ice-$i$), has a low-density structure
40 + which is different from any known polymorph characterized in either
41 + experiment or other simulations. In this work, we perform a free
42 + energy analysis and see that this structure is in fact the
43 + thermodynamically preferred form of ice for both the single-point and
44 + commonly used multi-point water models under the chosen simulation
45 + conditions. We also consider electrostatic corrections, again
46 + including the techniques discussed in chapter
47 + \ref{chap:electrostatics}, to see how they influence the free energy
48 + results. This work, to some degree, addresses the appropriateness of
49 + using these simplistic water models outside of the conditions for
50 + which they were developed.
51 +
52 + Finally, in chapter \ref{chap:conclusion}, we summarize the work
53 + presented in the previous chapters and connect ideas together with
54 + some final comments. The supporting information follows in the
55 + appendix, and it gives a more detailed look at systems discussed in
56 + chapter \ref{chap:electrostatics}.
57 +
58 + \section{On Molecular Simulation}
59 +
60   In order to advance our understanding of natural chemical and physical
61   processes, researchers develop explanations for events observed
62   experimentally. These hypotheses, supported by a body of corroborating
63   observations, can develop into accepted theories for how these
64   processes occur. This validation process, as well as testing the
65   limits of these theories, is essential in developing a firm foundation
66 < for our knowledge. Theories involving molecular scale systems cause
67 < particular difficulties in this process because their size and often
68 < fast motions make them difficult to observe experimentally. One useful
69 < tool for addressing these difficulties is computer simulation.
66 > for our knowledge. Developing and validating theories involving
67 > molecular scale systems is particularly difficult because their size
68 > and often fast motions make them difficult to observe
69 > experimentally. One useful tool for addressing these difficulties is
70 > computer simulation.
71  
72   In computer simulations, we can develop models from either the theory
73 < or experimental knowledge and then test them in a controlled
74 < environment. Work done in this manner allows us to further refine
73 > or our experimental knowledge and then test them in controlled
74 > environments. Work done in this manner allows us to further refine
75   theories, more accurately represent what is happening in experimental
76   observations, and even make predictions about what one will see in
77 < experiments. Thus, computer simulations of molecular systems act as a
78 < bridge between theory and experiment.
77 > future experiments. Thus, computer simulations of molecular systems
78 > act as a bridge between theory and experiment.
79  
80   Depending on the system of interest, there are a variety of different
81   computational techniques that can be used to test and gather information
# Line 35 | Line 93 | molecular dynamics near exclusively, so we will presen
93   molecular dynamics can be used to investigate dynamical
94   quantities. The research presented in the following chapters utilized
95   molecular dynamics almost exclusively, so we will present a general
96 < introduction to molecular dynamics and not Monte Carlo. There are
97 < several resources available for those desiring a more in-depth
98 < presentation of either of these
41 < techniques.\cite{Allen87,Frenkel02,Leach01}
96 > introduction to molecular dynamics. There are several resources
97 > available for those desiring a more in-depth presentation of either of
98 > these techniques.\cite{Allen87,Frenkel02,Leach01}
99  
100   \section{\label{sec:MolecularDynamics}Molecular Dynamics}
101  
# Line 48 | Line 105 | given configuration of particles at time $t_1$, basica
105   configuration to evolve in a manner that mimics real molecular
106   systems. To do this, we start by clarifying what we know about a
107   given configuration of particles at time $t_1$, basically that each
108 < particle in the configuration has a position ($q$) and velocity
109 < ($\dot{q}$). We now want to know what the configuration will be at
108 > particle in the configuration has a position ($\mathbf{q}$) and velocity
109 > ($\dot{\mathbf{q}}$). We now want to know what the configuration will be at
110   time $t_2$. To find out, we need the classical equations of
111   motion, and one useful formulation of them is the Lagrangian form.
112  
# Line 65 | Line 122 | over time is zero,\cite{Tolman38}
122   motion, to say that the variation of the integral of the Lagrangian
123   over time is zero,\cite{Tolman38}
124   \begin{equation}
125 < \delta\int_{t_1}^{t_2}L(q,\dot{q})dt = 0.
125 > \delta\int_{t_1}^{t_2}L(\mathbf{q},\dot{\mathbf{q}})dt = 0.
126   \end{equation}
127 < This can be written as a summation of integrals to give
127 > The variation can be transferred to the variables that make up the
128 > Lagrangian,
129   \begin{equation}
130 < \int_{t_1}^{t_2}\sum_{i=1}^{3N}\left(\frac{\partial L}{\partial q_i}\delta q_i
131 <        + \frac{\partial L}{\partial \dot{q}_i}\delta \dot{q}_i\right)dt = 0.
130 > \int_{t_1}^{t_2}\sum_{i=1}^{N}\left(
131 >        \frac{\partial L}{\partial \mathbf{q}_i}\delta \mathbf{q}_i
132 >        + \frac{\partial L}{\partial \dot{\mathbf{q}}_i}\delta
133 >                \dot{\mathbf{q}}_i\right)dt = 0.
134   \end{equation}
135 < Using fact that $\dot q$ is the derivative with respect to time of $q$
136 < and integrating the second partial derivative in the parenthesis by
137 < parts, this equation simplifies to
135 > Using the fact that $\dot{\mathbf{q}}$ is the derivative of
136 > $\mathbf{q}$ with respect to time and integrating the second term in
137 > the parentheses by parts, this equation simplifies to
138   \begin{equation}
139   \int_{t_1}^{t_2}\sum_{i=1}^{N}\left(
140 <        \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i}
141 <        - \frac{\partial L}{\partial q_i}\right)\delta {q}_i dt = 0,
140 >        \frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{q}}_i}
141 >        - \frac{\partial L}{\partial \mathbf{q}_i}\right)
142 >                \delta {\mathbf{q}}_i dt = 0,
143   \end{equation}
144 < and the above equation is only valid for
144 > and since each variable is independent, we can separate the
145 > contribution from each of the variables:
146   \begin{equation}
147 < \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i}
148 <        - \frac{\partial L}{\partial q_i} = 0\quad\quad(i=1,2,\dots,3N).
147 > \frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{q}}_i}
148 >        - \frac{\partial L}{\partial \mathbf{q}_i} = 0
149 >                \quad\quad(i=1,2,\dots,N).
150   \label{eq:formulation}
151   \end{equation}
152   To obtain the classical equations of motion for the particles, we can
# Line 116 | Line 179 | m\frac{d^2\mathbf{r}_i}{dt^2} = \sum_{j\ne i}\mathbf{f
179   \begin{equation}
180   m\frac{d^2\mathbf{r}_i}{dt^2} = \sum_{j\ne i}\mathbf{f}(r_{ij}),
181   \end{equation}
182 < with the following simple algorithm:
182 > with the following algorithm:
183   \begin{equation}
184   \mathbf{r}_i(t+\delta t) = -\mathbf{r}_i(t-\delta t) + 2\mathbf{r}_i(t)
185          + \frac{\delta t^2}{m}\sum_{j\ne i}\mathbf{f}(r_{ij}(t)),
# Line 126 | Line 189 | differential equation. The velocities are necessary fo
189   interesting to note that equation (\ref{eq:verlet}) does not include
190   velocities, and this makes sense since they are not present in the
191   differential equation. The velocities are necessary for the
192 < calculation of the kinetic energy, and they can be calculated at each
193 < time step with the following equation:
192 > calculation of the kinetic energy and can be calculated at each time
193 > step with the equation:
194   \begin{equation}
195   \mathbf{v}_i(t) = \frac{\mathbf{r}_i(t+\delta t)-\mathbf{r}_i(t-\delta t)}
196                         {2\delta t}.
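Both of these relations can be motivated by Taylor expanding the
positions forward and backward in time (a standard argument, sketched
here only for completeness),
\begin{equation}
\mathbf{r}_i(t\pm\delta t) = \mathbf{r}_i(t) \pm \mathbf{v}_i(t)\delta t
        + \frac{1}{2}\mathbf{a}_i(t)\delta t^2
        \pm \frac{1}{6}\frac{d^3\mathbf{r}_i}{dt^3}\delta t^3
        + \mathcal{O}(\delta t^4).
\end{equation}
Adding the two expansions cancels the odd-order terms and recovers the
position update above with an error of $\mathcal{O}(\delta t^4)$, while
subtracting them and dividing by $2\delta t$ gives the central
difference velocity above with an error of $\mathcal{O}(\delta t^2)$.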
# Line 141 | Line 204 | symplectic, meaning that it preserves area in phase-sp
204   higher order information that is discarded after integrating
205   steps. Another interesting property of this algorithm is that it is
206   symplectic, meaning that it preserves area in phase-space. Symplectic
207 < algorithms system stays in the region of phase-space dictated by the
208 < ensemble and enjoy greater long-time energy
209 < conservations.\cite{Frenkel02}
207 > algorithms keep the system evolving in the region of phase-space
208 > dictated by the ensemble and enjoy a greater degree of energy
209 > conservation.\cite{Frenkel02}
210  
211   While the error in the positions calculated using the Verlet algorithm
212   is small ($\mathcal{O}(\delta t^4)$), the error in the velocities is
213 < quite a bit larger ($\mathcal{O}(\delta t^2)$).\cite{Allen87} Swope
214 < {\it et al.}  developed a corrected for of this algorithm, the
213 > substantially larger ($\mathcal{O}(\delta t^2)$).\cite{Allen87} Swope
214 > {\it et al.}  developed a corrected version of this algorithm, the
215   `velocity Verlet' algorithm, which improves the error of the velocity
216   calculation and thus the energy conservation.\cite{Swope82} This
217   algorithm involves a full step of the positions,
# Line 161 | Line 224 | and a half step of the velocities,
224   \mathbf{v}\left(t+\frac{1}{2}\delta t\right) = \mathbf{v}(t)
225                                          + \frac{1}{2}\delta t\mathbf{a}(t).
226   \end{equation}
227 < After calculating new forces, the velocities can be updated to a full
228 < step,
227 > After forces are calculated at the new positions, the velocities can
228 > be updated to a full step,
229   \begin{equation}
230   \mathbf{v}(t+\delta t) = \mathbf{v}\left(t+\frac{1}{2}\delta t\right)
231                                  + \frac{1}{2}\delta t\mathbf{a}(t+\delta t).
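As an illustrative sketch (not the actual implementation used in this
work), a single velocity-Verlet step for point particles can be written
in a few lines of Python, assuming a user-supplied \texttt{force}
routine and an array of particle masses \texttt{m}:
\begin{verbatim}
import numpy as np

def velocity_verlet_step(r, v, f, m, dt, force):
    """Advance positions r, velocities v, and forces f by one step dt."""
    a = f / m[:, None]                   # accelerations at time t
    v_half = v + 0.5 * dt * a            # half-step velocity update
    r_new = r + dt * v_half              # full-step position update
    f_new = force(r_new)                 # forces at the new positions
    a_new = f_new / m[:, None]
    v_new = v_half + 0.5 * dt * a_new    # complete the velocity step
    return r_new, v_new, f_new
\end{verbatim}
Looping over such steps, carrying the forces from one step to the next,
reproduces the scheme described above.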
# Line 177 | Line 240 | Rigid bodies are non-spherical particles or collection
240   \subsection{\label{sec:IntroIntegrate}Rigid Body Motion}
241  
242   Rigid bodies are non-spherical particles or collections of particles
243 < (e.g. $\mbox{C}_{60}$) that have a constant internal potential and
244 < move collectively.\cite{Goldstein01} Discounting iterative constraint
245 < procedures like {\sc shake} and {\sc rattle} for approximating rigid
246 < bodies, they are not included in most simulation packages because of
247 < the algorithmic complexity involved in propagating orientational
243 > that have a constant internal potential and move
244 > collectively.\cite{Goldstein01} To move these particles, the
245 > translational and rotational motion can be integrated
246 > independently. Discounting iterative constraint procedures like {\sc
247 > shake} and {\sc rattle} for approximating rigid body dynamics, rigid
248 > bodies are not included in most simulation packages because of the
249 > algorithmic complexity involved in propagating the orientational
250   degrees of freedom.\cite{Ryckaert77,Andersen83,Krautler01} Integrators
251   which propagate orientational motion with an acceptable level of
252   energy conservation for molecular dynamics are relatively new
# Line 205 | Line 270 | $\mathbf{r}_{ia}$, and $\boldsymbol{\tau}_{ia}$ are th
270   where $\boldsymbol{\tau}_i$ and $\mathbf{r}_i$ are the torque on and
271   position of the center of mass respectively, while $\mathbf{f}_{ia}$,
272   $\mathbf{r}_{ia}$, and $\boldsymbol{\tau}_{ia}$ are the force on,
273 < position of, and torque on the component particles.
273 > position of, and torque on the component particles of the rigid body.
274  
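As a sketch of this bookkeeping (using the standard definitions of the
net force and of the torque taken about the center of mass, which may
differ cosmetically from the expressions referenced above), the
contributions of the component sites can be accumulated as follows:
\begin{verbatim}
import numpy as np

def accumulate_rigid_body(r_com, site_pos, site_forces, site_torques):
    """Net force and torque (about the center of mass) on a rigid body."""
    f_total = site_forces.sum(axis=0)
    arms = site_pos - r_com              # lever arms from the center of mass
    tau_total = (np.cross(arms, site_forces).sum(axis=0)
                 + site_torques.sum(axis=0))
    return f_total, tau_total
\end{verbatim}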
275   The summation of the total torque is done in the body-fixed frame. In
276   order to move between the space-fixed and body-fixed coordinate axes,
# Line 230 | Line 295 | $\mathsf{A}$ can be expressed as arithmetic operations
295   representation of the orientation of a rigid
296   body.\cite{Evans77,Evans77b,Allen87} Thus, the elements of
297   $\mathsf{A}$ can be expressed as arithmetic operations involving the
298 < four quaternions ($q_w, q_x, q_y,$ and $q_z$),
298 > four quaternions ($q_0, q_1, q_2,$ and $q_3$),
299   \begin{equation}
300   \mathsf{A} = \left( \begin{array}{l@{\quad}l@{\quad}l}
301   q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2+q_0q_3) & 2(q_1q_3-q_0q_2) \\
# Line 241 | Line 306 | performance enhancements over Euler angles, particular
306   Integration of the equations of motion involves a series of arithmetic
307   operations involving the quaternions and angular momenta and leads to
308   performance enhancements over Euler angles, particularly for very
309 < small systems.\cite{Evans77} This integration methods works well for
309 > small systems.\cite{Evans77} This integration method works well for
310   propagating orientational motion in the canonical ensemble ($NVT$);
311   however, energy conservation concerns arise when using the simple
312   quaternion technique under the microcanonical ($NVE$) ensemble.  An
313 < earlier implementation of {\sc oopse} utilized quaternions for
313 > earlier implementation of our simulation code utilized quaternions for
314   propagation of rotational motion; however, a detailed investigation
315   showed that they resulted in a steady drift in the total energy,
316 < something that has been observed by Kol {\it et al.} (also see
317 < section~\ref{sec:waterSimMethods}).\cite{Kol97}
316 > something that had also been observed by Kol {\it et al.} (see
317 > section~\ref{sec:waterSimMethods} for information on this
318 > investigation).\cite{Kol97}
319  
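For concreteness, the rotation matrix $\mathsf{A}$ can be assembled
directly from the four quaternion components. The sketch below assumes
the standard convention whose first row matches the equation above; it
is an illustration rather than a transcription of the full expression:
\begin{verbatim}
import numpy as np

def rotation_matrix_from_quaternion(q0, q1, q2, q3):
    """3x3 rotation matrix A built from a unit quaternion."""
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 + q0*q3), 2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2), 2*(q2*q3 - q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])
\end{verbatim}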
320 < Because of the outlined issues involving integration of the
321 < orientational motion using both Euler angles and quaternions, we
322 < decided to focus on a relatively new scheme that propagates the entire
323 < nine parameter rotation matrix. This techniques is a velocity-Verlet
324 < version of the symplectic splitting method proposed by Dullweber,
325 < Leimkuhler and McLachlan ({\sc dlm}).\cite{Dullweber97} When there are
326 < no directional atoms or rigid bodies present in the simulation, this
327 < integrator becomes the standard velocity-Verlet integrator which is
328 < known to effectively sample the microcanonical ($NVE$)
320 > Because of these issues involving integration of the orientational
321 > motion using both Euler angles and quaternions, we decided to focus on
322 > a relatively new scheme that propagates the entire nine parameter
323 > rotation matrix. This technique is a velocity-Verlet version of the
324 > symplectic splitting method proposed by Dullweber, Leimkuhler and
325 > McLachlan ({\sc dlm}).\cite{Dullweber97} When there are no directional
326 > atoms or rigid bodies present in the simulation, this {\sc dlm}
327 > integrator reduces to the standard velocity-Verlet integrator, which
328 > is known to effectively sample the constant energy $NVE$
329   ensemble.\cite{Frenkel02}
330  
331   The key aspect of the integration method proposed by Dullweber
# Line 272 | Line 338 | The integration of the equations of motion is carried
338   substantial benefits in energy conservation.
339  
340   The integration of the equations of motion is carried out in a
341 < velocity-Verlet style two-part algorithm.\cite{Swope82} The first-part
341 > velocity-Verlet style two-part algorithm.\cite{Swope82} The first part
342   ({\tt moveA}) consists of a half-step ($t + \delta t/2$) of the linear
343   velocity (${\bf v}$) and angular momenta ({\bf j}) and a full-step ($t
344   + \delta t$) of the positions ({\bf r}) and rotation matrix,
# Line 334 | Line 400 | propagator.\cite{Trotter59} This has three effects:
400   {\it symplectic}),
401   \item the integrator is time-{\it reversible}, and
402   \item the error for a single time step is of order
403 < $\mathcal{O}\left(\delta t^4\right)$ for time steps of length $\delta t$.
403 > $\mathcal{O}\left(\delta t^3\right)$ for time steps of length $\delta t$.
404   \end{enumerate}
405  
406   After the initial half-step ({\tt moveA}), the forces and torques are
# Line 421 | Line 487 | integrator, while the quaternion scheme will require ~
487   0.001~kcal~mol$^{-1}$ per particle is desired, a nanosecond of
488   simulation time will require $\sim$19 hours of CPU time with the {\sc dlm}
489   integrator, while the quaternion scheme will require $\sim$154 hours of CPU
490 < time. This demonstrates the computational advantage of the integration
491 < scheme utilized in {\sc oopse}.
490 > time. This demonstrates the computational advantage of the {\sc dlm}
491 > integration scheme.
492  
493   \section{Accumulating Interactions}
494  
# Line 431 | Line 497 | become a cumbersome task for large systems since the n
497   potential and forces (and torques if the particle is a rigid body or
498   multipole) on each particle from its surroundings. This can quickly
499   become a cumbersome task for large systems since the number of pair
500 < interactions increases by $\mathcal{O}(N(N-1)/2)$ if you accumulate
501 < interactions between all particles in the system, note the utilization
502 < of Newton's third law to reduce the interaction number from
503 < $\mathcal{O}(N^2)$. The case of periodic boundary conditions further
504 < complicates matters by turning the finite system into an infinitely
505 < repeating one. Fortunately, we can reduce the scale of this problem by
506 < using spherical cutoff methods.
500 > interactions increases as $\mathcal{O}(N(N-1)/2)$ when accumulating
501 > interactions between all particles in the system. (Note the
502 > utilization of Newton's third law to reduce the interaction number
503 > from $\mathcal{O}(N^2)$.) Using periodic boundary conditions (section
504 > \ref{sec:periodicBoundaries}) further complicates matters by turning
505 > the finite system into an infinitely repeating one. Fortunately, we
506 > can reduce the scale of this problem by using spherical cutoff
507 > methods.
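Before any cutoff is applied, this accumulation is just a double loop
over the $N(N-1)/2$ unique pairs; a minimal sketch (with a generic
\texttt{pair\_force} routine standing in for the real interactions) is:
\begin{verbatim}
import numpy as np

def accumulate_forces(r, pair_force):
    """Sum pair forces over the N(N-1)/2 unique pairs of particles."""
    n = len(r)
    forces = np.zeros_like(r)
    for i in range(n - 1):
        for j in range(i + 1, n):
            fij = pair_force(r[i], r[j])   # force on i due to j
            forces[i] += fij
            forces[j] -= fij               # Newton's third law
    return forces
\end{verbatim}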
508  
509   \begin{figure}
510   \centering
# Line 450 | Line 517 | interactions between all particles in the simulation,
517   \end{figure}
518   With spherical cutoffs, rather than accumulating the full set of
519   interactions between all particles in the simulation, we only
520 < explicitly consider interactions between local particles out to a
521 < specified cutoff radius distance, $R_\textrm{c}$, (see figure
520 > explicitly consider interactions between particles separated by less
521 > than a specified cutoff radius, $R_\textrm{c}$ (see figure
522   \ref{fig:sphereCut}). This reduces the scaling of the interaction to
523   $\mathcal{O}(N\cdot\textrm{c})$, where `c' is a value that depends on
524   the size of $R_\textrm{c}$ (c $\propto R_\textrm{c}^3$). Determination
525   of which particles are within the cutoff is also an issue, because
526   this process requires a full loop over all $N(N-1)/2$ pairs. To reduce
527   this expense, we can use neighbor lists.\cite{Verlet67,Thompson83}
528 < With neighbor lists, we have a second list of particles built from a
529 < list radius $R_\textrm{l}$, which is larger than $R_\textrm{c}$. Once
530 < any of the particles within $R_\textrm{l}$ move a distance of
531 < $R_\textrm{l}-R_\textrm{c}$ (the ``skin'' thickness), we rebuild the
532 < list with the full $N(N-1)/2$ loop.\cite{Verlet67} With an appropriate
533 < skin thickness, these updates are only performed every $\sim$20
534 < time steps, significantly reducing the time spent on pair-list
535 < bookkeeping operations.\cite{Allen87} If these neighbor lists are
536 < utilized, it is important that these list updates occur
528 >
529 > When using neighbor lists, we maintain a second list of particles built
530 > from a list radius $R_\textrm{l}$, which is larger than
531 > $R_\textrm{c}$. Once any particle within $R_\textrm{l}$ moves half the
532 > distance of $R_\textrm{l}-R_\textrm{c}$ (the ``skin'' thickness), we
533 > rebuild the list with the full $N(N-1)/2$ loop.\cite{Verlet67} With an
534 > appropriate skin thickness, these updates are only performed every
535 > $\sim$20 time steps, significantly reducing the time spent on
536 > pair-list bookkeeping operations.\cite{Allen87} If these neighbor
537 > lists are utilized, it is important that these list updates occur
538   regularly. Incorrect application of this technique leads to
539   non-physical dynamics, such as the ``flying block of ice'' behavior
540   for which improper neighbor list handling was identified as one of the
# Line 485 | Line 553 | region in order to smooth out the discontinuity.}
553   region in order to smooth out the discontinuity.}
554   \label{fig:ljCutoff}
555   \end{figure}
556 < As particle move in and out of $R_\textrm{c}$, there will be sudden
557 < discontinuous jumps in the potential (and forces) due to their
558 < appearance and disappearance. In order to prevent heating and poor
559 < energy conservation in the simulations, this discontinuity needs to be
560 < smoothed out. There are several ways to modify the function so that it
561 < crosses $R_\textrm{c}$ in a continuous fashion, and the easiest
562 < methods is shifting the potential. To shift the potential, we simply
563 < subtract out the value we calculate at $R_\textrm{c}$ from the whole
564 < potential. For the shifted form of the Lennard-Jones potential is
556 > As the distance between a pair of particles fluctuates around
557 > $R_\textrm{c}$, there will be sudden discontinuous jumps in the
558 > potential (and forces) due to their inclusion and exclusion from the
559 > interaction loop. In order to prevent heating and poor energy
560 > conservation in the simulations, this discontinuity needs to be
561 > smoothed out. There are several ways to modify the potential function
562 > to eliminate these discontinuities, and the easiest method is shifting
563 > the potential. To shift the potential, we simply subtract out the
564 > value of the function at $R_\textrm{c}$ from the whole potential. The
565 > shifted form of the Lennard-Jones potential is
566   \begin{equation}
567   V_\textrm{SLJ} = \left\{\begin{array}{l@{\quad\quad}l}
568          V_\textrm{LJ}(r_{ij}) - V_\textrm{LJ}(R_\textrm{c}) & r_{ij} < R_\textrm{c}, \\
# Line 502 | Line 571 | where
571   \end{equation}
572   where
573   \begin{equation}
574 < V_\textrm{LJ} = 4\epsilon\left[\left(\frac{\sigma}{r_{ij}}\right)^{12} -
575 <                                \left(\frac{\sigma}{r_{ij}}\right)^6\right].
574 > V_\textrm{LJ}(r_{ij}) =
575 >        4\epsilon\left[\left(\frac{\sigma}{r_{ij}}\right)^{12} -
576 >        \left(\frac{\sigma}{r_{ij}}\right)^6\right].
577   \end{equation}
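As a small illustration (a sketch rather than a production
implementation), the plain and shifted forms can be evaluated with a
few lines of Python:
\begin{verbatim}
import numpy as np

def v_lj(r, epsilon, sigma):
    """Plain Lennard-Jones pair potential."""
    return 4.0 * epsilon * ((sigma / r)**12 - (sigma / r)**6)

def v_shifted_lj(r, epsilon, sigma, r_c):
    """Shifted Lennard-Jones potential: zero at and beyond the cutoff."""
    return np.where(r < r_c,
                    v_lj(r, epsilon, sigma) - v_lj(r_c, epsilon, sigma),
                    0.0)
\end{verbatim}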
578   In figure \ref{fig:ljCutoff}, the shifted form of the potential
579   reaches zero at the cutoff radius at the expense of the correct
580 < magnitude below $R_\textrm{c}$. This correction method also does
580 > magnitude inside $R_\textrm{c}$. This correction method also does
581   nothing to correct the discontinuity in the forces. We can account for
582 < this force discontinuity by shifting in the by applying the shifting
583 < in the forces rather than just the potential via
582 > this force discontinuity by applying the shifting in the forces as
583 > well as in the potential via
584   \begin{equation}
585   V_\textrm{SFLJ} = \left\{\begin{array}{l@{\quad\quad}l}
586          V_\textrm{LJ}({r_{ij}}) - V_\textrm{LJ}(R_\textrm{c}) -
# Line 520 | Line 590 | The forces are continuous with this potential; however
590   \end{array}\right.
591   \end{equation}
592   The forces are continuous with this potential; however, the inclusion
593 < of the derivative term distorts the potential even further than the
594 < simple shifting as shown in figure \ref{fig:ljCutoff}. The method for
595 < correcting the discontinuity which results in the smallest
596 < perturbation in both the potential and the forces is the use of a
597 < switching function. The cubic switching function,
593 > of the derivative term distorts the potential even further than
594 > shifting only the potential as shown in figure \ref{fig:ljCutoff}.
595 >
596 > The method for correcting these discontinuities which results in the
597 > smallest perturbation in both the potential and the forces is the use
598 > of a switching function. The cubic switching function,
599   \begin{equation}
600   S(r) = \left\{\begin{array}{l@{\quad\quad}l}
601          1 & r_{ij} \leqslant R_\textrm{sw}, \\
# Line 544 | Line 615 | switching transition.
615  
616   \subsection{\label{sec:LJCorrections}Long-Range Interaction Corrections}
617  
618 < While a good approximation, accumulating interaction only from the
619 < nearby particles can lead to some issues, because the further away
620 < surrounding particles do still have a small effect. For instance,
618 > While a good approximation, accumulating interactions only from nearby
619 > particles can lead to some issues, because particles at distances
620 > greater than $R_\textrm{c}$ still have a small effect. For instance,
621   while the strength of the Lennard-Jones interaction is quite weak at
622   $R_\textrm{c}$ in figure \ref{fig:ljCutoff}, we are discarding all of
623 < the attractive interaction that extends out to extremely long
624 < distances. Thus, the potential is a little too high and the pressure
625 < on the central particle from the surroundings is a little too low. For
626 < homogeneous Lennard-Jones systems, we can correct for this neglect by
627 < assuming a uniform density and integrating the missing part,
623 > the attractive interactions that extend out to extremely long
624 > distances. Thus, the potential will be a little too high and the
625 > pressure on the central particle from the surroundings will be a
626 > little too low. For homogeneous Lennard-Jones systems, we can correct
627 > for this effect by assuming a uniform density ($\rho$) and integrating
628 > the missing part,
629   \begin{equation}
630   V_\textrm{full}(r_{ij}) \approx V_\textrm{c}
631                  + 2\pi N\rho\int^\infty_{R_\textrm{c}}r^2V_\textrm{LJ}(r)dr,
632   \end{equation}
633   where $V_\textrm{c}$ is the truncated Lennard-Jones
634 < potential.\cite{Allen87} Like the potential, other Lennard-Jones
635 < properties can be corrected by integration over the relevant
636 < function. Note that with heterogeneous systems, this correction begins
637 < to break down because the density is no longer uniform.
634 > potential.\cite{Allen87} Like the potential, other properties can be
635 > corrected by integration over the relevant function. Note that with
636 > heterogeneous systems, this correction breaks down because the density
637 > is no longer uniform.
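For the Lennard-Jones potential given earlier, this tail integral can
be evaluated analytically, giving the familiar energy correction
\begin{equation}
2\pi N\rho\int^\infty_{R_\textrm{c}}r^2V_\textrm{LJ}(r)dr =
        \frac{8}{3}\pi N\rho\epsilon\sigma^3\left[
        \frac{1}{3}\left(\frac{\sigma}{R_\textrm{c}}\right)^9
        - \left(\frac{\sigma}{R_\textrm{c}}\right)^3\right],
\end{equation}
which shows explicitly that the neglected contribution is dominated by
the attractive $r^{-6}$ term and decays as $R_\textrm{c}^{-3}$.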
638  
639   Correcting long-range electrostatic interactions is a topic of great
640   importance in the field of molecular simulations. There have been
641   several techniques developed to address this issue, like the Ewald
642   summation and the reaction field technique. An in-depth analysis of
643 < this problem, as well as useful corrective techniques, is presented in
643 > this problem, as well as useful correction techniques, is presented in
644   chapter \ref{chap:electrostatics}.
645  
646 < \subsection{Periodic Boundary Conditions}
646 > \subsection{\label{sec:periodicBoundaries}Periodic Boundary Conditions}
647  
648   In typical molecular dynamics simulations there are no restrictions
649   placed on the motion of particles outside of what the inter-particle
# Line 607 | Line 679 | unrestricted fashion while calculating interactions us
679   region in space. To avoid this ``soft'' confinement, it is common
680   practice to allow the particle coordinates to expand in an
681   unrestricted fashion while calculating interactions using a wrapped
682 < set of ``minimum image'' coordinates. These minimum image coordinates
683 < need not be stored because they are easily calculated on the fly when
684 < determining particle distances.
682 > set of ``minimum image'' coordinates. These coordinates need not be
683 > stored because they are easily calculated while determining particle
684 > distances.
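As a concrete sketch (for the simple case of a cubic box of side $L$;
non-orthorhombic cells require a slightly more general treatment), the
minimum image separation can be computed on the fly as:
\begin{verbatim}
import numpy as np

def minimum_image(r_i, r_j, box_length):
    """Displacement from particle j to particle i in a cubic periodic box."""
    dr = r_i - r_j
    dr -= box_length * np.round(dr / box_length)   # wrap to within L/2
    return dr
\end{verbatim}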
685  
686   \section{Calculating Properties}
687  
688 < In order to use simulations to model experimental process and evaluate
689 < theories, we need to be able to extract properties from the
688 > In order to use simulations to model experimental processes and
689 > evaluate theories, we need to be able to extract properties from the
690   results. In experiments, we can measure thermodynamic properties such
691   as the pressure and free energy. In computer simulations, we can
692   calculate properties from the motion and configuration of particles in
# Line 623 | Line 695 | isobaric-isothermal ($NPT$), and microcanonical ($NVE$
695  
696   The work presented in the later chapters uses the canonical ($NVT$),
697   isobaric-isothermal ($NPT$), and microcanonical ($NVE$) statistical
698 < mechanics ensembles. The different ensembles lend themselves to more
698 > mechanical ensembles. The different ensembles lend themselves to more
699   effectively calculating specific properties. For instance, if we are
700   concerned with the calculation of dynamic properties, which
701   are dependent upon the motion of the particles, it is better to choose
702 < an ensemble that does not add system motions to keep properties like
703 < the temperature or pressure constant. In this case, the $NVE$ ensemble
704 < would be the most appropriate choice. In chapter \ref{chap:ice}, we
705 < discuss calculating free energies, which are non-mechanical
706 < thermodynamic properties, and these calculations also depend on the
707 < chosen ensemble.\cite{Allen87} The Helmholtz free energy ($A$) depends
708 < on the $NVT$ partition function ($Q_{NVT}$),
702 > an ensemble that does not add artificial motions to keep properties
703 > like the temperature or pressure constant. In this case, the $NVE$
704 > ensemble would be the most appropriate choice. In chapter
705 > \ref{chap:ice}, we discuss calculating free energies, which are
706 > non-mechanical thermodynamic properties, and these calculations also
707 > depend on the chosen ensemble.\cite{Allen87} The Helmholtz free energy
708 > ($A$) depends on the $NVT$ partition function ($Q_{NVT}$),
709   \begin{equation}
710   A = -k_\textrm{B}T\ln Q_{NVT},
711   \end{equation}
# Line 652 | Line 724 | calculated from an average over the densities for the
724   chosen property like the density in the $NPT$ ensemble, where the
725   volume is allowed to fluctuate. The density for the simulation is
726   calculated from an average over the densities for the individual
727 < configurations. Being that the configurations from the integration
728 < process are related to one another by the time evolution of the
729 < interactions, this average is technically a time average. In
730 < calculating thermodynamic properties, we would actually prefer an
731 < ensemble average that is representative of all accessible states of
732 < the system. We can calculate thermodynamic properties from the time
733 < average by taking advantage of the ergodic hypothesis, which states
734 < that over a long period of time, the time average and the ensemble
735 < average are the same.
727 > configurations. Since the configurations from the integration process
728 > are related to one another by the time evolution of the interactions,
729 > this average is technically a time average. In calculating
730 > thermodynamic properties, we would actually prefer an ensemble average
731 > that is representative of all accessible states of the system. We can
732 > calculate thermodynamic properties from the time average by taking
733 > advantage of the ergodic hypothesis, which states that for a
734 > sufficiently chaotic system, and over a long enough period of time,
735 > the time and ensemble averages are the same.
736  
737 < In addition to the average, the fluctuations of that particular
738 < property can be determined via the standard deviation. Fluctuations
739 < are useful for measuring various thermodynamic properties in computer
737 > In addition to the average, the fluctuations of a particular property
738 > can be determined via the standard deviation. Not only are
739 > fluctuations useful for determining the spread of values around the
740 > average and the error in the calculation of the value, they are also
741 > useful for measuring various thermodynamic properties in computer
742   simulations. In section \ref{sec:t5peThermo}, we use fluctuations in
743 < properties like the enthalpy and volume to calculate various
743 > properties like the enthalpy and volume to calculate other
744   thermodynamic properties, such as the constant pressure heat capacity
745   and the isothermal compressibility.
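For example, in the $NPT$ ensemble the standard fluctuation relations
connect these quantities directly,
\begin{equation}
C_p = \frac{\langle\delta H^2\rangle_{NPT}}{k_\textrm{B}T^2}
        \quad\quad\textrm{and}\quad\quad
\kappa_T = \frac{\langle\delta V^2\rangle_{NPT}}{k_\textrm{B}T\langle V\rangle},
\end{equation}
where $\delta H$ and $\delta V$ are the instantaneous deviations of the
enthalpy and volume from their ensemble averages.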
746  
747 < \section{Application of the Techniques}
747 > \section{OOPSE}
748  
749   In the following chapters, the above techniques are all utilized in
750   the study of molecular systems. There are a number of excellent
# Line 678 | Line 752 | Because of our interest in rigid body dynamics, point
752   incorporate many of these
753   methods.\cite{Brooks83,MacKerell98,Pearlman95,Berendsen95,Lindahl01,Smith96,Ponder87}
754   Because of our interest in rigid body dynamics, point multipoles, and
755 < systems where the orientational degrees cannot be handled using the
756 < common constraint procedures (like {\sc shake}), the majority of the
757 < following work was performed using {\sc oopse}, the object-oriented
758 < parallel simulation engine.\cite{Meineke05} The {\sc oopse} package
759 < started out as a collection of separate programs written within our
760 < group, and has developed into one of the few parallel molecular
761 < dynamics packages capable of accurately integrating rigid bodies and
762 < point multipoles.
755 > systems where the orientational degrees of freedom cannot be handled
756 > using the common constraint procedures (like {\sc shake}), the
757 > majority of the following work was performed using {\sc oopse}, the
758 > object-oriented parallel simulation engine.\cite{Meineke05} The {\sc
759 > oopse} package started out as a collection of separate programs
760 > written within our group, and has developed into one of the few
761 > parallel molecular dynamics packages capable of accurately integrating
762 > rigid bodies and point multipoles. This simulation package is
763 > open-source software, available to anyone interested in performing
764 > molecular dynamics simulations. More information about {\sc oopse} can
765 > be found in reference \cite{Meineke05} or at the {\tt
766 > http://oopse.org} website.
767  
690 In chapter \ref{chap:electrostatics}, we investigate correction
691 techniques for proper handling of long-ranged electrostatic
692 interactions. In particular we develop and evaluate some new pairwise
693 methods which we have incorporated into {\sc oopse}. These techniques
694 make an appearance in the later chapters, as they are applied to
695 specific systems of interest, showing how it they can improve the
696 quality of various molecular simulations.
768  
698 In chapter \ref{chap:water}, we focus on simple water models,
699 specifically the single-point soft sticky dipole (SSD) family of water
700 models. These single-point models are more efficient than the common
701 multi-point partial charge models and, in many cases, better capture
702 the dynamic properties of water. We discuss improvements to these
703 models in regards to long-range electrostatic corrections and show
704 that these models can work well with the techniques discussed in
705 chapter \ref{chap:electrostatics}. By investigating and improving
706 simple water models, we are extending the limits of computational
707 efficiency for systems that heavily depend on water calculations.
708
709 In chapter \ref{chap:ice}, we study a unique polymorph of ice that we
710 discovered while performing water simulations with the fast simple
711 water models discussed in the previous chapter. This form of ice,
712 which we called ``imaginary ice'' (Ice-$i$), has a low-density
713 structure which is different from any known polymorph from either
714 experiment or other simulations. In this study, we perform a free
715 energy analysis and see that this structure is in fact the
716 thermodynamically preferred form of ice for both the single-point and
717 commonly used multi-point water models under the chosen simulation
718 conditions. We also consider electrostatic corrections, again
719 including the techniques discussed in chapter
720 \ref{chap:electrostatics}, to see how they influence the free energy
721 results. This work, to some degree, addresses the appropriateness of
722 using these simplistic water models outside of the conditions for
723 which they were developed.
724
725 In the final chapter we summarize the work presented previously. We
726 also close with some final comments before the supporting information
727 presented in the appendices.

Diff Legend

Removed lines
+ Added lines
< Changed lines (old revision)
> Changed lines (new revision)