\chapter{\label{chap:intro}INTRODUCTION AND BACKGROUND}

In order to advance our understanding of natural chemical and physical
processes, researchers develop explanations for events observed
experimentally. These hypotheses, supported by a body of corroborating
observations, can develop into accepted theories for how these
processes occur. This validation process, as well as testing the
limits of these theories, is essential in developing a firm foundation
for our knowledge. Theories involving molecular scale systems cause
particular difficulties in this process because their size and often
fast motions make them difficult to observe experimentally. One useful
tool for addressing these difficulties is computer simulation.

In computer simulations, we can develop models from either theory or
experimental knowledge and then test them in a controlled environment.
Work done in this manner allows us to further refine theories, more
accurately represent what is happening in experimental observations,
and even make predictions about what one will see in experiments.
Thus, computer simulations of molecular systems act as a bridge
between theory and experiment.

Depending on the system of interest, there are a variety of different
computational techniques that can be used to test and gather
information from the developed models. In the study of classical
systems, the two most commonly used techniques are Monte Carlo and
molecular dynamics. Both of these methods operate by calculating
interactions between particles of our model systems; however, the
progression of the simulation under the two techniques is vastly
different. Monte Carlo operates through random configuration changes
that follow rules adhering to a specific statistical mechanics
ensemble, while molecular dynamics is chiefly concerned with solving
the classical equations of motion to move between configurations
within an ensemble. Thermodynamic properties can be calculated with
both techniques, but because of the random nature of Monte Carlo, only
molecular dynamics can be used to investigate dynamical
quantities. The research presented in the following chapters utilized
molecular dynamics nearly exclusively, so we will present a general
introduction to molecular dynamics and not Monte Carlo. There are
several resources available for those desiring a more in-depth
presentation of either of these
techniques.\cite{Allen87,Frenkel02,Leach01}

\section{\label{sec:MolecularDynamics}Molecular Dynamics}

\section{\label{sec:MovingParticles}Propagating Particle Motion}

\subsection{\label{sec:IntroIntegrate}Rigid Body Motion}

Rigid bodies are non-spherical particles or collections of particles
(e.g. $\mbox{C}_{60}$) that have a constant internal potential and
move collectively.\cite{Goldstein01} Discounting iterative constraint
procedures like {\sc shake} and {\sc rattle}, they are not included in
most simulation packages because of the algorithmic complexity
involved in propagating orientational degrees of
freedom.\cite{Ryckaert77,Andersen83,Krautler01} Integrators which
propagate orientational motion with an acceptable level of energy
conservation for molecular dynamics are relatively new inventions.

Moving a rigid body involves determination of both the force and
torque applied by the surroundings, which directly affect the
translational and rotational motion in turn. In order to accumulate
the total force on a rigid body, the external forces and torques must
first be calculated for all the internal particles. The total force on
the rigid body is simply the sum of these external forces.
Accumulation of the total torque on the rigid body is more complex
than the force because the torque must be applied about the center of
mass of the rigid body. The space-fixed torque on rigid body $i$ is
\begin{equation}
\boldsymbol{\tau}_i=
\sum_{a}\biggl[(\mathbf{r}_{ia}-\mathbf{r}_i)\times \mathbf{f}_{ia}
+ \boldsymbol{\tau}_{ia}\biggr],
\label{eq:torqueAccumulate}
\end{equation}
where $\boldsymbol{\tau}_i$ and $\mathbf{r}_i$ are the torque on and
position of the center of mass, respectively, while $\mathbf{f}_{ia}$,
$\mathbf{r}_{ia}$, and $\boldsymbol{\tau}_{ia}$ are the force on,
position of, and torque on the component particles.

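As a concrete illustration of equation~(\ref{eq:torqueAccumulate}),
the short sketch below accumulates the total force and space-fixed
torque on a single rigid body from its component particles. The
function and variable names are our own and are not part of the {\sc
oopse} interface.
\begin{verbatim}
import numpy as np

def accumulate_force_and_torque(r_com, r_sites, f_sites, tau_sites):
    """Total force and space-fixed torque on one rigid body.

    r_com     : (3,)   center-of-mass position of the rigid body
    r_sites   : (n, 3) positions of the component particles
    f_sites   : (n, 3) forces on the component particles
    tau_sites : (n, 3) torques on the component particles (zeros for point sites)
    """
    f_total = f_sites.sum(axis=0)
    # tau_i = sum_a [ (r_ia - r_i) x f_ia + tau_ia ]
    tau_total = (np.cross(r_sites - r_com, f_sites) + tau_sites).sum(axis=0)
    return f_total, tau_total
\end{verbatim}
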
The summation of the total torque is done in the body-fixed frame. In
order to move between the space-fixed and body-fixed coordinate
frames, parameters describing the orientation must be maintained for
each rigid body. At a minimum, the rotation matrix ($\mathsf{A}$) can
be described by the three Euler angles ($\phi, \theta,$ and $\psi$),
where the elements of $\mathsf{A}$ are composed of trigonometric
operations involving $\phi, \theta,$ and $\psi$.\cite{Goldstein01}
Direct propagation of the Euler angles has a known $1/\sin\theta$
divergence in the equations of motion for $\phi$ and $\psi$, leading
to numerical instabilities any time one of the directional atoms or
rigid bodies has an orientation near $\theta=0$ or
$\theta=\pi$.\cite{Allen87} One of the most practical ways to avoid
this ``gimbal point'' is to switch to another angular set defining the
orientation of the rigid body near this point.\cite{Barojas73} This
procedure results in additional bookkeeping and increased algorithm
complexity. In the search for more elegant alternative methods, Evans
proposed the use of quaternions to describe and propagate
orientational motion.\cite{Evans77}

The quaternion method for integration involves a four-dimensional
representation of the orientation of a rigid
body.\cite{Evans77,Evans77b,Allen87} Thus, the elements of
$\mathsf{A}$ can be expressed as arithmetic operations involving the
four quaternions ($q_0, q_1, q_2,$ and $q_3$),
\begin{equation}
\mathsf{A} = \left( \begin{array}{l@{\quad}l@{\quad}l}
q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2+q_0q_3) & 2(q_1q_3-q_0q_2) \\
2(q_1q_2-q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2q_3+q_0q_1) \\
2(q_1q_3+q_0q_2) & 2(q_2q_3-q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \\
\end{array}\right).
\end{equation}
Integration of the equations of motion involves a series of arithmetic
operations involving the quaternions and angular momenta and leads to
performance enhancements over Euler angles, particularly for very
small systems.\cite{Evans77} This integration method works well for
propagating orientational motion in the canonical ensemble ($NVT$);
however, energy conservation concerns arise when using the simple
quaternion technique under the microcanonical ($NVE$) ensemble. An
earlier implementation of {\sc oopse} utilized quaternions for
propagation of rotational motion; however, a detailed investigation
showed that they resulted in a steady drift in the total energy,
something that has been observed by Kol {\it et al.} (also see
section~\ref{sec:waterSimMethods}).\cite{Kol97}

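For reference, the rotation matrix above can be assembled directly
from a normalized quaternion; the following minimal sketch (with a
function name of our own choosing) is included only to make the
arithmetic explicit.
\begin{verbatim}
import numpy as np

def rotation_matrix_from_quaternion(q):
    """3x3 rotation matrix A for a normalized quaternion q = (q0, q1, q2, q3)."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)
    return np.array([
        [q0*q0+q1*q1-q2*q2-q3*q3, 2*(q1*q2+q0*q3),         2*(q1*q3-q0*q2)],
        [2*(q1*q2-q0*q3),         q0*q0-q1*q1+q2*q2-q3*q3, 2*(q2*q3+q0*q1)],
        [2*(q1*q3+q0*q2),         2*(q2*q3-q0*q1),         q0*q0-q1*q1-q2*q2+q3*q3],
    ])

# the identity quaternion gives the identity matrix
assert np.allclose(rotation_matrix_from_quaternion(np.array([1.0, 0.0, 0.0, 0.0])),
                   np.eye(3))
\end{verbatim}
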
Because of the outlined issues involving integration of the
orientational motion using both Euler angles and quaternions, we
decided to focus on a relatively new scheme that propagates the entire
nine-parameter rotation matrix. This technique is a velocity-Verlet
version of the symplectic splitting method proposed by Dullweber,
Leimkuhler and McLachlan ({\sc dlm}).\cite{Dullweber97} When there are
no directional atoms or rigid bodies present in the simulation, this
integrator becomes the standard velocity-Verlet integrator, which is
known to effectively sample the microcanonical ($NVE$)
ensemble.\cite{Frenkel02}

The key aspect of the integration method proposed by Dullweber
\emph{et al.} is that the entire $3 \times 3$ rotation matrix is
propagated from one time step to the next. In the past, this would not
have been as feasible, since the rotation matrix for a single body has
nine elements compared with the more memory-efficient representations
(three Euler angles or four quaternions). Computer memory has become
much less costly in recent years, however, and this small additional
storage cost is repaid by substantial benefits in energy conservation.

The integration of the equations of motion is carried out in a
velocity-Verlet style two-part algorithm.\cite{Swope82} The first part
({\tt moveA}) consists of a half-step ($t + \delta t/2$) of the linear
velocity (${\bf v}$) and angular momenta (${\bf j}$) and a full step
($t + \delta t$) of the positions (${\bf r}$) and rotation matrix,
\begin{equation*}
{\tt moveA} = \left\{\begin{array}{r@{\quad\leftarrow\quad}l}
{\bf v}\left(t + \delta t / 2\right) & {\bf v}(t)
+ \left( {\bf f}(t) / m \right)(\delta t/2), \\
%
{\bf r}(t + \delta t) & {\bf r}(t)
+ {\bf v}\left(t + \delta t / 2 \right)\delta t, \\
%
{\bf j}\left(t + \delta t / 2 \right) & {\bf j}(t)
+ \boldsymbol{\tau}^b(t)(\delta t/2), \\
%
\mathsf{A}(t + \delta t) & \mathrm{rotate}\left( {\bf j}
(t + \delta t / 2)\delta t \cdot
\overleftrightarrow{\mathsf{I}}^{-1} \right),
\end{array}\right.
\end{equation*}
where $\overleftrightarrow{\mathsf{I}}^{-1}$ is the inverse of the
moment of inertia tensor. The $\mathrm{rotate}$ function is the
product of rotations about the three body-fixed axes,
\begin{equation}
\mathrm{rotate}({\bf a}) = \mathsf{G}_x(a_x / 2) \cdot
\mathsf{G}_y(a_y / 2) \cdot \mathsf{G}_z(a_z) \cdot \mathsf{G}_y(a_y /
2) \cdot \mathsf{G}_x(a_x /2),
\label{eq:dlmTrot}
\end{equation}
where each rotational propagator, $\mathsf{G}_\alpha(\theta)$, rotates
both the rotation matrix ($\mathsf{A}$) and the body-fixed angular
momentum (${\bf j}$) by an angle $\theta$ around body-fixed axis
$\alpha$,
\begin{equation}
\mathsf{G}_\alpha( \theta ) = \left\{
\begin{array}{l@{\quad\leftarrow\quad}l}
\mathsf{A}(t) & \mathsf{A}(0) \cdot \mathsf{R}_\alpha(\theta)^\textrm{T},\\
{\bf j}(t) & \mathsf{R}_\alpha(\theta) \cdot {\bf j}(0).
\end{array}
\right.
\end{equation}
$\mathsf{R}_\alpha$ is a quadratic approximation to the single-axis
rotation matrix. For example, in the small-angle limit, the rotation
matrix around the body-fixed x-axis can be approximated as
\begin{equation}
\mathsf{R}_x(\theta) \approx \left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & \frac{1-\theta^2 / 4}{1 + \theta^2 / 4} & -\frac{\theta}{1+\theta^2 / 4} \\
0 & \frac{\theta}{1+\theta^2 / 4} & \frac{1-\theta^2 / 4}{1 + \theta^2 / 4}
\end{array}
\right).
\end{equation}
The remaining rotations follow in a straightforward manner. As seen
from the form of equation~(\ref{eq:dlmTrot}), the {\sc dlm} method
uses a Trotter factorization of the orientational
propagator.\cite{Trotter59} This has three effects:
\begin{enumerate}
\item the integrator is area-preserving in phase space (i.e. it is
{\it symplectic}),
\item the integrator is time-{\it reversible}, and
\item the error for a single time step is of order
$\mathcal{O}\left(\delta t^4\right)$ for time steps of length $\delta t$.
\end{enumerate}

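To make the structure of the splitting explicit, the sketch below is a
literal transcription of equation~(\ref{eq:dlmTrot}) and the
$\mathsf{G}_\alpha$ update above for a single rigid body. The function
names are our own and this is not the {\sc oopse} implementation; note
that the quadratic form of $\mathsf{R}_\alpha$ is exactly orthogonal,
so repeated application does not degrade $\mathsf{A}$.
\begin{verbatim}
import numpy as np

def R_axis(axis, theta):
    """Single-axis rotation in the quadratic form quoted above (exactly orthogonal)."""
    c = (1.0 - theta**2 / 4.0) / (1.0 + theta**2 / 4.0)
    s = theta / (1.0 + theta**2 / 4.0)
    i, j = {0: (1, 2), 1: (2, 0), 2: (0, 1)}[axis]
    R = np.eye(3)
    R[i, i] = c; R[j, j] = c
    R[i, j] = -s; R[j, i] = s
    return R

def G(axis, theta, A, j):
    """G_alpha(theta): A <- A . R^T  and  j <- R . j."""
    R = R_axis(axis, theta)
    return A @ R.T, R @ j

def rotate(A, j, I_inv, dt):
    """rotate(a) with a = j(t+dt/2) dt . I^{-1}, applied as
    G_x(a_x/2) G_y(a_y/2) G_z(a_z) G_y(a_y/2) G_x(a_x/2)."""
    a = dt * (I_inv @ j)
    for axis, theta in ((0, a[0] / 2), (1, a[1] / 2), (2, a[2]),
                        (1, a[1] / 2), (0, a[0] / 2)):
        A, j = G(axis, theta, A, j)
    return A, j
\end{verbatim}
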
After the initial half-step ({\tt moveA}), the forces and torques are
evaluated for all of the particles. Once completed, the velocities can
be advanced to complete the second half of the two-part algorithm
({\tt moveB}), resulting in a completed full step of both the
positions and momenta,
\begin{equation*}
{\tt moveB} = \left\{\begin{array}{r@{\quad\leftarrow\quad}l}
{\bf v}\left(t + \delta t \right) &
{\bf v}\left(t + \delta t / 2 \right)
+ \left({\bf f}(t + \delta t) / m \right)(\delta t/2), \\
%
{\bf j}\left(t + \delta t \right) &
{\bf j}\left(t + \delta t / 2 \right)
+ \boldsymbol{\tau}^b(t + \delta t)(\delta t/2).
\end{array}\right.
\end{equation*}

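Putting the two halves together, the following is a minimal
single-body sketch of the {\tt moveA}/{\tt moveB} structure, with the
orientational update delegated to a rotate routine like the one
sketched above. The function signatures are placeholders of our own
devising, not the {\sc oopse} interface.
\begin{verbatim}
import numpy as np

def move_a(r, v, j, A, f, tau_b, m, I_inv, dt, rotate):
    """First half: v and j advance by dt/2, r by a full dt, A via rotate()."""
    v = v + (f / m) * (dt / 2.0)
    r = r + v * dt
    j = j + tau_b * (dt / 2.0)
    A, j = rotate(A, j, I_inv, dt)
    return r, v, j, A

# ... forces and torques are re-evaluated here using the new r and A ...

def move_b(v, j, f_new, tau_b_new, m, dt):
    """Second half: finish v and j with the forces and torques at t + dt."""
    v = v + (f_new / m) * (dt / 2.0)
    j = j + tau_b_new * (dt / 2.0)
    return v, j
\end{verbatim}
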
The matrix rotations used in the {\sc dlm} method end up being more
computationally costly than the simpler arithmetic of quaternion
propagation. With the same time step, a 1024-molecule water simulation
incurs approximately a 10\% increase in computation time using the
{\sc dlm} method in place of quaternions. This cost is more than
justified when comparing the energy conservation achieved by the two
methods. Figure \ref{fig:quatdlm} provides a comparative analysis of
the {\sc dlm} method versus the traditional quaternion scheme.

\begin{figure}
\centering
\includegraphics[width=3.5in]{./figures/dlmVsQuat.pdf}
\caption[Energy conservation analysis of the {\sc dlm} and quaternion
integration methods]{Analysis of the energy conservation of the {\sc
dlm} and quaternion integration methods. $\delta \mathrm{E}_1$ is the
linear drift in energy over time and $\delta \mathrm{E}_0$ is the
standard deviation of energy fluctuations around this drift. All
simulations were of a 1024-molecule SSD water system at 298 K starting
from the same initial configuration. Note that the {\sc dlm} method
provides more than an order-of-magnitude improvement in both the
energy drift and the size of the energy fluctuations when compared
with the quaternion method at any given time step. At time steps
larger than 4 fs, the quaternion scheme resulted in rapidly rising
energies which eventually led to simulation failure. Using the {\sc
dlm} method, time steps up to 8 fs can be taken before this behavior
is evident.}
\label{fig:quatdlm}
\end{figure}

In figure \ref{fig:quatdlm}, $\delta \mbox{E}_1$ is a measure of the
linear energy drift in units of $\mbox{kcal mol}^{-1}$ per particle
over a nanosecond of simulation time, and $\delta \mbox{E}_0$ is the
standard deviation of the energy fluctuations in units of $\mbox{kcal
mol}^{-1}$ per particle. In the top plot, it is apparent that the
energy drift is reduced by a significant amount (2 to 3 orders of
magnitude improvement at all tested time steps) by choosing the {\sc
dlm} method over the simple non-symplectic quaternion integration
method. In addition to this improvement in energy drift, the
fluctuations in the total energy are also damped by 1 to 2 orders of
magnitude by utilizing the {\sc dlm} method.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/compCost.pdf}
\caption[Energy drift as a function of required simulation run
time]{Energy drift as a function of required simulation run time.
$\delta \mathrm{E}_1$ is the linear drift in energy over time.
Simulations were performed on a single 2.5 GHz Pentium 4
processor. Simulation time comparisons can be made by tracing
horizontally from one curve to the other. For example, a simulation
that takes 24 hours using the {\sc dlm} method will take roughly
210 hours using the simple quaternion method if the same degree of
energy conservation is desired.}
\label{fig:cpuCost}
\end{figure}
Although the {\sc dlm} method is more computationally expensive than
the traditional quaternion scheme for propagating a single time step,
consideration of the computational cost for a long simulation with a
particular level of energy conservation is in order. A plot of energy
drift versus computational cost was generated
(Fig.~\ref{fig:cpuCost}). This figure provides an estimate of the CPU
time required under the two integration schemes for 1 nanosecond of
simulation time for the model 1024-molecule system. By choosing a
desired energy drift value, it is possible to determine the CPU time
required for both methods. If a $\delta \mbox{E}_1$ of
0.001~kcal~mol$^{-1}$ per particle is desired, a nanosecond of
simulation time will require $\sim$19 hours of CPU time with the {\sc
dlm} integrator, while the quaternion scheme will require $\sim$154
hours of CPU time. This demonstrates the computational advantage of
the integration scheme utilized in {\sc oopse}.

\subsection{Periodic Boundary Conditions}

\begin{figure}
\centering
\includegraphics[width=4.5in]{./figures/periodicImage.pdf}
\caption{With periodic boundary conditions imposed, when particles
move out of one side of the simulation box, they wrap back in on the
opposite side. In this manner, a finite system of particles behaves as
an infinite system.}
\label{fig:periodicImage}
\end{figure}

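As a concrete illustration of the wrapping shown in
figure~\ref{fig:periodicImage}, a minimal sketch for a cubic box of
edge length $L$ centered at the origin (the cubic box and the function
names are simplifying assumptions made here for brevity):
\begin{verbatim}
import numpy as np

def wrap_positions(r, L):
    """Wrap (n, 3) positions back into a cubic box of edge L centered at the origin."""
    return r - L * np.round(r / L)

def minimum_image(dr, L):
    """Replace a separation vector by that of the nearest periodic image."""
    return dr - L * np.round(dr / L)
\end{verbatim}
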
\section{Accumulating Interactions}

In the force calculation between {\tt moveA} and {\tt moveB} mentioned
in section \ref{sec:IntroIntegrate}, we need to accumulate the
potential and forces (and torques if the particle is a rigid body or
multipole) on each particle from its surroundings. This can quickly
become a cumbersome task for large systems, since the number of pair
interactions grows as $N(N-1)/2$ when interactions between all
particles in the system are accumulated (Newton's third law has
already been used here to reduce the count from $\mathcal{O}(N^2)$).
The case of periodic boundary conditions further complicates matters
by turning the finite system into an infinitely repeating one.
Fortunately, we can reduce the scale of this problem by using
spherical cutoff methods.

\begin{figure}
\centering
\includegraphics[width=3.5in]{./figures/sphericalCut.pdf}
\caption{When using a spherical cutoff, only particles within a chosen
cutoff radius, $R_\textrm{c}$, of the central particle are included in
the pairwise summation. This reduces a problem that scales as
$\sim\mathcal{O}(N^2)$ to one that scales as $\sim\mathcal{O}(N)$.}
\label{fig:sphereCut}
\end{figure}
With spherical cutoffs, rather than accumulating the full set of
interactions between all particles in the simulation, we only
explicitly consider interactions between local particles out to a
specified cutoff radius, $R_\textrm{c}$ (see figure
\ref{fig:sphereCut}). This reduces the scaling of the interaction to
$\mathcal{O}(N\cdot c)$, where $c$ is a value that depends on the size
of $R_\textrm{c}$ ($c \propto R_\textrm{c}^3$). Determination of which
particles are within the cutoff is also an issue, because this process
requires a full loop over all $N(N-1)/2$ pairs. To reduce this
expense, we can use neighbor lists.\cite{Verlet67,Thompson83} With
neighbor lists, we have a second list of particles built from a list
radius $R_\textrm{l}$, which is larger than $R_\textrm{c}$. Once any
of the particles within $R_\textrm{l}$ moves a distance of
$R_\textrm{l}-R_\textrm{c}$ (the ``skin'' thickness), we rebuild the
list with the full $N(N-1)/2$ loop.\cite{Verlet67} With an appropriate
skin thickness, these updates are only performed every $\sim$20
time steps, significantly reducing the time spent on pair-list
bookkeeping operations.\cite{Allen87} If these neighbor lists are
utilized, it is important that these list updates occur
regularly. Incorrect application of this technique leads to
non-physical dynamics, such as the ``flying block of ice'' behavior
for which improper neighbor list handling was identified as one of the
possible causes.\cite{Harvey98,Sagui99}

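A minimal sketch of this neighbor-list bookkeeping is shown below; the
$\mathcal{O}(N^2)$ rebuild, the variable names, and the conservative
half-skin displacement test are our own illustrative choices (periodic
boundaries are omitted for brevity).
\begin{verbatim}
import numpy as np

def build_neighbor_list(r, r_list):
    """O(N^2) rebuild: all pairs (i, j) with |r_i - r_j| < R_l (the list radius)."""
    pairs = []
    for i in range(len(r) - 1):
        d = np.linalg.norm(r[i + 1:] - r[i], axis=1)
        pairs.extend((i, i + 1 + int(k)) for k in np.nonzero(d < r_list)[0])
    return pairs, r.copy()                 # remember positions at build time

def needs_rebuild(r, r_at_build, r_list, r_cut):
    """Trigger a rebuild once any particle has moved half the skin (R_l - R_c)/2.

    Half the skin per particle is the conservative test, since two particles
    can move toward each other at the same time.
    """
    max_disp = np.max(np.linalg.norm(r - r_at_build, axis=1))
    return max_disp > 0.5 * (r_list - r_cut)
\end{verbatim}
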
\subsection{Correcting Cutoff Discontinuities}
\begin{figure}
\centering
\includegraphics[width=3.5in]{./figures/ljCutoffSquare.pdf}
\caption{The common methods to smooth the potential discontinuity
introduced when using a cutoff include a shifted potential, a shifted
force, and a switching function. The shifted potential and shifted
force both lift the whole potential so that it goes to zero at
$R_\textrm{c}$, thereby reducing the strength of the interaction. The
(cubic) switching function only alters the potential in the switching
region in order to smooth out the discontinuity.}
\label{fig:ljCutoff}
\end{figure}
As particles move in and out of $R_\textrm{c}$, there will be sudden
discontinuous jumps in the potential (and forces) due to their
appearance and disappearance. In order to prevent heating and poor
energy conservation in the simulations, this discontinuity needs to be
smoothed out. There are several ways to modify the function so that it
crosses $R_\textrm{c}$ in a continuous fashion, and the easiest method
is shifting the potential. To shift the potential, we simply subtract
the value calculated at $R_\textrm{c}$ from the whole potential. The
shifted form of the Lennard-Jones potential is
\begin{equation}
V_\textrm{SLJ} = \left\{\begin{array}{l@{\quad\quad}l}
V_\textrm{LJ}(r_{ij}) - V_\textrm{LJ}(R_\textrm{c}) & r_{ij} < R_\textrm{c}, \\
0 & r_{ij} \geqslant R_\textrm{c},
\end{array}\right.
\end{equation}
where
\begin{equation}
V_\textrm{LJ}(r_{ij}) = 4\epsilon\left[\left(\frac{\sigma}{r_{ij}}\right)^{12} -
\left(\frac{\sigma}{r_{ij}}\right)^6\right].
\end{equation}
As shown in figure \ref{fig:ljCutoff}, the shifted form of the
potential reaches zero at the cutoff radius at the expense of the
correct magnitude below $R_\textrm{c}$. This correction method also
does nothing to correct the discontinuity in the forces. We can
account for this force discontinuity by applying the shift to the
forces as well as the potential via
\begin{equation}
V_\textrm{SFLJ} = \left\{\begin{array}{l@{\quad\quad}l}
V_\textrm{LJ}({r_{ij}}) - V_\textrm{LJ}(R_\textrm{c}) -
\left(\frac{d V(r_{ij})}{d r_{ij}}\right)_{r_{ij}=R_\textrm{c}}
(r_{ij} - R_\textrm{c}) & r_{ij} < R_\textrm{c}, \\
0 & r_{ij} \geqslant R_\textrm{c}.
\end{array}\right.
\end{equation}
The forces are continuous with this potential; however, the inclusion
of the derivative term distorts the potential even further than the
simple shifting, as shown in figure \ref{fig:ljCutoff}. The method for
correcting the discontinuity which results in the smallest
perturbation in both the potential and the forces is the use of a
switching function. The cubic switching function,
\begin{equation}
S(r) = \left\{\begin{array}{l@{\quad\quad}l}
1 & r_{ij} \leqslant R_\textrm{sw}, \\
\frac{(R_\textrm{c} + 2r_{ij} - 3R_\textrm{sw})
(R_\textrm{c} - r_{ij})^2}{(R_\textrm{c} - R_\textrm{sw})^3}
& R_\textrm{sw} < r_{ij} \leqslant R_\textrm{c}, \\
0 & r_{ij} > R_\textrm{c},
\end{array}\right.
\end{equation}
is sufficient to smooth the potential (again, see figure
\ref{fig:ljCutoff}) and the forces by only perturbing the potential in
the switching region. If smooth second derivatives are required, a
higher-order polynomial switching function (e.g. a fifth-order
polynomial) can be used.\cite{Andrea83,Leach01} It should be noted
that the higher the order of the polynomial, the more abrupt the
switching transition.

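The three truncation schemes above can be summarized in a few lines of
code; the sketch below (with parameter names of our own choosing)
evaluates the shifted potential, the shifted-force potential, and the
cubic switching function for the Lennard-Jones interaction.
\begin{verbatim}
import numpy as np

def v_lj(r, eps, sigma):
    """Plain Lennard-Jones potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def dv_lj(r, eps, sigma):
    """Derivative dV_LJ/dr."""
    sr6 = (sigma / r) ** 6
    return -24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

def v_shifted(r, eps, sigma, rc):
    """Shifted potential: zero at the cutoff."""
    return np.where(r < rc, v_lj(r, eps, sigma) - v_lj(rc, eps, sigma), 0.0)

def v_shifted_force(r, eps, sigma, rc):
    """Shifted-force potential: both V and dV/dr vanish at the cutoff."""
    return np.where(r < rc,
                    v_lj(r, eps, sigma) - v_lj(rc, eps, sigma)
                    - dv_lj(rc, eps, sigma) * (r - rc), 0.0)

def switch(r, rsw, rc):
    """Cubic switching function: 1 below R_sw, 0 beyond R_c, smooth in between."""
    s = (rc + 2.0 * r - 3.0 * rsw) * (rc - r) ** 2 / (rc - rsw) ** 3
    return np.where(r <= rsw, 1.0, np.where(r <= rc, s, 0.0))
\end{verbatim}
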
\subsection{Long-Range Interaction Corrections}

While a good approximation, accumulating interactions only from nearby
particles can lead to some issues, because the more distant
surrounding particles still have a small effect. For instance, while
the strength of the Lennard-Jones interaction is quite weak at
$R_\textrm{c}$ in figure \ref{fig:ljCutoff}, we are discarding all of
the attractive interaction that extends out to extremely long
distances. Thus, the potential is a little too high and the pressure
on the central particle from the surroundings is a little too low. For
homogeneous Lennard-Jones systems, we can correct for this neglect by
assuming a uniform density and integrating the missing part,
\begin{equation}
V_\textrm{full}(r_{ij}) \approx V_\textrm{c}
+ 2\pi N\rho\int^\infty_{R_\textrm{c}}r^2V_\textrm{LJ}(r)dr,
\end{equation}
where $V_\textrm{c}$ is the truncated Lennard-Jones
potential.\cite{Allen87} Like the potential, other Lennard-Jones
properties can be corrected by integration over the relevant
function. Note that with heterogeneous systems, this correction begins
to break down because the density is no longer uniform.

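For the Lennard-Jones potential, the integral above can be evaluated
analytically; a minimal sketch of the resulting energy tail correction
follows (it assumes the uniform density $\rho$ beyond $R_\textrm{c}$
described above, and the quadrature check at the end is included only
to verify the closed form).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def lj_energy_tail_correction(n, rho, eps, sigma, rc):
    """2*pi*N*rho * Integral_{Rc}^{inf} r^2 V_LJ(r) dr, evaluated analytically."""
    sr3 = (sigma / rc) ** 3
    return (8.0 / 3.0) * np.pi * n * rho * eps * sigma**3 * (sr3**3 / 3.0 - sr3)

# consistency check against direct numerical quadrature of the same integral
eps, sigma, rc, rho, n = 1.0, 1.0, 2.5, 0.8, 1000
v_lj = lambda r: 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
numeric = 2.0 * np.pi * n * rho * quad(lambda r: r**2 * v_lj(r), rc, np.inf)[0]
assert np.isclose(lj_energy_tail_correction(n, rho, eps, sigma, rc), numeric)
\end{verbatim}
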
Correcting long-range electrostatic interactions is a topic of great
importance in the field of molecular simulations. There have been
several techniques developed to address this issue, such as the Ewald
summation and the reaction field technique. An in-depth analysis of
this problem, as well as useful corrective techniques, is presented in
chapter \ref{chap:electrostatics}.

\section{Measuring Properties}

\section{Application of the Techniques}

In the following chapters, the above techniques are all utilized in
the study of molecular systems. There are a number of excellent
simulation packages available, both free and commercial, which
incorporate many of these
methods.\cite{Brooks83,MacKerell98,Pearlman95,Berendsen95,Lindahl01,Smith96,Ponder87}
Because of our interest in rigid body dynamics, point multipoles, and
systems where the orientational degrees of freedom cannot be handled
using the common constraint procedures (like {\sc shake}), the
majority of the following work was performed using {\sc oopse}, the
object-oriented parallel simulation engine.\cite{Meineke05} The {\sc
oopse} package started out as a collection of separate programs
written within our group, and has developed into one of the few
parallel molecular dynamics packages capable of accurately integrating
rigid bodies and point multipoles.

In chapter \ref{chap:electrostatics}, we investigate correction
techniques for proper handling of long-ranged electrostatic
interactions. In particular, we develop and evaluate some new pairwise
methods which we have incorporated into {\sc oopse}. These techniques
make an appearance in the later chapters, as they are applied to
specific systems of interest, showing how they can improve the quality
of various molecular simulations.

In chapter \ref{chap:water}, we focus on simple water models,
specifically the single-point soft sticky dipole (SSD) family of water
models. These single-point models are more efficient than the common
multi-point partial charge models and, in many cases, better capture
the dynamic properties of water. We discuss improvements to these
models with regard to long-range electrostatic corrections and show
that these models can work well with the techniques discussed in
chapter \ref{chap:electrostatics}. By investigating and improving
simple water models, we are extending the limits of computational
efficiency for systems that heavily depend on water calculations.

In chapter \ref{chap:ice}, we study a unique polymorph of ice that we
discovered while performing water simulations with the fast simple
water models discussed in the previous chapter. This form of ice,
which we called ``imaginary ice'' (Ice-$i$), has a low-density
structure which is different from any known polymorph from either
experiment or other simulations. In this study, we perform a free
energy analysis and see that this structure is in fact the
thermodynamically preferred form of ice for both the single-point and
commonly used multi-point water models under the chosen simulation
conditions. We also consider electrostatic corrections, again
including the techniques discussed in chapter
\ref{chap:electrostatics}, to see how they influence the free energy
results. This work, to some degree, addresses the appropriateness of
using these simplistic water models outside of the conditions for
which they were developed.

In the final chapter we summarize the work presented previously. We
also close with some final comments before the supporting information
presented in the appendices.