\section{Background on the Problem\label{In:sec:pro}}
Phospholipid molecules are the primary topic of this dissertation
because of their critical role as the foundation of biological
membranes. Lipids, when dispersed in water, self-assemble into a
number of topologically distinct bilayer structures. The phase
behavior of lipid bilayers has been explored
experimentally;~\cite{Cevc87} however, a complete understanding of the
mechanism and driving forces behind the various phases has not been
achieved.

\subsection{Ripple Phase\label{In:ssec:ripple}}
The $P_{\beta'}$ {\it ripple phase} of lipid bilayers, named for the
periodic buckling of the membrane, is an intermediate phase which
develops either from heating the gel phase $L_{\beta'}$ or cooling
the fluid phase $L_\alpha$. A sketch is shown in
figure~\ref{Infig:phaseDiagram}.~\cite{Cevc87}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/inPhaseDiagram.pdf}
\caption{A phase diagram of a lipid bilayer. With increasing
temperature, the bilayer passes through the gel ($L_{\beta'}$), ripple
($P_{\beta'}$), and fluid ($L_\alpha$) phases.}
\label{Infig:phaseDiagram}
\end{figure}
Most structural information about the ripple phase has been obtained
by X-ray diffraction~\cite{Sun96,Katsaras00} and freeze-fracture
electron microscopy (FFEM).~\cite{Copeland80,Meyer96} The X-ray
diffraction work by Katsaras {\it et al.} showed that a rich phase
diagram exhibiting both {\it asymmetric} and {\it symmetric} ripples
is possible for lecithin bilayers.~\cite{Katsaras00} Recently,
Kaasgaard {\it et al.} used atomic force microscopy (AFM) to observe
ripple phase morphology in bilayers supported on
mica.~\cite{Kaasgaard03}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figures/inRipple.pdf}
\caption{The experimentally observed ripple phase. The top image was
obtained by X-ray diffraction,~\cite{Sun96} and the bottom one was
observed by AFM.~\cite{Kaasgaard03}}
\label{Infig:ripple}
\end{figure}
Figure~\ref{Infig:ripple} shows the ripple phase observed by X-ray
diffraction and AFM. The experimental results provide strong support
for a two-dimensional triangular packing lattice of the lipid molecules
within the ripple phase. This is a notable change from the observed
lipid packing within the gel phase,~\cite{Cevc87} although Tenchov
{\it et al.} have recently observed near-hexagonal packing in some
phosphatidylcholine (PC) gel phases.~\cite{Tenchov2001} However, the
physical mechanism for the formation of the ripple phase has never
been explained, and the microscopic structure of the ripple phase has
never been elucidated by experiments. Computer simulation is an ideal
tool for studying the microscopic properties of a system. However, the
large length scale of the ripple structure and the long time scale of
ripple formation are crucial obstacles to performing such simulations.
The principal ideas explored in this dissertation are attempts to
break the computational task up by
\begin{itemize}
\item Simplifying the lipid model.
\item Improving the algorithms for integrating the equations of motion.
\end{itemize}
In chapters~\ref{chap:mc} and~\ref{chap:md}, we use a simple point
dipole spin model and a coarse-grained molecular scale model to
perform Monte Carlo and Molecular Dynamics simulations,
respectively, and in chapter~\ref{chap:ld}, we develop a Langevin
Dynamics algorithm which excludes the explicit solvent to improve the
efficiency of the simulations.

\subsection{Lattice Model\label{In:ssec:model}}
The gel-like characteristic (relatively immobile molecules) exhibited
by the ripple phase makes it feasible to apply a lattice model to
study the system. The popular two-dimensional lattice models, {\it
i.e.}, the Ising, $X-Y$, and Heisenberg models, show {\it frustration}
on triangular lattices. The Hamiltonians of these systems are given by
\begin{equation}
H =
\begin{cases}
-\sum_{n, n'} J \, s_n s_{n'} & \text{(Ising)} \\
-\sum_{n, n'} J \, \vec s_n \cdot \vec s_{n'} & \text{($X$-$Y$ and Heisenberg)}
\end{cases}
\end{equation}
where $J$ has a non-zero value only when spins $s_n$ ($\vec s_n$) and
$s_{n'}$ ($\vec s_{n'}$) are nearest neighbors. When $J > 0$, spins
prefer an aligned structure, and if $J < 0$, spins prefer an
anti-aligned structure.

\begin{figure}
\centering
\includegraphics[width=3in]{./figures/inFrustration.pdf}
\caption{Frustration on a triangular lattice; the spins and dipoles
are represented by arrows. The multiple local minima in the energy
leave the spins and dipoles frustrated in choosing a direction.}
\label{Infig:frustration}
\end{figure}
The spins in figure~\ref{Infig:frustration} illustrate the
frustration for $J < 0$ on a triangular lattice. There are multiple
local energy minima regardless of the direction chosen by the spin at
the top of the triangle; the resulting infinity of possibilities for
the packing of spins induces what is known as ``complete regular
frustration'', which leads to disordered low temperature phases. A
similar situation arises for dipoles on a hexagonal lattice, which are
also shown in figure~\ref{Infig:frustration}. In this circumstance,
the dipoles prefer to be aligned; however, due to long wavelength
fluctuations, the aligned state becomes unstable at low temperature,
vortices form, and multiple local energy minima result. The dipole at
the center of the hexagonal lattice is frustrated.
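As a concrete, self-contained illustration of this frustration, the
short Python sketch below (an illustrative toy, not part of the
simulation code developed in later chapters) enumerates every Ising
spin state of a single triangular plaquette with an antiferromagnetic
coupling ($J < 0$) and confirms that the minimum energy is shared by
several states, so no arrangement satisfies all three bonds at once.
\begin{verbatim}
# Minimal sketch: frustration of Ising spins on one triangular plaquette.
# The coupling J and the convention H = -J * sum(s_i * s_j) follow the
# Hamiltonian quoted above; the script itself is illustrative only.
from itertools import product

J = -1.0                          # antiferromagnetic coupling (J < 0)
bonds = [(0, 1), (1, 2), (2, 0)]  # the three edges of the triangle

def energy(spins):
    """H = -J * sum over bonds of s_i * s_j for Ising spins +/-1."""
    return -J * sum(spins[i] * spins[j] for i, j in bonds)

states = list(product([-1, 1], repeat=3))
energies = {s: energy(s) for s in states}
e_min = min(energies.values())
ground = [s for s, e in energies.items() if e == e_min]

# Six of the eight states share the minimum energy: at least one bond is
# always unsatisfied, which is the frustration shown in the figure.
print("minimum energy:", e_min)
print("degenerate ground states:", len(ground))
\end{verbatim}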

The lack of translational degrees of freedom in lattice models
prevents their utilization in models for surface buckling. In
chapter~\ref{chap:mc}, a modified lattice model is introduced to
tackle this specific situation.

\section{Overview of Classical Statistical Mechanics\label{In:sec:SM}}
Statistical mechanics provides a way to calculate the macroscopic
properties of a system from the microscopic states sampled in
computational simulations. This section serves as a brief introduction
to key concepts of classical statistical mechanics that are used in
this dissertation. Tolman gives an excellent introduction to the
principles of statistical mechanics.~\cite{Tolman1979} A large part of
section~\ref{In:sec:SM} will follow Tolman's notation.

\subsection{Ensembles\label{In:ssec:ensemble}}
In classical mechanics, the state of the system is completely
described by the positions and momenta of all particles. If we have an
$N$ particle system, there are $6N$ coordinates ($3N$ positions $(q_1,
q_2, \ldots, q_{3N})$ and $3N$ momenta $(p_1, p_2, \ldots, p_{3N})$)
to define the instantaneous state of the system. Each single set of
the $6N$ coordinates can be considered as a unique point in a $6N$
\label{Ineq:normalized}
\end{equation}
then the value of $\rho$ gives the probability of finding the system
in a unit volume in phase space.

Liouville's theorem describes the change in density $\rho$ with
time. The number of representative points in a given volume in phase
space at any instant can be written as:
\begin{equation}
\label{Ineq:deltaN}
\delta N = \rho~\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
To calculate the change in the number of representative points in this
volume, let us consider a simple condition: the change in the number
of representative points along the $q_1$ axis. The number of
representative points entering the volume at $q_1$ per unit time is:
\begin{equation}
\label{Ineq:deltaNatq1}
\rho~\dot q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
\end{equation}
and the number of representative points leaving the volume
at another position $q_1 + \delta q_1$ is:
\begin{equation}
\label{Ineq:deltaNatq1plusdeltaq1}
\left( \rho + \frac{\partial \rho}{\partial q_1} \delta q_1 \right)\left(\dot q_1 +
\frac{\partial \dot q_1}{\partial q_1} \delta q_1 \right)\delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
Here the higher order differentials are neglected. So the change in
the number of representative points is the difference between
eq.~\ref{Ineq:deltaNatq1} and eq.~\ref{Ineq:deltaNatq1plusdeltaq1},
which gives us:
\begin{equation}
\label{Ineq:deltaNatq1axis}
-\left(\rho \frac{\partial {\dot q_1}}{\partial q_1} + \frac{\partial {\rho}}{\partial q_1} \dot q_1 \right)\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N,
\end{equation}
where higher order differentials have again been neglected. If we sum
over all the axes in the phase space, we can get the change in the
number of representative points in a given volume with time:
\begin{equation}
\label{Ineq:deltaNatGivenVolume}
\frac{d(\delta N)}{dt} = -\sum_{i=1}^N \left[\rho \left(\frac{\partial
{\dot q_i}}{\partial q_i} + \frac{\partial
{\dot p_i}}{\partial p_i}\right) + \left( \frac{\partial {\rho}}{\partial
q_i} \dot q_i + \frac{\partial {\rho}}{\partial p_i} \dot p_i\right)\right]\delta q_1 \delta q_2 \ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N.
\end{equation}
From Hamilton's equations of motion,
\begin{equation}
\frac{\partial {\dot q_i}}{\partial q_i} = - \frac{\partial
{\dot p_i}}{\partial p_i},
eq.~\ref{Ineq:deltaNatGivenVolume}. If both sides of
eq.~\ref{Ineq:deltaNatGivenVolume} are divided by $\delta q_1 \delta q_2
\ldots \delta q_N \delta p_1 \delta p_2 \ldots \delta p_N$, then we
arrive at Liouville's theorem:
\begin{equation}
\left( \frac{\partial \rho}{\partial t} \right)_{q, p} = -\sum_{i} \left(
\frac{\partial {\rho}}{\partial
\end{equation}
It is easy to note that the left side of
equation~\ref{Ineq:anotherFormofLiouville} is the total derivative of
$\rho$ with respect to $t$, which means
\begin{equation}
\frac{d \rho}{dt} = 0,
\label{Ineq:conservationofRho}
\end{equation}
and the rate of density change is zero in the neighborhood of any
selected moving representative point in phase space.

The condition of the ensemble is determined by the density
distribution. If the density distribution is a function of $q$ and
$p$ only, then the rate of change of the phase space density in the
neighborhood of all representative points in the phase space is zero,
\begin{equation}
\left( \frac{\partial \rho}{\partial t} \right)_{q, p} = 0.
\label{Ineq:statEquilibrium}
\end{equation}
We may conclude that the ensemble is in {\it statistical
equilibrium}. An ensemble in statistical equilibrium means the system
is also in macroscopic equilibrium. If $\left( \frac{\partial
\rho}{\partial t} \right)_{q, p} = 0$, then
\begin{equation}
\end{equation}
If $\rho$ is a function only of some constant of the motion, $\rho$ is
independent of time. For a conservative system, the energy of the
system is one of the constants of the motion. There are many
thermodynamically relevant ensembles: when the density distribution is
constant everywhere in the phase space,
\begin{equation}
\rho = \mathrm{const.}
\label{Ineq:uniformEnsemble}
\end{equation}
the ensemble is called the {\it uniform ensemble}.

\subsubsection{The Microcanonical Ensemble\label{In:sssec:microcanonical}}
Another useful ensemble is the {\it microcanonical ensemble}, for
which:
\begin{equation}
\rho = \delta \left( H(q^N, p^N) - E \right) \frac{1}{\Sigma (N, V, E)}
\label{Ineq:microcanonicalEnsemble}
\label{Ineq:gibbsEntropy}
\end{equation}
where $k_B$ is the Boltzmann constant and $C^N$ is a number which
makes the argument of $\ln$ dimensionless. In this case, $C^N$ is the
total phase space volume of one state. The entropy of a microcanonical
ensemble is given by
\begin{equation}
S = k_B \ln \left(\frac{\Sigma(N, V, E)}{C^N}\right).
\label{Ineq:entropy}
\end{equation}
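A toy numerical illustration of this state-counting form of the
entropy is sketched below in Python; the system of $N$ independent
two-level units and the reduced units with $k_B = 1$ are assumptions
made purely for illustration.
\begin{verbatim}
# Toy microcanonical entropy: N two-level units, n of them excited, so the
# number of microstates at fixed energy is the binomial coefficient.
# Reduced units with k_B = 1 are assumed for illustration only.
from math import comb, log

k_B = 1.0
N, n = 100, 30                 # total units, excited units (fixed energy)

omega = comb(N, n)             # number of accessible microstates
S = k_B * log(omega)           # S = k_B ln(Omega)

print("Omega =", omega)
print("S/k_B =", S)
\end{verbatim}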

\subsubsection{The Canonical Ensemble\label{In:sssec:canonical}}
If the density distribution $\rho$ is given by
\begin{equation}
\rho = \frac{1}{Z_N}e^{-H(q^N, p^N) / k_B T},
Z_N = \int d \vec q~^N \int_\Gamma d \vec p~^N e^{-H(q^N, p^N) / k_B T},
\label{Ineq:partitionFunction}
\end{equation}
which is also known as the canonical {\it partition function}. $\Gamma$
indicates that the integral is over all phase space. In the canonical
ensemble, $N$, the total number of particles, $V$, the total volume, and
$T$, the temperature, are constants. The systems with the lowest
energies hold the largest population. According to the maximum
principle, thermodynamic equilibrium maximizes the entropy $S$,
implying that
\begin{equation}
\begin{array}{ccc}
\delta S = 0 & \mathrm{and} & \delta^2 S < 0.
\end{array}
\label{Ineq:maximumPrinciple}
\end{equation}
From Eq.~\ref{Ineq:maximumPrinciple} and two constraints on the
canonical ensemble, {\it i.e.}, that the total probability and the
average energy must be conserved, the partition function can be shown
to be equivalent to
\begin{equation}
Z_N = e^{-A/k_B T},
\label{Ineq:partitionFunctionWithFreeEnergy}
\end{equation}
where $A$ is the Helmholtz free energy. The significance of
Eq.~\ref{Ineq:entropy} and~\ref{Ineq:partitionFunctionWithFreeEnergy} is
that they serve as a connection between macroscopic properties of the
system and the distribution of microscopic states.
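The following Python sketch makes this connection concrete for a toy
system with a handful of made-up energy levels (the levels and the
reduced units are illustrative assumptions, not values from this
work): it evaluates the canonical weights, the partition function
$Z_N$, the Helmholtz free energy $A = -k_B T \ln Z_N$, and the average
energy.
\begin{verbatim}
# Toy sketch of the canonical ensemble: the discrete energy levels are
# assumed purely for illustration; units are chosen so that k_B = 1.
import math

k_B = 1.0
T = 1.0
energies = [0.0, 1.0, 2.0, 2.0]          # hypothetical microstate energies

# Canonical weights, partition function, and probabilities.
weights = [math.exp(-E / (k_B * T)) for E in energies]
Z = sum(weights)
probabilities = [w / Z for w in weights]

# Helmholtz free energy from Z = exp(-A / k_B T).
A = -k_B * T * math.log(Z)

# Ensemble average of the energy, <E> = sum_i p_i E_i.
E_avg = sum(p * E for p, E in zip(probabilities, energies))

print("Z =", Z, " A =", A, " <E> =", E_avg)
\end{verbatim}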

There is an implicit assumption on which our arguments have been based
so far: a representative point in phase space is equally likely to be
found in any energetically allowed region. In other words, all
energetically accessible states are represented equally, and the
probability of finding the system in any one of the accessible states
is the same. This is called the principle of equal {\it a priori}
probabilities.

\subsection{Statistical Averages\label{In:ssec:average}}
Given a density distribution $\rho$ in phase space, the average
of any quantity $F(q^N, p^N)$ which depends on the coordinates
($q^N$) and the momenta ($p^N$) for all the systems in the ensemble
can be calculated based on the definition shown in
Eq.~\ref{Ineq:statAverage1},
\begin{equation}
\langle F(q^N, p^N) \rangle = \frac{\int d \vec q~^N \int d \vec p~^N
F(q^N, p^N) \rho}{\int d \vec q~^N \int d \vec p~^N \rho}.
\label{Ineq:statAverage1}
\end{equation}
Since the density distribution $\rho$ is normalized to unity, the mean
value of $F(q^N, p^N)$ is simplified to
\begin{equation}
\langle F(q^N, p^N) \rangle = \int d \vec q~^N \int d \vec p~^N F(q^N,
p^N) \rho,
\label{Ineq:statAverage2}
\end{equation}
called the {\it ensemble average}. However, the quantity is often
averaged for a finite time in real experiments,
\begin{equation}
\langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{t_0}^{t_0+T} F[q^N(t), p^N(t)] dt.
\label{Ineq:timeAverage1}
\end{equation}
Usually this time average is independent of $t_0$ in statistical
mechanics, so Eq.~\ref{Ineq:timeAverage1} becomes
\begin{equation}
\langle F(q^N, p^N) \rangle_t = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{0}^{T} F[q^N(t), p^N(t)] dt
\label{Ineq:timeAverage2}
\end{equation}
for an infinite time interval.

\subsubsection{Ergodicity\label{In:sssec:ergodicity}}
The {\it ergodic hypothesis}, an important hypothesis governing modern
computer simulations, states that the system will eventually pass
arbitrarily close to any point that is energetically accessible in
phase space. Mathematically, this leads to
\begin{equation}
\langle F(q^N, p^N) \rangle = \langle F(q^N, p^N) \rangle_t.
\label{Ineq:ergodicity}
\end{equation}
Eq.~\ref{Ineq:ergodicity} validates Molecular Dynamics as a form of
averaging for sufficiently ergodic systems. Monte Carlo sampling may
likewise be used to obtain ensemble averages of a quantity, which are
related to the time averages measured in experiments.
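The sketch below illustrates the time-average side of this statement
with synthetic data standing in for a simulation trajectory (the
observable and its noise model are illustrative assumptions): the
finite-time estimate of Eq.~\ref{Ineq:timeAverage2} is simply a
running mean over the sampled values, and for an ergodic system it
converges toward the ensemble average as the trajectory grows.
\begin{verbatim}
# Sketch of a finite-time average over a discretized trajectory.  The
# "trajectory" here is synthetic noise around a known mean, standing in
# for the output of an actual simulation.
import numpy as np

rng = np.random.default_rng(0)
true_mean = 2.5
samples = true_mean + rng.normal(0.0, 1.0, size=10000)  # F(q(t), p(t)) samples

# Finite-time estimate of <F>_t = (1/T) * integral of F dt, evaluated as a
# running mean over the discrete samples.
running_mean = np.cumsum(samples) / np.arange(1, samples.size + 1)

# For an ergodic system the running mean approaches the ensemble average.
print("estimate after 100 steps :", running_mean[99])
print("estimate after all steps :", running_mean[-1])
\end{verbatim}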

\subsection{Correlation Functions\label{In:ssec:corr}}
Thermodynamic properties can be computed by equilibrium statistical
mechanics. On the other hand, {\it time correlation functions} are a
powerful tool for understanding the evolution of a dynamical
system. Imagine that a property $A(q^N, p^N, t)$, a function of the
coordinates $q^N$ and momenta $p^N$, has an initial value at $t_0$,
and at a later time $t_0 + \tau$ this value has changed. If $\tau$ is
very small, the change in the value is minor, and the later value of
$A(q^N, p^N, t_0 + \tau)$ is correlated with its initial value. However,
when $\tau$ is large, this correlation is lost. A time correlation
function measures this relationship and is defined
by~\cite{Berne90}
\begin{equation}
C_{AA}(\tau) = \langle A(0)A(\tau) \rangle = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{0}^{T} dt A(t) A(t + \tau).
\label{Ineq:autocorrelationFunction}
\end{equation}
Eq.~\ref{Ineq:autocorrelationFunction} is the correlation function of
a single variable, called an {\it autocorrelation function}. The
definition of the correlation function for two different variables is
similar to that of the autocorrelation function,
\begin{equation}
C_{AB}(\tau) = \langle A(0)B(\tau) \rangle = \lim_{T \rightarrow \infty}
\frac{1}{T} \int_{0}^{T} dt A(t) B(t + \tau),
\label{Ineq:crosscorrelationFunction}
\end{equation}
and is called the {\it cross correlation function}.
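A minimal numerical estimator of Eq.~\ref{Ineq:autocorrelationFunction}
for a discretely sampled signal is sketched below; the correlated test
signal is synthetic and merely stands in for a property $A(t)$ recorded
during a simulation.
\begin{verbatim}
# Sketch of an autocorrelation estimate C_AA(tau) = <A(0) A(tau)> from a
# discretely sampled time series; the signal itself is synthetic.
import numpy as np

rng = np.random.default_rng(1)

# Correlated test signal: an exponentially damped random sequence,
# used purely for illustration.
n, damp = 5000, 0.95
a = np.empty(n)
a[0] = 0.0
for t in range(1, n):
    a[t] = damp * a[t - 1] + rng.normal()

def autocorrelation(signal, max_lag):
    """Average signal(t) * signal(t + tau) over all available time origins."""
    m = signal.size
    return np.array([np.mean(signal[:m - tau] * signal[tau:])
                     for tau in range(max_lag)])

c_aa = autocorrelation(a, max_lag=50)
print("C_AA(0)  =", c_aa[0])
print("C_AA(10) =", c_aa[10])
\end{verbatim}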

We know from the ergodic hypothesis that there is a relationship
between the time average and the ensemble average. We can put the
correlation functions in a classical mechanics form,
\begin{equation}
C_{AA}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N(t), p^N(t)]
A[q^N(t+\tau), p^N(t+\tau)] \rho(q, p)
\label{Ineq:autocorrelationFunctionCM}
\end{equation}
and
\begin{equation}
C_{AB}(\tau) = \int d \vec q~^N \int d \vec p~^N A[q^N(t), p^N(t)]
B[q^N(t+\tau), p^N(t+\tau)] \rho(q, p)
\label{Ineq:crosscorrelationFunctionCM}
\end{equation}
as the autocorrelation function and cross correlation function