
Comparing trunk/chuckDissertation/introduction.tex (file contents):
Revision 3483 by chuckv, Tue Jan 13 14:39:50 2009 UTC vs.
Revision 3496 by chuckv, Wed Apr 8 19:13:41 2009 UTC

# Line 3 | Line 3
3   %!TEX root = /Users/charles/Documents/chuckDissertation/dissertation.tex
4   \chapter{\label{chap:intro}INTRODUCTION AND BACKGROUND}
5  
6 < \section{\label{introSec:introintro}INTRODUCTION}
6 > \section{\label{introSec:introintro}Introduction}
7  
8 < % TODO: Write a general introduction tying everything together.
9 < This dissertation presents research that I have performed and been involved with over my course of studies at the University of Notre Dame. I have chosen to present this material in a chronological manner since the research naturally lends itself to this style. Introductory material relating to Molecular Dynamics on metallic system is presented in the opening chapter.
8 > This dissertation presents research that I have performed and been involved with over the course of my studies at the University of Notre Dame. I have chosen to present this material in a chronological manner since the research naturally lends itself to this style. Introductory material relating to Molecular Dynamics techniques is presented in this opening chapter. The primary focus of this dissertation is the study of the structure and dynamics of phases exhibited by metallic systems, particularly metallic glass-formers and nanoparticles (NPs). Molecular Dynamics augments experimental information about these systems (both their structure and their particle dynamics) and allows us to probe the fundamental nature of phase transitions in these materials.
9  
10 < \section{\label{introSec:MolecularDynamics}COMPUTER SIMULATION METHODS}
10 > Chapter \ref{chap:metalglass} explores transport dynamics in a known glass former (the eutectic mixture of silver and copper). $\mathrm{Ag}_6\mathrm{Cu}_4$ presents an interesting system to study because it closely resembles binary Lennard-Jones systems that have correlation functions that decay according to the Kohlrausch-Williams-Watts ({\sc kww}) law.\cite{Kohlrausch:1863zv,Williams:1970fk} This law is general enough that it can describe not only relaxation in glasses but also relaxation in NPs, and it will appear in several chapters of this dissertation. In addition, the observation that dynamics in glassy materials can be described by a fractal waiting-time distribution, known as the Continuous Time Random Walk ({\sc CTRW}) model, will be investigated.
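In its usual stretched-exponential form, the {\sc kww} law describes a relaxation function of the form
\begin{equation}
	\phi(t) = \exp\left[-\left(t/\tau\right)^{\beta}\right], \qquad 0 < \beta \leq 1,
\end{equation}
where $\tau$ is a characteristic relaxation time and $\beta$ is the stretching exponent.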
11  
12 < Computational techniques allow us to explore and validate hypotheses for experimentally observed behavior that may otherwise not be accessible through traditional experimental techniques. Additionally, computer simulations allow for theory to propose areas of interest to which physical experimental techniques may be applied. Two of the most common computer simulation methods, Monte Carlo {\sc MC }and Molecular Dynamics {\sc MD} have been widely applied to a variety of model systems ranging from large proteins to metallic clusters. These methods typically use classical approximations for interactions between particles that allow them to be applied to much larger and more complex systems than would be practical for more detailed ab-initio quantum based models. While these two techniques use similar descriptions for the interactions between particles, Monte Carlo and Molecular Dynamics differ in their fundamental technique for sampling phase space configurations for the system of interest. Monte Carlo is a stochastic approach based upon exploring a systems potential energy surface by randomly probing the systems configuration space within a given ensemble. Conversely, the Molecular Dynamic approach uses the potential energy surface to explicitly calculate the forces within a system and integrate those forces in time to produce a phase space trajectory. Both techniques allow for sampling of thermodynamic quantities within an ensemble, but only Molecular Dynamics allow us to calculate properties that are dynamical in nature. That is to say, properties that depend on knowing the trajectory of the system as a function of time through phase space. Such properties include diffusion, viscosity, thermal conductivity, and other time-dependent correlation functions. Since many of the interesting questions pertaining to systems presented in this dissertation involve dynamics, Molecular Dynamics was the method of choice for our computer simulations.\cite{Allen87,Frenkel02,Leach01}
12 > Spontaneous alloying of bimetallic Au-Ag NPs after synthesis is the focus of study in Chapter \ref{chap:nanodiffusion}. Metallic nanoparticles exhibit many physical and chemical properties that differ significantly from their bulk counterparts. These unique properties may be partly due to the enormous surface area to volume ratio of NPs which leads to excess surface free energy. If the excess surface free energy is comparable in magnitude to the lattice or binding energies of the NPs, structural instability can result.
13 > Computational techniques will be used to test the hypothesis that a small fraction of vacancies, formed at the Au-Ag core-shell interface during synthesis, can drive alloying of the particle, and to determine whether this mechanism is consistent with experimental observations.
14  
15 < Many of the systems of interest in this dissertation present unique challenges for existing Molecular Dynamics computational codes. Because metallic potentials and parallel processing capabilities are found in few available Molecular Dynamics programs, we have developed an open source computational code that would address some of these deficiencies. The Object Oriented Parallel Simulation Engine ({\sc oopse}) was the result of the desire to create a Molecular Dynamics engine that incorporated modern object oriented syntax for building a molecular simulation with the ability to simulate large systems that require unique model for molecular interactions.\cite{Meineke:2004uq} We have released {\sc oopse} under a permissive open source license because of the belief that computational codes used to publish scientific material should be transparent and open to peer review. Computational codes used in the first two chapters of this dissertation were the precursor of what was eventually released as {\sc oopse}.
15 > Chapter \ref{chap:bulkmod} reports on simulations of the transient
16 > response of metallic nanoparticles to the nearly instantaneous heating
17 > undergone when photons are absorbed during ultrafast laser excitation
18 > experiments. The time scale for heating (determined by the electron-phonon
19 > coupling constant) is faster than a single period of the breathing
20 > mode for spherical nanoparticles. Hot-electron pressure and direct lattice heating contribute to the thermal excitation of the atomic degrees of freedom, and both
21 > mechanisms are rapid enough to coherently excite the breathing mode of
22 > the spherical particles. Molecular Dynamics is used to augment experimental data by replicating the laser-excitation event to probe the dynamics of the NPs after excitation.
23  
24 + Chapter \ref{chap:nanoglass} combines observations from the previous chapters to determine whether it is plausible to construct a glassy nanobead from the $\mathrm{Ag}_6\mathrm{Cu}_4$ system. One of the side effects of the absorption of laser radiation by these particles is the rapid (sub-picosecond) heating of the electronic degrees of freedom in the metal. Since these experiments are carried out in condensed-phase surroundings, the large surface area to volume ratio makes heat transfer to the surrounding solvent a relatively rapid process. In Chapter \ref{chap:bulkmod} we observe that the cooling rate for these particles (10$^{11}$-10$^{12}$ K/s) is in excess of the cooling rate required for glass formation in bulk metallic alloys.\cite{Greer:1995qy} Given this fact, it may be possible to use laser excitation to melt, alloy, and quench metallic nanoparticles in order to form glassy nanobeads.
25 +
26 + \section{\label{introSec:MolecularDynamics}Computer Simulation Methods}
27 +
28 + Computational techniques allow us to explore and validate hypotheses about experimentally observed behavior that may otherwise not be accessible through traditional experimental techniques. Additionally, computer simulations allow theory to propose areas of interest to which physical experimental techniques may be applied. Two of the most common computer simulation methods, Monte Carlo ({\sc MC}) and Molecular Dynamics ({\sc MD}), have been widely applied to a variety of model systems ranging from large proteins to metallic clusters. These methods typically (but not always) use classical approximations for the interactions between particles. This allows them to be applied to much larger and more complex systems than would be practical for more detailed {\it ab initio} quantum mechanical models. While these two techniques use similar descriptions for the interactions between particles, Monte Carlo and Molecular Dynamics differ in how they sample configurations of the system of interest. Monte Carlo sampling is a stochastic approach based on randomly probing the configuration space commensurate with a given ensemble. Conversely, the Molecular Dynamics approach uses the potential energy surface to explicitly calculate the forces within a system and integrates those forces in time to produce a phase space trajectory. Both techniques allow for sampling of thermodynamic quantities within an ensemble, but only Molecular Dynamics allows us to calculate properties that are dynamical in nature, that is, properties that depend on knowing the trajectory of the system through phase space as a function of time. Such properties include diffusion constants, viscosities, thermal conductivities, and other quantities obtained from time-dependent correlation functions. Since many of the interesting questions pertaining to the systems presented in this dissertation involve dynamics, Molecular Dynamics was the method of choice for our computer simulations.\cite{Allen87,Frenkel02,Leach01}
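As a concrete illustration of this integration (shown here only as a standard textbook scheme, not necessarily the particular integrator used in later chapters), the velocity Verlet algorithm advances the position $\mathbf{r}_i$ and velocity $\mathbf{v}_i$ of a particle of mass $m_i$ over a time step $\delta t$ using the forces $\mathbf{F}_i = -\nabla_i V$:
\begin{equation}
	\mathbf{r}_i(t+\delta t) = \mathbf{r}_i(t) + \mathbf{v}_i(t)\,\delta t + \frac{\mathbf{F}_i(t)}{2 m_i}\,\delta t^2,
\end{equation}
\begin{equation}
	\mathbf{v}_i(t+\delta t) = \mathbf{v}_i(t) + \frac{\mathbf{F}_i(t) + \mathbf{F}_i(t+\delta t)}{2 m_i}\,\delta t.
\end{equation}
Repeating these two updates generates the phase space trajectory from which time-dependent properties are computed.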
29 +
30 + Many of the systems of interest in this dissertation present unique challenges for existing Molecular Dynamics computational codes. Because metallic potentials and parallel processing capabilities are found in few available Molecular Dynamics programs, we have developed an open-source computational code that addresses some of these deficiencies. The Object Oriented Parallel Simulation Engine ({\sc oopse}) was the result of the desire to create a Molecular Dynamics engine that incorporates a modern, object-oriented syntax for building a molecular simulation together with the ability to simulate large systems that require unique models for molecular interactions.\cite{Meineke:2004uq} We have released {\sc oopse} under a permissive open-source license because we believe that computational codes used to produce published scientific results should be transparent and open to peer review. The computational codes used in the first two chapters of this dissertation were the precursors of what was eventually released as {\sc oopse}.
31 +
32   %TODO: Do I need to introduce the concepts of phase space and configuration space?
33 < \subsection{\label{introSec:EmpiricalEfncs}EMPIRICAL ENERGY FUNCTIONS} It is computationally advantageous to split the potential energy into intra-molecular and inter-molecular contributions to the total potential energy. Typically, intra-molecular potential energy functions include the short-ranged (bonded) portion of the potential and the long-range (non-bonded) interactions are inter-molecular. The general form of the potential is then,
33 > \subsection{\label{introSec:EmpiricalEfncs}Empirical Energy Functions} It is computationally advantageous to split the total potential energy into intra-molecular and inter-molecular contributions. Typically, the intra-molecular potential energy functions include the short-ranged (bonded) portion of the potential, while the long-range (non-bonded) interactions are inter-molecular. The general form of the potential is then,
34   \begin{equation}
35          V = V_{\mathrm{short-range}} + V_{\mathrm{long-range}}.
36   \end{equation}
37 < The short-ranged portion includes the explicit bonds, bends, torsions, and improper-torsons which have been defined in the {\sc oopse} meta-data file for the molecules present in the simulation. The functional forms and parameters for these interactions are defined by the choice of force field for the simululation.
37 > The short-ranged portion includes the explicit bonds, bends, torsions, and improper torsions for the molecules present in the simulation. The functional forms and parameters for these interactions are defined by the choice of force field for the simulation.
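For example, many force fields use harmonic terms for the bonds and bends and a cosine series for the torsions; the forms below are representative choices rather than a statement of the particular force fields used in this work:
\begin{equation}
	V_{\mathrm{short-range}} = \sum_{\mathrm{bonds}} \frac{k_{r}}{2}\left(r - r_{0}\right)^{2}
	+ \sum_{\mathrm{bends}} \frac{k_{\theta}}{2}\left(\theta - \theta_{0}\right)^{2}
	+ \sum_{\mathrm{torsions}} \sum_{n} \frac{V_{n}}{2}\left[1 + \cos\left(n\phi - \gamma_{n}\right)\right].
\end{equation}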
38  
39   For a system containing $N$ atoms, we can decompose the long-range potential into terms that contain the coordinates of individual atoms, pairs, triplets, etc. such that the total long-range potential is given by
40  
41   \begin{equation}
42 <        v_{\mathrm{long-range}} = \sum_{i}v_{1}(\mathbf{r}_{i}) + \sum_{i}\sum_{j>i}v_{2}(\mathbf{r}_{i},\mathbf{r}_{j}) + \sum_{i}\sum_{j>i}\sum_{k>j>i}v_{3}(\mathbf{r}_{i},\mathbf{r}_{j},\mathbf{r}_{j}) + \ldots .
42 >        V_{\mathrm{long-range}} = \sum_{i}v_{1}(\mathbf{r}_{i}) + \sum_{i}\sum_{j>i}v_{2}(\mathbf{r}_{i},\mathbf{r}_{j}) + \sum_{i}\sum_{j>i}\sum_{k>j>i}v_{3}(\mathbf{r}_{i},\mathbf{r}_{j},\mathbf{r}_{k}) + \ldots .
43   \label{eq:genlongrange}
44   \end{equation}
45 < The sum $\sum_{i}\sum_{j>i}$ indicates a summation over all pairs containing $i$ and $j$ while not double counting any pair $ij$ twice and not including a self interaction. The first term in equation \ref{eq:genlongrange} is due to an external field acting on the particles in a system such as an external electric field or the interaction of particles with a container wall. The second term in \ref{eq:genlongrange} defines the pair potential depending only on the magnitude of the pair separation $r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|$. The later terms presented in \ref{eq:genlongrange} correspond to three-body interactions, four body-interactions, and so on. For a system of three molecules, the contribution of the pair-pair interactions to the total potential energy would be $V_{\mathrm{AB}}+V_{\mathrm{BC}}+V_{\mathrm{AC}}$ where $V_{\mathrm{AB}}$ is evaluated as though molecule C were absent. This is clearly unphysical because the presence of molecule C {\em will} modify the interaction between molecules A and B. The correction do to the presence of molecule C can be included in out previous expression for the total potential energy such that there is a three-body correction $V_{\mathrm{long-range}} = V_{\mathrm{AB}}+V_{\mathrm{BC}}+V_{\mathrm{AC}}+V_{\mathrm{ABC}}$. If we have a system of four molecules, we would have to include a four-body correction term.  Because of the expense of computing terms higher then pair-pair interactions and that the contribution of higher order terms is generally small, pair-pair interactions are included as the main term for calculating the long-range potential. The pairwise potential can modified to include average three-body effects forming an {\em effective} pair potential
45 > The sum $\sum_{i}\sum_{j>i}$ indicates a summation over all distinct pairs $(i,j)$, counting each pair only once and excluding self interactions. The first term in (\ref{eq:genlongrange}) is due to an external field acting on the particles in the system, such as an external electric field or the interaction of particles with a container wall. The second term in (\ref{eq:genlongrange}) defines a pair potential, which often depends only on the magnitude of the pair separation $r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|$. The later terms in (\ref{eq:genlongrange}) correspond to three-body interactions, four-body interactions, and so on. For a system of three molecules, the contribution of the pair interactions to the total potential energy would be $V_{\mathrm{AB}}+V_{\mathrm{BC}}+V_{\mathrm{AC}}$, where $V_{\mathrm{AB}}$ is evaluated as though molecule C were absent. This is often unphysical because the presence of molecule C {\em will} modify the interaction between molecules A and B. The correction due to the presence of molecule C can be included in our previous expression for the total potential energy as a three-body correction, $V_{\mathrm{long-range}} = V_{\mathrm{AB}}+V_{\mathrm{BC}}+V_{\mathrm{AC}}+V_{\mathrm{ABC}}$. If we have a system of four molecules, we would also have to include a four-body correction term. Because of the expense of computing terms higher than pair interactions, and because the contribution of higher order terms is generally small, pair interactions are the primary term used in calculating the long-range potential. The pairwise potential can be modified to include average three-body effects, forming an {\em effective} pair potential
46  
47  
48   \begin{equation}
# Line 35 | Line 50 | The sum $\sum_{i}\sum_{j>i}$ indicates a summation ove
50          \label{eq:effpair}
51   \end{equation}
52  
53 < The functional form for pair-potential represented in computer simulations is typically an effective pair potential that approximates contributions from many-body interactions and includes them in the pair-pair interaction.\cite{Allen87} These functional forms are typically fit from empirical data and quantum mechanical calculations. Effective pair potentials work well for describing the interactions in the noble gases progressively well in sold, liquid and gas phases. As the system becomes more dense and polarizable, the effective pair approximation begins to fail to adequately describe that system. For example, in a solid, the cohesive energy, $E_{\mathrm{coh}}$ would be calculated as a sum over the bond pairs as indicated by Equation  \ref{eq:genlongrange}. If we consider a set of crystal structures that differ only in coordination, $Z$, of the central atom , we would expect the cohesive energy to scale linearly with coordination. In reality, the cohesive energy scales more weekly do to multi-body screening effects and actually scales as $-Z^{1/2}$.\cite{Nieminen:1990hw,DAW:1993p1640} We will find in Section \ref{introSec:eam} that multi-body effects are very important in accurately describing metallic systems.
53 > The pair potential used in computer simulations is typically an effective pair potential that approximates the contributions from many-body interactions and folds them into the pair interaction.\cite{Allen87} These functional forms are typically fit to empirical data and quantum mechanical calculations. Effective pair potentials work best for describing the interactions of the noble gases, increasing in accuracy from the solid to the gas phase. As a system becomes more dense and polarizable, the effective pair approximation fails to adequately describe that system. For example, in a solid, the cohesive energy, $E_{\mathrm{coh}}$, would be calculated as a sum over the bond pairs as indicated by Equation \ref{eq:genlongrange}. If we consider a set of crystal structures that differ only in the coordination, $Z$, of the central atom, we would expect the cohesive energy to scale linearly with coordination. In reality, the cohesive energy scales more weakly due to multi-body screening effects and actually scales as $-Z^{1/2}$.\cite{Nieminen:1990hw,DAW:1993p1640} We will find in Section \ref{introSec:eam} that multi-body effects are very important in accurately describing metallic systems.
54  
55 < In general, the interaction energy between two molecules as a function of their inter-molecular distance takes the form shown in Figure \ref{fig:ljform}. There is an attractive region at long-range where $-\partial V/\partial R < 0$ and becomes steeply repulsive at small $R$ to account for compressibility in condensed phase material. We represent the distance at which the energy passes through zero and becomes deeply repulsive as $\sigma$. The minimum energy the pair configuration possesses, $-\epsilon$, occurs at the minimum energy interaction distance of $R_m$.
55 > In general, the interaction energy between two molecules as a function of their inter-molecular distance takes the shape shown in Figure \ref{fig:ljform}. There is an attractive region at long range where $-\partial V/\partial R < 0$, while at shorter distances $V$ becomes steeply repulsive to account for the low compressibility of particles in condensed-phase materials. We represent the distance at which the energy passes through zero and becomes steeply repulsive as $\sigma$. The minimum energy of the pair configuration, $-\epsilon$, occurs at the minimum-energy separation $R_m$.
56  
57   \begin{figure}[htbp]
58          %\centering
59                  \includegraphics[height=3in]{images/ljform.pdf}
60 <        \caption{General form of the pair-pair interaction energy between molecule where $-\epsilon$ is the minimum energy, $\sigma$ is the close interaction distance and $R_m$ is the minimum energy seperation.}
60 >        \caption{General shape of the pair interaction energy between molecules where $-\epsilon$ is the minimum energy, $\sigma$ is the close interaction distance and $R_m$ is the minimum energy separation.}
61          \label{fig:ljform}
62   \end{figure}
63  
64  
65  
66  
67 < As indicated in Equation \ref{eq:genlongrange}, calculation of the long-range (non-bonded) potential involves a sum over all pairs of atoms (except for atom pairs which are involved in an intra-molecular interaction). It should be noted that atoms in this context refer to long-range interaction sites as defined by the force-field model. These interaction sites may not necessarily correspond to the location of an atomic site in a molecule. The expense of directly calculating all of the long-range interactions for $N$ atoms would involve $N(N-1)/2$ evaluations of atomic distances. To reduce the number of distance evaluations between pairs of atoms, {\sc oopse} uses a switched cutoff distance beyond which interactions are not calculated. Further, a Verlet neighbor list is implemented which creates a look-up of interactions that are within a specific cutoff distance.\cite{Allen87} A complication of using cutoff distances with neutral groups which contain charges is that the system will exhibit pathological forces unless the cutoff is applied to the neutral groups evenly instead of to the individual long-range interaction sites.\cite{Leach01} {\sc oopse} allows users to specify cutoff groups which may contain an arbitrary number of interaction sites with-in a molecule. Thus, members in a cutoff group are treated as a single unit for the purpose of evaluating the switching function:
53 < \begin{equation}
54 <        V_{\mathrm{long-range}} = \sum_{a} \sum_{b>a} s(r_{ab}) \sum_{i \in a} \sum_{j \in b} V_{ij}(r_{ij}),
55 < \end{equation}
56 < where $r_{ab}$ is the distance between the centers of mass of the two cutoff groups ($a$ and $b$). The sums over $a$ and $b$ are over the cutoff groups that are present in the simulation. Atoms which are not explicitly defined as members of a {\tt cutoffGroup} are treated as a group consisting of only one atom. In simulations containing only Lennard-Jones atoms, the cutoff radius has a default value of $2.5\sigma_{ii}$, where $\sigma_{ii}$ is the largest Lennard-Jones length parameter present in the simulation. In simulations containing charged or dipolar atoms, the default cutoff radius is $15 \mbox{\AA}$.
67 > As indicated in Equation \ref{eq:genlongrange}, calculation of the long-range (non-bonded) potential involves a sum over all pairs of atoms in the molecular model, except for atom pairs which are involved in a short-ranged bonded interaction. It should be noted that the term atoms in this context refers to interaction sites as defined by the force-field model. These interaction sites may not necessarily correspond to the location of an atomic site in a molecule. Directly calculating all of the long-range interactions for $N$ atoms would involve $N(N-1)/2$ evaluations of interatomic distances. To reduce the number of distance evaluations between pairs of atoms, it is common to use a switched cutoff distance beyond which interactions are not calculated.\cite{Allen87,Frenkel02,Leach01} Further, a Verlet neighbor list can be implemented which creates a look-up table of the interactions that lie within a specified cutoff distance.\cite{Allen87}
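As a minimal sketch of this bookkeeping (illustrative Python only, with hypothetical function and parameter names; it is not code from {\sc oopse}), a pairwise sum with a smoothly switched cutoff might be written as:
\begin{verbatim}
import numpy as np

def pair_energy(pos, pair_potential, r_switch, r_cut):
    """O(N^2) sketch: sum a pair potential over all distinct pairs,
    smoothly switched to zero between r_switch and r_cut."""
    n = len(pos)
    energy = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            if r >= r_cut:
                continue               # pair skipped beyond the cutoff
            v = pair_potential(r)
            if r > r_switch:           # polynomial switch keeps the energy continuous
                v *= ((r_cut**2 - r**2)**2
                      * (r_cut**2 + 2.0 * r**2 - 3.0 * r_switch**2)
                      / (r_cut**2 - r_switch**2)**3)
            energy += v
    return energy
\end{verbatim}
A Verlet neighbor list replaces the inner loop above with a cached list of the atoms within $r_{\mathrm{cut}}$ plus a small skin distance, so the full $N(N-1)/2$ distance check only needs to be repeated when atoms have moved far enough to invalidate the list.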
68  
69 + \section{\label{introSec:LJPot}The Lennard-Jones Force Field}
70  
71 < \section{\label{introSec:LJPot}THE LENNARD-JONES FORCE FIELD}
60 <
61 < The most basic force field implemented in {\sc oopse} is the Lennard-Jones force field, which mimics the van der Waals interaction at long distances and uses an empirical repulsion at short distances. The general functional form for the Lennard-Jones potential is presented in Figure \ref{fig:ljform} is given by:
71 > One of the most basic force fields is the Lennard-Jones force field, which mimics the van der Waals interaction at long distances and uses an empirical repulsion at short distances. The general functional form for the Lennard-Jones potential is presented in Figure \ref{fig:ljform} and is given by:
72   \begin{equation}
73          V_{\text{LJ}}(r_{ij}) = 4\epsilon_{ij} \biggl[ \biggl(\frac{\sigma_{ij}}{r_{ij}}\biggr)^{12} - \biggl(\frac{\sigma_{ij}}{r_{ij}}\biggr)^{6} \biggr], \label{eq:lennardJonesPot}
74   \end{equation}
75   where $r_{ij}$ is the distance between particles $i$ and $j$, $\sigma_{ij}$ scales the length of the interaction, and $\epsilon_{ij}$ scales the well depth of the potential.
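A short check connects the parameters in Equation \ref{eq:lennardJonesPot} to the features of Figure \ref{fig:ljform}: the potential crosses zero at $r_{ij} = \sigma_{ij}$, and setting $\partial V_{\text{LJ}}/\partial r_{ij} = 0$ places the minimum at
\begin{equation}
	R_m = 2^{1/6}\,\sigma_{ij}, \qquad V_{\text{LJ}}(R_m) = -\epsilon_{ij}.
\end{equation}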
76  
77 < Interactions between dissimilar particles require cross term parameters for $\sigma$ and $\epsilon$ to be generated. These parameters are determined using the Lorentz-Berthelot mixing rules:\cite{Allen87}
77 > Interactions between dissimilar particles require cross-interaction parameters $\sigma_{ij}$ and $\epsilon_{ij}$ to be generated. These parameters are usually determined using the Lorentz-Berthelot mixing rules:\cite{Allen87}
78   \begin{equation}
79          \sigma_{ij} = \frac{1}{2}[\sigma_{ii} + \sigma_{jj}], \label{eq:sigmaMix}
80   \end{equation}
# Line 74 | Line 84 | and
84   \end{equation}
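Concretely (a sketch with hypothetical function names, not {\sc oopse} code), the mixing rules amount to an arithmetic mean for the length parameter and a geometric mean for the well depth:
\begin{verbatim}
import math

def lorentz_berthelot(sigma_ii, epsilon_ii, sigma_jj, epsilon_jj):
    """Cross-interaction Lennard-Jones parameters from the
    Lorentz-Berthelot combining rules."""
    sigma_ij = 0.5 * (sigma_ii + sigma_jj)           # arithmetic mean
    epsilon_ij = math.sqrt(epsilon_ii * epsilon_jj)  # geometric mean
    return sigma_ij, epsilon_ij
\end{verbatim}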
85  
86   % TODO: General MD introduction, can grab from  OOPSE paper.
87 < \section{\label{introSec:MetallicPotentials}METALLIC POTENTIALS} Bonding in metallic systems is significantly different from systems comprised elements that have a covalently dominated binding mechanism. One of the first attempts to construct a model for bonding in metals was constructed by Drude in 1900 by extending the classical kinetic theory of gases to electrons in metallic systems.\cite{Ashcroft:1976zt,Drude:1900p1479,Drude:1900p1481} Just as in the kinetic theory of gasses, the Drude model assumes that electrons behave as a hard spheres that move in a straight line unless colliding with other particles. Electron collision events instantaneously alter the trajectory for an electron. Electronic thermal equilibrium is assumed to occur through electron collisions and obeys Maxwell-Boltzmann statistics. For metals, Drude assumed that electrons were mobile negative charge carriers and that there is a compensating positive charge center attached to much heavier immobile particles. A single atom of a metallic element consists of a nucleus surrounded by a core electron density that is tightly bound to the nucleus. The valance electrons are more weekly bound and form an electron gas when the isolated metallic atoms condense to form a metal. This electron gas is typically referred to as the conduction band in a condensed system. Despite using relatively simplistic assumptions, the Drude model provides a reasonable explanation for AC and DC electrical conductivity, Hall effect and electron thermal conductivity.\cite{Ashcroft:1976zt}
87 > \section{\label{introSec:MetallicPotentials}Metallic Potentials} Bonding in metallic systems is significantly different from bonding in systems whose interactions are a mix of covalent, coulombic, and van der Waals contributions. One of the first attempts to model bonding in metals was made by Drude in 1900, who extended the classical kinetic theory of gases to electrons in metallic systems.\cite{Ashcroft:1976zt,Drude:1900p1479,Drude:1900p1481} Just as in the kinetic theory of gases, the Drude model assumes that electrons behave as hard spheres that move in straight lines unless they collide with other particles. Electron collision events instantaneously alter the trajectory of an electron. Electronic thermal equilibrium is assumed to occur through electron collisions and to obey Maxwell-Boltzmann statistics. For metals, Drude assumed that electrons were mobile negative charge carriers and that there is a compensating positive charge center attached to much heavier immobile particles. A single atom of a metallic element consists of a nucleus surrounded by a core electron density that is tightly bound to the nucleus. The valence electrons are more weakly bound and form an electron gas when the isolated metallic atoms condense to form a metal. Despite using relatively simplistic assumptions, the Drude model provides a reasonable explanation for AC and DC electrical conductivity, the Hall effect, and electron thermal conductivity.\cite{Ashcroft:1976zt}
88  
89 < Sommerfeld subsequently extended the Drude model by incorporating quantum effects into the model.\cite{Ashcroft:1976zt,Kittel:1996fk} The Pauli exclusion principle requires that, since electrons are fermions, only one electron occupy any single electronic state at a time. Additionally, electrons are indistinguishable particles both in intrinsic physical properties and, do the Heisenberg uncertainty principle, with respect to the individual position and momentum an electron possesses. Fermi-Dirac statistics account for both indistinguishability and anti-symmetric exchange between pairs of electrons by modifying the probability of finding an electron at a given energy
89 > Sommerfeld subsequently extended the Drude model by incorporating quantum effects.\cite{Ashcroft:1976zt,Kittel:1996fk} The Pauli exclusion principle requires that, since electrons are fermions, only one electron occupy any single electronic state at a time. Additionally, electrons are indistinguishable particles. Fermi-Dirac statistics account for both indistinguishability and anti-symmetric exchange between pairs of electrons by modifying the probability of finding an electron at a given energy
90   \begin{equation}
91          f(E)=\frac{1}{e^{(E-E_{F})/k_{B}T}+1} \label{eq:fermi-dirac}
92   \end{equation}
93 < where $E_{F}$ is the Fermi energy of the system. $E_{F}$ can be written as a function of the number density of electrons,
93 > where $E_{F}$ is the Fermi energy of the system, $T$ is the temperature, and $k_B$ is Boltzmann's constant. By definition, $E_{F}$ is the energy of the topmost filled level (with quantum number $n_f$) of an $N$-electron system in its ground state; for free electrons confined between $0$ and length $L$,
94   \begin{equation}
95 <        E_{F} = \frac{\hbar^2}{2m}\left(3\pi^2 Z\rho\right)^{3/2}.
95 >        E_{F} = \frac{\hbar^2}{2m}\left(\frac{n_f\pi}{L}\right)^{2}.
96   \end{equation}
97 < Sommerfeld's free electron model uses Fermi-Dirac statistics instead of the classical Maxwell-Boltzmann statistics in the Drude model producing a much more accurate model that successfully describe numerous phenomenon in metallic systems. These include Wiedemann-Franz law relating electrical and thermal conductivity, temperature dependence of the heat capacity, and the electronic density of states. However, the free electron model makes quantitative predictions that are directly contradicted by experimental observation. Ashcroft and Mermen have an extensive discussion in Chapter 3 of their book describing the failures of the free electron model do to the oversimplification of the role of ions and the fundamental nature nature of interactions in metallic systems.\cite{Ashcroft:1976zt} Most significant of these failure is the inability to distinguish between metals, semi-metals, semiconductors and insulators and the relation of valance electrons in a free atom to the conduction band electrons in a metal.
97 > Sommerfeld's free electron model uses Fermi-Dirac statistics in place of the classical Maxwell-Boltzmann statistics of the Drude model, producing a much more accurate model that successfully describes numerous phenomena in metallic systems. These include the Wiedemann-Franz law relating electrical and thermal conductivity, the temperature dependence of the heat capacity, and the electronic density of states.\cite{Ashcroft:1976zt} However, the free electron model makes quantitative predictions that are directly contradicted by experimental observation. Ashcroft and Mermin have an extensive discussion in Chapter 3 of their book describing the failures of the free electron model due to the oversimplification of the role of the ions and of the fundamental nature of
98 > interactions in metallic systems.\cite{Ashcroft:1976zt} The most significant of these failures is the inability to distinguish between metals, semi-metals, semiconductors, and insulators, and to relate the valence electrons in a free atom to the conduction band electrons in a metal.
99  
100   Many of these deficiencies can be corrected by accounting for ions contained in a periodic lattice with an electron density modulated by these lattice positions. A model pseudo-potential for the pair-pair interaction can be constructed that has the form
101  
# Line 94 | Line 105 | where $v_d(r_{ij})$ represents the direct interaction
105   \end{equation}
106   where $v_d(r_{ij})$ represents the direct interaction between ion pairs and $v_l(r_{ij})$ is an indirect interaction due to the ions being placed in a bath of electrons.\cite{Egelstaff:1992yb} Effective Medium Theory ({\sc EMT}) was one of the first pseudo-potential models to use such a functional form for the potential.\cite{Nrskov:1980p1752,Nrskov:1982p1753,Stott:1980p1754}
107  
108 < In {\sc EMT} the real material is replaced by {\it jellium} which consists of a homogeneous electron gas. The metal ions are replaced by a constant positive background density. The first order approximation for the change in energy due to embedding an atom in jellium at position $\mathbf{r}$ is
108 > In {\sc EMT}, the real material is replaced by {\it jellium} which consists of a homogeneous electron gas. The metal ions are replaced by a constant positive background density. The first order approximation for the change in energy due to embedding an atom in jellium at position $\mathbf{r}$ is
109  
110   \begin{equation}
111          \Delta E(\mathbf{r}) = E_{\mathrm{atom+jellium}}-( E_{\mathrm{atom}}+E_{\mathrm{jellium}})
112          \equiv \Delta E^{\mathrm{embed}}(\rho_0(\mathbf{r}))
113   \end{equation}
114 < where $\Delta E^{\mathrm{embed}}(\rho_0(\mathbf{r}))$ is the {\it embedding energy} of an atom in a homogeneous electron gas with density $\rho$ and $\rho_0(\mathbf{r})$ is the electron density at $\mathbf{r}$. Puska et al. showed that the embedding energy us a universal function of the electron density.\cite{Puska:1981p1755} They found that for the noble gasses, the embedding energy was linear for all values of $\rho$ because the closed electron shell only causes a repulsive interaction. Van der Waals interactions do provide attractive interactions, but these effects are not part of the embedded interaction. For other elements, there is a minimum do to the propensity of these elements to form bonds. Several different potential models for metals share the basic notion of {\sc EMT} but differ in the functional form for the embedding energy.
114 > where $\Delta E^{\mathrm{embed}}(\rho)$ is the {\it embedding energy} of an atom in a homogeneous electron gas with density $\rho$ and $\rho_0(\mathbf{r})$ is the electron density at $\mathbf{r}$. Puska {\it et al.} showed that the embedding energy is a universal function of the electron density.\cite{Puska:1981p1755} They found that for the noble gases, the embedding energy was linear for all values of $\rho$ because the closed electron shell only causes a repulsive interaction. Van der Waals interactions do provide attractive interactions, but these effects are not part of the embedding interaction. For other elements, there is a minimum in the potential due to the propensity of these elements to form bonds. Several different potential models for metals share the basic notion of {\sc EMT} but differ in the functional form for the embedding energy.
115  
116   %TODO paragraph on Egelstaff metal potential, first minimum much deeper than 1/r^6, second minimum due to ion-ion pair interactions.
117 < \subsection{\label{introSec:eam}EMBEDDED ATOM METHOD}
117 > \subsection{\label{introSec:eam}Embedded Atom Method}
118  
119 < In section \ref{introSec:EmpiricalEfncs}, the pair potential approximation was introduced as a computationally expedient method for calculating the interaction energy for a system of molecules. However, it was pointed out that the effective pair approximation was inadequate for describing interactions in metallic solids and liquids. An example of the inadequacy of this approximation for metallic systems is presented by Daw et al. involving differences in elastic constants for cubic lattice solids composed of noble gasses and fcc metals.\cite{DAW:1993p1640} It is known for a solid described by a purely pairwise interaction that the elastic constants $C_{12}$ and $C_{44}$ are equal and is known as the Chauchy relation. We can also define a Chauchy pressure as the difference between the two elastic constants, $P_c=C_{12}-C_{44}$.\cite{Kittel:1996fk} Thus, the ideal ratio of $C_{12}:C_{44}$ is $1$. Both Ar, and Kr have ratios very close to $1$, but fcc metals have ratios closer to 2. Both Pt, and Au have ratios that are between 3 and 4. This can be seen in Table \ref{tab:manybod} reproduced from Table 1 in Daw et al. Also included in this Table \ref{tab:manybod} is the ratio of vacancy formation energy ($E^f_v$) to cohesive energy($E_{\mathrm{coh}}$). In a solid with purely pairwise interactions, this ratio is once again $1.0$. The metals have values closer to $0.35$ with Au and Pt again as outliers significantly deviating from pairwise bonding. The last indicator presented in Table \ref{tab:manybod} is that of the ratio between cohesive energy and melting temperature. Once again, FCC metals differ significantly from a strictly pair-wise interaction model.
119 > In Section \ref{introSec:EmpiricalEfncs}, the pair potential approximation was introduced as a computationally expedient method for calculating the interaction energy for a system of molecules. However, it was pointed out that the effective pair approximation was inadequate for describing interactions in metallic solids and liquids. An example of the inadequacy of this approximation for metallic systems is presented by Daw {\it et al.} involving differences in elastic constants for cubic lattice solids composed of noble gases and fcc metals.\cite{DAW:1993p1640} It is known for a solid described by a purely pairwise interaction that the elastic stiffness constants $C_{12}$ and $C_{44}$ are equal. This is known as the Cauchy relation. We can also define a Cauchy pressure as the difference between the two elastic constants, $P_c=C_{12}-C_{44}$.\cite{Kittel:1996fk} For a purely pairwise solid, the ratio $C_{12}/C_{44}$ is therefore exactly $1$ (and $P_c=0$). Both Ar and Kr have ratios very close to $1$, but fcc metals have ratios closer to 2. Pt and Au have ratios that are between 3 and 4. This can be seen in Table \ref{tab:manybod} (which is reproduced from Table 1 in Daw {\it et al.}). Also included in Table \ref{tab:manybod} is the ratio of vacancy formation energy ($E^f_v$) to cohesive energy ($E_{\mathrm{coh}}$). In a solid with purely pairwise interactions, this ratio is once again $1.0$. The metals have values closer to $0.35$, with Au and Pt again as outliers deviating significantly from pairwise bonding. The last indicator presented in Table \ref{tab:manybod} is the ratio between cohesive energy and melting temperature. Once again, fcc metals differ significantly from a strictly pairwise interaction model.
120  
121 < \begin{table}
122 <        \caption{ DEVIATION FROM PAIRWISE BEHAVIOR IN FCC METALS COMPARED WITH NON-METALLIC SYSTEMS. REPRODUCED FROM REFERENCE \cite{DAW:1993p1640} TABLE 1.}
121 > \begin{table}[tpb]
122 >        \setlength{\capwidth}{0.7\textwidth}
123 >        \begin{center}
124 >        
125 > \begin{minipage}{\textwidth}
126 >        \renewcommand{\thefootnote}{\thempfootnote}
127 >        \caption[DEVIATION FROM PAIRWISE BEHAVIOR IN FCC METALS COMPARED WITH NON-METALLIC SYSTEMS]{ DEVIATION FROM PAIRWISE BEHAVIOR IN FCC METALS COMPARED WITH NON-METALLIC SYSTEMS\footnote{Reproduced from Daw {\it et al.} \cite{DAW:1993p1640} Table 1.}}
128          \centering
129   \begin{tabular}{cccc}
130   \toprule
# Line 119 | Line 135 | LJ & 1.0 & 1.00 & 13\\
135   %\hline
136   LJ & 1.0 & 1.00 & 13\\
137   \hline
138 < Rare Gasses &&&\\
138 > Rare Gases &&&\\
139   %\hline
140   Ar & 1.1 & 0.95 & 11\\
141   %\hline
# Line 140 | Line 156 | Au & 3.7 & 0.23 & 34\\
156   Au & 3.7 & 0.23 & 34\\
157   \bottomrule
158   \end{tabular}
159 + \renewcommand{\footnoterule}{}
160 + \end{minipage}
161   \label{tab:manybod}
162 +
163 + \end{center}
164   \end{table}
165  
166 < Accurate description of metallic bonding requires the consideration of coordination-dependent bonding but at the same time must be computationally efficient to implement. The Embedded Atom Method ({\sc EAM}) of Daw and Baskes uses the idea as in Equation \ref{eq:metpseudo} of separating metallic interactions into those do to ion-pair interactions and those do to embedding an ion in an electron gas.\cite{Daw84,DAW:1983ht} Density Functional Theory is a useful starting point to determine the embedding energy since one can prove using this theory that the energy of a system of atoms can be written as a functional of the electronic density.\cite{Hohenberg:1964bs} Using density-functional theory as a starting point, they derived an approximate expression for the cohesive energy in a metallic system. The density-functional expression for the cohesive energy of a solid is given by
166 > An accurate description of metallic bonding must account for coordination-dependent bond strengths while remaining computationally efficient. The Embedded Atom Method ({\sc EAM}) of Daw and Baskes uses the idea of Equation \ref{eq:metpseudo} of separating metallic interactions into those due to ion-pair interactions and those due to embedding an ion in an electron gas.\cite{Daw84,DAW:1983ht} Density Functional Theory is a useful starting point for determining the embedding energy, since one can prove using this theory that the ground state energy of a system of atoms can be written as a unique functional of the electronic density.\cite{Hohenberg:1964bs} Using density-functional theory as a starting point, Daw and Baskes derived an approximate expression for the cohesive energy in a metallic system. The density-functional expression for the cohesive energy of a solid is given by
167  
168   \begin{equation}
169 <        E_{\mathrm{coh}} = G[\rho] + \frac{1}{2} \sum_{i,j,i\neq j} \frac{Z_i Z_j}{R_{ij}} - \sum_{i} \int \frac{Z_i \rho(\mathbf{r})}{|\mathbf{r}-\mathbf{R}_i|}\mathrm{d}\mathbf{r} + \frac{1}{2} \int\int \frac{\rho(\mathbf{r}_1)\rho(\mathbf{r}_2)}{r_{12}} \mathrm{d}\mathbf{r}_1 \mathrm{d}\mathbf{r}_2 - E_{\mathrm{atoms}}
169 >        E_{\mathrm{coh}} = F[\rho] + \frac{1}{2} \sum_{i,j,i\neq j} \frac{Z_i Z_j}{R_{ij}} - \sum_{i} \int \frac{Z_i \rho(\mathbf{r})}{|\mathbf{r}-\mathbf{R}_i|}\mathrm{d}\mathbf{r} + \frac{1}{2} \int\int \frac{\rho(\mathbf{r}_1)\rho(\mathbf{r}_2)}{r_{12}} \mathrm{d}\mathbf{r}_1 \mathrm{d}\mathbf{r}_2 - E_{\mathrm{atoms}}
170          \label{eq:eamdensfunc}
171   \end{equation}
172 < where i and j represent the nuclei of the solid and $\mathbf{R}_i$ represents the charge and position of the $i$th nucleus. The collective energy of the isolated atoms is given by $E_{\mathrm{atoms}}$ and $G[\rho]$ is the kinetic, exchange, and correlation energy functional.
172 > where $i$ and $j$ index the nuclei of the solid, $Z_i$ and $\mathbf{R}_i$ represent the charge and position of the $i$th nucleus, and $\rho$ is the electron density. The collective energy of the isolated atoms is given by $E_{\mathrm{atoms}}$, and $F[\rho]$ is the kinetic, exchange, and correlation energy functional.
173  
174 < If we assume that $G[\rho]$ can be described by
174 > If we assume that $F[\rho]$ can be described by
175   \begin{equation}
176 <        G[\rho] = \int g(\rho(\mathbf{r}),\nabla\rho(\mathbf{r}),\nabla^{2}\rho(\mathbf{r}),\ldots) \mathrm{d}\mathbf{r}
176 >        F[\rho] = \int F(\rho(\mathbf{r}),\nabla\rho(\mathbf{r}),\nabla^{2}\rho(\mathbf{r}),\ldots) \mathrm{d}\mathbf{r}
177   \end{equation}  
178 < where $g$ is the density of the solid and is described by the local electron density. This assumption has been justified by studies of response functions in nearly free-electron gasses. If we further assume that electron density in a solid can be described as a linear superposition of the electron density due to individual atoms such that $\rho_s(\mathbf{r}) = \sum_{i}\rho_{i}(\mathbf{r}-\mathbf{R})$. Lastly, we define an embedding energy for placing an atom
159 < $i$ in an constant density electron gas to be
160 < \begin{equation}
161 < G_i(\bar\rho_i) = G[\rho_i^a + \bar\rho_i] - G[\rho_i^a] - G[\bar\rho_i].
162 < \end{equation}
163 < Using the two assumptions and this definition for the embedding energy, Daw and Baskes derived an expression for the cohesive energy in metals as
178 > where the integrand $F$ is an energy density that depends only on the local electron density and its gradients. This assumption has been justified by studies of response functions in nearly free-electron gases. We further assume that the electron density in a solid can be described as a linear superposition of the electron densities of the individual atoms, such that $\rho_s(\mathbf{r}) = \sum_{i}\rho_{i}(\mathbf{r}-\mathbf{R}_i)$. Using these two assumptions, Daw and Baskes\cite{Daw84,DAW:1983ht} derived an expression for the cohesive energy in metals as
179  
180   \begin{equation}
181 <        V  =  \sum_{i} G_{i}\left[\rho_{i}\right] + \sum_{i} \sum_{j \neq i}
181 >        V  =  \sum_{i} F_{i}\left[\rho_{i}\right] + \sum_{i} \sum_{j \neq i}
182          \phi_{ij}( R_{ij}) + E_{\mathrm{err}}
183          \label{introeq:eam}
184   \end{equation}
# Line 174 | Line 189 | where $\rho_{i}$ is given by
189          \label{introeq:eamrho}
190   \end{equation}
191  
192 < The error associated with Equation \ref{introeq:eam} is represented by the term $E_{\mathrm{err}}$ and is a function of the background electron density $\bar\rho_i$. Daw derives an expression for the optimal background density by setting $E_{\mathrm{err}} = 0$.\cite{DAW:1989p1673} It is the embedding function that is the source of the many-body effects in {\sc EAM}. The embedding function is non-linear reflecting saturation in the metallic bond by increasing background electron density. A useful way of viewing the many body effect of $F[\rho]$ is that as an atom makes more bonds, each new bond constructed is weaker then the previous bond. This is reasonable because constructing a new bond increases the total bonding energy but decreases the average energy per bond. The weakening of successive bonds will occur if $F(\rho)$ has positive curvature ($d^2F/d\rho^2 > 0$). Puska et al. demonstrated this from first principle calculations for embedding an atom in a homogenous electron gas.\cite{Puska:1981p1755} The functional forms for the various parameters are plotted in \ref{introfig:eam_all} for the metallic elements that are the focus of this dissertation. Positive curvature in $F[\rho]$ can be seen in the right panel in this Figure.
192 > The error associated with Equation \ref{introeq:eam} is represented by the term $E_{\mathrm{err}}$ and is a function of the background electron density $\bar\rho_i$. Daw derives an expression for the optimal background density by setting $E_{\mathrm{err}} = 0$.\cite{DAW:1989p1673} It is the embedding function that is the source of the many-body effects in {\sc EAM}. The embedding function is non-linear, reflecting the saturation of the metallic bond with increasing background electron density. A useful way of viewing the many-body effect of $F[\rho]$ is that as an atom makes more bonds, each new bond is weaker than the previous one. This is reasonable because constructing a new bond increases the total bonding energy but decreases the average energy per bond. The weakening of successive bonds will occur if $F(\rho)$ has positive curvature ($d^2F/d\rho^2 > 0$). Puska {\it et al.} demonstrated this with first-principles calculations for embedding an atom in a homogeneous electron gas.\cite{Puska:1981p1755} The functional forms for the various parameters are plotted in Figure \ref{introfig:eam_all} for the metallic elements that are the focus of this dissertation. Positive curvature in $F[\rho]$ can be seen in the right panel of this figure.
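To make the structure of Equation \ref{introeq:eam} concrete, a direct (and deliberately simplified) evaluation of the {\sc EAM} energy can be sketched as follows; the function names are placeholders for tabulated {\sc EAM} functions, and the factor of one half simply counts each pair once:
\begin{verbatim}
import numpy as np

def eam_energy(pos, f_density, F_embed, phi_pair):
    """Sketch of an EAM energy evaluation.
       f_density(r): electron density contributed by a neighbor at distance r
       F_embed(rho): embedding energy at background density rho
       phi_pair(r) : screened ion-ion pair interaction"""
    n = len(pos)
    energy = 0.0
    for i in range(n):
        rho_i, pair_i = 0.0, 0.0
        for j in range(n):
            if j == i:
                continue
            r = np.linalg.norm(pos[i] - pos[j])
            rho_i += f_density(r)      # background density at atom i
            pair_i += phi_pair(r)
        energy += F_embed(rho_i) + 0.5 * pair_i   # 1/2 counts each pair once
    return energy
\end{verbatim}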
193  
194   \begin{figure}[htbp]
195          %\centering
196                  \includegraphics[width=\linewidth]{images/eam_all.pdf}
197 <        \caption{FUNCTIONAL FORMS FOR EAM PARAMETERS.}
197 >        \caption{Functional Forms for {\sc EAM} Parameters.}
198          \label{introfig:eam_all}
199   \end{figure}
200  
201   The pair potential in {\sc EAM} is defined to be
202  
203   \begin{equation}
204 <        \phi_{ij}(R) = \left(\frac{1}{4\pi\epsilon}\right)\left(\frac{Z_i^a(R)Z_j^a(R)}{R}
204 >        \phi_{ij}(R) = \left(\frac{1}{4\pi\epsilon_0}\right)\left(\frac{Z_i^a(R)Z_j^b(R)}{R}
205          \right)
206          \label{eq:eamphi}
207   \end{equation}
208 < where the $Z^a(R)$ are effective screened charges of the nuclei corresponding to atom of identity $a$. In the original formulation of {\sc EAM}, the pair-interaction was purely repulsive in nature.\cite{PhysRevB.33.7983} Later formulations of {\sc EAM} include an attractive component in the pair-interaction potential.\cite{Voter:95} $Z(R)$ was then fit to an empirical parameterized form
208 > where $Z^a_i(R)$ is the effective screened charge of nucleus $i$ of atomic species $a$. In the original formulation of {\sc EAM}, the pair interaction was purely repulsive in nature.\cite{PhysRevB.33.7983} Later formulations of {\sc EAM} include an attractive component in the pair interaction potential.\cite{Voter:95} $Z(R)$ was then fit to an empirical parameterized form
209  
210   \begin{equation}
211          Z(R) = Z_0(1+\beta R^\nu)e^{-\alpha R}
212   \end{equation}
213 < where $Z_0$ is given by the number of outer electrons in the atom. The three remaining parameters, $\alpha$, $\beta$,$\nu$ are determined empirically to give the desired screening to $Z(R)$. Likewise, $F[\rho]$ is defined by a universal function that relates the sublimation energy of a metal to the lattice constant.\cite{Rose:1984rw} The universal function
213 > where $Z_0$ is given by the number of outer electrons in the atom. The three remaining parameters, $\alpha$, $\beta$, and $\nu$, are determined empirically to give the desired screening in $Z(R)$. Likewise, $F[\rho]$ can be derived using a universal function that relates the sublimation energy of a metal, $E(a)$, to the lattice constant $a$.\cite{Rose:1984rw} The universal function
214  
215   \begin{equation}
216          E(a)=-E_{\mathrm{sub}}(1+a^*)e^{-a^*}
# Line 211 | Line 226 | $E_{\mathrm{sub}}$ in the first expression corresponds
226   {\sc EAM} is appropriate for metallic systems where directional bonding is not important and thus fails to describe the behavior of elements that are semiconductors or are in the middle of the transition series. A directional form of {\sc EAM} has been developed to account for directional bonding effects.\cite{BASKES:1987p1743,BASKES:1989p1746,BASKES:1992p1735} However, the directional bonding potentials tend to be more computationally expensive because of their more complex functional forms for the electron density.
227  
228  
229 < \subsection{\label{introSec:tightbind}TIGHT-BINDING FORMULATION}
229 > \subsection{\label{introSec:tightbind}Tight-Binding Formulation}
230  
231   Other potentials have been developed that share a similar functional form to {\sc EMT} and therefore to {\sc EAM}.~\cite{Finnis84,Ercolessi88,Chen90,Qi99,Ercolessi02} These are briefly reviewed by Voter and summarized in this section.\cite{Voter:95} Finnis and Sinclair developed an `N-body potential' based in part on the second moment approximation ({\sc SMA}) to tight-binding theory.\cite{Finnis84} In this theory, atomic orbitals interact via a one-electron Hamiltonian. Diagonalizing the system Hamiltonian matrix yields the set of orbital energies that can be populated with electrons. The distribution of energy levels can be represented by a density of states ({\sc DOS}), from which the bond energy can be written as
232   \begin{equation}
233          E_{\mathrm{bond}} = \int_{-\infty}^{E_f} \epsilon n(\epsilon) \mathrm{d}\epsilon
234          \label{introeq:tb_dos}
235   \end{equation}
236 < where $n(\epsilon)$ is the density of states and $E_f$ is the Fermi level energy. Because it is relatively easy to compute Hamiltonian moments on an atom-by-atom basis, one can compute an approximate {\sc DOS} shape from lower moments of the distribution reducing the computational work-load.aAs an example do to Voter, assume a single s orbital basis per atom $i$. The second moment of the {\sc DOS} is then
236 > where $n(\epsilon)$ is the density of states and $E_f$ is the Fermi level energy. Because it is relatively easy to compute Hamiltonian moments on an atom-by-atom basis, one can compute an approximate {\sc DOS} shape from the lower moments of the distribution, reducing the computational workload. As an example, Voter assumes a single $s$-orbital basis function per atom $i$. The second moment of the {\sc DOS} is then
237  
238   \begin{equation}
239 <        \mu_{2i} = \left<\chi_i|H^2|\chi_i\right> = \sum_{j} \left<\chi_i|H^2|\chi_j\right> \left<\chi_j|H^2|\chi_i\right> = \sum_{j} h^2_{ij}.
239 >        \mu_{2i} = \left<\chi_i|H^2|\chi_i\right> = \sum_{j} \left<\chi_i|H|\chi_j\right> \left<\chi_j|H|\chi_i\right> = \sum_{j} h^2_{ij}.
240          \label{introeq:tb_basis}
241   \end{equation}
242 < In Equation \ref{introeq:tb_basis} $\chi_i$ is the basis function on atom $i$ and $h^2_{ij}=\left<\chi_i|H^2|\chi_i\right>$. Note that summation in Equation \ref{introeq:tb_basis} is due to the complete expansion in a given basis functions set. One can show that for any basic shape for the {\sc DOS} on atom $i$, the width of the {\sc DOS} is proportional to $(\mu_{2i})^{1/2}$. Evaluation of Equation \ref{introeq:tb_dos} given $\mu_{2i}$ and setting $\mu_{1i}=0$ gives
242 > In Equation \ref{introeq:tb_basis}, $\chi_i$ is the basis function on atom $i$ and $h_{ij}=\left<\chi_i|H|\chi_j\right>$. Note that the summation in Equation \ref{introeq:tb_basis} comes from inserting a complete set of basis functions. One can show that for any assumed shape of the {\sc DOS} on atom $i$, the width of the {\sc DOS} is proportional to $(\mu_{2i})^{1/2}$. Evaluating Equation \ref{introeq:tb_dos} given $\mu_{2i}$ and setting the first moment of the {\sc DOS}, $\mu_{1i}$, to zero gives
243   \begin{equation}
244          E_{\mathrm{bond,i}} = A(\mu_{2i})^{1/2}
245          \label{introeq:tb_form}
246   \end{equation}
247   where $A$ is a constant that is determined by the shape of the given {\sc DOS} and by the fractional electronic occupation of the orbital.
248  
249 < In the spirit of {\sc EMT}, to obtain an interatomic-potential the tight-binding energy defined in Equation \ref{introeq:tb_form} should be augmented by a pairwise sum to account for ion-ion interactions. Using \ref{introeq:tb_basis} and \ref{introeq:tb_form} we can derive an expression for the metallic potential at atom $i$ do to all other atoms $j$ as
249 > In the spirit of {\sc EMT}, the tight-binding energy defined in Equation \ref{introeq:tb_form} should be augmented by a pairwise sum to account for ion-ion interactions. Using Equations \ref{introeq:tb_basis} and \ref{introeq:tb_form}, we can derive an expression for the metallic potential at atom $i$ due to all other atoms $j$ as
250  
251   \begin{equation}
252          E_{i} = \frac{1}{2}\sum_{j} \phi(r_{ij}) - A\left(\sum_{j} h^2_{ij}(r_{ij})\right)^{1/2}.
# Line 248 | Line 263 | and
263          \label{introeq:eamfrhoequiv}
264   \end{equation}
265  
266 < It should be noted that the negative square-root form for the embedding function has a positive second derivative for all $\rho$ and therefore is constant with the trend in metallic bond energies. This particular embedding function also has a negative slope and therefore is the total source of the cohesive force and must be balanced by a repulsive pair potential. For alloy systems there is a slight difference between methods based on {\sc EAM} and {\sc SMA} do the physical origin of the electron density in their respective functional forms. Terms in the density summation for atom $i$ in the {\sc EAM} formulation depend on the atom type for neighbor $j$, but {\em not} on the identity of atom $i$ itself. The fixed electron density on atom $j$ is independent of the electron density associated with atom $i$. Equation \ref{introeq:tb_basis} presents corresponding {\sc SMA} expression. Here the electron density is based on the square of matrix elements between atom $i$ and atom $j$ and therefore are dependent on the identity of both $i$ and $j$. {\sc SMA} therefore requires knowledge of $h_{ij}(r)$ to construct an alloy pair potential.
266 > It should be noted that the negative square-root form for the embedding function has a positive second derivative for all $\rho$ and therefore is consistent with the trend in metallic bond energies. This particular embedding function also has a negative slope, so it supplies a purely cohesive contribution and must be balanced by a repulsive pair potential. For alloy systems there is a slight difference between methods based on {\sc EAM} and {\sc SMA} due to the physical origin of the electron density in their respective functional forms. Terms in the density summation for atom $i$ in the {\sc EAM} formulation depend on the atom type of neighbor $j$, but {\em not} on the identity of atom $i$ itself. The fixed electron density on atom $j$ is independent of the electron density associated with atom $i$. Equation \ref{introeq:tb_basis} presents the corresponding {\sc SMA} expression. Here the electron density is based on the square of matrix elements between atom $i$ and atom $j$ and is therefore dependent on the identity of both $i$ and $j$. {\sc SMA} therefore requires knowledge of $h_{ij}(r)$ to construct an alloy pair potential.
267  
268  
269   The tight-binding model implemented in {\sc OOPSE} and used in this dissertation for the metallic glass simulations is due to Sutton and Chen ({\sc SC}).\cite{Chen90} The {\sc SC} potential takes a form similar to that of the Finnis-Sinclair potential in Equation \ref{introeq:finnsincair},
# Line 268 | Line 283 | once again accounts  for repulsion between ionic atomi
283          \rho_{i}=\sum_{j\neq i}\left(
284          \frac{\alpha_{ij}}{r_{ij}}\right) ^{m_{ij}}.
285   \end{equation}
286 < $a$ is a length parameter scaling the particle spacing, c is a dimensionless parameter that scales the attractive portion of the potential. $\epsilon$ sets the entire energy scale. $n$, $m$ are integer parameters where $n>m$. The exponential parameters and c are fit to the equilibrium lattice parameter and $\epsilon$ is determined by the cohesive energy. $n$,$m$ can be determined from the Cauchy pressure, $P_c$, such that
286 > Here $\alpha$ is a length parameter that scales the particle spacing, $c$ is a dimensionless parameter that scales the attractive portion of the potential, and $\epsilon$ sets the overall energy scale. $n$ and $m$ are integer parameters with $n>m$. The exponents and $c$ are fit to the equilibrium lattice parameter, while $\epsilon$ is determined by the cohesive energy. $n$ and $m$ can be determined from the Cauchy pressure, $P_c$, such that
287  
288   \begin{equation}
289          n = 3\sqrt{\left(\frac{\Omega B}{E_{\mathrm{coh}}}\right)\left(\frac{B}{2P_c}+1\right)}
# Line 279 | Line 294 | where $\Omega$ once again is the volume per atom.
294   \end{equation}
295   where $\Omega$ once again is the volume per atom.
296  
297 < The {\sc SC} potential was re-parameterized by Kimura et al. to include in addition to experimental lattice parameter, cohesive energy and bulk modulus the phonon frequencies, vacancy formation energy and surface energies.\cite{kimura-quantum} The new parameterization was named Quantum Sutton-Chen ({\sc Q-SC}) because the new parameterization takes into account zero-point energy. The functional form for {\sc SC} and {\sc Q-SC} is computationally advantages because it somewhat resembles the Lennard-Jones potential presented in Section \ref{introSec:LJPot}. Parameters for the various parameterizations of Sutton-Chen  are presented in Table \ref{introtab:qsc}.
297 > The {\sc SC} potential was re-parameterized by Kimura {\it et al.} to fit, in addition to the experimental lattice parameter, cohesive energy, and bulk modulus, the phonon frequencies, vacancy formation energy, and surface energies.\cite{kimura-quantum,PhysRevB.59.3527} The new parameterization was named Quantum Sutton-Chen ({\sc Q-SC}) because it takes into account zero-point energy. The functional forms for {\sc SC} and {\sc Q-SC} are computationally advantageous because they somewhat resemble the Lennard-Jones potential presented in Section \ref{introSec:LJPot}. In this dissertation the {\sc Q-SC} formulation has been utilized for potential energies and forces. Combination rules for the alloy were taken to be the arithmetic average of the atomic parameters, with the exception of $c_i$, since its
298 > value depends only on the identity of the atom where the density
299 > is evaluated.  For the {\sc Q-SC} potential, cutoff distances are
300 > traditionally taken to be $2\alpha_{ij}$, which includes up to the sixth
301 > coordination shell in FCC metals. Parameters for the various versions of Sutton-Chen are presented in Table \ref{introtab:qsc}.
302  
303   \begin{table}
304 <        \caption{ {\sc SC} and {\sc Q-SC } PARAMETERS\cite{kimura-quantum,Chen90} TABLE 1.}
304 >        \begin{minipage}{\textwidth}
305 >                \renewcommand{\thefootnote}{\thempfootnote}
306 >                \caption[{\sc SC} AND {\sc Q-SC} PARAMETERS]{{\sc SC} AND {\sc Q-SC} PARAMETERS\footnote{Reproduced from Kimura {\it et al.},\cite{kimura-quantum} Table 1.}}
307          \centering
308   \begin{tabular}{llccccc}
309          \toprule
310          \toprule
311 < & & n & m & $\epsilon$(eV) & c & $\alpha$ a(\AA)\\
311 > & & $n$ & $m$ & $\epsilon$ (eV) & $c$ & $\alpha$ (\AA)\\
312   \midrule
313   Ni & ({\sc Q-SC}) & 10 & 5 & 7.3767E-3 & 84.745 & 3.5157\\
314  
# Line 322 | Line 343 | Au & ({\sc Q-SC}) & 11 & 8 & 7.8052E-3 & 53.581 & 4.06
343     & ({\sc SC}) & 10 & 8 & 1.2896E-2 & 34.428 & 4.0800\\
344   \bottomrule
345   \end{tabular}
346 + \renewcommand{\footnoterule}{}
347 + \end{minipage}
348   \label{introtab:qsc}
349   \end{table}
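To make the functional form concrete, the following Python sketch evaluates the single-component Sutton-Chen energy of a small cluster using the Ni ({\sc Q-SC}) parameters from Table \ref{introtab:qsc}. It is only an illustration, without periodic boundaries, cutoffs, or the alloy combination rules discussed above, and it is not the {\sc OOPSE} implementation used for the simulations in this work.
\begin{verbatim}
# Illustrative Sutton-Chen energy evaluation for a single-component metal,
# using the Ni (Q-SC) parameters listed in the table above.
import numpy as np

n, m  = 10, 5           # integer exponents, n > m
eps   = 7.3767e-3       # eV, sets the energy scale
c     = 84.745          # dimensionless, scales the density (cohesive) term
alpha = 3.5157          # Angstroms, length scale

def sutton_chen_energy(pos):
    """Total SC energy (eV) of a configuration pos (N x 3, in Angstroms)."""
    energy = 0.0
    N = len(pos)
    for i in range(N):
        pair, rho = 0.0, 0.0
        for j in range(N):
            if j == i:
                continue
            r = np.linalg.norm(pos[i] - pos[j])
            pair += (alpha / r) ** n    # repulsive pair term
            rho  += (alpha / r) ** m    # local electron density rho_i
        energy += eps * (0.5 * pair - c * np.sqrt(rho))
    return energy

# Example: the four atoms of a conventional FCC cell (hypothetical cluster),
# with an approximate Ni lattice constant.
a0 = 3.52
cluster = a0 * np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.0],
                         [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]])
print(sutton_chen_energy(cluster))
\end{verbatim}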
327 \section{\label{introSec:eom}INTEGRATING EQUATIONS OF MOTION}
350  
351 + \section{\label{introSec:eom}Integrating Equations of Motion}
352  
330 Once a potential model governing the interaction between particles has been selected for a given system, the next step is to propagate the system forward in time according to the classical equations of motion. At any particular time, the system is defined by the positions ($\mathbf{q}$), velocities ($\dot{\mathbf{q}}$) (or momenta ($\mathbf{p}$))  and the forces on each particle comprising the system. We can formulate the equations of motion in the Lagrangian form
353  
354 + Once a potential model governing the interaction between particles has been selected, the next step is to propagate the system forward in time according to the classical equations of motion. At any particular time, the system is defined by the positions ($\mathbf{q}$), the velocities ($\dot{\mathbf{q}}$) or momenta ($\mathbf{p}$), and the forces on each particle comprising the system. The dynamics can be formulated in terms of the Lagrangian
355 +
356   \begin{equation}
357   L = T - V,
358   \label{introeq:lagrang}
359   \end{equation}
360 < where $L$ is a function of the total kinetic ($T$) and potential energy ($V$) for each particle in the system. Using variational calculus and Hamilton's principle that the variation of the integral of the Lagrangian with respect to time is zero, we can derive a general set of equations of motion.\cite{Tolman:1938kl,Goldstein:2001uf,Allen87} Mathematically we represent this principle for a system of $f$ degrees of freedom by
360 > where $L$ is the difference between the total kinetic energy ($T$) and the potential energy ($V$). Using variational calculus and Hamilton's principle, which states that the variation of the time integral of the Lagrangian is zero, we can derive a general set of equations of motion.\cite{Tolman:1938kl,Goldstein:2001uf,Allen87} Mathematically, we represent this principle for a system of $f$ degrees of freedom by
361  
362   \begin{equation}
363   \delta\int_{t_1}^{t_2}L(q_1\ldots q_f,\dot{q_1}\ldots \dot{q_f},t)dt = 0.
364   \label{introeq:variatLag}
365   \end{equation}
366 < In terms of positions and velocities, we can rewrite \ref{introeq:variatLag} as
366 > In terms of derivatives with respect to the positions and velocities, we can rewrite Equation \ref{introeq:variatLag} as
367   \begin{equation}
368   \int_{t_1}^{t_2}\sum_{i=1}^{f}\left(
369          \frac{\partial L}{\partial \mathbf{q}_i}\delta \mathbf{q}_i
# Line 347 | Line 371 | In terms of positions and velocities, we can rewrite \
371                  \dot{\mathbf{q}}_i\right)dt = 0.
372                  \label{introeq:variatLag2}
373   \end{equation}
374 < Integrating the term in $\dot{\mathbf{q}}$ we can simplify \ref{introeq:variatLag2} to
374 > Integrating the term in $\dot{\mathbf{q}}$ by parts, we can simplify Equation \ref{introeq:variatLag2} to
375   \begin{equation}
376   \int_{t_1}^{t_2}\sum_{i=1}^{f}\left(
377          \frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{q}}_i}
378          - \frac{\partial L}{\partial \mathbf{q}_i}\right)
379                  \delta {\mathbf{q}}_i dt = 0.
380   \end{equation}
381 < Now, we have achieved a separation variables in \ref{introeq:variatLag2}, we can further separate the contribution from each variable
381 > Now that we have achieved a separation of variables in Equation \ref{introeq:variatLag2}, and because the variations $\delta\mathbf{q}_i$ are independent, each term in the sum must vanish separately:
382   \begin{equation}
383   \frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{q}}_i}
384          - \frac{\partial L}{\partial \mathbf{q}_i} = 0
385                  \quad\quad(i=1,2,\dots,f).
386   \label{introeq:eom1}
387   \end{equation}
388 < Newtons equations of motion can be recovered from the above formulation by
388 > Newton's equations of motion can be recovered from the above formulation.
389   Substituting Equation \ref{introeq:lagrang} into Equation \ref{introeq:eom1} and using the definition of the kinetic energy,
390 < $m\dot{\mathbf{r}}^2/2$, gives
390 > $m\dot{\mathbf{q}}^2/2$, gives
391   \begin{equation}
392 < \frac{d}{dt}(m\dot{\mathbf{r}})+\frac{dV}{d\mathbf{r}}=0,
392 > \frac{d}{dt}\left(m\dot{\mathbf{q}}\right)+\frac{dV}{d\mathbf{q}}=0,
393   \end{equation}
394   or in the form of Newton's famous second law,
395   \begin{equation}
# Line 373 | Line 397 | or in the form of Newton's famous second law,
397   \end{equation}
398  
399  
400 < \subsection{\label{introSec:verletdlm}VERLET AND DLM METHODS OF INTEGRATION}
401 < The evolution of time in Molecular Dynamics requires an integrator that is symplectic and preserves certain properties of phase space during the transformation.\cite{McQuarrie:2000yt,Tolman:1938kl} Given a Hamiltonian that is separable with respect to  $\mathbf{q}$ and ${\mathbf{p}}$
400 > \subsection{\label{introSec:verlet}Verlet Method of Integration}
401 > The evolution of time in Molecular Dynamics requires an integrator that is symplectic, meaning that it preserves the canonical structure of phase space under the flow generated by the Hamiltonian and therefore keeps the error in the energy bounded over long simulation times.\cite{McQuarrie:2000yt,Tolman:1938kl,Goldstein:2001uf} Given a Hamiltonian that is separable with respect to $\mathbf{q}$ and ${\mathbf{p}}$,
402   \begin{equation}
403 <        H(\mathbf{q},\mathbf{p}) = T(\mathbf{p}) + V(\mathbf{p})
403 >        H(\mathbf{q},\mathbf{p}) = T(\mathbf{p}) + V(\mathbf{q})
404   \end{equation}
405   and is described by the phase space coordinates $\mathbf{q}=\{q_\alpha\}$, $\mathbf{p}=\{p_\alpha\}$, $\alpha = 1,\ldots,f$. Our goal is to produce a transformation that preserves the flow in phase space generated by $H$ such that
406   \begin{equation}
# Line 384 | Line 408 | and where we denote time step as $t-t_0 = \delta t$. H
408   \end{equation}
409   and where we denote the time step as $t-t_0 = \delta t$. Hamilton's equations in canonical form are
410   \begin{equation}
411 <        \dot{p} = -\frac{\partial H}{\partial q}
411 >        \dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{q}}
412          \label{eq:cannon1}
413   \end{equation}
414   and
415   \begin{equation}
416 <        \dot{q} = -\frac{\partial H}{\partial p}.
416 >        \dot{\mathbf{q}} = \frac{\partial H}{\partial \mathbf{p}}.
417          \label{eq:cannon2}
418   \end{equation}
419 < Liouville's theorem describes phase space density,$\rho$ at a point of interest changes as a function of time
419 > Liouville's theorem describes how the phase space density, $\rho$, at a point of interest changes as a function of time:
420   \begin{equation}
421 <        \left(\frac{\partial \rho}{\partial t}\right)_{p,q}=-\sum_i \left(\frac{\partial \rho}{\partial q_i}\dot{q_i}+\frac{\partial \rho}{\partial p_i}\dot{p_i}\right).
421 >        \left(\frac{\partial \rho}{\partial t}\right)_{\mathbf{p},\mathbf{q}}=-\sum_i \left(\frac{\partial \rho}{\partial \mathbf{q}_i}\dot{\mathbf{q}_i}+\frac{\partial \rho}{\partial \mathbf{p}_i}\dot{\mathbf{p}_i}\right).
422          \label{eq:liouville}
423   \end{equation}
424   Substituting Equations \ref{eq:cannon1} and \ref{eq:cannon2} into Equation \ref{eq:liouville}, we can rewrite Liouville's theorem as
425   \begin{equation}
426 <        \left(\frac{\partial \rho}{\partial t}\right)_{p,q}=-\sum_i \left(\frac{\partial \rho}{\partial q_i}\frac{\partial H}{\partial p_i}
427 <        +\frac{\partial \rho}{\partial p_i}\frac{\partial H}{\partial q_i}\right).
426 >        \left(\frac{\partial \rho}{\partial t}\right)_{\mathbf{p},\mathbf{q}}=-\sum_i \left(\frac{\partial \rho}{\partial \mathbf{q}_i}\frac{\partial H}{\partial \mathbf{p}_i}
427 >        -\frac{\partial \rho}{\partial \mathbf{p}_i}\frac{\partial H}{\partial \mathbf{q}_i}\right).
428          \label{eq:liouville2}
429   \end{equation}
430   If we define the Poisson bracket for two quantities $M$,$N$ as
431   \begin{equation}
432 <        \{M,N\}= \sum_i \left(\frac{\partial M}{\partial q_i}\frac{\partial N}{\partial p_i}
433 <        -\frac{\partial N}{\partial q_i}\frac{\partial M}{\partial p_i}\right).
432 >        \{M,N\}= \sum_i \left(\frac{\partial M}{\partial \mathbf{q}_i}\frac{\partial N}{\partial \mathbf{p}_i}
433 >        -\frac{\partial N}{\partial \mathbf{q}_i}\frac{\partial M}{\partial \mathbf{p}_i}\right).
434          \label{eq:poisson}
435   \end{equation}
436   Equation \ref{eq:liouville2} is then
437   \begin{equation}
438 <        \left(\frac{\partial \rho}{\partial t}\right)_{q,p}=-\{\rho,H\}.
438 >        \left(\frac{\partial \rho}{\partial t}\right)_{\mathbf{q},\mathbf{p}}=-\{\rho,H\}.
439          \label{eq:liouville3}
440   \end{equation}
441   Using the Liouville operator, $L$, we can write the solution to Equation \ref{eq:liouville3} as
442   \begin{equation}
443 <        \rho(\mathbf{\dot{q}},\mathbf{q},t) = e^{-iLt}\rho(\mathbf{\dot{q}},\mathbf{q},0).
443 >        \rho(\dot{\mathbf{q}},\mathbf{q},t) = U(-t)\,\rho(\dot{\mathbf{q}},\mathbf{q},0)
444   \end{equation}
445 + where
446 + \begin{equation}
447 +        U(t) = e^{iLt}
448 +        \label{eq:propagator}
449 + \end{equation}
450 + is the {\em classical propagator}. Note that the density itself evolves as $\rho(t)=e^{-iLt}\rho(0)$, while functions of the phase space coordinates $\mathbf{q}$ and $\mathbf{p}$ evolve forward in time under $U(t)=e^{iLt}$. In relation to the Liouville equation,
451 + \begin{equation}
452 +        iL = \{\cdots,H\}=\left[\frac{\mathbf{p}}{m}\frac{\partial}{\partial \mathbf{q}} + F(\mathbf{q})\frac{\partial}{\partial \mathbf{p}}\right].
453 + \end{equation}
454 + In numerical simulations $t$ is broken into discrete time steps $\Delta t$. To generate a discrete form for Equation \ref{eq:propagator}, we perform an operator expansion
455 + \begin{equation}
456 +        e^{iLt}=[e^{iL\Delta t}]^P,
457 +        \label{eq:prop_expans1}
458 + \end{equation}
459 + and apply a symmetric Trotter factorization to each short-time propagator,
460 + \begin{equation}
461 +        e^{iL\Delta t}=e^{iL_1(\Delta t/2)}e^{iL_2(\Delta t)}e^{iL_1(\Delta t/2)}+O(\Delta t^3)
462 +        \label{eq:prop_expans3}
463 + \end{equation}
464 + where the operator $iL = iL_1+iL_2$ and $\Delta t=t/P$. A Verlet-type integrator can be generated by substituting $iL_1=F(\mathbf{q})\partial/\partial \mathbf{p}$ and $iL_2=(\mathbf{p}/m)\partial/\partial \mathbf{q}$ and evaluating the evolution operator.\cite{swope:637,Verlet67,Allen87,Leach01,tuckerman:2278} The resultant update for the positions is
465 + \begin{equation}
466 +        \mathbf{q}(\Delta t)=\mathbf{q}(0) + \frac{\Delta t}{m} \mathbf{p}(0) + \frac{\Delta t^2}{2m}F[\mathbf{q}(0)],
467 + \end{equation}
468 + and momentum
469 + \begin{equation}
470 +        \mathbf{p}(\Delta t) = \mathbf{p}(0)+\frac{\Delta t}{2}(F[\mathbf{q}(0)]+F[\mathbf{q}(\Delta t)]).
471 + \end{equation}
472 + These updates constitute the velocity Verlet algorithm, which was designed to improve the error associated with the calculation of the velocities relative to the original Verlet scheme and therefore improve overall energy conservation. In practice, velocity Verlet is implemented as a half time step propagation of the momenta
473 + \begin{equation}
474 + \mathbf{p}\left(t+\frac{1}{2}\Delta t\right) = \mathbf{p}(t)
475 +                                        + \frac{1}{2}\Delta t\, m \mathbf{a}(t),
476 + \end{equation}
477 + and a full time step propagation of the positions
478 + \begin{equation}
479 + \mathbf{q}(t+\Delta t) = \mathbf{q}(t) + \frac{\Delta t}{m}\mathbf{p}(t)
480 +                                + \frac{1}{2}\Delta t^2\mathbf{a}(t).
481 + \end{equation}
482 + In many algorithms, this pair of updates is referred to as {\tt moveA}. Next, the forces are calculated at the new positions, and the momenta are propagated to the full time step in {\tt moveB}:
483 + \begin{equation}
484 + \mathbf{p}(t+\Delta t) = \mathbf{p}\left(t+\frac{1}{2}\Delta t\right)
485 +                                + \frac{m}{2}\Delta t\mathbf{a}(t+\Delta t).
486 + \end{equation}
487 + Velocity Verlet reduces the error in the calculation of the velocities to $\mathcal{O}(\Delta t^3)$, at the expense of the error associated with the positions, which increases to
488 + $\mathcal{O}(\Delta t^3)$.
489  
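To show how the {\tt moveA} and {\tt moveB} updates above are typically organized in code, the following Python sketch advances a single particle with velocity Verlet. The force routine and the harmonic test force are placeholders introduced only for this example; this is a minimal sketch, not the integrator implementation used in {\sc OOPSE}.
\begin{verbatim}
# Minimal velocity Verlet loop following the moveA / moveB updates above.
import numpy as np

def move_a(q, p, f, m, dt):
    """Half-step momentum update followed by a full-step position update."""
    p = p + 0.5 * dt * f          # p(t + dt/2) = p(t) + (dt/2) F(t)
    q = q + dt * p / m            # q(t + dt) = q(t) + dt p(t + dt/2) / m
    return q, p

def move_b(p, f_new, dt):
    """Complete the momentum update using forces at the new positions."""
    return p + 0.5 * dt * f_new   # p(t + dt) = p(t + dt/2) + (dt/2) F(t + dt)

def forces(q, k=1.0):
    return -k * q                 # placeholder: simple harmonic force

m, dt = 1.0, 0.01
q, p = np.array([1.0]), np.array([0.0])
f = forces(q)
for _ in range(1000):
    q, p = move_a(q, p, f, m, dt)
    f = forces(q)                 # forces are evaluated between moveA and moveB
    p = move_b(p, f, dt)
print(q, p)
\end{verbatim}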
490 < \subsection{LANGEVIN DYNAMICS}
490 > \subsection{\label{introSec:LD}Langevin Dynamics}
491 > Langevin dynamics is a simplified model for the interaction of a particle with a surrounding solvent. This model uses a stochastic differential equation to approximate the effects of the solvent, thus eliminating the need for explicit solvent molecules.\cite{Allen87,Frenkel02,Leach01,BROOKS:1985kx,BROOKS:1983uq,BRUNGER:1984fj,Schlick:2002hc} The Langevin equation of motion is given by
492 > \begin{equation}
493 >        m\ddot{q} = -\mathbf{\nabla}_q V - m\gamma\dot{q}(t) + \mathbf{R}(t)
494 >        \label{introeq:langevin}
495 > \end{equation}
496 > where $-\mathbf{\nabla}_q V$ is the force due to the potential acting on the particle, $m\gamma\dot{q}(t)$ is a frictional force, and $\mathbf{R}(t)$ is a stochastic force. If the potential $V$ is quadratic, or if there is no potential at all, then Equation \ref{introeq:langevin} is known to be exact. For other potentials, the Langevin equation may not be exact. A more general form of the Langevin equation is the {\em Generalized Langevin Equation}
497 > \begin{equation}
498 >        m\ddot{x} = -\frac{\partial U(x)}{\partial x} - \int_0^t \gamma(\tau)\dot{x}(t-\tau)\,\mathrm{d}\tau + \mathbf{R}(t)
499 >        \label{introeq:GLE}
500 > \end{equation}
501 > where $\gamma(t)$ is a friction kernel containing the solvent response to the moving solute and $\mathbf{R}(t)$ is a random fluctuating force. $\mathbf{R}(t)$ is often taken to be a Gaussian random process, so it has a time average of zero,
502 > \begin{equation}
503 >        \left<R(t)\right> = 0
504 > \end{equation}
505 > and its correlation between any two times must satisfy the fluctuation-dissipation theorem:
506 > \begin{equation}
507 >        \left<R(t)R(t')^T\right> = 2 m \gamma k_B T \delta(t-t')
508 > \end{equation}
509 > where $T$ is the temperature, $k_B$ is the Boltzmann constant, and $\delta$ is the Dirac delta function. The strength of the frictional and random forces relative to the inertial forces is determined by the magnitude of $\gamma$. $\gamma$ can be related to the viscosity of an implicit solvent, $\eta$, by Stokes' law,
510 > \begin{equation}
511 >        \gamma = \frac{6\pi\eta a}{m}
512 > \end{equation}
513 > where $a$ is the hydrodynamic radius of the particle and $m$ is the particle's mass.
514  
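The Langevin equation above can be integrated with very simple stochastic schemes. The Python sketch below uses an Euler-Maruyama-style update of Equation \ref{introeq:langevin} for a harmonic potential in reduced units; the parameter values are arbitrary, the discrete random force is scaled so that it satisfies the fluctuation-dissipation relation, and production simulations generally use more sophisticated stochastic integrators than this.
\begin{verbatim}
# Simple Euler-Maruyama-style Langevin integrator (illustrative only).
import numpy as np

kB, T    = 1.0, 1.0       # reduced units
m, gamma = 1.0, 0.5       # mass and friction coefficient (1/time)
k        = 1.0            # harmonic potential V(q) = k q^2 / 2
dt       = 1.0e-3
rng = np.random.default_rng(1)

# Discrete random force: variance 2 m gamma kB T / dt reproduces
# <R(t) R(t')> = 2 m gamma kB T delta(t - t') in the continuum limit.
sigma = np.sqrt(2.0 * m * gamma * kB * T / dt)

q, v = 1.0, 0.0
for _ in range(100000):
    F = -k * q                              # -dV/dq
    R = sigma * rng.normal()                # stochastic force
    a = (F - m * gamma * v + R) / m         # Langevin acceleration
    v += a * dt
    q += v * dt
print(q, v)
\end{verbatim}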
515 < \section{PARALLEL MOLECULAR DYNAMICS}
515 > \section{Parallel Molecular Dynamics}
516 > Although processor power is continually improving, it is still
517 > difficult to simulate systems of more than 10,000 atoms on a single
518 > processor. To facilitate study of larger system sizes or smaller
519 > systems for longer time scales, parallel methods were developed to
520 > allow multiple CPUs to share the simulation workload. Three general
521 > categories of parallel decomposition methods have been developed:
522 > these are the atomic,\cite{Fox88} spatial~\cite{plimpton95} and
523 > force~\cite{Paradyn} decomposition methods.
524 >
525 > The algorithmically simplest of the three methods is atomic decomposition,
526 > where $N$ particles in a simulation are split among $P$ processors for
527 > the duration of the simulation. Computational cost scales as an
528 > optimal $\mathcal{O}(N/P)$ for atomic decomposition. Unfortunately, all
529 > processors must communicate positions and forces with all other
530 > processors at every force evaluation, leading the communication costs
531 > to scale as an unfavorable $\mathcal{O}(N)$, \emph{independent of the
532 > number of processors}. This communication bottleneck led to the
533 > development of spatial and force decomposition methods, in which
534 > communication among processors scales much more favorably. Spatial or
535 > domain decomposition divides the physical spatial domain into 3D boxes
536 > in which each processor is responsible for calculation of forces and
537 > positions of particles located in its box. Particles are reassigned to
538 > different processors as they move through simulation space. To
539 > calculate forces on a given particle, a processor must simply know the
540 > positions of particles within some cutoff radius located on nearby
541 > processors rather than the positions of particles on all
542 > processors. Both communication between processors and computation
543 > scale as $\mathcal{O}(N/P)$ in the spatial method. However, spatial
544 > decomposition adds algorithmic complexity to the simulation code and
545 > is not very efficient for small $N$, since the overall communication
546 > scales as the surface-to-volume ratio, $\mathcal{O}((N/P)^{2/3})$, in
547 > three dimensions.
548 >
549 > The parallelization method used in our codes is the force
550 > decomposition method.\cite{hendrickson:95} Force decomposition assigns
551 > particles to processors based on a block decomposition of the force
552 > matrix. Processors are split into an optimally square grid forming row
553 > and column processor groups. Forces are calculated on particles in a
554 > given row by particles located in that processor's column
555 > assignment. One deviation from the algorithm described by Hendrickson
556 > {\it et al.} is the use of column ordering based on the row indexes,
557 > which eliminates the transpose operation (and the second communication
558 > step it would require) when gathering the final force components.  Force
559 > decomposition is less complex to implement than the spatial method but
560 > still scales computationally as $\mathcal{O}(N/P)$ and scales as
561 > $\mathcal{O}(N/\sqrt{P})$ in communication cost. Plimpton has also
562 > found that force decompositions scale more favorably than spatial
563 > decompositions for systems up to 10,000 atoms and favorably compete
564 > with spatial methods up to 100,000 atoms.\cite{plimpton95}
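The bookkeeping behind the force decomposition can be sketched briefly. In the Python fragment below (illustrative only, with made-up sizes and no actual force evaluation or message passing), processors are arranged on a $\sqrt{P}\times\sqrt{P}$ grid; each processor accumulates forces for its row block of atoms using coordinates from its column block, so only $\mathcal{O}(N/\sqrt{P})$ positions per group need to be communicated, which is the origin of the communication scaling quoted above.
\begin{verbatim}
# Schematic block assignment for force decomposition (no forces, no MPI).
import numpy as np

N, P = 12, 4                        # atoms and processors (P a perfect square)
g = int(np.sqrt(P))                 # processor grid dimension
blocks = np.array_split(np.arange(N), g)   # contiguous atom blocks

for rank in range(P):
    row, col = divmod(rank, g)      # row and column processor groups
    row_atoms = blocks[row]         # atoms whose forces this processor sums
    col_atoms = blocks[col]         # atoms whose positions it must receive
    print(f"rank {rank}: rows {row_atoms.tolist()} x cols {col_atoms.tolist()}")
\end{verbatim}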
