Niels here! I will try to answer the question “how do we use computers to do theoretical chemistry calculations?”, first by motivating why we use computers for chemistry calculations at all, and then by describing some specific methods.
The motivation for using computers in theoretical chemistry is simple: almost no quantum many-body problem can be solved analytically, so we must instead rely on numerical calculations to obtain the properties we care about.
Examples of properties of interest include the energy of a system, charge density distributions, vibrational frequencies, dipole moments, reactivity, and so on.
Computational chemistry boils down to describing the interaction between different atoms or molecules in a way that accurately approximates the proper quantum mechanical theory, while not being so demanding in terms of computer resources that computation times become prohibitive. For example, highly accurate methods are only feasible for smaller systems, or even single molecules, as the computer resources needed grow quickly with the size of the system.
The trade-off between resources needed and accuracy of results means there exists a wealth of different computational chemistry methods, with vastly different levels of theory used. All these methods have different use cases depending on the system size, and can broadly be put into two categories: empirical/semi-empirical methods, which use empirical parameters in addition to quantum mechanical theory, and ab initio methods, which are based solely on quantum mechanics and physical constants. I will describe some of the most popular methods belonging to each category.
Density functional theory
Perhaps the most widely used method for high accuracy calculations is density functional theory (DFT). DFT is an ab initio method, and is defined mainly by its use of the electron density n rather than the wavefunction of the electrons \Psi. The many-electron time-independent Schrödinger equation can be written in terms of a kinetic energy \hat{T}, an electron-electron interaction energy \hat{U}, and a potential energy \hat{V} due to the positively charged nuclei:
\hat{H}\Psi = \left[\hat{T}+\hat{V}+\hat{U}\right]\Psi = \left[\sum_{i=1}^N\left(-\frac{\hbar^2}{2m_i}\nabla^2_i\right)+\sum_{i=1}^NV(\bm r_i)+\sum_{i< j}^NU(\bm r_i, \bm r_j)\right]\Psi = E \Psi
DFT maps this many-body problem onto a single-body problem without \hat{U}, by treating the electron density n as the main variable instead of the wavefunction. The expectation value of any observable of interest \hat{O} can be written as a functional of the electron density:
O[n] = \braket{ \Psi[n] | \hat{O} | \Psi[n] }
A functional in this case simply refers to a function of another function. The mathematical theory behind functionals turns out to be very important for the calculations done using DFT, as we will see later. Both T[n] and U[n] are universal functionals that do not depend on the specific system, meaning the only system-specific term in the energy expression of a system is the term containing V[n]:
E[n] = T[n] + U[n] + \int V(\bm r) n(\bm r) \text{d}^3\bm r
To get the ground-state energy, this expression must be minimized. This can be done using results from functional theory and other fancy math, where the most interesting result you end up with is the Kohn-Sham equations, which describe an auxiliary system of non-interacting particles:
\left[-\frac{\hbar^2}{2m}\nabla^2+V_s(\bm r)\right]\varphi_i(\bm r) = \epsilon_i\varphi_i(\bm r)
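To give a feel for what “solving” such an equation means numerically, here is a minimal sketch (assuming NumPy) that discretizes the kinetic term with finite differences on a 1D grid and diagonalizes the resulting Hamiltonian. The harmonic V_s here is just a stand-in placeholder; a real DFT code constructs the effective potential from the density, as described below.

```python
import numpy as np

# Minimal sketch: a 1D Kohn-Sham-like single-particle equation solved by
# diagonalizing the Hamiltonian on a grid (atomic units, hbar = m = 1).
N, L = 200, 10.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Placeholder effective potential (harmonic well); a real code builds V_s[n].
V_s = 0.5 * x**2

# Kinetic energy via the finite-difference second derivative: -1/2 d^2/dx^2
off = np.full(N - 1, 1.0)
T = -0.5 * (np.diag(off, -1) - 2.0 * np.eye(N) + np.diag(off, 1)) / dx**2
H = T + np.diag(V_s)

eps, phi = np.linalg.eigh(H)  # eigenvalues eps_i and orbitals phi_i (columns)
print(eps[:3])  # lowest eigenvalues; close to 0.5, 1.5, 2.5 for this well
```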
Solving these equations yields the orbitals \varphi_i, from which the electron density is constructed as n(\bm r) = \sum_i |\varphi_i(\bm r)|^2. Once the electron density is obtained, all other ground-state observables can be calculated! There is one problem though: the effective potential in the Kohn-Sham equations contains the only unknown in DFT, the exchange-correlation potential V_{\text{XC}}:
V_s(\bm r) = V(\bm r) + \int \frac{n(\bm r')}{|\bm r - \bm r'|}\text{d}^3\bm r' + V_{\text{XC}}[n(\bm r)]
The exchange-correlation potential can take many different forms, depending on the system one is working with. Accurately describing certain physical phenomena requires an appropriate exchange-correlation potential, and finding better forms for it is an ongoing field of study.
The attentive reader might have noticed that the exchange-correlation potential depends on the electron density. But the entire point of the Kohn-Sham equations is to calculate the electron density! The Kohn-Sham equations make no sense if we do not already have an electron density. This is dealt with by solving the Kohn-Sham equations iteratively: one starts with an initial guess for n(\bm r), inserts it into the Kohn-Sham equations to obtain a new electron density, and repeats again and again until the density converges. It is this so-called self-consistent cycle that makes the use of computers mandatory, as all the calculations needed to reach convergence are time-consuming.
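To illustrate, here is a minimal sketch of such a self-consistent cycle with simple linear density mixing. The solve_kohn_sham function is a stand-in placeholder (a real code would build V_s from n, diagonalize the Kohn-Sham Hamiltonian, and sum |\varphi_i|^2 as in the sketch above); the mixing parameter and tolerance are arbitrary illustrative choices.

```python
import numpy as np

def solve_kohn_sham(n):
    # Placeholder for "solve the Kohn-Sham equations and return a new density".
    # This arbitrary map just has a stable fixed point so the loop converges.
    return np.sqrt(np.abs(n) + 1.0)

def scf_loop(n_init, alpha=0.3, tol=1e-8, max_iter=200):
    """Self-consistent field cycle with simple linear density mixing."""
    n = n_init
    for it in range(max_iter):
        n_new = solve_kohn_sham(n)
        if np.max(np.abs(n_new - n)) < tol:    # convergence check
            print(f"converged after {it + 1} iterations")
            return n_new
        n = (1 - alpha) * n + alpha * n_new    # mix old and new densities
    raise RuntimeError("SCF did not converge")

n0 = np.ones(64)        # initial guess for the density on a grid
n_conv = scf_loop(n0)
```

Mixing only a fraction of the new density into the old one is a standard trick to keep the iteration from oscillating; production codes use fancier schemes, but the structure of the loop is the same.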
Force fields
Force fields belong to the empirical/semi-empirical class of methods used in computational chemistry. Force fields use simple functional forms to describe the interaction between different molecules, where the energy expression is usually written in terms of bonded and non-bonded terms. These terms can be broken down even further by expressing them in terms of different energy contributions. An example of the total energy expression for a force field could be:
E_{\text{total}} = E_{\text{non-bonded}} + E_{\text{bonded}} = E_{\text{van der Waals}} + E_{\text{electrostatic}} + E_{\text{bond}} + E_{\text{angle}} + E_{\text{dihedral}}
The expressions for each term are usually quite simple, with parameters that can be fitted to experimental results, or even to theoretical results from more sophisticated methods. For example, E_{\text{bond}} could simply be a quadratic expression resembling Hooke’s law:
E_{\text{bond}} = \frac{k}{2}(r-r_0)^2
A fancier choice could be a Morse potential:
E_{\text{bond}} = D_e\left(1-e^{-a(r-r_e)}\right)^2
Similarly, the non-bonded terms are usually modeled using basic physics. The van der Waals contribution could be computed using a Lennard-Jones potential, and the electrostatic term with Coulomb’s law:
E_{\text{van der Waals}} = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right] \qquad\qquad E_{\text{electrostatic}} = \frac{1}{4\pi\epsilon_0}\frac{q_iq_j}{r_{ij}}
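Putting these pieces together, a toy implementation of the total force-field energy for a handful of atoms could look like the sketch below. All parameter values and the example geometry are made up for illustration; real force fields such as AMBER or CHARMM tabulate parameters per atom and bond type.

```python
import numpy as np

# Illustrative parameters only, not taken from any real force field.
K_BOND, R0 = 450.0, 1.0   # bond force constant and equilibrium length
EPS, SIGMA = 0.15, 3.3    # Lennard-Jones well depth and size parameter
KE = 332.06               # Coulomb prefactor 1/(4*pi*eps0) in mixed units

def dist(a, b):
    return np.linalg.norm(a - b)

def total_energy(coords, charges, bonds, nonbonded_pairs):
    """E_total = E_bonded (harmonic bonds) + E_non-bonded (LJ + Coulomb)."""
    e = 0.0
    for i, j in bonds:                           # E_bond: Hooke's law
        e += 0.5 * K_BOND * (dist(coords[i], coords[j]) - R0) ** 2
    for i, j in nonbonded_pairs:                 # E_vdW + E_electrostatic
        r = dist(coords[i], coords[j])
        sr6 = (SIGMA / r) ** 6
        e += 4 * EPS * (sr6**2 - sr6)            # Lennard-Jones
        e += KE * charges[i] * charges[j] / r    # Coulomb
    return e

coords = np.array([[0.0, 0, 0], [1.1, 0, 0], [4.0, 0, 0]])
charges = [-0.4, 0.2, 0.2]
print(total_energy(coords, charges,
                   bonds=[(0, 1)], nonbonded_pairs=[(0, 2), (1, 2)]))
```

Note that every term is just arithmetic on interatomic distances, which is exactly why force fields are so much cheaper than solving the Kohn-Sham equations.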
Even more terms can be added to the total energy expression to provide a more accurate description of the problem at hand, at the price of increased computational cost. For example, an explicit term for hydrogen bonding could be added.
The benefit of using force fields to describe systems is their relatively cheap computational cost. Compared to DFT, the cost is so much lower that larger systems can be considered, or calculations can be run over longer timescales. This is especially important when the fine details of the electronic structure are less important than the long-time behavior of the molecules, for example when studying proteins or drug molecules.
The downside of using force fields is, of course, that the typically simple energy expressions mean the method cannot describe complex phenomena with the same accuracy as something like density functional theory.
Machine learning potentials
With the steady improvement of machine learning methods throughout the 21st century, machine learning potentials (MLPs) have become a favorable alternative to both DFT and force field methods. Generally, MLPs try to achieve DFT-like accuracy at a much lower computational cost, by training readily available machine learning algorithms on relevant structures and their DFT energies. In other words, the potentials are trained to mimic the behavior of DFT by somehow uncovering patterns in the structures they are supplied. MLPs have the advantage that while training the model might be slow, predicting energies for new structures is extremely fast.
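As a sketch of this workflow, the snippet below fits a kernel ridge regression model (via scikit-learn, as one possible choice of readily available algorithm) to synthetic descriptor/energy pairs standing in for real structures and their DFT energies. Note how the expensive fit happens once, while predictions are cheap.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Synthetic stand-in data: in practice X would hold structure descriptors
# (e.g. symmetry functions) and y the corresponding DFT energies.
X_train = rng.uniform(-1, 1, size=(500, 8))
y_train = np.sum(X_train**2, axis=1)      # fake "DFT energy" surface

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.5)
model.fit(X_train, y_train)               # slow part, done once

X_new = rng.uniform(-1, 1, size=(5, 8))
print(model.predict(X_new))               # fast part, repeated many times
```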
MLPs are hard to categorize as either empirical or ab initio methods, since the classification depends on the data you use for training and the specific model used. If you train a neural network on DFT data, it could be thought of as an ab initio method, as you are using DFT results. On the other hand, training on data obtained from an empirical method means the potential certainly does not belong in the ab initio category.
Clearly, MLPs are a very special case within computational chemistry. This is also very apparent in the considerations you have to make when using an MLP. For example, using the atomic coordinates of the training data directly as the input for your model might be a very bad idea, as these coordinates are not invariant under translation, rotation, and permutation of identical atoms. Usually the coordinates are instead converted into symmetry functions centered on the atoms, which have the proper invariances. Finding the appropriate model for the system you want to consider can also be tough, as many machine learning models act as black boxes. This makes it hard to interpret how and why a model works well for some systems but poorly for others, so finding the right model often involves a lot of trial and error.
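As a concrete example, here is a minimal sketch of a Behler-Parrinello-style radial symmetry function (the eta, r_s, and r_c values are arbitrary choices for illustration). Because it depends only on interatomic distances and sums over neighbors, it is invariant under translation, rotation, and permutation.

```python
import numpy as np

def cutoff(r, r_c):
    """Smooth cutoff so atoms beyond r_c contribute nothing."""
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def radial_symmetry_function(coords, i, eta, r_s, r_c):
    """Radial descriptor for atom i:
    sum over neighbors j of exp(-eta*(r_ij - r_s)^2) * f_c(r_ij)."""
    r_ij = np.linalg.norm(coords - coords[i], axis=1)
    r_ij = np.delete(r_ij, i)   # exclude the atom's zero distance to itself
    return np.sum(np.exp(-eta * (r_ij - r_s) ** 2) * cutoff(r_ij, r_c))

coords = np.array([[0.0, 0, 0], [1.0, 0, 0], [0, 1.5, 0]])
# One descriptor value per atom; a real model uses many (eta, r_s) pairs.
g = [radial_symmetry_function(coords, i, eta=1.0, r_s=0.0, r_c=6.0)
     for i in range(len(coords))]
print(g)
```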
The upsides of MLPs cannot be overstated. Their computational efficiency combined with their high accuracy means complex phenomena for large systems not suitable for DFT can be studied, such as reactions involving large molecules, or molecular dynamics simulations involving many atoms.
Conclusion
Hopefully this post has given you an idea of how computers can be used to do theoretical chemistry calculations, and why this is a very good idea.
The next question is: What is the role of physics in understanding the universe?