Simple Computational Quantum Mechanics: Basis Functions

In this article, I'll introduce the concept of basis functions, and then we'll apply our new-found knowledge to a classic problem.

In the previous post on this site, I discussed a simple way to discretize space to make the Schrödinger equation solvable on a computer. In that article, the Hamiltonian was constructed simply by expressing it on a discrete grid.

This is a nice method, since it is quite general: you could imagine taking just about any system, one-dimensional or not, and discretizing the space in a similar way. Indeed, some codes work by doing just this (the finite-difference method). However, one unfortunate aspect is the tremendous number of points required to discretize the space. For example, in that article we used N=1000 points. If we had a 3-dimensional space and again used 1000 points on each axis, we would get \( 1000^3 = 10^9 \) points on our grid, which is a tremendously big number. In some sense, discretizing space this way is robust and general, but also a "dumb man's game", since it includes no information about the system under study.

However, if we do know something about the system, we might consider a different method: the use of basis functions. This means expressing the wave function in terms of some other, known functions, so that

\begin{align} \psi (x) = \sum _{i=1}^N c_i \phi _i (x) \end{align}

Here, the \( c_i \) are the parameters to be solved, and the \( \phi \)s are the basis functions. If the functions \( \phi (x) \) can be chosen appropriately so that they model the solution \( \psi (x) \) well, it is intuitively obvious that this results in a much smaller number of parameters to solve for than discretizing the space would. (In the trivial case, if one of our basis functions happens to be the exact solution, so that \( \psi = \phi _i \), we can just set \( c_i = 1 \) and all the other coefficients to zero!)

Suppose we want to use the basis functions and we have a Hamiltonian in mind, and we want to solve the Schrödinger equation (I use the Dirac notation):

\begin{align} \hat{H}|\psi \rangle = E|\psi \rangle \end{align} We now use our basis functions: \begin{align} \hat{H}\sum _{i=1}^N c_i |\phi _i \rangle = E \sum _{i=1}^N c_i |\phi _i \rangle \end{align}

Now, since we have N parameters \( c_1 .. c_N \) to be solved, our Hamiltonian should be a NxN matrix. Say our Hamiltonian is \( \hat{H} = \frac{-1}{2}\frac{d^2}{dx^2} +V(x) \) - how do we find these components in terms of our new basis functions? After all, \( V(x) \) is defined on the real numbers, of which there are infinitely many in any given interval, and we already said we don't want to discretize the space for this purpose!

Let us recall some elementary geometry. Suppose we have a point in the x-y plane, say, the vector (0.5, 0.5). In Euclidean geometry, we typically use the basis vectors (1,0) and (0,1). How do we find the component along the first basis vector? Generally, by the dot product:

\begin{align} v_{basis} \cdot \vec{v} = (1,0) \cdot (0.5, 0.5) = 1\cdot 0.5 + 0\cdot 0.5 = 0.5. \end{align}
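The same projection can be written as a quick check in code:

```python
import numpy as np

# The projection above, as a one-liner in NumPy
v_basis = np.array([1.0, 0.0])
v = np.array([0.5, 0.5])
projection = np.dot(v_basis, v)   # 0.5
```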

This is called a "projection": we get the length of our vector along the basis vector. Now, we may apply a similar notion to our quantum mechanics problem. In our case, the "basis vectors" are our basis functions, which are of course infinite-dimensional, as they are defined on the real numbers (so you need to specify their value at an infinite number of points!). Thus, our projection won't be a simple sum of components: instead, we use an integral. So by the projection of \( \psi \) along the basis vector \( \phi _i \), in Dirac notation \( \langle \phi _i | \psi \rangle \), we mean (our basis functions here are real, so no complex conjugate is needed): \begin{align} \langle \phi _i | \psi \rangle = \int _{0}^L \phi _i (x) \psi (x) dx \end{align}
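To make the integral concrete, here is a small sketch that evaluates such projections numerically on a grid, using normalized sine functions on \( [0, L] \) as an example basis; the value of L and the grid size are arbitrary choices of mine:

```python
import numpy as np

# Sketch: projections evaluated numerically on a grid, with normalized
# sine functions on [0, L] as an example basis.
L = 1.0
x = np.linspace(0, L, 2000)
dx = x[1] - x[0]

def phi(n, x):
    # normalized sine basis functions on [0, L]
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# <phi_1|phi_1> should come out close to 1, <phi_1|phi_2> close to 0
overlap_11 = np.sum(phi(1, x) * phi(1, x)) * dx
overlap_12 = np.sum(phi(1, x) * phi(2, x)) * dx
```

This also demonstrates orthonormality of the sine basis, which we will rely on below.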

Great! What does this mean for our Schrödinger equation? Well, let's suppose our basis functions are orthonormal, so that \( \langle \phi _i | \phi _j \rangle = \delta _{ij} \) (zero for \( i\neq j \), one for \( i=j \)). Then we can multiply the Schrödinger equation from the left with \( \langle \phi _j| \): \begin{align} \langle \phi _j |\hat{H}\sum_{i=1}^N c_i|\phi _i \rangle &= E c_j \\ \sum_{i=1}^N c_i \langle \phi _j |\hat{H}|\phi _i \rangle &= E c_j\\ \langle \phi _j | \sum_{i=1}^N c_i|\phi _i '\rangle &= E c_j \end{align}

On the last line, I let the Hamiltonian operate on each basis vector (just as a matrix acts on a vector), producing a different vector \( \phi _i ' \). I did this to make obvious what we should do - calculate projections of those vectors along the different basis vectors \( |\phi _j \rangle\) ! That is, \begin{align} H_{ji} &=\langle \phi _j |\phi _i '\rangle\\ &= \int _0^L \phi _j (x) \phi _i'(x) dx\\ &= \int _0^L \phi _j (x) \hat{H} \phi_i (x) dx\\ &= \int _0^L \phi _j (x)\bigg[ \frac{-1}{2}\frac{d^2}{dx^2}+V(x) \bigg]\phi _i(x)dx \end{align}

Now, our problem has been transformed into the form

\begin{align} \sum _{i=1}^N H_{ji}c_i = Ec_j \end{align}

which is again an eigenvalue problem. So, to solve our Schrödinger equation, we need to first calculate the matrix elements in our chosen basis, and then find the eigenvectors and eigenvalues of that matrix -- just like in the last post!
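Once the matrix elements are known, the diagonalization step is a single library call. A minimal sketch, with a small made-up symmetric matrix standing in for a real Hamiltonian:

```python
import numpy as np

# Minimal sketch: once the H_ji are known, this is a standard matrix
# eigenvalue problem. The 3x3 matrix here is made up for illustration.
H = np.array([[1.0, 0.1, 0.0],
              [0.1, 2.0, 0.1],
              [0.0, 0.1, 3.0]])

# eigh handles symmetric/Hermitian matrices; eigenvalues come out ascending
E, C = np.linalg.eigh(H)
# E[k] is the k-th energy; the column C[:, k] holds the coefficients c_i of state k
```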

We will now apply our new approach to a real classic of physics education, the Kronig-Penney model. The Kronig-Penney model is meant to act as a one-dimensional toy model for a solid. Solids have a repeating structure, and thus a periodically repeating Hamiltonian. The model captures this with a repeating constant potential: the potential is \( V(x) = V_0 \) in some regularly repeating intervals, and 0 everywhere else. See this picture on Wikipedia if you don't know what I'm talking about. So the physical point of the Kronig-Penney model is just to see what happens in a repeating structure like a solid.

Rather than dealing with the infinitely repeating Kronig-Penney potential, we will embed, say, 10 repetitions of the KP potential into an infinite square well, so that we can use the solutions of the particle-in-a-box problem as our basis functions. Then our basis functions and their energies are: \begin{align} \psi _n (x)&= \sqrt{\frac{2}{L}}\sin \bigg(\frac{n \pi x}{L} \bigg)\\ E_n &=\frac{n^2\hbar ^2 \pi ^2}{2mL ^2} \end{align}
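These basis functions can be sanity-checked numerically: applying the kinetic-energy operator \( -\frac{1}{2}\frac{d^2}{dx^2} \) (with \( \hbar = m = 1 \)) to \( \psi _n \) should reproduce \( E_n \psi _n \), since the box states are its eigenfunctions. A quick finite-difference sketch, where L and n are arbitrary example choices:

```python
import numpy as np

# Check (hbar = m = 1): -1/2 d^2/dx^2 applied to a box state psi_n
# should give E_n psi_n. L and n are arbitrary example choices.
L = 10.0
n = 3
x = np.linspace(0, L, 5001)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# second derivative by central differences (interior points only)
d2psi = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
kinetic = -0.5 * d2psi

E_n = n**2 * np.pi**2 / (2 * L**2)
max_err = np.max(np.abs(kinetic - E_n * psi[1:-1]))   # should be tiny
```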

Now, for these functions the kinetic energy part (\( -1/2\, d^2/dx^2 \)) of our Hamiltonian just gives us the energy levels of the particle in a box, \( E_n \) (can you see why?). As for the repeating constant potential, the integrals can be performed analytically (and if you want to practice integration, you might want to do it by hand), but I will just give the result (notation lifted from [1]): \begin{align} H _{ji} &= E_j\delta _{ji}+V_0\sum _{r=1}^{10}h_{ji}(x_r,b) \\ h_{ji}(s,b) &= \frac{2}{L}\int _{s-b/2}^{s+b/2}\sin \bigg( \frac{j\pi x}{L} \bigg)\sin\bigg( \frac{i\pi x}{L} \bigg) dx \end{align}

The last integral covers a single barrier, where the potential is not zero: b is the barrier width, s is the barrier center, and summing over all the barrier centers \( x_r \) picks up the whole potential. Evaluating the integral gives the result:

\begin{align} h_{ji}(s,b) &= F_{ji}(s+b/2)-F_{ji}(s-b/2)\\ F_{ii}(x) &= \frac{x}{L}-\frac{\sin (2\pi i x/L)}{2\pi i}\\ F_{ji}(x) &= \frac{\sin [(j-i)\pi x/L]}{\pi (j-i)} - \frac{\sin [(i+j)\pi x/L]}{\pi (j+i)}, \quad i\neq j \end{align}
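The formulas above translate almost line-for-line into code. Here's a sketch that builds the Hamiltonian matrix and diagonalizes it; the values of L, b, \( V_0 \), the number of barriers, and the basis size are illustrative choices of mine, not necessarily those used in the linked code:

```python
import numpy as np

# Sketch of the matrix elements above (hbar = m = 1). L, b, V0, the
# number of barriers and the basis size are illustrative choices.
L = 10.0          # well width
b = 0.3           # barrier width
V0 = 5.0          # barrier height
nbar = 10         # number of barriers
nbasis = 50       # number of basis functions
xr = (np.arange(nbar) + 0.5) * L / nbar   # barrier centers, evenly spaced

def F(j, i, x):
    # antiderivative of (2/L) sin(j pi x / L) sin(i pi x / L)
    if i == j:
        return x / L - np.sin(2 * np.pi * i * x / L) / (2 * np.pi * i)
    return (np.sin((j - i) * np.pi * x / L) / (np.pi * (j - i))
            - np.sin((j + i) * np.pi * x / L) / (np.pi * (j + i)))

def h(j, i, s):
    # contribution of a single barrier centered at s
    return F(j, i, s + b / 2) - F(j, i, s - b / 2)

H = np.zeros((nbasis, nbasis))
for j in range(1, nbasis + 1):
    for i in range(1, nbasis + 1):
        H[j - 1, i - 1] = V0 * sum(h(j, i, s) for s in xr)
    H[j - 1, j - 1] += j**2 * np.pi**2 / (2 * L**2)   # E_j on the diagonal

E, C = np.linalg.eigh(H)   # energies and coefficient vectors
```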

We are now ready to implement a code to calculate these matrix elements. Here's a link to mine. Once again, even though the explanation is lengthy, we end up with a code that is under 50 lines long. Here's the result from running that code -- the band diagrams (energy as a function of wave number \( k_n=n\pi /L \)):

This illustrates why the Kronig-Penney model is a favorite teaching tool for physics classes: we can see the formation of band gaps, energy ranges where no states exist and thus no particles may be placed. This phenomenon also exists in real 3-dimensional solids, and the existence of these gaps is the reason why some solids are insulators and others are conductors: materials with large band gaps don't contribute electrons to the flow of current as readily, and are thus insulators.

Usually, the KP model is solved analytically, since an analytical solution exists. However, now that we have a numerical solution, I once again invite the reader to play around with it by changing the potential. If you want a different potential, you can also do the integrals numerically. The easiest way is to first create a discrete x-axis:

import numpy as np
# x goes from 0 to 10 with 1000 linearly spaced points
x = np.linspace(0, 10, 1000)

Then create our potential, say, \( x^2 \):

 V = x**2

And now integrate by summing multiplied by the step size on the grid:

dx = x[1] - x[0]
integral = np.sum(V) * dx
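Putting these pieces together, here is a sketch of the full recipe for an arbitrary potential: compute the potential matrix elements by grid integration, add the box energies on the diagonal, and diagonalize. The choices of L, grid size, basis size, and \( V(x) = x^2 \) are just examples:

```python
import numpy as np

# Sketch: matrix elements for an arbitrary potential by numerical
# integration on a grid (hbar = m = 1). V(x) = x**2 is an example choice;
# the kinetic part is diagonal in the particle-in-a-box basis.
L = 10.0
nbasis = 30
x = np.linspace(0, L, 2000)
dx = x[1] - x[0]
V = x**2

def phi(n):
    # particle-in-a-box basis functions
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

H = np.zeros((nbasis, nbasis))
for j in range(1, nbasis + 1):
    for i in range(1, nbasis + 1):
        # <phi_j | V | phi_i>, integrated as sum times step size
        H[j - 1, i - 1] = np.sum(phi(j) * V * phi(i)) * dx
    # kinetic part: the box energies E_j on the diagonal
    H[j - 1, j - 1] += j**2 * np.pi**2 / (2 * L**2)

E = np.linalg.eigvalsh(H)   # energy levels for this potential
```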

If you want some ideas for what to try, see this link (ref 1 below).

As a final note, I would like to mention that the basic idea behind this technique (using basis functions) is not just a toy, even though we applied it to a simple problem. Even professionally-made quantum chemistry packages typically use basis functions. The functions are typically chosen so that they are already "close" to the real solution of the problem, so that you can survive with relatively few basis functions and a smaller Hamiltonian. Here's a wiki link to some relevant information.

See the main site for links on content, RSS feed, etc. There's a Twitter account, if you want to follow that.

[1] This link contains some ideas for trying out your new KP code.

[2] Here's a link to some more info about basis changes in QM.