The grid approach

From this point on, we move to atomic units, in which \( \hbar = 1 \) and \( m = 1 \). This means, for example, that we measure mass in units of the electron mass ("this body is 200 electron masses") instead of, say, kilograms, and our "quanta" in units of \( \hbar \). We could use SI units as well, but the manipulations are easier this way. In this system, the unit of energy is called the "Hartree" and the unit of length the "Bohr".

We concluded in the last section that the Hamiltonian is

\begin{equation} \hat{H} = -\frac{1}{2}\frac{d^2}{dx^2} + V(x). \end{equation}

I also said this is supposed to be a matrix. Well, what do I mean by that? What do derivatives have to do with matrices? To understand this, we now make an approximation, the same one I made at the end of the last section: we restrict our wave function to a number of \textit{grid points}, so that, instead of being a continuous function (as it really is), it is defined only on a discrete set of points.

In such a discrete world, what is the derivative? Recall the formal definition of the derivative:

\begin{equation} \frac{df}{dx} = \lim _{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}. \end{equation}

In our discrete universe, though, it doesn't make sense for \(h\) to go to zero. After all, the minimum distance two objects can have in our universe is the separation between the grid points, which we'll call \( \Delta x \). For instance, if the length of our system were 10 Bohrs and we divided it into 1000 grid points, that separation would be \( \frac{10}{1000}=0.01 \) Bohrs. Hence, in our discrete universe, the smallest \( h \) can get is \( \Delta x\), and the derivative becomes

\begin{equation} \frac{df}{dx} = \frac{f(x+\Delta x) - f(x)}{\Delta x}. \end{equation}
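To make this concrete, here is a minimal sketch in Python with NumPy (the 1000-point grid, the 10-Bohr length and the test function \( \sin x \) are arbitrary illustrative choices of mine, not anything forced by the discussion above): we sample a function on a grid and take its forward difference.

\begin{verbatim}
import numpy as np

# Sample f(x) = sin(x) on a grid of 1000 points spanning 10 Bohr.
N = 1000
L = 10.0
dx = L / N                    # the grid spacing Delta x
x = np.arange(N) * dx         # the grid points x_1 ... x_f
f = np.sin(x)

# Forward difference (f(x + dx) - f(x)) / dx, defined at all but the last point.
df_approx = (f[1:] - f[:-1]) / dx

# Compare with the exact derivative cos(x); the error is small and shrinks
# further if dx is made smaller.
print(np.max(np.abs(df_approx - np.cos(x[:-1]))))
\end{verbatim}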

Now, how do we cast this derivative into the form of a matrix? First, in our universe, the wave function \( \psi \) will also be discrete, defined only on the grid points, so we may write it as a column vector

\begin{equation} \psi = \begin{bmatrix} \psi (x_1) \\ \psi (x_2) \\ \vdots \\ \psi (x_f) \end{bmatrix} \end{equation}

If we were to cast the first derivative in matrix form, it would need to be a matrix (call it \( A \)) such that, when it operates on this column vector, we get the result

\begin{equation} A\psi = \frac{1}{\Delta x}\begin{bmatrix} \psi (x_2)-\psi (x_1) \\ \psi (x_3)-\psi (x_2) \\ \psi (x_4)-\psi (x_3) \\ \vdots \\ \end{bmatrix} \end{equation}

since in our universe \( \psi(x_i+h) = \psi (x_i+\Delta x) = \psi (x_{i+1} )\). To find such a matrix, we need to remember the multiplication rule for matrices, which, for a matrix \( A \) and a column vector \(\psi \), reads \( (A\psi)_i = \sum _{j=1}^f A_{ij}\psi _j\). So for the first element, we need to have \( (A\psi)_1 = \sum _{j=1}^f A_{1j}\psi_j =\frac{\psi (x_2) - \psi (x_1) }{\Delta x} \). This means that \( A_{11} = -1/\Delta x \) and \( A_{12} = 1/\Delta x \), and the rest of the elements on that row must be zero, since then

\begin{equation} \sum _{j=1}^f A_{1j}\psi_j = A_{11}\psi _1 + A_{12}\psi _2 = A_{11}\psi(x_1 ) + A_{12}\psi (x_2 ) = \frac{1}{\Delta x} \bigg( \psi (x_2 ) - \psi (x_1 ) \bigg). \end{equation}

Using similar reasoning, we deduce the rest of the elements as well, which gives a matrix like

\begin{equation} A = \frac{1}{\Delta x}\begin{bmatrix} -1 & 1 & 0 &\cdots \\ 0 & -1 & 1 & 0 & \cdots \\ 0 & 0 & -1 & 1 & 0 & \cdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots \end{bmatrix}. \label{derivativematrix} \end{equation}
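If you want to verify this numerically, here is a possible sketch in Python with NumPy (the grid parameters and the Gaussian test function are arbitrary choices of mine): it builds \( A \) from its diagonals and checks that it reproduces the forward differences on every row except the last.

\begin{verbatim}
import numpy as np

N = 1000
dx = 10.0 / N
x = np.arange(N) * dx
psi = np.exp(-x**2)           # any smooth test function will do

# First-derivative (forward-difference) matrix: -1 on the diagonal,
# +1 on the superdiagonal, all divided by dx.
A = (np.diag(-np.ones(N)) + np.diag(np.ones(N - 1), k=1)) / dx

# Every row except the last reproduces the forward difference.
print(np.allclose((A @ psi)[:-1], (psi[1:] - psi[:-1]) / dx))   # True
\end{verbatim}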

Now, notice that there is a problem at the edge of our system, specifically at the end point \(x_f\). There (as you can quickly verify with pen and paper!) the approximation to the derivative comes out as \(-\frac{\psi (x_f)}{\Delta x}\), which is not a difference of neighbouring values at all. This is not such a big problem --- after all, it's just one point. Nevertheless, when taking the second derivative, we mitigate it with the following logic. Another way to define our derivative is

\begin{equation} \frac{df}{dx} = \frac{f(x)-f(x-\Delta x)}{\Delta x}, \end{equation}

as this is just the same thing in reverse. Here, instead of having a problem at the last point, we have a problem at the first point. Now, we can construct a matrix for this backward version of the derivative with the same logic that led to eq. \eqref{derivativematrix}. To get the second derivative (the derivative of the derivative), we then multiply these two matrices together, obtaining the second-derivative matrix \(D \):

\begin{equation} D =\frac{1}{(\Delta x) ^2} \begin{bmatrix} -2 & 1 & 0 &\cdots \\ 1 & -2 & 1 & 0 & \cdots \\ 0 & 1 & -2 & 1 & 0 & \cdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots \end{bmatrix}. \label{secondderivativematrix} \end{equation}
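As a sanity check, we can build both first-derivative matrices, multiply them, and compare the result with \( D \). A minimal sketch (grid parameters again arbitrary):

\begin{verbatim}
import numpy as np

N = 1000
dx = 10.0 / N

# Forward- and backward-difference matrices.
A_fwd = (np.diag(-np.ones(N)) + np.diag(np.ones(N - 1), k=1)) / dx
A_bwd = (np.diag(np.ones(N)) + np.diag(-np.ones(N - 1), k=-1)) / dx

# Second-derivative matrix written down directly: -2 on the diagonal,
# +1 on both neighbouring diagonals, divided by dx^2.
D = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), k=1)
     + np.diag(np.ones(N - 1), k=-1)) / dx**2

# The product agrees with D in every row except the very last one,
# which is exactly the small edge error discussed in the text.
print(np.allclose((A_fwd @ A_bwd)[:-1], D[:-1]))   # True
\end{verbatim}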

Hopefully, with this approximation, there is a "little bit" of error at the end point and a "little bit" of error at the beginning, but nothing too severe. Now, since our kinetic energy was \(-\frac{1}{2}\frac{d^2}{dx^2}\), it can be written as

\begin{equation} \hat{T} = -\frac{1}{2}D. \label{kineticenergymatrix} \end{equation}

We can now turn our attention to the potential energy. Let's take the example of a harmonic potential, \(V(x) = x^2\). What does this mean on the grid? It just means that we multiply the wave function at each point, \(\psi (x_i)\), by the value \(x_i ^2\). To perform this multiplication, we can just add these values to the diagonal of the matrix, like this:

\begin{equation} \hat{H} = -\frac{1}{2}D + \begin{bmatrix} x_1 ^2 &0& 0& \cdots \\ 0& x_2 ^2& 0& 0& \cdots \\ 0 & 0 & x_3 ^2 & 0 & 0 & \cdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots \end{bmatrix}. \end{equation}
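We now have the whole Hamiltonian as a matrix, so we can hand it to an eigenvalue solver. Here is a minimal sketch in Python with NumPy (the 1000-point grid on \( [-5, 5] \) Bohrs is an arbitrary but reasonable choice of mine); for \(V(x)=x^2\) the exact oscillator levels are \( \sqrt{2}\,(n+\tfrac{1}{2}) \) Hartree, which the numbers below should approach.

\begin{verbatim}
import numpy as np

# Grid of 1000 points on [-5, 5] Bohr, so the harmonic well sits in the middle.
N = 1000
x = np.linspace(-5.0, 5.0, N)
dx = x[1] - x[0]

# Kinetic energy T = -D/2 and the potential V(x) = x^2 on the diagonal.
D = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), k=1)
     + np.diag(np.ones(N - 1), k=-1)) / dx**2
H = -0.5 * D + np.diag(x**2)

# Lowest eigenvalues; exact values are sqrt(2)*(n + 1/2) = 0.707, 2.121, 3.536...
E = np.linalg.eigvalsh(H)
print(E[:3])
\end{verbatim}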

The potential energy is thus easy; the really difficult part is the kinetic energy, which has off-diagonal elements.

What happens if we solve for the eigenvalues of just the kinetic operator \(\hat{T}\), with zero potential? Is it just a free particle? Perhaps surprisingly, no. If we look at a single row, for some point \(i\), of our equation \(\hat{H}\psi = E \psi\), we get the general form

\begin{equation} E\psi (x_i) = -\frac{1}{2(\Delta x)^2}\bigg( \psi (x_{i+1}) - 2\psi (x_i) + \psi (x_{i-1}) \bigg). \label{formofequation} \end{equation}

What happens at the edge of our grid, when \(i=1\)? Then the term involving \(\psi (x_{i-1})\) is not defined: it vanishes from the equation. But vanishing from the equation is the same as being zero! That means our wave function is forced to zero at the edges of our grid. This system is familiar from every introductory quantum mechanics textbook and is called the "particle in a box". It can be solved analytically with pen and paper, so as we build our first quantum simulation next, we can check our results with certainty, a rare luxury in computational quantum mechanics.
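This is easy to check numerically. The sketch below (again with arbitrary grid parameters) diagonalizes the kinetic matrix alone and compares the lowest levels with the textbook particle-in-a-box result \( E_n = \frac{n^2\pi ^2}{2L^2} \), where \( L \) is the box length (here, the extent of the grid, up to a grid spacing).

\begin{verbatim}
import numpy as np

# Kinetic energy alone on a grid of 1000 points spanning 10 Bohr.
N = 1000
L = 10.0
dx = L / N
D = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), k=1)
     + np.diag(np.ones(N - 1), k=-1)) / dx**2
T = -0.5 * D

# The implicit zeros at the boundary make this a particle in a box.
E = np.linalg.eigvalsh(T)
n = np.arange(1, 4)
print(E[:3])                          # numerical levels
print(n**2 * np.pi**2 / (2 * L**2))   # analytic levels, approximately the same
\end{verbatim}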

What if we want something else to happen at the boundaries, though? Perhaps we want our wave function to keep repeating indefinitely in both directions, for example. In that case, we want our wave function to be the same at the beginning and the end of the grid. In other words, at the first point, we want

\begin{equation} E \psi (x_1) = -\frac{1}{2(\Delta x)^2}\bigg( \psi (x_{2}) - 2\psi (x_1) + \psi (x_{f}) \bigg). \end{equation}

That is, we claim that the missing point \(\psi(x_{0})\) is just \(\psi (x_{f})\), and vice versa at the end point. What sort of a matrix accomplishes this? Writing the kinetic matrix as \(\hat{T} = -\frac{1}{2(\Delta x)^2}A\), where \(A\) now holds just the integer coefficients of eq. \eqref{secondderivativematrix}, we must have, at the first point,

\begin{equation} (\hat{T}\psi )_1 = -\frac{1}{2(\Delta x) ^2}\sum _{j=1}^{f}A_{1j}\psi _j = -\frac{1}{2(\Delta x)^2}\bigg( \psi (x_{2}) - 2\psi (x_1) + \psi (x_{f}) \bigg), \end{equation}

which is true if \(A_{11} = -2\), \(A_{12} = 1\) and \(A_{1f}=1\), by the rules of matrix multiplication. Hence, we place a 1 in the upper right corner of our matrix (where, like everything else, it gets multiplied by the prefactor \(-\frac{1}{2(\Delta x)^2}\)); otherwise the matrix is unchanged. The same logic applies to the end point: this time, the 1 goes in the lower left corner. Such a system is said to obey "periodic boundary conditions" --- periodic because the wave function repeats with a period equal to the length of the system.
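To see what these two corner elements do, we can again diagonalize just the kinetic matrix, now with the periodic wrap-around in place (a sketch with arbitrary grid parameters). The lowest eigenvalue should be essentially zero, since a constant wave function now costs no kinetic energy, and the higher levels should come in degenerate pairs (waves running to the left and to the right).

\begin{verbatim}
import numpy as np

N = 1000
L = 10.0
dx = L / N

# Second-derivative matrix with the periodic corner elements added.
D = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), k=1)
     + np.diag(np.ones(N - 1), k=-1)) / dx**2
D[0, -1] = 1.0 / dx**2
D[-1, 0] = 1.0 / dx**2
T = -0.5 * D

# Lowest levels: approximately 0, then doubly degenerate values.
E = np.linalg.eigvalsh(T)
print(E[:5])
\end{verbatim}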
