6

A rather computational question:

Suppose you have a very simple tight-binding Hamiltonian in matrix form, i.e. for example something like this for a 1D chain with open ends: $$H =\begin{bmatrix} 0 & t & 0 & \dots & 0 \\ t & 0 & t & \dots & 0 \\ 0 & t &\ddots &\ddots &0 \\ \vdots&0&\ddots&0&t\\ 0 & \dots & 0 & t& 0 \end{bmatrix}$$ How do you numerically get this Hamiltonian into $k$-space (to eventually extract its band structure via diagonalization)? I presume it involves a suitable Fourier transform of the matrix, but I can't quite figure out how to do it.

And then, for the more interesting part, how does it work when you have to deal with a 2D system (where the procedure then must depend on the mapping of the 2D system into this matrix form)?

benfisch

2 Answers

8

A Fourier transform, as mentioned in the question, is indeed the transformation that takes the Hamiltonian from real space to momentum space. To achieve this transformation numerically, define a unitary matrix $U$ with elements $$U_{ab} = \exp(2 i \pi x_a k_b) \cdot \textrm{normalization}$$ where $x$ is the finite-length vector of sample points (or lattice sites) in real space and $k$ the corresponding vector of the same length in momentum space. Say you have $N$ $x$-values from $-1/2$ to $1/2$ and $k$-values from $-N/2$ to $N/2$; then the normalization in the above equation is the square root of the constant spacing of the $x$-values (in the context of a tight-binding lattice Hamiltonian this spacing is the lattice spacing).

The transformation of the Hamiltonian $H$ to $k$-space then is given as the matrix product $$U^\dagger \cdot H \cdot U$$ with $U^\dagger$ being the Hermitian conjugate of $U$.

Here is a numerical example in Python:

import numpy as np

#constants
Nhalf = 150
t = 0.1

N = 2*Nhalf + 1

#define real space and momentum space variables
x = np.linspace(-0.5, 0.5, N)
dx = 1./(N-1)
k = np.linspace(-Nhalf, Nhalf, N)

#define Hamiltonian
H = np.zeros((N, N))
for i in range(N-1):
    H[i][i+1] = t
    H[i+1][i] = t
#the following line renders the tight-binding Hamiltonian periodic;
#little overall effect on eigenvalues with or without
#periodic boundary conditions for sufficiently large N
H[N-1][0] = H[0][N-1] = t

#define unitary matrix for Fourier transform
U = np.empty((N, N), complex)
for i in range(N):
    U[i][:] = np.exp(2j*np.pi*x[i]*k)

U = U*dx**0.5  #multiply with sqrt(dx) for normalization

#perform transform
Hkspace = np.dot(np.dot(U.conjugate().T, H), U)

#plot diagonal values
import pylab
pylab.plot(k, 2*abs(t)*np.cos(2*np.pi*k/N), linewidth=7,
           label='analytical solution to eigenvalues or\ndispersion of (infinite) 1D TB-model')
pylab.plot(k, np.diagonal(Hkspace).real, '.',
           label='diagonal elements of $H$ in $k$-space')
pylab.xlabel('$k$')
pylab.ylabel('Energy')
pylab.legend(loc='best')
pylab.show()

The code generates the following figure:

[Figure: comparison of numerical results to the analytical 1D tight-binding model solution]

For the 2D case, define 2D vectors $\vec r$ and $\vec k$ and the unitary matrix for the 2D Fourier transform as: $$U^\textrm{2D}_{ab} = \exp(2 i \pi \vec r_a \cdot \vec k_b) \cdot \textrm{normalization}$$ The indices $a$ and $b$ now account for how the 2D space is traversed. One could e.g. have $$a = \mu + \nu \cdot N_x$$ which corresponds to an inner loop over the $N_x$ $x$-coordinates labeled by $\mu$ and an outer loop over the $y$-coordinates labeled by $\nu$ (enumeration in $k$-space analogously). The Hamiltonian $H$ in real space would correspondingly have to be extended to account for the hopping matrix elements of value $t$ also for neighbors in $y$-direction.
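For concreteness, here is a minimal sketch of this 2D construction, analogous to the 1D code above (assuming periodic boundary conditions in both directions; the lattice sizes `Nx`, `Ny` and the hopping `t` are illustrative choices, not values from the answer):

import numpy as np

#illustrative sizes and hopping
Nx, Ny, t = 21, 21, 0.1
N = Nx * Ny

#real-space coordinates and k-points, as in the 1D code above
x = np.linspace(-0.5, 0.5, Nx)
y = np.linspace(-0.5, 0.5, Ny)
dx, dy = 1./(Nx - 1), 1./(Ny - 1)
kx = np.linspace(-(Nx - 1)/2, (Nx - 1)/2, Nx)
ky = np.linspace(-(Ny - 1)/2, (Ny - 1)/2, Ny)

#flatten the 2D lattice with a = mu + nu*Nx (inner loop over x, outer over y)
r = np.array([[x[mu], y[nu]] for nu in range(Ny) for mu in range(Nx)])
k = np.array([[kx[mu], ky[nu]] for nu in range(Ny) for mu in range(Nx)])

#real-space Hamiltonian: hopping t to nearest neighbours in x and y (periodic)
H = np.zeros((N, N))
for nu in range(Ny):
    for mu in range(Nx):
        a = mu + nu * Nx
        H[a, (mu + 1) % Nx + nu * Nx] = t        #neighbour in x-direction
        H[a, mu + ((nu + 1) % Ny) * Nx] = t      #neighbour in y-direction
H = H + H.T

#2D Fourier matrix U_ab = exp(2*pi*i r_a . k_b), normalized by sqrt(dx*dy)
U = np.exp(2j * np.pi * (r @ k.T)) * np.sqrt(dx * dy)

#transform to k-space; the diagonal approximates the 2D dispersion
Hkspace = U.conj().T @ H @ U

The diagonal of `Hkspace` can then be compared to the 2D tight-binding dispersion in the same way the 1D code compares its result to $2|t|\cos(2\pi k/N)$.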

In any case, one can also very simply diagonalize $H$ numerically, which directly yields a unitary matrix $U$ that diagonalizes $H$:

import numpy as np
e,U = np.linalg.eigh(H)
#note this U matrix is real-valued
#unlike the complex U in the examples above;
#for a real symmetric matrix such as H, one can
#choose real-valued eigenvectors
v-joe
3

With open boundary conditions, you don't. The notion that there is some $k$-space wavefunction $u(k)$ is based on Bloch's theorem, which only works if the Hamiltonian is translation invariant. It is possible to 'cheat' and assume periodic boundary conditions, but ultimately the Hamiltonian is only approximately translation invariant, and $k$ eigenstates do not coincide with energy eigenstates: $k$ is not a good quantum number.

An arbitrary real space Hamiltonian for the 1D chain with $M$ sites, for example, is written with respect to the basis of Fock states $$\mathcal{F} = \{ |n_1, n_2, ... n_M\rangle | n_j \in \{0,1,2, ...\}\} $$ where $n_j$ is the number of (spin 0) bosons on the site $j$.

Note that this basis is infinite. However, matters are simplified by the fact that the Hamiltonian commutes with the total number operator $\hat{N} = \sum_j a^\dagger_j a_j$, meaning that its eigenvalue $N$ is a good quantum number.

It's therefore sufficient to diagonalise within an $N$-eigenspace, i.e. over the subspace $$\left\{ |\mathbf{n} \rangle \in \mathcal{F} \Bigg| \hat{N} |\mathbf{n} \rangle = \sum_{j=1}^M n_j = N \right\}$$ This is the canonical "real space" basis, for example with $M=4, N=1$ you would have a 4D basis $\{|0001\rangle, |0010\rangle, |0100\rangle, |1000\rangle \}$. Higher-$N$ fillings consist of the $N$th tensor powers of this single particle basis.
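As a tiny illustration, one could enumerate this fixed-$N$ basis numerically as follows (the helper name `fock_basis` is purely illustrative, and this brute-force enumeration is only meant for small systems):

from itertools import product

def fock_basis(M, N):
    """All occupation tuples (n_1, ..., n_M) with total particle number N."""
    return [n for n in product(range(N + 1), repeat=M) if sum(n) == N]

#M = 4 sites, N = 1 particle reproduces the 4D basis from the text
print(fock_basis(4, 1))
#[(0, 0, 0, 1), (0, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 0)]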

Assuming periodic boundary conditions (PBC) and the notational convention $a_{M+1} = a_1$, one can show that the (unitary) lattice translation operator $\hat{T}_1 = \sum_{j=1}^M a^\dagger_{j+1}a_j$ commutes with its inverse, and therefore with the Hamiltonian, since the latter can be written as $H = t(\hat{T}_1 + \hat{T}_1^{-1})$. Writing $\hat{T}_n$ to mean $(\hat{T}_1)^n$, where $n\in\mathbb{Z}$, the eigenvalues of $\hat{T}_n$ must have the form $\exp(ik n)$ for a constant real number $k$. Since $\hat{T}_1$ is unitary, its eigenkets are orthogonal; these orthogonal eigenkets are what you're trying to find, since they should render the Hamiltonian diagonal (or at least greatly simplified). This is Bloch's theorem.
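In the single-particle sector this is easy to check numerically; the following sketch (with illustrative values for $M$ and $t$) builds the matrix of $\hat{T}_1$ and verifies that it commutes with the periodic tight-binding Hamiltonian:

import numpy as np

M, t = 8, 0.1   #illustrative values

#single-particle matrix of T1: shifts a particle from site j to site j+1 (mod M)
T1 = np.zeros((M, M))
for j in range(M):
    T1[(j + 1) % M, j] = 1.0

#periodic tight-binding Hamiltonian, H = t*(T1 + T1^(-1)) since T1 is unitary
H = t * (T1 + T1.T)

print(np.allclose(T1 @ H, H @ T1))   #True: [T1, H] = 0 with PBC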

I want to stress that this $k$ quantum number is not in itself defined to be the eigenvalue of a Hermitian operator - this is only possible in non-lattice systems, where you can shrink the translation operator to be infinitesimal, in which case it is precisely momentum.

The only condition on $k$ comes from the aforementioned periodic boundary conditions: since $\hat{T}_{M} = \hat{1}$, we need $e^{ikM} = 1$, meaning that the $M$ distinct $k$ values are $k = 2\pi m/M$, $m \in \mathbb{Z}_M$. Our job is to numerically find a way of expressing the real space (single particle) eigenkets $\{|100...0\rangle, ... ,|000...01\rangle \}$ in a basis that is diagonal with respect to the elementary translation operator $\hat{T}_1$.

"Fourier transforming" the Hamiltonian then corresponds to re-expressing the Hamiltonian in the basis of $\hat{T}_1$ eigenstates. This is easy enough: write $|n_j\rangle$ to mean the Fock state with a particle on site $j$. We seek coefficients $A_{pj}$ such that $$\hat{T}_1 \sum_{j=1}^M A_{pj}|n_j \rangle = \exp(2\pi i p /M) \sum_{j=1}^M A_{pj}|n_j \rangle$$ $$\Rightarrow \sum_{j=1}^M A_{p,j}|n_{j+1} \rangle = \exp(2\pi i p /M) \sum_{j=1}^M A_{pj}|n_j \rangle$$

This is solved by $A_{pj} = \exp(-\frac{2\pi i p j }{M})$, which (with some massaging) looks like the Fourier transform used by v-joe. Then the new basis of "crystal momentum" eigenstates can be written (with $k_p = 2\pi p/M$) as $|k_p\rangle = \sum_j A_{pj}|n_j\rangle = \sum_j \exp(-ik_p j)|n_j \rangle$. Changing the Hamiltonian basis is then the same as for any other matrix, using the change of basis matrix $A$. (v-joe's answer uses this, and is probably more practically helpful.)
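As a quick numerical illustration (again restricted to the single-particle sector, with illustrative $M$ and $t$, and with a $1/\sqrt{M}$ factor added so that $A$ is unitary), the matrix built from $A_{pj}$ indeed renders the periodic Hamiltonian diagonal, with diagonal entries $2t\cos k_p$:

import numpy as np

M, t = 8, 0.1   #illustrative values

#periodic single-particle tight-binding Hamiltonian
H = np.zeros((M, M))
for j in range(M):
    H[j, (j + 1) % M] = H[(j + 1) % M, j] = t

#change-of-basis matrix A_pj = exp(-2*pi*i*p*j/M)/sqrt(M)
p = np.arange(M)
A = np.exp(-2j * np.pi * np.outer(p, p) / M) / np.sqrt(M)

#matrix elements <k_p| H |k_q> = sum_{j,j'} conj(A_pj) H_jj' A_qj'
Hk = A.conj() @ H @ A.T

kp = 2 * np.pi * p / M
print(np.allclose(Hk, np.diag(2 * t * np.cos(kp))))   #True: H is diagonal in the k basis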

To do this, we have used no assumptions other than the sites being regularly spaced. In fact, $T_1$ is still well defined for open boundary conditions (it just "teleports" the last site to the first). The catch is that $T_1$ no longer commutes with the Hamiltonian, meaning that the idea of an energy vs. $k$ "band structure" is no longer well defined.
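Numerically this breakdown is immediate: dropping the wrap-around hopping makes the commutator of $\hat{T}_1$ with $H$ nonzero (same illustrative setup as in the sketch above).

import numpy as np

M, t = 8, 0.1   #illustrative values

#cyclic translation matrix (still well defined: it "teleports" the last site to the first)
T1 = np.zeros((M, M))
for j in range(M):
    T1[(j + 1) % M, j] = 1.0

#open-boundary Hamiltonian: no hopping between the last and first site
H_open = np.zeros((M, M))
for j in range(M - 1):
    H_open[j, j + 1] = H_open[j + 1, j] = t

print(np.linalg.norm(T1 @ H_open - H_open @ T1))   #nonzero: T1 no longer commutes with H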

So what's going on? Experiments can clearly measure band structures on finite crystals. The standard argument is that the bulk of a material is locally translation invariant, and experimental probes of $k$-space can usually only see above some finite minimum value of $k$ of order $1/L$, where $L$ is the length scale of the crystal in question. So even if $T_1$ is not an exact symmetry of the system, it's close enough for most practical purposes.

However, there are important exceptions to this hand-waving argument. In so-called "topological" materials, there can also be non-trivial states located near the edge of a material that the $k$ space approximation misses. Such states can only be seen by directly diagonalising the real-space Hamiltonian.