
In an introductory classical algorithms class, one learns of a couple of ways to represent an unweighted, $d$-regular graph $G=(V,E)$ on $n=|V|$ vertices and $m=|E|$ edges.

  • A first way might be the adjacency list representation, which, for each vertex $v_i\in V$, lists all other vertices $v_j$ connected to $v_i$ through some edge $e_{ij}\in E$.
  • A second approach might be the adjacency matrix representation of said graph, which is an $n\times n$ matrix $A$ with $A_{ij}=1$ iff vertex $v_i$ is connected to vertex $v_j$ through an edge $e_{ij}\in E$, and $A_{ij}=0$ otherwise.

With the adjacency matrix, it's natural to ask about, e.g., the determinant or permanent of the graph, its spectrum, etc. But the adjacency list representation has some advantages for efficiency, especially when the graph is sparse. For example, for an unweighted $d$-regular graph with fixed degree $d$, the adjacency matrix uses $O(n^2)$ bits while the adjacency list uses only $O(nd\log_2 n)$ bits.
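As a rough sanity check on those counts, one could tabulate both storage costs for a $3$-regular graph at a few sizes (a sketch only; constants and the exact encodings are glossed over):

```python
import math

def matrix_bits(n):
    # Dense adjacency matrix: one bit per (i, j) pair.
    return n * n

def list_bits(n, d):
    # Adjacency list: d neighbour labels per vertex, each taking about ceil(log2 n) bits.
    return n * d * math.ceil(math.log2(n))

for n in (10, 100, 1000, 10**6):
    print(f"n={n:>7}: matrix ~ {matrix_bits(n):>13} bits, list ~ {list_bits(n, 3):>10} bits")
```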

As an example of both, consider the $3$-regular Petersen graph on $n=10$ vertices and $m=15$ edges, labelling the vertices $0$ to $9$ from left to right; this graph has the adjacency matrix:

$$\begin{pmatrix} 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \end{pmatrix},$$

while having an adjacency list of:

(0:2,3,5);(1:3,4,6);(2:0,4,7);(3:0,1,8);(4:1,2,9);(5:0,6,9);(6:1,5,7);(7:2,6,8);(8:3,7,9);(9:4,5,8)
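Concretely, each row of the matrix can be recovered from the corresponding list and vice versa (a small sketch; `petersen_list` below is just the adjacency list quoted above in dictionary form):

```python
# Adjacency list of the Petersen graph, as quoted above.
petersen_list = {
    0: [2, 3, 5], 1: [3, 4, 6], 2: [0, 4, 7], 3: [0, 1, 8], 4: [1, 2, 9],
    5: [0, 6, 9], 6: [1, 5, 7], 7: [2, 6, 8], 8: [3, 7, 9], 9: [4, 5, 8],
}

n = len(petersen_list)

# Build the dense adjacency matrix from the list...
A = [[1 if j in petersen_list[i] else 0 for j in range(n)] for i in range(n)]

# ...and recover the list from the matrix again.
recovered = {i: [j for j in range(n) if A[i][j]] for i in range(n)}
assert recovered == petersen_list
```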


In an introduction to quantum chemistry and quantum many-body physics, one learns of a couple of ways to represent a wavefunction $|\psi\rangle$ of $d$ fermions in $n$ different orbitals.

  • First quantization represents the wavefunction $|\psi\rangle$ by recording which of the $n$ orbitals each of the $d$ fermions occupies.
  • Second quantization describes the wavefunction $|\psi\rangle$ by listing all $n$ orbitals and recording whether each orbital is occupied or not.

Second quantization uses $O(n)$ qubits to span the $\binom{n}{d}$-dimensional $d$-fermion sector of Fock space, while first quantization is a little more economical, especially when $d\ll n$, as it uses only $O(d\log_2 n)$ qubits.

As an example of both, calling the lowest orbital $0$ and labelling the orbitals $0$ to $9$ from left to right, a second-quantized representation of $d=3$ fermions in $n=10$ orbitals might be:

$$|\psi\rangle=|0011010000\rangle,$$

while a first-quantized representation of the same state might be:

$$|\psi\rangle=|2,3,5\rangle.$$
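The two encodings of the same basis state are related by exactly the kind of conversion used for the graph above (a sketch; the helper names `occ_to_orbitals` and `orbitals_to_occ` are just illustrative):

```python
def occ_to_orbitals(occ: str) -> list[int]:
    # Second quantization -> first quantization:
    # read off which orbitals carry a 1, lowest orbital on the left.
    return [i for i, bit in enumerate(occ) if bit == "1"]

def orbitals_to_occ(orbitals: list[int], n: int) -> str:
    # First quantization -> second quantization:
    # set a 1 at each occupied orbital in an n-bit string.
    bits = ["0"] * n
    for p in orbitals:
        bits[p] = "1"
    return "".join(bits)

assert occ_to_orbitals("0011010000") == [2, 3, 5]
assert orbitals_to_occ([2, 3, 5], 10) == "0011010000"
```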


Second quantization of a wavefunction $|\psi\rangle$ feels similar to a single row or column of an adjacency matrix representation of a graph $G$, while first quantization looks akin to a single list in the adjacency list representation. Both first quantization and adjacency lists favor sparsity, while both second quantization and adjacency matrices favor operations such as adding and deleting particles or vertices.

Surely this has been noted before? Or does the analogy really fail to capture important properties of graphs and/or of wavefunctions? Is there any advantage to thinking of the basis vectors of a Fock space as rows of a corresponding adjacency matrix (second quantization) or as lists of adjacent vertices (first quantization)?

For example, graph theory has many algorithms for which adjacency matrices are more advantageous than adjacency lists (and vice versa). Can we port these ideas over to many-body physics and say that it's advantageous to use second quantization in lieu of first quantization (or vice versa)?

I think that with second quantization the fermionic anticommutation relations are not so easily captured in the wavefunction itself, but only in the operators when they are applied.
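For instance, under the usual sign convention where $a_p^\dagger$ acting on an occupation-number state picks up a phase from the parity of the occupied orbitals to its left, the bookkeeping lives entirely in the operator (a minimal sketch; the function name is illustrative):

```python
def apply_creation(occ: str, p: int):
    # Apply a_p^dagger to an occupation-number basis state |occ>.
    # Returns (sign, new_state), or None if orbital p is already occupied.
    if occ[p] == "1":
        return None  # Pauli exclusion: a_p^dagger annihilates the state
    sign = (-1) ** occ[:p].count("1")  # parity of occupied orbitals before p
    return sign, occ[:p] + "1" + occ[p + 1:]

# Three occupied orbitals sit to the left of orbital 6, so the sign is -1:
print(apply_creation("0011010000", 6))  # (-1, '0011011000')
```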

Mark Spinelli

1 Answer


This sounds like an okay analogy to me. Though I think it's odd you're going 2d with matrices instead of staying 1d with lists. Why not make it hashset vs bitset instead of adjacency matrix vs adjacency list?
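A sketch of what that 1D comparison might look like in Python (the variable names are just illustrative):

```python
n, occupied = 10, {2, 3, 5}

# "Hashset" / first-quantized / adjacency-list flavour: store only the occupied orbitals.
hashset_state = set(occupied)

# "Bitset" / second-quantized / adjacency-matrix-row flavour: one bit per orbital.
bitset_state = sum(1 << p for p in occupied)  # the integer 0b101100

# Membership queries agree between the two representations.
assert all(((bitset_state >> p) & 1) == (p in hashset_state) for p in range(n))
```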

Craig Gidney