
I'm reading these notes about second quantization. In Section 1.4 the author introduces many-particle wavefunctions, but I can't understand how a basis is defined there.

I know that if $\{\chi_i | i=1, \dots, N\}$ are single-particle wavefunctions (let's choose fermions) then

$$ \Psi = \frac{1}{\sqrt{N!}} \sum_\sigma \mathrm{sgn} (\sigma) \prod_{j=1}^N \chi_j (\mathbf{x}_{\sigma(j)}) = \frac{1}{\sqrt{N!}} \left| \begin{matrix} \chi_1(\mathbf{x}_1) & \chi_2(\mathbf{x}_1) & \cdots & \chi_N(\mathbf{x}_1) \\ \chi_1(\mathbf{x}_2) & \chi_2(\mathbf{x}_2) & \cdots & \chi_N(\mathbf{x}_2) \\ \vdots & \vdots & \ddots & \vdots \\ \chi_1(\mathbf{x}_N) & \chi_2(\mathbf{x}_N) & \cdots & \chi_N(\mathbf{x}_N) \end{matrix} \right| \tag{1}$$

is a valid $N$-electron state. Here $\sigma$ runs over the permutations of $\{1, \dots, N\}$ and $\mathrm{sgn}(\sigma)$ is its signature, which ensures the antisymmetry of $\Psi$. This is widely known as a Slater determinant.
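To make Eq. $(1)$ concrete, here is a minimal numerical sketch in Python; the orbitals and sample points are made up for illustration and are not taken from the notes.

```python
import numpy as np
from math import factorial

# Three hypothetical single-particle orbitals chi_1, chi_2, chi_3 (1D, unnormalized)
orbitals = [
    lambda x: np.exp(-x**2),                   # chi_1
    lambda x: x * np.exp(-x**2),               # chi_2
    lambda x: (2 * x**2 - 1) * np.exp(-x**2),  # chi_3
]

def slater(orbitals, xs):
    """Value of Psi(x_1, ..., x_N) from Eq. (1)."""
    N = len(orbitals)
    # Matrix with entries chi_k(x_j): row j = particle coordinate, column k = orbital
    mat = np.array([[chi(x) for chi in orbitals] for x in xs])
    return np.linalg.det(mat) / np.sqrt(factorial(N))

print(slater(orbitals, [0.1, 0.5, -0.3]))
# Exchanging two coordinates flips the sign, as antisymmetry requires:
print(slater(orbitals, [0.5, 0.1, -0.3]))
```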

Now, my question is: how does one choose basis states accordingly? I think that $(1)$ denotes only one state, so one would need to find $N-1$ more states that are orthogonal to it. The notes I mentioned define the many-particle wavefunction in a somewhat obscure way in $(1.113)$ and then apply the antisymmetrization procedure to those states $\Psi_j$ to yield a tensor product which they call a basis, so I am confused by this explanation.

Minethlos

1 Answer


Consider an $N$-dimensional vector space $V$ and let $\{\chi_1, \dots, \chi_N\}$ be a basis of $V$.

Next, focus attention on the antisymmetric space $(V\otimes\cdots \otimes V)_A$, where $V$ occurs $M\leq N$ times.

A basis of $(V\otimes\cdots \otimes V)_A$ can be constructed out of $\{\chi_1, \dots, \chi_N\}$ making use of the projector $$A: V\otimes\cdots \otimes V \to (V\otimes\cdots \otimes V)_A$$ defined as the unique linear extension of $$A (v_1\otimes \cdots \otimes v_M) := \frac{1}{\sqrt{M!}}\sum_{\sigma \in P_M} \mathrm{sgn}(\sigma)\: v_{\sigma^{-1}(1)} \otimes \cdots \otimes v_{\sigma^{-1}(M)}\:.$$ Above, $P_M$ is the symmetric group on $M$ elements.
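For concreteness, here is a small sketch (my own illustration, not part of the original answer) of how $A$ acts on a decomposable tensor $v_1\otimes\cdots\otimes v_M$, taking $V=\mathbb{R}^N$ and representing tensors as numpy arrays:

```python
import numpy as np
from itertools import permutations
from math import factorial

def tensor(vectors):
    """v_1 ⊗ ... ⊗ v_M as an M-index numpy array."""
    out = vectors[0]
    for v in vectors[1:]:
        out = np.tensordot(out, v, axes=0)
    return out

def sign(perm):
    """Signature of a permutation (tuple of 0-based indices), via inversion count."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def antisymmetrize(vectors):
    """A(v_1 ⊗ ... ⊗ v_M) with the 1/sqrt(M!) normalization used above."""
    M = len(vectors)
    result = np.zeros((len(vectors[0]),) * M)
    for perm in permutations(range(M)):
        inv = np.argsort(perm)  # the inverse permutation sigma^{-1}
        # slot k of the tensor product receives v_{sigma^{-1}(k)}
        result += sign(perm) * tensor([vectors[i] for i in inv])
    return result / np.sqrt(factorial(M))

# A repeated factor is annihilated, as expected for an antisymmetric tensor:
e = np.eye(3)
print(np.allclose(antisymmetrize([e[0], e[0], e[1]]), 0))  # True
```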

The said basis of $(V\otimes\cdots \otimes V)_A$ ($M$ times) is made of the elements $$\chi_{i_1}\wedge \cdots \wedge \chi_{i_M} := A(\chi_{i_1}\otimes \cdots \otimes \chi_{i_M})\,,\quad i_1 < i_2 < \ldots < i_M\,,\quad i_1,\ldots, i_M \in \{1,\ldots,N\}\:.$$ In view of the last constraint, there are ${N \choose M}$ such elements, hence just $1$ when $M=N$. If instead the constraint $i_1 < i_2< \ldots < i_M$ is dropped, $\chi_{i_1}\wedge \cdots \wedge \chi_{i_M}$ turns out to be either the zero vector (when two indices coincide) or, up to a sign, a vector already counted.
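The counting of basis labels is easy to reproduce; a quick sketch (my own, with arbitrary $N=4$, $M=2$):

```python
from itertools import combinations
from math import comb

N, M = 4, 2
labels = list(combinations(range(1, N + 1), M))  # ordered tuples i_1 < ... < i_M
print(labels)                     # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(len(labels) == comb(N, M))  # True: there are C(N, M) basis elements
print(list(combinations(range(1, N + 1), N)))  # [(1, 2, 3, 4)]: a single one when M = N
```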

Notice that $A (v_1\otimes \cdots \otimes v_M)$ is nothing but the Slater determinant of the $M$ elements $v_k$.

If $V=H$ is an ($N$-dimensional) Hilbert space and $V\otimes\cdots \otimes V$ is equipped with the analogous induced structure, then $(V\otimes\cdots \otimes V)_A$ turns out to be a closed subspace, $A$ an orthogonal projector and, starting from an orthonormal basis $\{\chi_1, \dots, \chi_N\}$ of $H$, the elements $\chi_{i_1}\wedge \cdots \wedge \chi_{i_M}$ form an orthonormal basis of $(H\otimes\cdots \otimes H)_A$ as well. The result easily extends to the case of an infinite-dimensional Hilbert space.
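As a numerical sanity check of the orthonormality claim (again just a sketch, reusing `antisymmetrize` from the snippet above, with $H=\mathbb{R}^4$, the standard basis as the $\chi_i$, and $M=2$):

```python
import numpy as np
from itertools import combinations

N, M = 4, 2
chi = np.eye(N)  # an orthonormal basis chi_1, ..., chi_N of R^N
wedge = [antisymmetrize([chi[i] for i in idx]).ravel()
         for idx in combinations(range(N), M)]
gram = np.array([[u @ v for v in wedge] for u in wedge])
print(np.allclose(gram, np.eye(len(wedge))))  # True: the wedge vectors are orthonormal
```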