12

This might be something basic, but it is unclear to me. I am used to working with representations of groups as matrices. These matrices represent the structure of the Lie algebra by satisfying the commutation relations:

$$ [T_i,T_j]=f_{ijk}T_k $$

but I read particle physics texts where the vectors upon which the matrices act, and not the matrices themselves, were referred to as representations. For example, for the SU(3) group, after we find all the weights of a representation we say that "we found the representation", even though we did not find the generator matrices. My question is: in what sense do the vectors represent the group? Vectors look like passive elements on which the group matrices act, and do not contain the structure of the group. I hope my question is clear.
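For concreteness, here is a minimal numerical check (my own sketch, not part of the question) that the spin-1/2 matrices $T_i=\sigma_i/2$ satisfy the su(2) commutation relations, in the common physics convention $[T_i,T_j]=i\epsilon_{ijk}T_k$ (whether the factor of $i$ appears explicitly depends on the convention chosen for $f_{ijk}$):

```python
import numpy as np

# Spin-1/2 generators T_i = sigma_i / 2 (the standard SU(2) example).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
T = [s / 2 for s in (sx, sy, sz)]

def eps(i, j, k):
    """Levi-Civita symbol for indices 0, 1, 2."""
    return ((i - j) * (j - k) * (k - i)) / 2

# Check [T_i, T_j] = i * eps_{ijk} T_k for every index pair.
for i in range(3):
    for j in range(3):
        comm = T[i] @ T[j] - T[j] @ T[i]
        expected = sum(1j * eps(i, j, k) * T[k] for k in range(3))
        assert np.allclose(comm, expected)
print("su(2) commutation relations verified")
```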

EDIT: As the title states, are these vector representations, together with the weights, isomorphic to the group generators?

Lonkar
  • 121
  • 3

3 Answers

3

I think your confusion (like mine) is simply over technical English usage. As you rightly state "vectors look like passive elements on which the group matrices act, and do not contain the structure of the group".

To my mind, a representation of a group is a triple $(\mathfrak{G},\,V,\,\rho:\mathfrak{G}\to GL(V))$: the group $\mathfrak{G}$ being represented, the vector space $V$ being acted on by the group $GL(V)$ of invertible linear maps on $V$, and the homomorphism $\rho:\mathfrak{G}\to GL(V)$ between them.

As long as it is clear from the context, it is OK to think of the vector space as the representation, and this seems to be the standard physicist's usage of the word. Indeed, it is a particularly physical point of view: in the standpoint I first cited, you're watching the "system" from afar; in the physicist's viewpoint, you are, like any good experimentalist or careful observer, getting as near as you can to the action (i.e. sitting at the business end of the arrow $\rho$) and looking at what the actors who come out of it (the matrices in $GL(V)$) do through their actions on their playthings (the vectors in $V$). My mind picture, in this context, is literally of someone in a white coat, seated right at the end of the pipe $\rho$ (which for me somehow is always a bronze colour) and carefully writing down their observations on what's happening at the end of the pipe $\rho$ when the studied creatures come out!

In physics, the vectors in $V$ are often quantum states in a quantum state space $V$ (these are the most wonted to me), and we sometimes seek linear unitary transformations on them that are "compatible with" (i.e. homomorphic images of) the group $\mathfrak{G}$ of "physical happenings" (I'm thinking of $\mathfrak{G}$ as the Poincaré group, or $SO(1,\,3)$, or the latter's double cover $SL(2,\,\mathbb{C})$). The vector space contains the physical things (the quantum states), so it's natural to think of them as the "representation".

Selene Routley
  • 90,184
  • 7
  • 198
  • 428
2

The vectors (kets) upon which the matrices act should be referred to as the carrier space of the representation. As you said, the matrices are the representation of the abstract generators. It is just lazy talking to refer to the vector space as the representation. A maximum weight vector labels the irreducible representation and tells you the dimension of the carrier space.

It is easy to label an orthogonal set of basis vectors for the carrier space using the Gelfand pattern. This is an upside-down triangle of numbers with the weight-vector numbers along the top row, with the betweenness rule used to fill in the rest of the integers of the triangle. The matrices of the representation may also be calculated from the maximum weight vector, though it is more difficult.
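As a concrete sketch of the betweenness rule (my own illustration, with a hypothetical helper name, not part of the answer), one can enumerate the Gelfand patterns for a GL(3) highest weight and recover the dimension of the carrier space by counting them:

```python
def gelfand_patterns_gl3(top):
    """Enumerate Gelfand patterns for a GL(3) highest weight (m1, m2, m3):
    an upside-down triangle with the weight along the top row, lower rows
    filled in by the betweenness rule (each entry lies between its two
    upper neighbors)."""
    m1, m2, m3 = top
    patterns = []
    for a in range(m2, m1 + 1):        # second row (a, b): m1 >= a >= m2
        for b in range(m3, m2 + 1):    #                    m2 >= b >= m3
            for c in range(b, a + 1):  # bottom entry c:    a  >= c >= b
                patterns.append(((m1, m2, m3), (a, b), (c,)))
    return patterns

# The number of patterns equals the dimension of the irrep; for highest
# weight (2, 1, 0) this reproduces the 8-dimensional octet.
octet = gelfand_patterns_gl3((2, 1, 0))
assert len(octet) == 8
print(len(octet), "basis vectors for the (2,1,0) irrep")
```

The same count gives 3 for the fundamental weight (1, 0, 0), matching the dimension of the defining representation.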

There is an exception: at least for GL(N), the abstract generators themselves can serve as the carrier-space vectors for one of the irreps, namely when the group acts on the generators by conjugation (the adjoint representation). Examples are the O(3) generators $\vec{J}$ transforming like a 3-vector under 3x3 matrices, or the SU(3) generators transforming like an octet under 8x8 matrices.
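A quick numerical illustration of this conjugation action (my own sketch, assuming the spin-1/2 convention $J_i=\sigma_i/2$, not part of the answer): conjugating $J_x$ by $e^{i\theta J_z}$ rotates it into a combination of $J_x$ and $J_y$, i.e. the generators transform like a 3-vector.

```python
import numpy as np

# Spin-1/2 generators J_i = sigma_i / 2.
Jx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Jy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
Jz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

theta = 0.7
# exp(i*theta*Jz) is diagonal in this basis, so we can write it directly.
U = np.diag([np.exp(1j * theta / 2), np.exp(-1j * theta / 2)])

# Conjugation rotates the generator "vector":
# U Jx U^dagger = cos(theta) Jx - sin(theta) Jy.
rotated = U @ Jx @ U.conj().T
assert np.allclose(rotated, np.cos(theta) * Jx - np.sin(theta) * Jy)
print("generators transform as a 3-vector under conjugation")
```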

Gary Godfrey
  • 3,428
1

No, this is not an ambiguous terminology issue. I suspect seeking an "isomorphism" might be too hidebound... you might as well seek a "functor"! The basic answer is that, yes, possession of the generator matrices T of dimension dxd is basically equivalent to characterization of the weights of the states v in the d-dimensional vector space on which such matrices act, except the latter normally gets you more directly to what you want to know in QM, by dint of the relevant labels/roots/weights.

Think of SU(2) for simplicity, but you might choose to generalize to SU(3), once the game is evident. To rotate an arbitrary d-dim v by an angle θ, you operate on it by v→exp(iθJ)v in classical physics. By a change of basis of the 3 generators J to their raising and lowering ladder versions $J_+, J_-$ and $J_0$, and the marvelous eigenvalue equations their Lie algebra commutators satisfy, you may organize these rotations much more usefully in QM, and also computationally; this is, of course, how the higher-dim matrices in the SU(2) WP-article were found in the first place!

That is, once you have eigenvectors of $J\cdot J$, with eigenvalues j(j+1), up to normalization, you have characterized the dimensionality 2j+1 of the eigenvector v, and its component by the eigenvalue m of $J_0$ on it, while you know how the raising and lowering Js will send entries to their neighboring slot. So writing the states in the $|j,m\rangle$ convention is tantamount to posing them ready for rotations by simple shifts of their m and multiplications by numbers. For small angles θ, this amounts to transitioning to $v+i\theta J\, v$ for J any linear combination of these 3 ladder operators.

Conversely, the structure of these operators specifies the Lie algebra of the matrices you started with, uniquely (Cartan). Here, $J^2 |j,m\rangle = j(j + 1) |j,m\rangle$, $ J_0 |j,m\rangle = m |j,m\rangle $, and $J_\pm |j,m\rangle = \sqrt{j(j+1)-m(m\pm 1)} |j,m\pm 1 \rangle$: if we act on them with arbitrary bras $\langle j,m'|$ on the left, they yield the matrix elements of the matrices in question, guaranteed to satisfy the SU(2) algebra.
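These matrix elements can be assembled into explicit matrices and checked against the algebra; the following is my own sketch (with a hypothetical helper name), not part of the original derivation:

```python
import numpy as np

def su2_generators(j):
    """Build the (2j+1)-dimensional matrices J0, J+, J- from the |j,m>
    matrix elements:
        <j,m'|J0|j,m>  = m * delta_{m',m}
        <j,m'|J+|j,m>  = sqrt(j(j+1) - m(m+1)) * delta_{m',m+1}
    with J- the Hermitian conjugate of J+."""
    dim = int(2 * j + 1)
    ms = [j - k for k in range(dim)]      # basis ordered m = j, j-1, ..., -j
    J0 = np.diag(ms).astype(complex)
    Jp = np.zeros((dim, dim), dtype=complex)
    for row, mp in enumerate(ms):
        for col, m in enumerate(ms):
            if mp == m + 1:
                Jp[row, col] = np.sqrt(j * (j + 1) - m * (m + 1))
    Jm = Jp.conj().T
    return J0, Jp, Jm

# Matrices reconstructed this way satisfy the su(2) ladder algebra:
for j in (1/2, 1, 3/2):
    J0, Jp, Jm = su2_generators(j)
    assert np.allclose(J0 @ Jp - Jp @ J0, Jp)      # [J0, J+] = +J+
    assert np.allclose(J0 @ Jm - Jm @ J0, -Jm)     # [J0, J-] = -J-
    assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * J0)  # [J+, J-] = 2 J0
print("ladder-operator matrix elements reproduce the su(2) algebra")
```

For j = 1/2 this reproduces the Pauli-matrix ladder operators up to the usual factors.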

But this language is simpler than daft matrix multiplication for the purpose of transitioning between states by QM operators, the Wigner-Eckart theorem, etc. The transition is just a linear-algebraic change of language.

For SU(3), there are more such eigenvalues: not just the analog of j (isospin), but also the hypercharge, the eigenvalue of Y=B+S, related to the strangeness quantum number. And, indeed, more ladder operators, V+ or U+ (e.g. U-spin interchanges d and s quarks), move you among components of the vectors in suitable ways; they are labelled in planar patterns with triangular symmetry, rather than lines as for plain rotations. Again, each dot on these weight diagrams for the octet, decuplet, etc. corresponds to an entry of the d-dim v, and you know exactly, virtually by inspection, how these are going to respond to an exp(iθT) rotation, by the clever way they were labelled; so, really, yes!, an equivalent to matrix multiplication. The moment you have drawn the downwards-pointing triangular weight diagram for the baryon decuplet, you have implicitly specified the 10x10 generator matrices of SU(3).

The preponderance in physics of the 2nd language over just writing monster dxd matrices tells you something about its compactness and utility.

Cosmas Zachos
  • 67,623