
Consider a matrix of the form $T_{ij}=v_iw_j$, where $\vec v$ and $\vec w$ are 3D vectors that transform under 3D rotations.

One can show that the above-mentioned matrix can be written in the following way:

$$T_{ij} = \frac{1}{3} \text{Tr}(T) \delta_{ij} + \frac{1}{2} (T_{ij} - T_{ji}) + \frac{1}{2} \left( T_{ij} + T_{ji} - \frac{2}{3} \text{Tr}(T) \delta_{ij} \right)$$

And the reason why we can write the matrix in the above way, according to what I was told in my lecture, is:

This identity can be trivially checked by expanding the rhs of the above equation. A second-order tensor operator can be decomposed into its:

1 - Isotropic part: $\frac{1}{3} \text{Tr}(T) \delta_{ij}$

2 - Antisymmetric traceless part: $\frac{1}{2} (T_{ij} - T_{ji})$

3 - Symmetric traceless part: $ \frac{1}{2} \left( T_{ij} + T_{ji} - \frac{2}{3} \text{Tr}(T) \delta_{ij} \right)$
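
Indeed, the identity can also be checked numerically; here is a minimal sketch assuming numpy (the variable names are mine, not from the lecture):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([-1.0, 0.5, 2.0])
T = np.outer(v, w)                                   # T_ij = v_i w_j

iso  = np.trace(T) / 3 * np.eye(3)                   # isotropic part
anti = (T - T.T) / 2                                 # antisymmetric part
symt = (T + T.T) / 2 - np.trace(T) / 3 * np.eye(3)   # symmetric traceless part

assert np.allclose(iso + anti + symt, T)             # the three pieces sum back to T
```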

The problem is that I have no idea what these components are, what they mean, or what their significance/interpretation is. How does one argue for their existence? Are these present in every matrix, or only in certain ones? If so, in which?

imbAF

4 Answers


A rank-2 tensor has:

$$ 3 \times 3 = 9 $$

components. Your decomposition has:

$$ 1 + 3 + 5 = 9 $$

Moreover, under rotation, each rotates differently, and is closed under rotations. For convenience, I'll call them $S_{ij}, V_{ij}, N_{ij}$. Then:

$$ S = S_{ii}/3 $$

rotates like a scalar.

$$ V_i = \frac 1 2\epsilon_{ijk}V_{jk} $$

rotates like a vector, and finally $N_{ij}$ rotates like a rank-2 tensor.
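
To see these transformation rules concretely, here is a minimal numerical sketch (assuming numpy and the convention $T'_{ij}=R_{ik}R_{jl}T_{kl}$; the helper names are mine):

```python
import numpy as np

def Rz(a):  # rotation about the z-axis by angle a
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Rx(a):  # rotation about the x-axis by angle a
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

R = Rz(0.7) @ Rx(-1.2)                 # some rotation in SO(3)
T = np.random.rand(3, 3)               # a generic rank-2 tensor
Tp = R @ T @ R.T                       # T'_ij = R_ik R_jl T_kl

eps = np.zeros((3, 3, 3))              # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

S = lambda A: np.trace(A) / 3                               # scalar piece
V = lambda A: 0.5 * np.einsum('ijk,jk->i', eps, A)          # dual vector of the antisymmetric part
N = lambda A: (A + A.T) / 2 - np.trace(A) / 3 * np.eye(3)   # symmetric traceless piece

assert np.isclose(S(Tp), S(T))             # scalar: invariant
assert np.allclose(V(Tp), R @ V(T))        # vector: rotates with one R
assert np.allclose(N(Tp), R @ N(T) @ R.T)  # rank-2: rotates with two R's
```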

Moreover, various complex combinations of the components of each can be put in a one-to-one correspondence with:

$$ S \rightarrow Y_0^0 $$

$$ V_i \rightarrow (Y_1^{-1}, Y_1^0, Y_1^{+1}) $$

$$ N_{ij} \rightarrow (Y_2^{-2},Y_2^{-1}, Y_2^0, Y_2^{+1},Y_2^{+2}) $$

(I've posted the linear combinations elsewhere, but don't know how to find them on the site).

So generally when rank-2 tensors appear in physics, we usually don't deal with the full set of 9 components; for instance:

  1. Pressure: it's the scalar trace of the stress tensor.

  2. Angular momentum: it's the antisymmetric part of $r_ip_j$, namely $r_ip_j-r_jp_i$.

  3. Tidal Tensor: it's the trace-free part of the 2nd derivative of the gravitational potential ($\partial_i\partial_j U$).

To use the au courant lingo, this is a rabbit hole leading to representation theory, with applications in: quantum addition of angular momentum, spherical tensor operators and the Wigner-Eckart theorem, Wigner d-matrices, the Eightfold Way, color in QCD, Schur-Weyl duality, the Robinson-Schensted correspondence, Young tableaux, and representations of the symmetric group.

Moreover, the Young tableaux are dimension- and field-agnostic. Thanks to the remarkable hook length formula (https://en.wikipedia.org/wiki/Hook_length_formula), they can be applied to all of the above in ${\mathbb R}^3$ Euclidean space, Minkowski space, ${\mathbb C}^3$ color space, or your favorite GUT Lie group.

JEB

After fixing a basis, a tensor with two indices is associated with a matrix, so let us discuss matrices here. Let $A$ be some $n\times n$ matrix. The first thing I want to point out is that the kind of decomposition you are studying is always possible because it is a trivial identity:

$$A = \frac{A+A^T}{2} + \frac{A-A^T}{2} = \left(\frac{A+A^T}{2}- \frac{1}{n}\operatorname{Tr}(A)\mathbf{1}_n\right)+\frac{A-A^T}{2}+\frac{1}{n}\operatorname{Tr}(A)\mathbf{1}_n.$$

In the first equality we simply rewrite $A = \frac{A}{2}+\frac{A}{2}$ and then add and subtract $\frac{A^T}{2}$. In the second equality we add and subtract $\frac{1}{n}\operatorname{Tr}(A)\mathbf{1}_n$. So note that this is always a true identity, and you can check it by simplifying the RHS, which gives you $A$ back.
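
As a sketch of the same identity in code (assuming numpy; the function name is mine), for any $n\times n$ matrix:

```python
import numpy as np

def decompose(A):
    """Split A into (symmetric traceless, antisymmetric, trace) parts."""
    n = A.shape[0]
    trace_part = np.trace(A) / n * np.eye(n)
    sym_traceless = (A + A.T) / 2 - trace_part
    antisym = (A - A.T) / 2
    return sym_traceless, antisym, trace_part

A = np.random.rand(5, 5)
S, K, T = decompose(A)
assert np.allclose(S + K + T, A)       # the identity: the three pieces sum to A
assert np.allclose(S, S.T)             # S is symmetric
assert np.isclose(np.trace(S), 0)      # S is traceless
assert np.allclose(K, -K.T)            # K is antisymmetric
```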

The first term is a symmetric matrix with zero trace, the second is a skew-symmetric matrix, and the third is something proportional to the identity matrix. Let $M_n(\mathbb{R})$ denote the space of $n\times n$ matrices with real entries. Then we have decomposed this vector space as

$$M_n(\mathbb{R}) = S_n(\mathbb{R})+A_n(\mathbb{R})+T_n(\mathbb{R}),$$

where $S_n(\mathbb{R})\subset M_n(\mathbb{R})$ is the subspace of symmetric matrices with zero trace, $A_n(\mathbb{R})\subset M_n(\mathbb{R})$ is the subspace of anti-symmetric matrices and $T_n(\mathbb{R})\subset M_n(\mathbb{R})$ is the subspace of matrices proportional to the identity. It is also very easy to see that $$S_n(\mathbb{R})\cap A_n(\mathbb{R})=\{0\},\quad S_n(\mathbb{R})\cap T_n(\mathbb{R})=\{0\},\quad A_n(\mathbb{R})\cap T_n(\mathbb{R})=\{0\}.$$

As a result the decomposition is really a direct sum decomposition:

$$M_n(\mathbb{R}) = S_n(\mathbb{R})\oplus A_n(\mathbb{R})\oplus T_n(\mathbb{R}).$$

This already shows a nice aspect of this decomposition. Recall that whenever a vector space $V$ decomposes as $V = V_1\oplus V_2$ we have that if $v = v_1+v_2 = v_1'+v_2'$ where $v_1,v_1'\in V_1$ and $v_2,v_2'\in V_2$, then $v_1=v_1'$ and $v_2=v_2'$.

This already shows you that if you have two matrices and you break them up like that, they are equal if and only if the terms in that decomposition are equal. So they are smaller, simpler building blocks of the matrix.

More than that, we have the representation theory aspect. Let $R\in {\rm SO}(n)$, a 2-covariant tensor transforms like

$$A'_{k\ell} = R^i_{\phantom i k}R^j_{\phantom{j}\ell} A_{ij} = (R^T)_k^{\phantom k i}A_{ij} R^j_{\phantom j \ell} = (R^T A R)_{k\ell}.$$

As such, the transformation by $R$ can be summarized in index-free notation as $R^T A R$. This means that rotations act linearly on such matrices, i.e. the space of matrices carries a representation of the rotation group:

$$D(R)\cdot A = R^T A R.$$
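
As a tiny sanity check (a sketch assuming numpy), the explicit index contraction and the index-free form $R^T A R$ agree:

```python
import numpy as np

theta = 0.8
R = np.array([[np.cos(theta), -np.sin(theta), 0],   # a rotation about the z-axis
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
A = np.random.rand(3, 3)

# A'_{kl} = R^i_k R^j_l A_{ij}, written as an explicit index contraction
A_prime = np.einsum('ik,jl,ij->kl', R, R, A)
assert np.allclose(A_prime, R.T @ A @ R)
```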

A group representation is an assignment $g\to D(g)$ of a linear operator $D(g)\in {\rm GL}(V)$ acting on some vector space to each group element $g\in G$ such that $D(R_1R_2)=D(R_1)D(R_2)$ and $D(1)=1$. This means $D$ is a group homomorphism and therefore it realizes the group composition pattern as linear maps on $V$.

Now we have just seen that rotations act on these tensors. It is now straightforward to show that: if $A\in S_n(\mathbb{R})$ then $D(R)\cdot A\in S_n(\mathbb{R})$, if $A\in A_n(\mathbb{R})$ then $D(R)\cdot A\in A_n(\mathbb{R})$, and if $A\in T_n(\mathbb{R})$ then $D(R)\cdot A\in T_n(\mathbb{R})$. These statements are trivially true, and you should check them for yourself.
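
A quick numerical illustration of this closure (a minimal sketch assuming numpy; the orthogonal matrix is generated via a QR factorization, and the example matrices are made up):

```python
import numpy as np

n = 4
R, _ = np.linalg.qr(np.random.rand(n, n))       # a random orthogonal matrix
D = lambda A: R.T @ A @ R                       # the action D(R).A = R^T A R

A_sym = np.diag([1.0, 2.0, -1.0, -2.0])         # symmetric, traceless
A_anti = np.array([[0, 1, 0, 0],
                   [-1, 0, 2, 0],
                   [0, -2, 0, 3],
                   [0, 0, -3, 0]], dtype=float) # antisymmetric
A_id = 7.0 * np.eye(n)                          # proportional to the identity

B = D(A_sym)
assert np.allclose(B, B.T) and np.isclose(np.trace(B), 0)   # still symmetric traceless
B = D(A_anti)
assert np.allclose(B, -B.T)                                  # still antisymmetric
assert np.allclose(D(A_id), A_id)                            # still proportional to 1 (unchanged)
```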

What this means is that the direct sum components are invariant subspaces. This means the representation is reducible, and when you break up the tensor like that you are actually picking up the irreducible parts. The irreducible parts are the simplest ones, since they can't be further broken down. As a result, it means that you are just focusing on the elementary building blocks.

Gold

Physical meaning: example. The physical meaning depends on the tensor you're considering. As an example, in continuum mechanics, for the displacement gradient (or its time derivative, the velocity gradient) these three contributions are:

  1. isotropic part: scaling deformation. This contribution represents the change in the size of a body, without changes in shape.
  2. antisymmetric part: infinitesimal average rigid rotation.
  3. symmetric traceless part: volume-preserving deformations. This contribution represents changes in shape, without changes in volume. Changes in volume are associated with the trace of the gradient tensor, namely the divergence of the displacement field; since this contribution is traceless (as is the antisymmetric part), it produces no change in volume.

Please remember that rigid rotations do not contribute to deformation and thus to strain. The strain tensor is therefore the symmetric part of the gradient, i.e. the sum of terms 1 and 3.
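
As an illustration, here is a minimal sketch (assuming numpy; the velocity-gradient values are made up) of splitting a velocity gradient into expansion, rigid rotation, and shear:

```python
import numpy as np

L = np.array([[0.1, 0.3, 0.0],      # velocity gradient L_ij = dv_i/dx_j (made-up numbers)
              [-0.1, 0.2, 0.4],
              [0.2, 0.0, -0.1]])

expansion = np.trace(L) / 3 * np.eye(3)               # isotropic part: rate of volume change
rotation = (L - L.T) / 2                              # antisymmetric part: rigid rotation rate
shear = (L + L.T) / 2 - np.trace(L) / 3 * np.eye(3)   # symmetric traceless part: shape change

strain_rate = expansion + shear                       # symmetric part = strain-rate tensor
assert np.allclose(expansion + rotation + shear, L)
assert np.isclose(np.trace(shear), 0)                 # shear alone preserves volume
assert np.isclose(np.trace(expansion), np.trace(L))   # all volume change sits in the trace
```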

Decomposition.

Are these present in every matrix?

You can always write a 2nd-order tensor as the sum of these three terms, as you can easily prove with the identity written in your question, which is just a slightly trickier version of

$$A = (A - B) + B \ .$$

Note: tensors are NOT matrices. Please remember that a matrix is not a tensor. A tensor is the mathematical object meant to represent the independence of the problem from the point of view of the observer. Once you have defined your point of view, usually a basis of a vector or tensor space, you can write your tensor as a linear combination of the elements of the basis: the coefficients of the linear combination are the components of the tensor in the basis you've chosen. You can then collect these components in matrices; the components of a 2nd-order tensor fit in a "standard" square matrix. As an example, you can use two different bases to represent the same vector (rank-1 tensor): the components in the two bases are different, but the vector is still the same vector. Please use the word "matrix" when you refer to a matrix and the word "tensor" when you refer to a tensor.

basics

I think the other answers already address the question well, but I want to provide some more group-theoretical insight. The rotation group in $d=3$ is $\mathrm{SO}(3)$. We are looking at a tensor made up of two vectors, i.e. the tensor is formed by the tensor product of two fundamental representations. Now it turns out that instead of $\mathrm{SO}(3)$ we get a more complete picture of the representations involved by looking at the universal cover of $\mathrm{SO}(3)$, which is $\mathrm{SU}(2)$. The finite-dimensional representations of $\mathrm{SU}(2)$ are the spin $j=0, 1/2, 1, \dots$ representations acting on $V_j = \mathbb{C}^{2j+1}$ with associated character $$\chi_j(\theta) = \frac{\sin((j+1/2)\theta)}{\sin(\theta/2)},$$ where $\theta$ measures the angle rotated by (note that there is no dependence on the axis of rotation).

By taking products and sums of the characters you can derive the following relationship between the tensor product of spin $j_1, j_2$ representations and its decomposition into irreps: \begin{align*} V_{j_1} \otimes V_{j_2} \simeq \bigoplus_{\ell=0}^{2\mathrm{min}(j_1, j_2)} V_{j_1 + j_2 - \ell}\ . \end{align*}

In this case we have two fundamental ($j_1 = j_2 = 1$) representations and want to deduce the decomposition of their tensor product into irrep summands. The formula tells us that we get \begin{align*} V_1 \otimes V_1 \simeq V_{1+1-0} \oplus V_{1+1-1} \oplus V_{1+1-2} = V_{2} \oplus V_{1} \oplus V_0\ . \end{align*} The dimensions work out as expected, namely $3 \times 3 = 9 = (2\cdot2+1) + (2\cdot 1 + 1) + (2 \cdot 0 + 1) = 5 + 3 + 1$. This agrees with the degrees of freedom present in your decomposition, and you can just act with a rotation matrix $R \in \mathrm{SO}(3)$ on your vectors to verify that the three new tensors do indeed transform in the $0, 1$ and $2$ representations, respectively.
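
One can also check the character arithmetic behind this decomposition numerically; a minimal sketch assuming numpy:

```python
import numpy as np

def chi(j, theta):
    """SU(2) character of the spin-j representation."""
    return np.sin((j + 0.5) * theta) / np.sin(theta / 2)

theta = np.linspace(0.1, 3.0, 50)   # avoid theta = 0, where the formula is 0/0

# character of the tensor product = product of characters,
# and it equals the sum over the irreps in V_1 (x) V_1 = V_2 + V_1 + V_0
assert np.allclose(chi(1, theta) ** 2, chi(2, theta) + chi(1, theta) + chi(0, theta))

# dimensions: chi_j(theta -> 0) = 2j + 1, so 3 x 3 = 5 + 3 + 1
```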

In this setting the decomposition is entirely based on the fact that the vectors rotate like vectors, i.e. belong to the fundamental representation $1$ of the rotation group $\mathrm{SO}(3)$, even though at a purely algebraic level the decomposition itself would hold for any matrix.

Wihtedeka