
I am struggling to develop an intuitive understanding of rotations expressed using the Levi-Civita symbol. Specifically, I am trying to grasp how the following expressions represent a rigid rotation about an axis: $x_i' = x_i + \varepsilon_{ijk} \phi_j x_k$ or equivalently, $\xi_i = \varepsilon_{ijk} \phi_j x_k$.

My difficulty is understanding how these expressions clearly describe a rotation and how I could identify them as such.

Additionally, in Kip Thorne’s Modern Classical Physics (Chapter 11: Elastostatics), the following relations appear: $R_{ij} = -\epsilon_{ijk} \phi_k$ and $\phi_i = -\frac{1}{2} \epsilon_{ijk} R_{jk}$. I find it challenging to see the connection between these expressions and rotations.

Could someone help me build a clearer intuition for these equations and how they encode rotational motion? Any geometric or physical insight would be greatly appreciated!

ludicrous

4 Answers


The vector relation $$\vec{x}^\prime = \vec{x}+ \vec{\phi} \times \vec{x} \tag{1} \label{1} $$ describes an infinitesimal rotation of an arbitrary vector $\vec{x}$ around the rotation axis pointing in the direction of the unit vector $\vec{\phi}/|\vec{\phi}|$ by an infinitesimal rotation angle $|\vec{\phi}|$, where $\vec{\phi} \times \vec{x}$ denotes the cross product of the vectors $\vec{\phi}$ and $\vec{x}$. Make a drawing to understand the simple geometric meaning of eq. \eqref{1}! Using index notation and summation convention, eq. \eqref{1} is equivalent to the relation $$ x_i^\prime = x_i +\epsilon_{ijk} \phi_j x_k,$$ where $\epsilon_{ijk}$ denotes the Levi-Civita symbol in three dimensions.
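To see this numerically, here is a minimal Python sketch (my own illustration, not part of the original answer; the vector and angle are arbitrary test values) comparing eq. \eqref{1} with an exact rotation about the $z$-axis:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

angle = 1e-6                 # small rotation angle (test value)
phi = [0.0, 0.0, angle]      # rotation vector: z-axis, magnitude = angle
x = [1.0, 2.0, 3.0]          # arbitrary vector

# Infinitesimal form of eq. (1): x' = x + phi × x
pxx = cross(phi, x)
x_approx = [x[i] + pxx[i] for i in range(3)]

# Exact rotation about z by the same angle, for comparison:
c, s = math.cos(angle), math.sin(angle)
x_exact = [c*x[0] - s*x[1], s*x[0] + c*x[1], x[2]]

# The two agree to second order in the angle:
err = max(abs(a - b) for a, b in zip(x_approx, x_exact))
print(err < 1e-11)   # discrepancy is O(angle^2) ≈ 1e-12
```

The leftover discrepancy shrinks quadratically as the angle shrinks, which is exactly what "infinitesimal rotation" means here.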

Concerning your second question, starting from the definition $R_{ij}=-\epsilon_{ijk} \phi_k$ and using the relation $\epsilon_{\ell ij} \epsilon_{ijk}=2 \delta_{\ell k}$, one indeed finds $\epsilon_{\ell ij} R_{ij}=-\epsilon_{\ell ij} \epsilon_{ijk}\phi_k=-2\delta_{\ell k} \phi_k=-2 \phi_\ell$, i.e. $\phi_\ell = -\tfrac{1}{2}\epsilon_{\ell ij} R_{ij}$, which is exactly Thorne's second relation.
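Both the contraction identity and the recovery of $\phi$ can be checked by brute force. The following Python sketch (my own, using an arbitrary test vector) sums over the indices explicitly:

```python
def eps(i, j, k):
    """Levi-Civita symbol for 0-based indices 0, 1, 2."""
    return (i - j) * (j - k) * (k - i) // 2

# Contraction identity: eps_{l i j} eps_{i j k} = 2 delta_{l k}
for l in range(3):
    for k in range(3):
        s = sum(eps(l, i, j) * eps(i, j, k) for i in range(3) for j in range(3))
        assert s == (2 if l == k else 0)

# Build R_{ij} = -eps_{ijk} phi_k, then recover phi_l = -(1/2) eps_{l i j} R_{ij}:
phi = [0.3, -1.2, 0.7]   # arbitrary test vector
R = [[-sum(eps(i, j, k) * phi[k] for k in range(3)) for j in range(3)]
     for i in range(3)]
phi_rec = [-0.5 * sum(eps(l, i, j) * R[i][j] for i in range(3) for j in range(3))
           for l in range(3)]
print(phi_rec)   # [0.3, -1.2, 0.7]
```

The closed-form `(i - j) * (j - k) * (k - i) // 2` is a standard trick: it vanishes whenever two indices coincide and gives $\pm 1$ on even/odd permutations.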

Hyperon

The components of the vector product $\vec{C} = \vec{A} \times \vec{B}$ may be expressed in terms of those of the vectors $\vec{A}$ and $\vec{B}$ as $C^k = \sum_{i,j=1,2,3} \varepsilon^k_{ij} A^i B^j$; so $\varepsilon^k_{ij}$ is a device used to write vector products in component form.

Denote the infinitesimal change in $\vec{r} = (x, y, z)$ as $\delta\vec{r} = \vec{r}' - \vec{r}$. Then the equation defining an infinitesimal rotation is $\delta\vec{r} = \vec{\omega} \times \vec{r}$, where I'll use $\vec{\omega}$, instead of $\vec{\phi}$, in the following. If the finite form of the rotations is expressed in parametrized form as $\vec{r} \to R_s \vec{r}$, where $s$ is the parameter and $\vec{r} = \vec{r}(s)$ is a function of $s$, then the infinitesimal form of the rotation is the differential with respect to $s$: $$\frac{d}{ds} R_s \vec{r} = \vec{\omega} \times R_s \vec{r},\quad R_0 \vec{r} = \vec{r},\quad \delta = \frac{d}{ds},$$ with $$\delta\vec{r} = \left.\frac{d}{ds} R_s \vec{r} \right|_{s=0} = \vec{\omega} \times \vec{r}.$$

To find the finite form, apply Taylor's theorem ... this time by starting with $\vec{r}$ and setting $\vec{\phi} = s\vec{\omega}$: $$\begin{align} R_s \vec{r} &= e^{s\delta}\vec{r}\\ &= \vec{r} + s\,\delta\vec{r} + \frac{s^2 \delta^2 \vec{r}}{2!} + \frac{s^3 \delta^3 \vec{r}}{3!} + \frac{s^4 \delta^4 \vec{r}}{4!} + \cdots\\ &= \vec{r} + s\,\vec{\omega}\times\vec{r} + \frac{s^2\, \vec{\omega}\times(\vec{\omega}\times\vec{r})}{2!} + \frac{s^3\, \vec{\omega}\times(\vec{\omega}\times(\vec{\omega}\times\vec{r}))}{3!} + \frac{s^4\, \vec{\omega}\times(\vec{\omega}\times(\vec{\omega}\times(\vec{\omega}\times\vec{r})))}{4!} + \cdots \end{align}$$ (the process being known as "exponentiation").

Since $\vec{\omega}\times(\vec{\omega}\times(\vec{\omega}\times\vec{r})) = -\omega^2\, \vec{\omega}\times\vec{r}$ (i.e. $\delta^3 = -\omega^2 \delta$), where $\omega = |\vec{\omega}|$, this can be reduced to: $$\begin{align} R_s \vec{r} &= \vec{r} + \left(s - \frac{s^3\omega^2}{3!} + \frac{s^5\omega^4}{5!} - \cdots\right) \vec{\omega}\times\vec{r} + \left(\frac{s^2}{2!} - \frac{s^4\omega^2}{4!} + \frac{s^6\omega^4}{6!} - \cdots\right) \vec{\omega}\times(\vec{\omega}\times\vec{r})\\ &= \vec{r} + \frac{\sin s\omega}{\omega}\, \vec{\omega}\times\vec{r} + \frac{1 - \cos s\omega}{\omega^2}\, \vec{\omega}\times(\vec{\omega}\times\vec{r})\\ &= \vec{r} + \frac{\sin \theta}{\theta}\, \vec{\phi}\times\vec{r} + \frac{1 - \cos \theta}{\theta^2}\, \vec{\phi}\times(\vec{\phi}\times\vec{r}), \end{align}$$ after applying $\vec{\phi} = s\vec{\omega}$ and $\theta = |\vec{\phi}| = s\omega$ in the last step.

Thus, the finite form of the rotation is: $$\vec{r} \to R_s \vec{r} = \vec{r} + \sin\theta\, \frac{\vec{\phi}\times\vec{r}}{\theta} + (1 - \cos\theta)\, \frac{\vec{\phi}\times(\vec{\phi}\times\vec{r})}{\theta^2}\quad (\theta = |\vec{\phi}|).$$
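As a sanity check on this finite-rotation (Rodrigues) formula, here is a short Python sketch (my own, with an arbitrarily chosen axis and angle): rotating $(1,0,0)$ about the $z$-axis by $90^\circ$ should give $(0,1,0)$.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def rotate(phi, r):
    """Finite rotation of r about axis phi/|phi| by angle |phi| (Rodrigues form)."""
    theta = math.sqrt(sum(c*c for c in phi))
    pxr = cross(phi, r)
    pxpxr = cross(phi, pxr)
    return [r[i] + math.sin(theta) * pxr[i] / theta
                 + (1 - math.cos(theta)) * pxpxr[i] / theta**2
            for i in range(3)]

# Rotate (1, 0, 0) about the z-axis by 90 degrees:
r2 = rotate([0.0, 0.0, math.pi/2], [1.0, 0.0, 0.0])
err = max(abs(a - b) for a, b in zip(r2, [0.0, 1.0, 0.0]))
print(err < 1e-12)   # agrees with (0, 1, 0) up to rounding
```

Note that `rotate` as written assumes a nonzero angle; a production version would special-case $\theta \to 0$, where the series above reduces to the identity.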

NinjaDarth

$\def \b {\mathbf}$ $\def \c {\boldsymbol}$ The rotation matrix $~\mathbf R~$ of a rigid body is described with three Euler angles $~\phi_i~$, e.g.

$$\mathbf R= \mathbf R_x(\phi_1)\,\mathbf R_y(\phi_2)\,\mathbf R_z(\phi_3)= \left[ \begin {array}{ccc} 1&0&0\\ 0&\cos \left( \phi_{{1}} \right) &-\sin \left( \phi_{{1}} \right) \\ 0&\sin \left( \phi_{{1}} \right) &\cos \left( \phi_{{1}} \right) \end {array} \right] \,\left[ \begin {array}{ccc} \cos \left( \phi_{{2}} \right) &0&\sin \left( \phi_{{2}} \right) \\ 0&1&0 \\ -\sin \left( \phi_{{2}} \right) &0&\cos \left( \phi_{{2}} \right) \end {array} \right] \,\left[ \begin {array}{ccc} \cos \left( \phi_{{3}} \right) &-\sin \left( \phi_{{3}} \right) &0\\ \sin \left( \phi_{{3 }} \right) &\cos \left( \phi_{{3}} \right) &0\\ 0&0& 1\end {array} \right] $$

For small angles $~\phi_i \ll 1~$, you obtain

$$\mathbf R\mapsto \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}+ \left[ \begin {array}{ccc} 0&-\phi_{{3}}&\phi_{{2}} \\ \phi_{{3}}&0&-\phi_{{1}}\\ - \phi_{{2}}&\phi_{{1}}&0\end {array} \right] \quad\Rightarrow$$ $$\b x'=\b R\,\b x=\b x+\c\phi\times\b x\\ x'_i=x_i+\epsilon_{ijk}\,\phi_j\,x_k$$

Of course, this is also valid for any order of the Euler-angle rotations.
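The small-angle limit above can be verified numerically. The following Python sketch (an illustration of mine, with arbitrary small test angles) multiplies out the three Euler matrices and compares the result with the first-order form $\mathbf I + \text{(antisymmetric part)}$:

```python
import math

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def Rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def Rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

phi = [1e-4, 2e-4, -1.5e-4]   # small test angles
R = matmul(Rx(phi[0]), matmul(Ry(phi[1]), Rz(phi[2])))

# First order: identity plus the antisymmetric matrix built from the angles
approx = [[1.0,     -phi[2],  phi[1]],
          [phi[2],   1.0,    -phi[0]],
          [-phi[1],  phi[0],  1.0]]

err = max(abs(R[i][j] - approx[i][j]) for i in range(3) for j in range(3))
print(err < 1e-7)   # leftover terms are second order, ~1e-8
```

Changing the multiplication order of `Rx`, `Ry`, `Rz` leaves `err` at the same second-order size, illustrating that the first-order form is order-independent.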

Eli

Here are several straightforward derivations that I think can help you see through some of that tensor notation, especially in relation to the Levi-Civita symbol.

Express a general position vector via an orthonormal basis $\{\vec{e}_i \}$ (let $i=1\dots n$, and we'll assume later that $n=3$ in particular):

$$ \vec{r} = r_i \vec{e}_i \tag{1}$$

Differentiate with respect to time to obtain:

$$ \dot{\vec{r}} = \dot{r}_i\vec{e}_i + r_i\dot{\vec{e}}_i\tag{2}$$

In general, the time derivatives of the basis vectors $\dot{\vec{e}}_i$ need not vanish. However, since we have a basis, they have to be given by some linear combination:

$$ \dot{\vec{e}}_i = \omega_{ik}\vec{e}_k \tag{3}$$

where $\omega_{ik}$ are the elements of an $n\times n$ matrix. We can prove that it is an antisymmetric matrix, which is essentially where the need for the antisymmetric symbol $\varepsilon_{ijk}$ springs from. To show this, write $(3)$ again with a relabeled free index:

$$ \dot{\vec{e}}_j = \omega_{jk} \vec{e}_k \tag{4} $$

Next, we take the dot product of Eq. $(3)$ with $\vec{e}_j$ and of Eq. $(4)$ with $\vec{e}_i$ to obtain:

$$ \dot{\vec{e}}_i \cdot \vec{e}_j = \omega_{ik} \delta_{jk} = \omega_{ij} \tag{5} $$ $$ \dot{\vec{e}}_j \cdot \vec{e}_i = \omega_{jk} \delta_{ik} = \omega_{ji} \tag{6} $$

where we have used the fact that for an orthonormal basis we have $\vec{e}_i \cdot \vec{e}_j = \delta_{ij}$.

Summing Eqs. $(5)$ and $(6)$ we find:

\begin{align*} \omega_{ij} + \omega_{ji} &= \dot{\vec{e}}_i \cdot \vec{e}_j + \dot{\vec{e}}_j \cdot \vec{e}_i \\&= \frac{\mathrm{d}}{\mathrm{d}t}\left(\vec{e}_i \cdot \vec{e}_j \right) \\&= 0 \end{align*}

which implies $\boxed{\omega_{ij} = -\omega_{ji}}$ as claimed.
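The antisymmetry of $\omega_{ij}$ can be seen concretely for a basis attached to a spinning frame. The following Python sketch (my own, with an assumed angular rate about the $z$-axis) estimates $\dot{\vec{e}}_i$ by central finite differences and forms $\omega_{ij} = \dot{\vec{e}}_i \cdot \vec{e}_j$:

```python
import math

W = 2.0   # assumed angular rate of a frame spinning about the z-axis

def basis(t):
    """Orthonormal basis vectors attached to the spinning frame."""
    c, s = math.cos(W*t), math.sin(W*t)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

# Estimate d(e_i)/dt at t = 0.3 by central differences:
t, h = 0.3, 1e-6
e = basis(t)
de = [[(p - m) / (2*h) for p, m in zip(basis(t + h)[i], basis(t - h)[i])]
      for i in range(3)]

# omega_{ij} = (d e_i / dt) · e_j should come out antisymmetric:
omega = [[dot(de[i], e[j]) for j in range(3)] for i in range(3)]
asym = max(abs(omega[i][j] + omega[j][i]) for i in range(3) for j in range(3))
print(asym < 1e-6)
```

For this frame the matrix has $\omega_{12} \approx W$, $\omega_{21} \approx -W$, and (near-)zeros elsewhere, matching the derivation above.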

Now in $3-$dimensional Euclidean space, $\mathbb{E}^3$ we note that any $3\times 3$ antisymmetric matrix such as $\boldsymbol\omega$ has only $3$ independent components. We can, as a first attempt, try to express these components via a kind of a $3-$vector, by using the Levi-Civita symbol that has three indices:

$$ \zeta_i = \varepsilon_{ijk}\omega_{jk} $$

Note, however, that this isn't exactly what we want. To see why, take for instance $i=1$; noting the only nonvanishing terms in the implied summation, we have:

$$ \zeta_1 = \varepsilon_{123}\omega_{23} + \varepsilon_{132}\omega_{32},$$

but now note that this gives us double what we want:

$$ \zeta_1 = \omega_{23} - \omega_{32} = 2\omega_{23},$$

where we've used the derived property in setting $\omega_{32} = -\omega_{23}$.

This factor of $2$ is where the factor of $1/2$ in the expression you mentioned in your post comes from (at least very probably, since you didn't provide the full context for that expression).

The components of the angular velocity pseudo-vector $\vec{\omega}$ are therefore defined by:

$$ \omega_i = \frac{1}{2}\varepsilon_{ijk}\omega_{jk}, \tag{7} $$

This in turn also implies (as can be readily verified) that $\omega_1=\omega_{23}$, $\omega_2=\omega_{31}$, $\omega_3=\omega_{12}$. You may convince yourself then that the inverse relation to $(7)$ is:

$$ \omega_{ij} = \varepsilon_{ijk}\omega_k \tag{8} $$
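Relations $(7)$ and $(8)$ are easy to verify by direct summation. Here is a brief Python sketch (my own, with arbitrary test components; indices are 0-based as in code, so the identification reads $w_1 = \omega_{23}$ etc. shifted down by one):

```python
def eps(i, j, k):
    """Levi-Civita symbol on 0-based indices 0, 1, 2."""
    return (i - j) * (j - k) * (k - i) // 2

w = [0.4, -0.9, 1.3]   # arbitrary components of the pseudo-vector

# Eq. (8): build the antisymmetric matrix omega_{ij} = eps_{ijk} w_k
om = [[sum(eps(i, j, k) * w[k] for k in range(3)) for j in range(3)]
      for i in range(3)]

# Eq. (7): w_i = (1/2) eps_{ijk} omega_{jk} recovers the same components
w_rec = [0.5 * sum(eps(i, j, k) * om[j][k] for j in range(3) for k in range(3))
         for i in range(3)]
print(w_rec)   # [0.4, -0.9, 1.3]

# The stated identification (0-based): w_1 = om_23, w_2 = om_31, w_3 = om_12
print(om[1][2] == w[0], om[2][0] == w[1], om[0][1] == w[2])   # True True True
```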

Going back and applying all of that to $(2)$ we find:

\begin{align*} \dot{\vec{r}} &= \dot{r}_i\vec{e}_i + r_i \dot{\vec{e}}_i \\&= \dot{r}_i\vec{e}_i + r_i \omega_{ij} \vec{e}_j && \text{via $(3)$} \\&= \dot{r}_i\vec{e}_i + r_i \varepsilon_{ijk} \omega_k \vec{e}_j && \text{via $(8)$} \\&= \dot{r}_i\vec{e}_i + \varepsilon_{kij}\omega_k r_i \vec{e}_j && \text{$\varepsilon_{ijk} = \varepsilon_{kij}$ (cyclic permutation of indices)} \\&= \dot{r}_i\vec{e}_i + \varepsilon_{ijk}\omega_i r_j \vec{e}_k && \text{rewrite dummy indices $k\leftrightarrow i$, $i\leftrightarrow j$, $j\leftrightarrow k$} \end{align*}

What we have obtained is the well-known kinematic transport theorem, which is often written in a coordinate-free manner as:

$$ \left(\dot{\vec{r}}\right)_{\text{inertial}} = \left(\dot{\vec{r}}\right)_{\text{rotating}} + \vec{\omega}\times\vec{r}, $$

so that if we think of the basis vectors $\{\vec{e}_i \}$ as attached to a rotating reference frame, the components of $\left(\dot{\vec{r}}\right)_{\text{rotating}}$, given by $\dot{r}_i$, are what an observer within the rotating frame will measure as the rate of change of the vector $\vec{r}$, while the LHS is the rate of change of $\vec{r}$ that an inertial observer measures, which accounts for the time dependence of the basis vectors as well.
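The transport theorem can be checked numerically for the simplest case, a vector whose components are constant in the rotating frame, so that $\left(\dot{\vec{r}}\right)_{\text{rotating}} = 0$ and the inertial rate of change should be exactly $\vec{\omega}\times\vec{r}$. A Python sketch of mine (the angular velocity and components are assumed test values):

```python
import math

W = [0.0, 0.0, 1.5]   # assumed angular velocity of the rotating frame (about z)

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def basis(t):
    """Orthonormal basis attached to the rotating frame."""
    c, s = math.cos(W[2]*t), math.sin(W[2]*t)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

comp = [0.5, -1.0, 2.0]   # constant components in the rotating frame

def r(t):
    e = basis(t)
    return [sum(comp[i] * e[i][k] for i in range(3)) for k in range(3)]

# Inertial-frame rate of change of r, estimated by central differences:
t, h = 0.7, 1e-6
rdot = [(p - m) / (2*h) for p, m in zip(r(t + h), r(t - h))]

# With (dr/dt)_rotating = 0, the transport theorem predicts rdot = W × r:
wxr = cross(W, r(t))
err = max(abs(a - b) for a, b in zip(rdot, wxr))
print(err < 1e-6)
```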

Incidentally, why did we assume that the two frames necessarily differ only by rotational motion? Can't the rotating frame also be linearly accelerating? Well, it certainly can, but such an acceleration has no effect on the time derivatives of the basis vectors $\vec{e}_i$: by assumption they are orthonormal, so $|\vec{e}_i|=1$, and the basis vectors can only vary in direction, not in magnitude.

Therefore, any linear acceleration of the non-inertial frame will show up in the components $r_i$ of $\vec{r}$ and their time derivatives, not in the basis vectors. Thus the time derivatives of the basis vectors $\{\vec{e}_i \}$ don't tell us whether the frame they're attached to is non-inertial in the most general sense; they only tell us about its rotation relative to the inertial frame.

Amit