
There is something I never understood about Noether currents, and I really want to grasp it.

I will ask my question with an example, but it is in fact a very general question.

We take the Klein-Gordon Lagrangian:

$$\mathcal{L}=\partial_\mu \phi \partial^\mu \bar{\phi}-m^2 \phi \bar{\phi}$$

I remark that the following change of my fields will not change my Lagrangian:

$ \phi \rightarrow e^{i \alpha} \phi, \qquad \bar{\phi} \rightarrow e^{-i \alpha} \bar{\phi} $

thus there is a conserved current.
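As a sanity check (my own addition, not part of the original derivation), this invariance can be verified symbolically. The sketch below uses SymPy, with a single coordinate $x$ standing in for all of spacetime, since the phase factor is constant and cancels term by term:

```python
# Hedged sketch: verify with SymPy that the Klein-Gordon Lagrangian density
# is exactly invariant under phi -> exp(i*alpha)*phi (a global U(1) rotation).
# The names phi, phibar, alpha are illustrative, not from any library.
import sympy as sp

x = sp.symbols('x', real=True)        # one coordinate suffices for this check
alpha, m = sp.symbols('alpha m', real=True)
phi = sp.Function('phi')(x)           # the complex field
phibar = sp.Function('phibar')(x)     # its conjugate, kept as a separate symbol

def lagrangian(p, pb):
    # schematic one-dimensional stand-in for d_mu phi d^mu phibar - m^2 phi phibar
    return sp.diff(p, x)*sp.diff(pb, x) - m**2*p*pb

L0 = lagrangian(phi, phibar)
L1 = lagrangian(sp.exp(sp.I*alpha)*phi, sp.exp(-sp.I*alpha)*phibar)
print(sp.simplify(L1 - L0))   # -> 0: the Lagrangian is invariant
```

The phase factors $e^{i\alpha}e^{-i\alpha}=1$ cancel in every term, which is exactly why the symmetry is exact and not merely first order.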

To find it, we say: since the Lagrangian doesn't change under this transformation, I have, at first order at least, $\delta \mathcal{L}=0$.

On the other hand, I can compute $\delta \mathcal{L}$ in the general case:

$$ \delta \mathcal{L} = \mathcal{L}(\phi + \delta \phi, \bar{\phi}+\delta\bar{\phi},\partial \phi + \delta \partial \phi, \partial \bar{\phi}+\delta \partial \bar{\phi} )-\mathcal{L}(\phi, \bar{\phi},\partial \phi, \partial \bar{\phi}) $$

$$\delta \mathcal{L} = \frac{\partial \mathcal{L}}{\partial \phi} \delta \phi + \frac{\partial \mathcal{L}}{\partial \bar{\phi}} \delta \bar{\phi}+\frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)}\delta(\partial_\mu \phi)+\frac{\partial \mathcal{L}}{\partial (\partial_\mu \bar{\phi})}\delta(\partial_\mu \bar{\phi})$$

Since $\delta$ and $\partial_\mu$ commute, after using the product rule I have:

$$\delta \mathcal{L} = \left(\frac{\partial \mathcal{L}}{\partial \phi} -\partial_\mu \left[\frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)}\right]\right)\delta \phi + \left(\frac{\partial \mathcal{L}}{\partial \bar{\phi}} -\partial_\mu\left[\frac{\partial \mathcal{L}}{\partial (\partial_\mu \bar{\phi})}\right]\right)\delta \bar{\phi}+\partial_\mu \left[\frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)} \delta \phi+ \frac{\partial \mathcal{L}}{\partial (\partial_\mu \bar{\phi})} \delta \bar{\phi}\right]$$

If the equations of motion are satisfied, I can drop the first two terms.

I end up with:

$$\delta \mathcal{L} =\partial_\mu \left[\frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)} \delta \phi + \frac{\partial \mathcal{L}}{\partial (\partial_\mu \bar{\phi})} \delta \bar{\phi}\right]$$

And now, since my Lagrangian didn't change under my transformation, it didn't change at first order in particular. Thus $\delta \mathcal{L}=0$, and I have the 4-current $j^\mu=\frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)} \delta \phi + \frac{\partial \mathcal{L}}{\partial (\partial_\mu \bar{\phi})} \delta \bar{\phi}$, which is my conserved quantity. By conserved current I mean that its 4-divergence is $0$.

(When I substitute the variations, I end up with $j^\mu=i \alpha(\phi \partial^\mu \bar{\phi}-\bar{\phi}\partial^\mu \phi)$.)
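One can check directly that this current is indeed divergence-free once the equations of motion hold. The following SymPy sketch is my own addition; it works in $1{+}1$ dimensions with metric signature $(+,-)$, so the Klein-Gordon equation reads $\partial_t^2\phi = \partial_x^2\phi - m^2\phi$:

```python
# Hedged check: in 1+1 dimensions, d_mu j^mu reduces (after expansion) to
# i*alpha*(phi*box(phibar) - phibar*box(phi)), which vanishes once the
# Klein-Gordon equations of motion are imposed.
import sympy as sp

t, x, alpha, m = sp.symbols('t x alpha m', real=True)
phi = sp.Function('phi')(t, x)
phibar = sp.Function('phibar')(t, x)

# j^mu = i*alpha*(phi d^mu phibar - phibar d^mu phi); with (+,-) metric,
# d^t = d_t but d^x = -d_x
jt = sp.I*alpha*(phi*sp.diff(phibar, t) - phibar*sp.diff(phi, t))
jx = -sp.I*alpha*(phi*sp.diff(phibar, x) - phibar*sp.diff(phi, x))

div = sp.expand(sp.diff(jt, t) + sp.diff(jx, x))

# impose the equations of motion: d_t^2 phi = d_x^2 phi - m^2 phi (and c.c.)
on_shell = {
    sp.diff(phi, t, 2): sp.diff(phi, x, 2) - m**2*phi,
    sp.diff(phibar, t, 2): sp.diff(phibar, x, 2) - m**2*phibar,
}
print(sp.simplify(div.subs(on_shell)))   # -> 0: the current is conserved
```

Off shell, `div` does not vanish; conservation of $j^\mu$ genuinely uses the equations of motion, exactly as the derivation above requires.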


What I don't understand :

To do all this calculation, we assumed that the variations with respect to $\phi$ and $\bar{\phi}$ are independent. Indeed, when we compute $\frac{\partial \mathcal{L}}{\partial \phi}$, we do it holding the other variables constant.

But here this is not the case: I have $\delta \phi = i \alpha \phi$ and $\delta \bar{\phi} = -i \alpha \bar{\phi}$. Thus, if $\phi$ varies, it means $\alpha \neq 0$, and so $\bar{\phi}$ also varies. Hence $\delta \phi$ at constant $\bar{\phi}$ is not possible, and our first-order expansion cannot be made.

Where am I wrong with what I say?


To be more specific about my question:

If I had calculated $\delta \mathcal{L}$ in terms of $\alpha$, that is:

$$ \delta \mathcal{L}= \mathcal{L}(\alpha+d\alpha)-\mathcal{L}(\alpha)$$

I would end up with $0=0$, which makes sense, because if the Lagrangian doesn't vary under the full transformation, it will not vary under an infinitesimal one.

But when I reason with a general variation of the fields and only at the end say "oh, finally, $\delta \phi= i \alpha \phi$ and $\delta \bar{\phi}= -i\alpha \bar{\phi}$", why do I get more information than by saying from the beginning that I transform with a parameter $\alpha$ and computing the variation of the Lagrangian along this parameter (in which case I get the useless equation $0=0$)?

StarBucK

2 Answers


Your question is a common one, at least when posed in the distilled form: how can I take derivatives of a function $f(z, \bar{z})$ with respect to $z$ at constant $\bar{z}$, if $z$ and $\bar{z}$ are interdependent?

Here are two ways of looking at this. Firstly, partial derivatives are superficial. When first learning about partial derivatives, you might have come across a problem of the form:

Suppose $h(x,y) = \exp(-x^2-y^2) $ is the height of a hill above sea level. Suppose you walk a path over the hill of the form $x = \cos(t)$, $y = t$. What is the highest point reached?

There are two ways to attack this problem. The first is to simply substitute the expressions for $x$ and $y$ into $h$ and differentiate with respect to $t$. However, we can also use partial derivatives: $$ 0 = \left.\frac{\partial{h}}{\partial{x}}\right|_y \frac{\mathrm{d}x}{\mathrm{d}t} + \left.\frac{\partial{h}}{\partial{y}}\right|_x \frac{\mathrm{d}y}{\mathrm{d}t} \,.$$ Hopefully this kind of manipulation is familiar to you. But look! How can we vary $h$ with respect to $x$ at constant $y$? We've just said that $x$ and $y$ are interdependent, and varying one will certainly cause the other to vary. What's happening? The point is that there's nothing stopping us from taking the expression for $h(x,y)$ at face value. The partial derivatives we take intentionally blind themselves to any further dependences that $x$ and $y$ might have.
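The hill example can be checked explicitly. This SymPy sketch (my own illustration) confirms that the substitute-then-differentiate route and the "blind" partial-derivative chain rule agree along the path:

```python
# Hedged illustration of the hill example: the direct substitution
# derivative and the chain-rule expression (with partial derivatives taken
# at face value) agree, even though x and y are interdependent on the path.
import sympy as sp

t, x, y = sp.symbols('t x y', real=True)
h = sp.exp(-x**2 - y**2)          # height of the hill
path = {x: sp.cos(t), y: t}       # the walk over the hill

# Method 1: substitute first, then differentiate with respect to t
direct = sp.diff(h.subs(path), t)

# Method 2: chain rule with "blind" partial derivatives
chain = (sp.diff(h, x).subs(path)*sp.diff(sp.cos(t), t)
         + sp.diff(h, y).subs(path)*sp.diff(t, t))
print(sp.simplify(direct - chain))   # -> 0: both routes agree

# Setting the derivative to zero gives sin(t)cos(t) = t, whose only
# solution is t = 0, so the highest point reached is h = exp(-1)
print(h.subs(path).subs(t, 0))       # -> exp(-1)
```

Both routes give the same total derivative, which is the whole point: the partial derivatives never needed to know about the path.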


If you are not satisfied, here's another way of looking at this. For some complex variable $z = x + i y$, we're perfectly happy treating $x$ and $y$ as independent. So perhaps we can make the operations of differentiating with respect to $z$ and $\bar{z}$ more palatable by translating them into derivatives with respect to $x$ and $y$. I claim that we can define $$ \left.\frac{\partial{}}{\partial{z}}\right|_\bar{z} = \frac{1}{2}\left( \left.\frac{\partial{}}{\partial{x}}\right|_y-i\left.\frac{\partial{}}{\partial{y}}\right|_x\right) \qquad \left.\frac{\partial{}}{\partial{\bar{z}}}\right|_z = \frac{1}{2}\left( \left.\frac{\partial{}}{\partial{x}}\right|_y+i\left.\frac{\partial{}}{\partial{y}}\right|_x\right) \,.$$ These are known as Wirtinger derivatives. You can justify these expressions to yourself by acting with both sides on elementary functions such as $z^2$, $z(1+\bar{z})$, $\exp(z - \bar{z})$, and so on, where the $z$ and $\bar{z}$ derivatives are to be calculated by pretending that the two variables are independent. (Note that it's important to write functions in an elementary way. For instance, $\mathrm{Re}(z)$ is not a 'permissible' function; written in this way, we would wrongly conclude that its partial derivative with respect to $\bar{z}$ is zero.) It's then easy to see that (for $f$ real) $$ \left.\frac{\partial{f}}{\partial{z}}\right|_\bar{z}=0 \iff \left.\frac{\partial{f}}{\partial{x}}\right|_y = \left.\frac{\partial{f}}{\partial{y}}\right|_x = 0 \,.$$ And so we find that minimising with respect to the legitimately independent real and imaginary parts of a complex variable is equivalent to forgetting about the interdependence of the variable and its complex conjugate and boldly taking the derivative with respect to it.
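The Wirtinger-derivative claim can likewise be tested on the elementary functions mentioned above. A small SymPy sketch (my own check, not part of the answer):

```python
# Hedged check of the Wirtinger derivative: acting on f written via
# z = x + i*y, the operator (d/dx - i*d/dy)/2 reproduces the derivative
# obtained by pretending z and zbar are independent symbols.
import sympy as sp

x, y = sp.symbols('x y', real=True)
z, zbar = sp.symbols('z zbar')                 # treated as formally independent
subs_xy = {z: x + sp.I*y, zbar: x - sp.I*y}    # rewrite in real variables

def wirtinger_z(f_xy):
    # d/dz at "constant zbar", expressed through x and y
    return (sp.diff(f_xy, x) - sp.I*sp.diff(f_xy, y))/2

for f in [z**2, z*(1 + zbar), sp.exp(z - zbar)]:
    lhs = wirtinger_z(f.subs(subs_xy))          # derivative via x and y
    rhs = sp.diff(f, z).subs(subs_xy)           # naive independent derivative
    print(sp.simplify(lhs - rhs))               # -> 0 in each case
```

Swapping the sign of the imaginary part in `wirtinger_z` gives the $\partial/\partial\bar{z}$ operator, which can be verified the same way.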

gj255

$\phi(x)$ is a complex scalar field, so $\phi(x)$ and $\bar{\phi}(x)$ are independent of each other and the same holds for their variations.

This might be a bit easier to see if you decompose the field into its real and imaginary parts, $\phi(x) = \phi_1(x) + i \phi_2(x)$. For these real fields the Lagrangian takes the form: $$\mathcal{L} = (\partial \phi_1)^2+(\partial \phi_2)^2 - m^2(\phi_1^2+\phi_2^2)$$ and the symmetry transformation is, to first order in $\alpha$: $\begin{pmatrix} \phi_1 \\ \phi_2 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & -\alpha \\ \alpha & 1 \end{pmatrix} \begin{pmatrix} \phi_1 \\ \phi_2 \end{pmatrix} $

So here it's obvious that $\delta \phi_1 = -\alpha \phi_2$ and $\delta \phi_2 =\alpha \phi_1$ are independent.
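This decomposition can be checked in one line of algebra; the SymPy sketch below (my addition) expands $\delta\phi = i\alpha\phi$ into real and imaginary parts:

```python
# Hedged check: writing phi = phi1 + i*phi2, expanding delta(phi) = i*alpha*phi
# into real and imaginary parts recovers delta(phi1) = -alpha*phi2 and
# delta(phi2) = +alpha*phi1.
import sympy as sp

alpha, phi1, phi2 = sp.symbols('alpha phi1 phi2', real=True)
phi = phi1 + sp.I*phi2
dphi = sp.expand(sp.I*alpha*phi)          # i*alpha*(phi1 + i*phi2)
dphi1, dphi2 = sp.re(dphi), sp.im(dphi)   # real-symbol assumptions split it
print(dphi1)   # -> -alpha*phi2
print(dphi2)   # -> alpha*phi1
```

These are exactly the entries of the infinitesimal rotation matrix above acting on $(\phi_1, \phi_2)$.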