
On p. 28 in Chapter 2 of Peskin & Schroeder, the concept of causality is discussed. Specifically, the book considers two fields at two different points, takes their separation to be spacelike, and then states the following:

To really discuss causality, however, we should ask not whether particles can propagate over spacelike intervals, but whether a measurement performed at one point can affect a measurement at another point whose separation from the first is spacelike. The simplest thing we could try to measure is the field $\phi(x)$, so we should compute the commutator $\left[\phi(x), \phi(y) \right]$; if this commutator vanishes, one measurement cannot affect the other.

At this point, I have several questions. Firstly, why does the text talk about measurements if the fields are not observables? And what does a correlator actually represent?

EDIT: my question is not a duplicate, for the following reason:

The answers to "In QFT, why does a vanishing commutator ensure causality?" didn't fully address the specific question about the nature of fields as observables, so I don't consider my question a duplicate; more precisely, the answers given there do not address all the doubts raised here. Those answers explain well the difference between a correlation function and the commutator of two observables, and emphasize why two observables must commute at spacelike separation. However, Peskin & Schroeder apply this reasoning to the operators $\phi(x)$ and $\phi(y)$ and talk about "measurements" as if these fields were actually observables, when a few pages earlier these operators are treated as built from creation and annihilation operators that create particle states. Moreover, while the answers to the previous question talk about correlation, Peskin & Schroeder talk about particle propagation. The two interpretations do not seem similar at all to me, so my question is: what is the correct interpretation of the fields $\phi$? Are they observables or not? Does the quantity $\langle\phi(x)\phi(y)\rangle$ express the probability that a particle propagates in space, or does it represent the correlation between the field values at $x$ and $y$?

Ghilele

2 Answers


I do not know the P&S textbook; my answer below just reflects some implicit reasoning of some (other) textbooks, not necessarily shared by P&S.

I stress that what I wrote below is not my viewpoint on this very delicate issue.

In a more rigorous perspective, the relevant object is $\hat{\phi}(f)$, formally interpreted as $$\hat{\phi}(f):= \int \hat{\phi}(x) f(x) \ \mathrm{d}^4x.$$ It is the field operator smeared with a test function $f$, where $f$ is any smooth, compactly supported, real function defined on spacetime.

In fact, $\hat{\phi}(f)$ is a densely defined Hermitian operator in the Hilbert space for every such $f$ (*).
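(As a concrete illustration, not part of the original answer: for the free Klein-Gordon field in P&S's conventions, writing $\tilde f(p) := \int e^{ip\cdot x} f(x)\, \mathrm{d}^4x$, the smeared field becomes $$\hat{\phi}(f) = \int \frac{\mathrm{d}^3p}{(2\pi)^3\sqrt{2E_{\mathbf p}}}\left( \overline{\tilde f(E_{\mathbf p},\mathbf p)}\, a_{\mathbf p} + \tilde f(E_{\mathbf p},\mathbf p)\, a^\dagger_{\mathbf p}\right),$$ which is manifestly Hermitian for real $f$, and $\hat{\phi}(f)|0\rangle$ is a one-particle state whose momentum-space wavefunction is essentially $\tilde f$ restricted to the mass shell. So the same object is simultaneously "built from creation and annihilation operators" and a candidate observable, which is the point the question is worried about.)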

Technically speaking, the associated observable is the closure of that operator, which is selfadjoint and thus, in principle, an observable. There is a dense invariant domain common to all the operators $\hat{\phi}(f)$ as $f$ varies, and the closure of each operator on that domain defines the said selfadjoint operator. I will assume that we work on this domain henceforth.

The commutation relations, written in a rigorous form, are $$[\hat{\phi}(f), \hat{\phi}(g)]= 0\quad \mbox{if the supports of $f$ and $g$ are causally separated}.$$ Here we are considering the commutation relation of proper observables: they represent observables localised in the regions defined by the supports of the two smearing functions, and these regions are causally separated.

These observables are expected to be compatible since, for "obvious causal reasons" at least in some folklore of QFT (see below), their measurements cannot "disturb each other".

Compatibility, in the standard version of QM, is equivalent to commutativity of the spectral measures, which, on suitable invariant domains, implies commutativity of the same observables. This justifies the above commutation relations.

Formally, we can also write $$0= [\hat{\phi}(f), \hat{\phi}(g)] = \int\int [\hat{\phi}(x), \hat{\phi}(y)] f(x)g(y) \ \mathrm{d}^4x \ \mathrm{d}^4y.$$ Arbitrariness of $f$ and $g$ should imply $$[\hat{\phi}(x), \hat{\phi}(y)]=0$$ when $x$ and $y$ are causally separated, though this identity is just shorthand for the rigorous statement written above.
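(Again as an illustration beyond the answer itself: for the free Klein-Gordon field the unsmeared commutator is a c-number, the Pauli-Jordan function, $$[\hat{\phi}(x), \hat{\phi}(y)] = \int \frac{\mathrm{d}^3p}{(2\pi)^3}\,\frac{1}{2E_{\mathbf p}}\left(e^{-ip\cdot(x-y)} - e^{ip\cdot(x-y)}\right)\mathbf{1}.$$ When $x-y$ is spacelike, a Lorentz transformation takes $x-y$ to $-(x-y)$, the two terms cancel, and the commutator vanishes. This is essentially the computation Peskin & Schroeder carry out on the very pages the question refers to, now read as a statement about commuting local observables rather than about particle propagation.)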

ADDENDUM. I sketch here a proof of commutativity based on some, let's say, folklore assumptions.

I explicitly admit that they are disputable, especially the second one, on account of the modern theory of quantum measurement.

I do not know if there is a better way to justify the commutativity postulate according to the modern view. (On my side, I just assume commutativity since it produces interesting and important physical facts.)

Let us assume that:

  1. real smeared boson fields (selfadjoint operators) are observables;

  2. measurements of them are described in terms of the Lüders projection postulate using the spectral measures of the smeared fields;

  3. outcomes of the above measurements are recorded in spacetime regions inside the support of the smearing functions.

Consider two causally separated regions in Minkowski spacetime $\Omega, \Omega' \subset M^4$ and two orthogonal projectors $P^{(\Omega)}_E$, $P^{(\Omega')}_{E'}$ of the spectral measures of $\hat{\phi}(f)$ and $\hat{\phi}(f')$ respectively, where $\text{supp}(f) \subset \Omega$ and $\text{supp}(f')\subset \Omega'$.

Suppose that the measurement in $\Omega$ is not selective, i.e., we test $P_E^{(\Omega)}$ and $\neg P_E^{(\Omega)}:= I- P_E^{(\Omega)}$ without recording the result. If the (generically mixed) initial state is $\rho$, the post-measurement state is $$\rho' := P_E^{(\Omega)} \rho P_E^{(\Omega)} + (I-P_E^{(\Omega)})\rho (I-P_E^{(\Omega)})\:.$$ The probability of subsequently finding $E'$ in $\Omega'$ is therefore $$\text{tr}\left(P^{(\Omega')}_{E'} \rho'\right)\:.$$

However, since this second measurement is located in a causally separated region, the same probability should arise when it is performed directly on the initial state $\rho$: there is an observer who describes the measurement in $\Omega'$ as occurring before the one in $\Omega$. Hence, we are committed to assume that $$\text{tr}\left(P^{(\Omega')}_{E'} \rho\right)= \text{tr}\left(P^{(\Omega')}_{E'} \rho'\right).$$ An easy computation based on linearity and the cyclic property of the trace yields $$\text{tr}\left(\rho \left(P^{(\Omega')}_{E'} - P_{E}^{(\Omega)} P^{(\Omega')}_{E'} P_{E}^{(\Omega)} - (I-P_{E}^{(\Omega)}) P^{(\Omega')}_{E'} (I-P_{E}^{(\Omega)})\right)\right)=0\:.$$ Arbitrariness of $\rho$ entails $$P^{(\Omega')}_{E'} = P_{E}^{(\Omega)} P^{(\Omega')}_{E'} P_{E}^{(\Omega)} + (I-P_{E}^{(\Omega)}) P^{(\Omega')}_{E'} (I-P_{E}^{(\Omega)})\:.$$ Applying $P_E^{(\Omega)}$ to both sides, first from the left and then from the right, produces $$ P_E^{(\Omega)}P^{(\Omega')}_{E'} = P_E^{(\Omega)}P_{E}^{(\Omega)} P^{(\Omega')}_{E'} P_{E}^{(\Omega)} +0 = P_{E}^{(\Omega)} P^{(\Omega')}_{E'} P_{E}^{(\Omega)} $$ and $$P^{(\Omega')}_{E'} P_E^{(\Omega)} = P_{E}^{(\Omega)} P^{(\Omega')}_{E'} P_{E}^{(\Omega)}P_E^{(\Omega)} +0 = P_{E}^{(\Omega)} P^{(\Omega')}_{E'} P_{E}^{(\Omega)} \:,$$ so that we obtain the thesis: $$P^{(\Omega')}_{E'} P_E^{(\Omega)}=P_E^{(\Omega)}P^{(\Omega')}_{E'} .$$
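A finite-dimensional toy check of this derivation may help (this sketch is mine, not part of the answer; the dimension and the particular projectors are arbitrary choices). It verifies numerically that when the two projectors commute, the non-selective Lüders update in one region leaves the outcome probability in the other region unchanged, while a generic non-commuting projector does disturb it:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6  # dimension of a toy Hilbert space (arbitrary)

def random_density_matrix(dim):
    """A random mixed state rho = A A^dagger / tr(A A^dagger)."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def random_projector(dim, rank):
    """Orthogonal projector onto the span of `rank` random vectors."""
    v = rng.normal(size=(dim, rank)) + 1j * rng.normal(size=(dim, rank))
    q, _ = np.linalg.qr(v)
    return q @ q.conj().T

# Two commuting projectors, standing in for P_E^(Omega) and P_E'^(Omega')
P = np.diag([1, 1, 0, 0, 0, 0]).astype(complex)
Q = np.diag([1, 0, 1, 0, 1, 0]).astype(complex)

rho = random_density_matrix(d)
I = np.eye(d)

# Non-selective Lueders update for the (unread) measurement in Omega
rho_prime = P @ rho @ P + (I - P) @ rho @ (I - P)

print(np.trace(Q @ rho).real)        # probability of E' on the initial state
print(np.trace(Q @ rho_prime).real)  # same value: [P, Q] = 0, no disturbance

# With a non-commuting projector the two probabilities generally differ
R = random_projector(d, 2)
rho_prime_R = R @ rho @ R + (I - R) @ rho @ (I - R)
print(np.trace(Q @ rho_prime_R).real)  # generally differs from tr(Q rho)
```

The first two printed numbers coincide up to rounding, which is the easy direction of the argument; the derivation above establishes the converse, namely that demanding no disturbance for every state forces the projectors to commute.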

Final comments

  1. If we do not assume that $\hat{\phi}(f)$ is directly measurable, but other formal local observables generated by it are, e.g. the smeared renormalized stress energy tensor, then we can repeat the proof above for them.

  2. All that does not mean that non-local correlations of measurement outcomes are forbidden for causally separated regions: these are possible and are due to the measured state, which, in a sense, can be entangled. The so-called Bell theorem (actually the version principally due to Leggett) is, mutatis mutandis, compatible with the framework above. There, polarization/spin observables of a pair of particles are measured in two spatially separated regions. For that reason, these observables are assumed to commute. (More precisely, they are described as observables in the two factors of a tensor product of Hilbert spaces. In QFT this description is not generally permitted, since the local algebras are von Neumann factors of type III and independent subsystems are not described in terms of tensor products. However, commutativity of causally separated observables remains.) In spite of the commutativity of each pair of simultaneously measured observables, in the Bell experiment correlations between the outcomes arise due to the special entangled state of the pair of particles; a minimal numerical illustration of this point is sketched after this list. (To be precise, I stress that to see the violation of the CHSH inequality one measures several pairs of commuting observables; however, observables belonging to different pairs do not commute.)

  3. The major weakness of the "proof" above is that it exploits a very old and naive description of the measurement procedure and of the post-measurement state. Nowadays a physically very general and sound theory of quantum measurement exists, and many results against the universality of the (projective) Lüders description have accumulated over the years.

  4. Locality of QFT observables gives rise to apparently non-local phenomena, first of all the Reeh-Schlieder theorem. However, even in that context, commutativity of causally separated observables is a crucial ingredient in establishing the Reeh-Schlieder result.
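To illustrate point 2 above in the simplest possible setting (a sketch of my own, using ordinary two-qubit quantum mechanics rather than QFT): each correlator below is an expectation value of a product of two commuting observables, one per side, yet the entangled singlet state produces correlations violating the CHSH bound $|S|\le 2$.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable (eigenvalues +/- 1) along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state |psi> = (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlator <psi| A(a) (x) B(b) |psi>; the two tensor factors commute."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

# Standard CHSH settings
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, -np.pi / 4

S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(S)  # approximately -2*sqrt(2), beyond the classical bound |S| <= 2
```

The correlations come entirely from the entangled state; the commutativity of the two factors $A(a)\otimes I$ and $I\otimes B(b)$ is never violated, which is exactly the situation described in point 2.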


(*) I stress that the smeared fields $\hat{\phi}(f)$, for real boson quantum fields, are considered observables from the perspective of Local Quantum Physics, the view relying on the Haag-Kastler formulation of QFT. Other fields, like fermion fields, are not considered observables.

Abstractly speaking, they are elements of the $*$-algebra of observables of the theory (their exponentials $e^{i\hat{\phi}(f)}$ define the generators of the Weyl $C^*$-algebra of quasi-local observables).


To get at the heart of this issue, you need to step back and discuss the difference between correlation and causation. Everyone has heard "correlation is not causation", so it's crucial to define what is meant by "causation" independent of correlation. Evidently, correlation between A and B is an undirected concept; it's just as correct to say "A is correlated with B" as it is to say "B is correlated with A". But causation is not symmetrical like this. If A causes B, then it's not reasonable to say that B causes A. So to define causation, one needs to break the symmetry.

In the modern viewpoint, the way to do this is to talk about external "interventions". (External to the system being modelled.) These interventions are special; they're "causes", as far as the modelled system is concerned. Specifically, if there is some intervention variable "A" that is correlated with some other observed (non-intervention) variable "B", then it follows that A causes B. The external intervention breaks the symmetry.

The language of quantum field theory isn't well suited to distinguishing interventions from observations; they both fall under the generic term "measurement". More accurate would be to distinguish measurement "settings" (the chosen basis or an externally-controlled Hamiltonian) from measurement "outcomes". It should go without saying that the settings correspond to interventions, and can be causes. The outcomes cannot be causes (but they can be caused by settings).

The second most popular answer to the prior question "In QFT, why does a vanishing commutator ensure causality?" (by MannyC) drills down to this distinction, imagining a Hamiltonian representing a "kick". That's an intervention, of course. It then shows that unless the commutator is zero, that kick can be correlated with a measured field somewhere else. This is causal analysis, because the kick is an intervention and the measured field is not.
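A rough sketch of that argument in formulas (my own paraphrase of the standard linear-response reasoning, not a quotation from that answer; the classical source $j$ is my notation for the "kick"): add a term $H_{\text{kick}}(t) = -\int \mathrm{d}^3x\, j(t,\mathbf x)\, \phi(t,\mathbf x)$ to the Hamiltonian, with $j$ supported near the intervention point $x$. To first order in $j$, the shift in the field expectation value at $y$ is $$\delta\langle\phi(y)\rangle = i\int \mathrm{d}^4x'\,\theta(y^0 - x'^0)\, j(x')\,\big\langle[\phi(y),\phi(x')]\big\rangle,$$ so the setting $j$ can influence the outcome at $y$ only where the commutator is nonzero. A commutator that vanishes at spacelike separation therefore forbids precisely this kind of causal influence, while leaving ordinary correlations such as $\langle\phi(x)\phi(y)\rangle$ untouched.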

Peskin and Schroeder were writing before this modern view of causality had been popularized, so they talk about "measurements" in a seemingly symmetrical way, and then try to break the symmetry by using the causal word "affect" to prime the reader's causal instincts. This leaves their argument unfortunately vague. A more modern approach would be to explicitly identify a possible point of intervention, like the Hamiltonian "kick", and call that a "cause".

Ken Wharton