I'm sorry for the long question.
In Brian Greene's book 'The Hidden Reality' he gives a nice illustration of the uncertainty principle. Imagine a butterfly flying around in a room. We would like to measure the speed and position of the butterfly by capturing a photograph of it. We can take a photo with a very fast shutter speed, and the resulting sharp photo would give a good measure of the position. But this gives a poor measure of the speed. On the other hand, if we take a photo with a slow shutter speed, the resulting photo with a butterfly trail will give an estimate of the speed, but the position information in this photo will be very coarse. He makes the point that we cannot measure both the speed and the position accurately at the same time, and that this illustrates the quantum uncertainty principle.
I understand what he is trying to convey. I have also seen this myself in action when programming the Fourier transform, where I have to use a longer time window (which gives more, narrower frequency bins over the same bandwidth) to get more accurate frequency values.
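The same trade-off is easy to demonstrate numerically. Here is a minimal NumPy sketch (the sample rate, tone frequency, and window lengths are my own illustrative choices): a 10 ms window can only resolve frequency to within 100 Hz, while a 1 s window resolves it to within 1 Hz, at the cost of averaging the signal over the whole second.

```python
import numpy as np

fs = 1000.0        # sample rate in Hz (illustrative choice)
f_true = 440.3     # true tone frequency, chosen to fall between coarse bins

def peak_frequency(window_seconds):
    """Estimate the tone's frequency from the FFT of a window of given length."""
    n = int(fs * window_seconds)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f_true * t)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs[np.abs(np.fft.rfft(x)).argmax()]

# Short window (the 'fast shutter'): bin width fs/n = 100 Hz, so the
# estimate is coarse.
coarse_est = peak_frequency(0.01)
# Long window (the 'slow shutter'): bin width 1 Hz, so the estimate is
# sharp -- but it averages the signal over a full second.
fine_est = peak_frequency(1.0)
print(coarse_est, fine_est)
```

The bin width is simply the sample rate divided by the number of samples, fs/n, so narrowing the frequency bins always means lengthening the observation window.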
I was trying to explain this to a friend and I was trying to make a point that this is not a technological limitation and there is no way to improve the accuracy in the butterfly example using a better technology.
Then my friend asked: what if we use two different cameras, one with a slow shutter speed and one with a fast shutter speed? I was a bit taken aback. On the face of it, it sounds like a smart idea. But I knew that this wouldn't work. However, I was not sure about the minimal argument against it.
I could think about two arguments.
- Here the crux is that we need to measure both the parameters simultaneously. Assume that my friend configures both cameras to fire 'simultaneously' in a synchronized manner. If the short exposure time is 10 milliseconds and the long exposure time is 1 second, he could program both of his cameras so that the 5 ms tick of the fast camera coincides with the 0.5 s tick of the slow camera. This would mean that the two cameras start at different times, but their timing midpoints will coincide, ensuring that the cameras capture the information somewhat 'at the same instant'. Even though one could argue that this is not quite an 'instant', I believe this is in the spirit of the experiment.
However, the catch here is that the slow camera still captures a span of 1 second, and the resulting estimated velocity cannot be reliably related to the position captured by the 10 ms camera.
- This is the macro-world equivalent of a quantum measurement. In the quantum world, we cannot make two measurements on the same particle in a superposition state to obtain two different parameters. The first measurement disturbs the superposition and snaps the particle into a definite eigenstate, and the second measurement would just measure the value of this eigenstate, not of the original superposition state.
So the butterfly experiment is, strictly speaking, not a quantum one.
My question: must I necessarily invoke the second argument, or is the first one enough to close the deal?
Edit: After learning a little more about the non-commutative nature of quantum measurement, it seems my second argument is not quite accurate: the uncertainty has nothing to do with timing, and everything to do with the two complementary quantum operators not sharing a common eigenvector (equivalently, their commutator not being zero). However, the main question still holds.
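For completeness, the standard statement for position and momentum (which is what my edit is gesturing at) is

$$[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar,$$

and the Robertson uncertainty relation

$$\sigma_x \, \sigma_p \ge \frac{1}{2}\left|\langle [\hat{x}, \hat{p}] \rangle\right| = \frac{\hbar}{2}$$

makes explicit that the bound comes from the non-zero commutator, not from when or how fast the measurements are performed.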