
I am looking for an answer in a quantum/theoretical physics setting, not a purely mathematical one (i.e. not the mathematical answer given for the staircase paradox: https://math.stackexchange.com/questions/12906/the-staircase-paradox-or-why-pi-ne4):

You travel from point A to point B on a right triangle, moving only along its legs. For a 3-meter, 4-meter, 5-meter right triangle you would travel a distance of seven meters. Now imagine traveling to point B in alternating horizontal and vertical segments while remaining within the right triangle. You would again travel a distance of seven meters (Figure 1). My question is: once you start decreasing the size of your vertical and horizontal segments until they are infinitely small, would you start traveling the length of the hypotenuse (five meters in this example) instead of the combined length of the legs (seven meters)? Does this also mean that at some point you travel a distance somewhere between the length of the two legs and the length of the hypotenuse? (Figure 2)

Edit 1: Sorry if my question is not clear, but @Sandejo, your interpretation is correct. I want to know whether the distance becomes five, or some other value, once the segments become infinitesimally small. At that point, would it even be possible to create horizontal and vertical lines? Is there a way to prove this using quantum physics?

2 Answers


No, you always travel the distance of both legs. That distance is known as the Manhattan distance, induced by the taxicab norm, also known as the $L^1$ norm.

Why is it called that? Imagine you are a taxi driver in a city laid out on a grid. You can never cut through the buildings; you always have to drive along the legs of the triangle that each block introduces. Making lots of right and left turns does not let you cover any shorter a distance than going straight and taking a single turn.

This principle holds for vanishingly small grid sizes, which is the world the $L^1$ norm represents.
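For concreteness, here is a minimal numerical sketch (an illustration, not part of the original answer) showing that the staircase path length is independent of the step size: no matter how finely you subdivide the steps, the total stays at seven meters, while the Euclidean (hypotenuse) distance is five.

```python
# Minimal sketch, assuming a 3-4-5 right triangle with legs along the axes.
# The staircase path length stays 7 regardless of step count, while the
# straight-line (Euclidean) distance between A and B is 5.
import math

def staircase_length(legs=(4.0, 3.0), n_steps=10):
    """Length of an n-step staircase from A to B, moving only horizontally
    and vertically inside the triangle."""
    dx, dy = legs[0] / n_steps, legs[1] / n_steps
    # Each step contributes one horizontal and one vertical segment.
    return n_steps * (dx + dy)

def euclidean_length(legs=(4.0, 3.0)):
    """Straight-line (hypotenuse) distance between A and B."""
    return math.hypot(*legs)

for n in (1, 10, 1000, 10**6):
    print(f"{n:>8} steps: staircase = {staircase_length(n_steps=n):.6f}, "
          f"hypotenuse = {euclidean_length():.6f}")
```

Running this prints a staircase length of exactly 7.0 for every step count, which is the point: the limit of the staircase lengths is not the length of the limiting curve.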

prolyx

Classically, we can define the trajectory of a particle by its position as a function of time: $\vec x(t)$. From this, we can define the distance $d$ it travels during some time interval $[t_0,t_1]$ by $$d = \int_{t_0}^{t_1}\lVert\dot{\vec x}(t)\rVert\,\mathrm{d}t$$ where $\lVert\dot{\vec x}(t)\rVert$ is the magnitude of the velocity (almost always the Euclidean norm, but in Jonathan Jeffrey's answer, it could be the Manhattan norm).
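To make the role of the norm concrete, here is a short sketch (an illustration added for this write-up, not part of the original answer) that discretizes the integral above for the straight-line trajectory $\vec x(t) = (4t, 3t)$ on $t \in [0,1]$, under both the Euclidean and the Manhattan norm:

```python
# Sketch: approximate d = ∫ ||x'(t)|| dt for the straight-line trajectory
# x(t) = (4t, 3t), t in [0, 1], under the Euclidean (L2) and Manhattan (L1) norms.
import numpy as np

t = np.linspace(0.0, 1.0, 10_001)
x = np.stack([4.0 * t, 3.0 * t], axis=1)   # sampled positions along the path
v = np.diff(x, axis=0)                     # displacement per time step

d_euclidean = np.sum(np.linalg.norm(v, ord=2, axis=1))  # ~5, the hypotenuse
d_manhattan = np.sum(np.linalg.norm(v, ord=1, axis=1))  # = 7, the two legs

print(d_euclidean, d_manhattan)
```

The same trajectory yields a length of 5 with the Euclidean norm and 7 with the Manhattan norm, so the "distance travelled" depends on which norm the magnitude in the integral uses.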

However, quantum mechanically, we cannot define the trajectory of a particle this way, as its position is, in general, not well defined. Instead, we can only talk about the probability distribution of where it could be observed. As such, we cannot define the distance travelled by the particle, since it does not follow a single path. To further illustrate this point, we can look to the path integral formulation, which describes quantum mechanics by integrating over all possible paths for the particle to take between two states.

Sandejo