153

I was coding a physics simulation, and noticed that I was using discrete time. That is, there was an update mechanism advancing the simulation by a fixed time step, repeatedly, emulating a changing system.

I thought that was interesting, and now believe the real world must behave just like my program does. Is it actually advancing forward in tiny but discrete time intervals?

knzhou
  • 107,105
jcora
  • 2,139

13 Answers

103

As we cannot resolve arbitrarily small time intervals, what is "really" the case cannot be decided.

But in classical and quantum mechanics (i.e., in most of physics), time is treated as continuous.

Physics would become very awkward if expressed in terms of a discrete time: the discrete case is essentially intractable, since analysis (the tool created by Newton, in a sense the father of modern physics) could no longer be applied.

Edit: If time appears discrete (or continuous) at some level, it could still be continuous (or discrete) at higher resolution. This is due to general reasons that have nothing to do with time per se. I explain it by analogy: For example, line spectra look discrete, but upon higher resolution one sees that they have a line width with a physical meaning.

Thus one cannot definitely resolve the question with finitely many observations of finite accuracy, no matter how contrived the experiment.

33

I think it's important to note that quantum or quantized time is not equal to discrete time. For instance, we have "quantized" space. By this we mean that it receives quantum treatment. But the underlying coordinates still form a continuum. So even if you live on a finite circle and only consider wavefunctions so that you get a countable set of basis functions from which to form all the others, you can still in principle measure incidence of particles at any point, again forming a continuum. Therefore, if we take quantum time in analogy to quantum space, we would have to conclude that quantum mechanically it would still form a continuum.
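To make the circle example concrete, here is a minimal sketch (the normalization and the $2\pi$ circumference are conventional choices, not essential to the point):

$$\psi_n(\theta)=\frac{e^{in\theta}}{\sqrt{2\pi}},\qquad n\in\mathbb{Z}\qquad\text{(a countable basis)},$$

$$\psi(\theta)=\sum_{n\in\mathbb{Z}}c_n\,\psi_n(\theta),\qquad\theta\in[0,2\pi)\qquad\text{(measurable positions still form a continuum)}.$$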

Of course none of this proves how the universe really works, which is your question. The only honest direct answer to your question is "we don't know". Physical theories do not describe how the universe actually works; the only thing we know is that their predictions match the experimental results we currently possess. So even if the best physical theories we currently possess use a continuum of temporal coordinates, we cannot by any means conclude that the way the universe actually works matches our description.

SMeznaric
  • 1,584
25

The answer to this question is not presently known. Current physics is, as stated in other answers, based on fully continuous mathematical models, which in particular assume spacetime to be continuous. On the other hand, you could argue that these models are isomorphic to discrete constructive models, with the general view that the continuous is the limit of the discrete. Some modern spacetime theories assume an underlying network/relational structure and are fully discrete.

My personal belief is that continuous structures do not exist in the physical world. This is however just a belief.

See also: Is the universe finite and discrete?

18

I'd say there's no conclusive evidence, but in quantum physics, Planck time is sometimes cited as a possible smallest unit of time.

The source for my data is Quantum Gods: Creation, Chaos, and the Search for Cosmic Consciousness by Victor J. Stenger. In there, he goes into a lot of detail about this in one chapter.

16

What you are talking about is similar to the problem of quantum gravity. Since gravity is an effect of the curvature of spacetime, to have a quantum theory of it you need to quantize the spacetime manifold. This is done with spin foams: little units of spacetime volume that have spins associated with them. They connect together like total angular momenta and build up into various kinds of geometry. This is just one theory, but it comes from the very real problem of "what is the quantum field theory of gravity?". It also addresses the following puzzle: higher energy is needed to resolve smaller distances (sizes), and to resolve small enough distances the energy eventually becomes large enough to couple to the metric of spacetime itself. So how do we talk about spacetime when the uncertainty in the injected energy transfers to uncertainty in the metric?

Ben Sprott
  • 1,430
7

Following the work of Julian Barbour and others, time can be defined (in a closed system) by keeping track of all the changes (of particles and so on).

In this respect we would say that in a classical (macroscopic) system time is continuous, since the motions of such objects are essentially continuous, and the way you parameterize the changes is then continuous as well.

In a quantum mechanical system, I think this gets trickier, because the formalism is set up from the point of view of a "scientist in a lab", so that time is a continuous classical external parameter for the macroscopic scientist.

In some formulations of QM, position is a continuous variable and particles have definite (but uncertain) positions; in this context you can still have a continuous time parameter.

6

There is no continuous time or space; only events happen. Reading this answer is one event, and then looking up at the roof is another. Combining the two, together with a measure of the time elapsed between them, gives the apparent motion of events, just as the frames of a movie do.

David Z
  • 77,804
wilfred
  • 115
5

This is both one of the most important questions to ask in physics and also one of the most ridiculous.

Many of the arguments go like this: we can't do the math if it's not continuous, therefore it's continuous; or, I don't get how time can be discrete, so it must be continuous; or, "history and tradition say so, please don't upset things."

If a system consists of discrete samples, then it is isomorphic to another system described by continuous functions, under the condition of finite information content. In that case they are just different basis sets: one consisting of delta functions, the other a superposition of any reasonable orthogonal basis set. Being made of discrete points in space or time is the same as saying the system is band-limited in Fourier space and can't have infinite energy (or information content, as discussed below). It is then a matter of choice whether you want to consider the system as being made of continuous functions or as an array of numbers.
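As a minimal sketch of this sampled-to-continuous equivalence (the signal, sample rate, and window below are arbitrary choices made only for illustration):

```python
import numpy as np

# A band-limited "system": a sum of sinusoids below the Nyquist frequency.
def signal(t):
    return np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.cos(2 * np.pi * 7.0 * t)

fs = 20.0                  # sample rate in Hz; above 2 * 7 Hz, so no aliasing
n = np.arange(-200, 201)   # sample indices: the "discrete" description
samples = signal(n / fs)   # an array of numbers, one per delta-function basis element

# Whittaker-Shannon interpolation: rebuild the continuous function from its
# samples, using one sinc per sample point as the continuous basis.
def reconstruct(t):
    return np.sum(samples * np.sinc(fs * t - n))

for t in (0.123, 0.456, 0.789):
    print(f"t={t}: true={signal(t):+.6f}, reconstructed={reconstruct(t):+.6f}")
```

The two descriptions (the array `samples` and the function `signal`) carry the same information, which is the isomorphism claimed above.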

That's why, for example, electrons stuck in a potential well have discrete energy levels representable as a finite set of quantum numbers: because of finite boundary conditions and the enforcement of integer modes of wave-function continuity around a sphere. But given sufficient extra energy, they're off somewhere else entirely.
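For a concrete textbook instance (the one-dimensional infinite well of width $L$, a simpler cousin of the spherical case above), the levels are

$$E_n=\frac{n^2\pi^2\hbar^2}{2mL^2},\qquad n=1,2,3,\dots$$

where the discrete quantum number $n$ comes entirely from the boundary conditions, while the position $x\in[0,L]$ remains continuous.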

The map from discrete to continuous described above is standard signal-processing mathematics of sampled systems, used every day whenever you listen to music: it goes from numbers in a computer to air vibrations, and mathematically it can be done without any loss of information (no compression need be used in the music example).

We might use a spatial or temporal basis of samples and a Fourier basis, or wavelet basis, or a mixture of Gaussians, etc. Every set of numbers that purports to describe a physical system makes no sense without an accompanying definition of its basis. And for each such basis there is a linear method to provide the lossless reconstruction of its representation as a continuous function, whether real or complex.

So it's always possible to map a continuous description of any finite bounded system onto a discrete description of the same system by some unitary transform which places zero everywhere except at a finite set of points. So the original question amounts to "how long is a piece of string?"

The real question is: Does the universe have a finite information content? Is the universe bounded? But more importantly: is the universe locally bounded with finite local resources, or a finite rate of accumulating local resources to meet demand?

The relevant thing to ask next is whether the universe has a finite maximum information density. That is: can there be an infinity of points, arbitrarily close together, each holding numerical values of infinite resolution and in communication with its neighbors, with infinite bandwidth to transfer those values precisely over space and time, without error, in infinitesimal time, or not?

It's worth bearing in mind that if you assume a continuum of time and space without any other consideration, you're loading in the assumption of infinite local resources and infinite information content, and to me Occam's razor says no.

But evidence from physics also does not support this.

Firstly, there's an upper limit to energy density, before space collapses to a black hole singularity.

There's the related holographic entropy bound, which limits the maximum number of bits that can be represented across any surface area, and implicitly the minimum spatial resolution.

There's the Landauer limit, confirmed by experiment, which says it takes a minimum of about 2.805 zJ (that is, $k_B T \ln 2$ near room temperature) to flip the information state of a physical system by one bit. It scales with temperature, a little like an inverse signal-to-noise ratio. So you're going to have to use more energy to flip more bits in any system; each one comes with a finite cost. It seems, then, that energy cost prevents packing infinitely precise numbers arbitrarily densely if any of them will ever change.

And there's the fact that it takes a finite, non-zero amount of time for any quantum system to transition from one orthogonal state to another, which puts an upper bound on the rate of computation of any physical system: by the Margolus–Levitin theorem, the processing rate cannot exceed about $6 \times 10^{33}$ operations per second per joule of energy. And we can't put in as much energy as we'd like to make things go faster, because of the previous constraints.
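A quick check of both figures (a sketch; the constants are standard SI/CODATA values, and room temperature for the Landauer number is my assumption):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K (exact SI value)
hbar = 1.054571817e-34  # reduced Planck constant, J*s

# Landauer limit: minimum energy to erase one bit at temperature T is k_B * T * ln 2.
T = 293.15  # kelvin (~20 C); the 2.805 zJ figure corresponds to roughly this temperature
E_bit = k_B * T * math.log(2)
print(f"Landauer limit at {T} K: {E_bit * 1e21:.3f} zJ per bit")  # ~2.805 zJ

# Margolus-Levitin theorem: at most 2E / (pi * hbar) orthogonal-state
# transitions per second for average energy E, i.e. per joule:
ops_per_joule = 2 / (math.pi * hbar)
print(f"Margolus-Levitin bound: {ops_per_joule:.2e} ops/s per joule")  # ~6e33
```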

So to me that implies that speaking of a physical system computing things faster than a bounded rate is not physically meaningful: the universe can't simulate itself faster than the rate at which it can move between orthogonal states, states which represent information that is at some point isolated, distinct, and discriminable as a set of observables, as opposed to being always entangled.

There's also evidence from the scattering matrix and its growing association with permutations (Cayley graphs on small groups, the permutahedron, and finite rational sequences) that particles themselves are, in some space, similar to low-dimensional polytopes, with discrete geometric representations.

And finally there's the question of scale relativity. We have special relativity, but we don't know if there's a preferred scale, and we observe everything from one specific vantage point. There are publications by serious physicists advancing the argument that scale invariance is part of the relativity puzzle (e.g. Laurent Nottale's book on scale relativity), and that means that the "sampling grid", if any, could appear to vary from place to place, or between observers under different conditions, because we have no absolute yardstick for size. I jokingly like to ask: do you realize you're actually only 1/10th the size you were when you were born? If everything scales freely and slowly over time, how would we ever know? Only extremely rapid differential changes of scale in a fully relativistic universe would create noticeable effects for humans. There should be a Higgs-boson equivalent for local relativistic scale changes.

There are more arguments that come to mind, but this is a discrete and very bounded sample of them.

Robotbugs
  • 365
2

My understanding of the fundamental issue of time is that if we base it upon physical transactions, then we are dealing with a discretized system (e.g. quantized interactions).

Moreover, a discretized/quantized time may then have geometric properties that further confound the question.

1

How can we answer that, without first having a better understanding of what we mean by "time" - and without first being certain that the various formulations we have of time actually correspond to what we mean?

Let's try a novel approach. Start by characterizing time as the passing of events. A key property of events is that they have an ordering with respect to one another: some events take place after others and are connected to those that they follow. It's a partial ordering, as far as we know: remotely-situated and unrelated events do not exhibit any definite before-after relation with one another, although they may be endowed with such a relation in the representations we have.

Historically, the representation that comes to mind is that arising from the Newtonian world. Its main distinguishing feature is that it imposed an ordering on events - no matter how far situated they were from one another - based on when they occurred, as reckoned by a universal clock. Only events that resided in the same 3D snapshot with one another were counted as being neither before nor after, but simultaneous.

The transition from the Newtonian world to the world of Relativity is well-accounted for and well-known, and I won't go in too much detail about it, except to note that the chief premise that there be planes of simultaneity was revoked in such a way as to allow for the existence of events A, B and C such that C is after A, but B is neither before nor after A or C. The chief example would be two ticks of a clock one second apart on the moon (A and C) with B being an event on Earth that lies in a certain small time window of about 1 1/2 seconds (twice the distance to the moon, in light seconds, minus 1 second).

Both of these cases impose an external, universal relation on events of some sort that glosses over the question of what connection (if any) any given pair of events may have to one another. And neither definitively addresses the question of what we actually mean by an "event" - other than to beg the question by calling every single point in the underlying chrono-geometry an "event". It begs the question in two ways: (1) that there actually be any event that happens at the given point, and (2) that only one event can happen there.

So, the novel approach I'd like to try here is to identify an event with that which occurs when a measurement is done. What a measurement is, exactly, is the topic of Measurement Theory, which also happens to be ground zero of the issue of quantum theory interpretations. But it's not resolved.

Also not definitely resolved is the question of what constitutes a before-after relation for measurements. When can we say that one measurement actually occurs before another - with the specific aim in mind that the earlier one should have some actual connection to the later one? And what kind of ordering relation results? A partial ordering? Or do cycles exist in the relation graph? Cycles exist if time-travel loops or other forms of causality violation exist - like those considered by Wheeler and Feynman in their 1949 paper "Classical Electrodynamics in Terms of Direct Interparticle Action" (Reviews of Modern Physics 21 (3), accessible on-line), where they introduced the idea of the "glancing blow" solution to the time-travel paradox, which was later picked up and reiterated by Friedman et al. in their early-1990s time-travel paper.

Whatever the answer to those questions may be, with this view, time is made out of events, each one being a measurement. So, the question of whether time is discrete or continuous now boils down to the question of whether the before-after ordering of measurement events is discrete or dense. What is its topology? Is it possible to pose a third measurement between any two measurements that are before and after one another, such that all three lie in before-after sequence, or is there a limit to how far this can be taken?

An example of a sequence of measurements is the consecutive ticks of a clock, each one marking some kind of measurement event (namely, the tick itself, which we assume has some tangible, physical form). So an upshot of the question is: can we double the resolution and frequency of an already-existing clock? And can we do so for any clock, no matter when or how conceived? Or is there a highest frequency?

Before you jump and say "Planck scale!", let me remind you of something. The Planck scale is composed from three physical quantities: Planck's constant h, the in-vacuo speed of light c, and Newton's gravitational constant G. But most theoretical physicists subscribe to the notion that the 3+1-dimensional gravitational physics described by General Relativity (and classically by Newtonian gravity) is not fundamental, but is built on a deeper layer of some sort, consisting of a law of gravity in a chrono-geometry of a larger number of dimensions.

That law of gravity has its own version of G, while the G you know of would not be fundamental at all, but derived from the higher-dimensional G and non-fundamental properties, like the size and shape of the higher unseen dimensions (e.g. the volume of a fibre, if the underlying geometry is a principal bundle or homogeneous space with a 3+1 dimensional base space).

The version of the "Planck scale" formed from c, h and the higher-dimensional G may be something entirely different from that formed from c, h and our G. They may not be anywhere near the same size at all. So, to those who say "Planck scale!": that, too, is unresolved. Which "Planck scale"? The one constructed from c, h and the higher-dimensional G, or the presumably non-fundamental one constructed from c, h and our G?

And an explanation would still be required to answer the question of what bearing, if any, it has on the ability to interpose measurement events between one another.

NinjaDarth
  • 2,850
0

I would like to offer a gedanken approach to that question.

Imagine you are in a stationary system (the lab frame) $S$ in which time is discrete, with some smallest increment of time ($\Delta t = \kappa$, a "chronon" if you will).

Within that frame, the maximal possible acceleration, $\alpha$, is given by the maximal change in velocity (from moving at the speed of light $c$ in the positive direction to moving at $c$ in the negative direction, i.e. $\Delta v = 2c$) over the minimal duration in which it can happen (a single chronon, $\Delta t = \kappa$). We get:

$$\alpha = \frac{2c}{\kappa}$$

(In a similar manner, a maximal jerk, pop, etc. can be derived.)
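To attach a number (purely for illustration; nothing above fixes the value of $\kappa$): if the chronon were the Planck time $t_P \approx 5.39\times10^{-44}\,\mathrm{s}$, then

$$\alpha=\frac{2c}{\kappa}\approx\frac{2\times(3.00\times10^{8}\,\mathrm{m/s})}{5.39\times10^{-44}\,\mathrm{s}}\approx 1.1\times10^{52}\,\mathrm{m/s^2}.$$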

I am not quite sure how I should regard $\alpha$. A maximal acceleration with respect to a frame sounds a bit off to me. If it is identical in all frames, we receive something like one of the postulates of special relativity, which might sprout higher-order versions of special relativity. If it is not identical in all frames, then I am not even sure how it can be treated. Note that this is not a maximal acceleration in the object's own rest frame (like the Schwinger limit), but a frame-dependent limit.

Another approach I can think of is that in a discrete-time world, time translation symmetry becomes discrete time translation symmetry, which means conservation of energy is replaced by some discrete-time parallel, which I am not familiar with. Maybe the nonconservation of energy can be used to argue for or against the idea of discrete time.

If anyone could help me continue one of the two lines of thought introduced here, that would be great: I think discrete time can lead to a contradiction, but I can't quite get there myself.

A. Ok
  • 683
0

Many things in physics are quantized (eg: angular momentum of objects, mass of objects, momentum of a particle in a box), so why not time? Well, it may be in some future theory, but right now what works to explain nature is the idea that the transformations we do to objects behave like Lie group transformations (eg: rotations, boosts, spatial translations, time translation, strains). These transformations are labeled by continuous parameters (eg: rotation angles, velocity boost parameters, translation distances, waiting time, radians of strain). For a Lie group the parameters are continuous numbers, while the group generators (eg: $\vec {J}, \vec {K}, \vec {P}, E $) have quantized eigenvalues. This Lie group concept divides the quantities in physics into continuous parameters and quantized generators. Time is continuous just as rotation angles are continuous.
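As a minimal illustration of that split, in standard quantum mechanics a rotation about the $z$-axis is

$$R(\theta)=e^{-i\theta J_z/\hbar},\qquad \theta\in\mathbb{R}\ \text{(continuous parameter)},\qquad J_z\,|j,m\rangle=m\hbar\,|j,m\rangle,\ m\in\{-j,\dots,j\}\ \text{(quantized eigenvalues)}.$$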

@Robotbugs: I'm responding here because the "comments" button isn't working.

First, how else can we talk about "the reality of physics" other than by using the words and math concepts that have previously explained measurements in some corners of physics? Using properties of the successful math (perhaps what you mean by "tractable theorems") to predict as-yet-unmeasured corners, such as the continuity of time, seems like the best bet for what future experiments might find.

In your second series of comments, each one has included the word invariant or symmetrical concerning the use of the Lie group. This is correct for the usual usage of the group as a symmetry or invariance of the Lagrangian. As you say, lots of physics comes from the breaking of these gauge symmetries, and you demote their fundamental nature in your comment "U(1), SU(2) and SU(3) are nothing actually to do with physics..." - reasonable from your point of view. But I am using the Lie group in a conceptually different way.

The fundamental game of physics is to make a one-to-one correspondence between a symbol (a ket) on a piece of paper and a real-world object (eg: electron, carbon nucleus, bucket of helium, star, etc.). It is also to make a one-to-one correspondence between the mathematical transformation done to the ket and the physical transformation done to the object. This paradigm has been beautifully carried out with the space rotation group SU(2), without having to talk about symmetries of a Lagrangian, though you could. Every object in the universe transforms under rotation as a ket in the carrier space of some irreducible rep of SU(2), and has a predicted quantized angular momentum. We tell these various objects apart by how they rotate (ie: $j, j_z$). This is hugely fundamental. What more can we want than to predict what objects can exist and how they transform by things we know how to do? We have since enlarged the group to include velocity boosts (the Lorentz group SL(2,C)). The group was enlarged again to include abelian translations in space and time (the Poincaré group), but because translations are abelian, mass is not quantized. It is my guess that translations do not commute with each other, and that the Lorentz group will be enlarged to something like the de Sitter group, where mass is quantized. If this provided the Holy Grail of correctly predicting particle masses, this Lie group would truly be fundamental. And it would be fair to argue that the Lie group paradigm, with continuous group parameters ($\vec{\Theta},\vec{\lambda},\vec{x},t$), argues for the continuity of time.

Gary Godfrey
  • 3,428
0

To answer your question: time may well be advancing forward in tiny but discrete time intervals. If your model reflects or predicts reality, then it is at least as good as any other. The only awkward part might be that you discretize the continuous/differentiable theory in order to create your simulations; then the latter might seem superior. It would be nice instead to have an independent theory as a foundation for what you do. I suggest discrete calculus as a starting point. Its idea is simple: $$\lim_{\Delta x\to 0}\left( \begin{array}{c}\text{discrete}\\ \text{calculus}\end{array} \right)= \text{calculus}.$$
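A minimal sketch of that limit in code (the function, evaluation point, and step sizes are arbitrary choices for illustration):

```python
import math

# Discrete-calculus derivative: the forward difference quotient.
def forward_difference(f, x, dx):
    return (f(x + dx) - f(x)) / dx

# As dx -> 0 the discrete derivative approaches the continuous one:
# d/dx sin(x) = cos(x).
x = 1.0
exact = math.cos(x)
for dx in (0.1, 0.01, 0.001, 0.0001):
    approx = forward_difference(math.sin, x, dx)
    print(f"dx={dx:<7}: discrete={approx:.8f}, continuous={exact:.8f}")
```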