
I want to try simulating some bodies in space using the basic formulas I learned in my college mechanics class.

So I know Newton's Law and some basic mechanics equations:

$\vec{F} = G\frac{m_1m_2}{r^2}\hat{r}$

$\vec{v} = \vec{v}_0 + \vec{a}t$

$\vec{s} = \vec{s}_0 + \vec{v}t+\frac{1}{2}\vec{a}t^2$

Now let's say I have a large object with mass $M$ and a small object (say a satellite) with mass $m$. I can assume $m \ll M$, so the large object stays essentially fixed and I only need to simulate the motion of the small one.

I also define $n$ to be the current "frame" and $\Delta t$ to be a timestep. I then end up with the following equations:

$\vec{a}_n = \frac{GM}{r_n^2}\hat{r}$

$\vec{v}_{n+1} = \vec{v}_n+\vec{a}_n\Delta t$

$\vec{s}_{n+1} = \vec{s}_n+\vec{v}_n\Delta t+\frac{1}{2}\vec{a}_n(\Delta t)^2$

To me, this seems like everything I would need to simulate gravitational interactions (according to Newtonian mechanics). Adding more bodies just means applying superposition and adding up the vectors accordingly.
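For concreteness, here is a minimal Python sketch of those discrete update equations, under the assumption of a single test body moving in the field of one or more attractors that are held fixed. The function names (`acceleration`, `step`) and the low-Earth-orbit numbers are purely illustrative, not part of any library:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def acceleration(s, attractors):
    """Superpose the gravitational acceleration on a test body at
    position s from a list of (mass, position) attractors."""
    a = np.zeros(3)
    for M, pos in attractors:
        r_vec = pos - s                # points from the body toward the attractor
        r = np.linalg.norm(r_vec)
        a += G * M * r_vec / r**3      # magnitude GM/r^2, direction r_vec / r
    return a

def step(s, v, attractors, dt):
    """One frame: compute a_n at the current position, then apply the
    velocity and position updates from the question."""
    a = acceleration(s, attractors)
    v_next = v + a * dt
    s_next = s + v * dt + 0.5 * a * dt**2
    return s_next, v_next

# Illustrative example: a satellite in a roughly circular low Earth orbit.
M_earth = 5.972e24
attractors = [(M_earth, np.zeros(3))]
s = np.array([6.771e6, 0.0, 0.0])   # ~400 km altitude, in metres
v = np.array([0.0, 7670.0, 0.0])    # roughly the circular orbital speed
dt = 1.0                             # one-second frames
for n in range(1000):
    s, v = step(s, v, attractors, dt)
```

Adding more bodies that all influence each other just means calling `acceleration` for each body with the others as attractors before updating any of them.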

My problem is that these equations are really meant to be applied continuously, while I'm applying them discretely. From one frame to the next the error won't be noticeable, but over, say, 1000 frames it will make the mechanics slightly off. Of course, the smaller my $\Delta t$, the smaller the error, but I'm not sure how small it needs to be. Is there a way to perform "error correction" every dozen frames or so, so that everything stays in its right place?

1 Answer


The standard way of checking for stability in such simulations is to plot the total energy over a long simulation time. A small amount of drift is acceptable (in fact, inevitable) but if it's too large then you either need a smaller time-step or a more stable integration scheme. What counts as 'too large' though depends on your aims.
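As a rough illustration of that check, reusing the illustrative `step`, `acceleration`, `G`, and state from the sketch in the question above (all my own names, not a standard API), one could track the relative energy drift of the test body like this:

```python
def total_energy(s, v, attractors, m=1.0):
    """Kinetic plus gravitational potential energy of the test body.
    Only the *relative* drift matters, so m = 1 is fine for a test mass."""
    kinetic = 0.5 * m * np.dot(v, v)
    potential = sum(-G * M * m / np.linalg.norm(pos - s) for M, pos in attractors)
    return kinetic + potential

E0 = total_energy(s, v, attractors)
drift = []
for n in range(100_000):
    s, v = step(s, v, attractors, dt)
    drift.append(abs(total_energy(s, v, attractors) - E0) / abs(E0))
# Plot `drift` against the frame number: steady growth means the
# time-step (or the integrator itself) needs improving.
```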

Another method is to compare two simulations, one with a very small time-step and one with a larger one, and plot the distance between them, $\sqrt{\sum_i |\textbf{x}_i(t)-\textbf{x}_i^\prime(t)|^2}$, over time. This error grows roughly exponentially with time, $\sim\exp(\lambda t)$ (this is called Lyapunov instability). So you can run a short pair of simulations, one with an extremely fine time-step and one with the time-step you intend to use, measure the associated Lyapunov exponent $\lambda$, and then use it to estimate how large the error will be after a simulation time $t$ with that time-step $\Delta t$.
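A sketch of that comparison, again reusing the illustrative `step` and `attractors` from the first example (the step sizes and the fit are only indicative; for a single Kepler orbit the growth is milder than truly exponential, but the measurement itself looks the same for many-body systems):

```python
def run(s0, v0, attractors, dt, n_steps):
    """Integrate for n_steps frames and record the position each frame."""
    s, v = s0.copy(), v0.copy()
    positions = [s.copy()]
    for _ in range(n_steps):
        s, v = step(s, v, attractors, dt)
        positions.append(s.copy())
    return np.array(positions)

s0 = np.array([6.771e6, 0.0, 0.0])
v0 = np.array([0.0, 7670.0, 0.0])
dt_coarse = 10.0
k = 100                                            # reference run is 100x finer
coarse = run(s0, v0, attractors, dt_coarse, 2000)
fine = run(s0, v0, attractors, dt_coarse / k, 2000 * k)[::k]  # same sample times

dist = np.linalg.norm(coarse - fine, axis=1)       # |x(t) - x'(t)| for one body
t = np.arange(len(dist)) * dt_coarse
grow = dist > 0                                    # skip the exactly-zero start
lam = np.polyfit(t[grow], np.log(dist[grow]), 1)[0]   # slope of log-error ~ lambda
```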

What you're doing is (almost) Euler integration, but that's not terribly stable. A more stable (and still relatively easy to implement) integration method is Verlet integration; see velocity Verlet in particular.
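For reference, a minimal velocity-Verlet sketch built on the same illustrative `acceleration` helper from the earlier example might look like this:

```python
def verlet_step(s, v, a, attractors, dt):
    """One velocity-Verlet step: advance the position with the current
    acceleration, recompute the acceleration there, then advance the
    velocity with the average of the old and new accelerations."""
    s_next = s + v * dt + 0.5 * a * dt**2
    a_next = acceleration(s_next, attractors)
    v_next = v + 0.5 * (a + a_next) * dt
    return s_next, v_next, a_next

s = np.array([6.771e6, 0.0, 0.0])
v = np.array([0.0, 7670.0, 0.0])
dt = 1.0
a = acceleration(s, attractors)
for n in range(1000):
    s, v, a = verlet_step(s, v, a, attractors, dt)
```

The extra acceleration evaluation per step buys much better long-term behaviour: velocity Verlet is symplectic, so the total energy oscillates around the true value instead of steadily drifting away.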

lemon