27

From Taylor's theorem, we know that a function of time $x(t)$ can be constructed at any time $t>0$ as $$x(t)=x(0)+\dot{x}(0)t+\ddot{x}(0)\frac{t^2}{2!}+\dddot{x}(0)\frac{t^3}{3!}+...\tag{1}$$ by knowing an infinite number of initial conditions $x(0),\dot{x}(0),\ddot{x}(0),\dddot{x}(0),...$ at $t=0$.

On the other hand, only two initial conditions, $x(0)$ and $\dot{x}(0)$, are required to obtain the function $x(t)$ by solving Newton's equation $$m\frac{d^2}{dt^2}x(t)=F(x,\dot{x},t).\tag{2}$$ I understand that (2) is a second-order ordinary differential equation and hence, to solve it, we need the two initial conditions $x(0)$ and $\dot{x}(0)$.

But how do we reconcile (2), which requires only two initial conditions, with (1), which requires us to know an infinite amount of initial information to construct $x(t)$? How is it that the information from the higher-order derivatives at $t=0$ becomes redundant? My guess is that, due to the existence of the differential equation (2), the initial conditions in (1) do not all remain independent, but I'm not sure.

SRS

5 Answers

39

> On the other hand, it requires only two initial conditions $x(0)$ and $\dot{x}(0)$, to obtain the function $x(t)$ by solving Newton's equation

For notational simplicity, let

$$x_0 = x(0)$$ $$v_0 = \dot x(0)$$

and then write your equations as

$$x(t) = x_0 + v_0t + \ddot x(0)\frac{t^2}{2!} + \dddot x(0)\frac{t^3}{3!} + \cdots$$

$$m\ddot x(t) = F(x,\dot x,t)$$

Now, see that

$$\ddot x(0) = \frac{F(x_0,v_0,0)}{m}$$

$$\dddot x(0) = \frac{\dot F(x_0,v_0,0)}{m}$$

and so on. Thus

$$x(t) = x_0 + v_0t + \frac{F(x_0,v_0,0)}{m}\frac{t^2}{2!} + \frac{\dot F(x_0,v_0,0)}{m}\frac{t^3}{3!} + \cdots$$

In other words, the initial values of the second- and higher-order time derivatives of $x(t)$ are determined by $F(x,\dot x, t)$.
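To make this concrete, here is a minimal sketch (in Python with sympy; the specific force $F = -x + t$ is just a hypothetical smooth example, not anything from the question) of how repeatedly differentiating the equation of motion generates every higher Taylor coefficient from $x_0$ and $v_0$ alone:

```python
import sympy as sp

t = sp.symbols('t')
m, x0, v0 = sp.symbols('m x0 v0')
x = sp.Function('x')(t)

# Hypothetical example force F(x, xdot, t); any smooth F works the same way.
F = -x + t
xddot = F / m  # Newton's equation solved for the second derivative

# Differentiate repeatedly, replacing each second derivative that appears
# with F/m, so every derivative is expressed through x, xdot and t alone.
derivs = [x, sp.diff(x, t), xddot]
for n in range(3, 6):
    d = sp.diff(derivs[-1], t).subs(sp.Derivative(x, (t, 2)), xddot)
    derivs.append(sp.expand(d))

# Evaluate at t = 0 with x(0) = x0, xdot(0) = v0: two numbers suffice.
coeffs = [d.subs(sp.Derivative(x, t), v0).subs(x, x0).subs(t, 0)
          for d in derivs]
print(coeffs)  # [x0, v0, -x0/m, (1 - v0)/m, x0/m**2, (v0 - 1)/m**2]

# Assemble the truncated series (1) from x0, v0, m and the form of F only.
series = sum(c * t**n / sp.factorial(n) for n, c in enumerate(coeffs))
print(series)
```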

12

FGSUZ has given part of the answer in his comment, but he has not given full details.

Consider $\ddot{x}(t)=F(x,\dot{x},t)$ (absorbing the mass $m$ into $F$ for brevity). In this case you have the second derivative in terms of lower-order quantities. You can therefore use this to eliminate the second derivative in favor of lower-order terms.

You can then take the time derivative of this equation. This will give you the third order time derivative of $x$ in terms of lower order derivatives. And you can use the first equation and its derivative to write everything in terms of at most the first derivative.

So, order by order, you can construct the Taylor expansion.

Now the general case may require you to deal with derivatives of $F(x,\dot{x},t)$. That is because you need the following (if I've recalled my calculus correctly).

$$\frac{d^3 x}{dt^3}=\dot{F}(x,\dot{x},t)=\frac{\partial}{\partial x}F(x,\dot{x},t)\,\frac{dx}{dt}+\frac{\partial}{\partial \dot{x}}F(x,\dot{x},t)\,\frac{d\dot{x}}{dt}+\frac{\partial}{\partial t}F(x,\dot{x},t)$$

This will often not be explicitly solvable. However, it can also be Taylor expanded in a similar fashion, and at each order you keep only the corresponding order in the expansion of this equation.

So, order by order, you can construct the Taylor series. At each step you can use the equation of motion to remove all but the $x$, $\dot{x}$, and $t$ dependence. And so you will only need two initial conditions. Tedious, but possible.

The nice cases are those few where you can derive a simple formula that gives an easy recursion. For simple forms of $F$, you might find that the $(n+1)$th derivative is a simple function of the $n$th derivative. In such cases this is potentially useful in numerical solutions, since you can write things in terms of the time step and a nice Taylor expansion, as in the sketch below. Though, even in such cases, there are often more efficient methods.
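As an illustration of that numerical use, here is a minimal sketch (Python; the harmonic-oscillator force and the helper names `taylor_step`, `dF_dx`, etc. are my own hypothetical choices, not from the answer) of a third-order Taylor step that uses exactly the chain rule above to obtain $d^3x/dt^3$ from $x$, $\dot{x}$ and $t$:

```python
import math

k = 1.0  # hypothetical spring constant; F(x, v, t) = -k*x, with m = 1

def F(x, v, t):
    return -k * x

def dF_dx(x, v, t):   # partial F / partial x
    return -k

def dF_dv(x, v, t):   # partial F / partial xdot
    return 0.0

def dF_dt(x, v, t):   # explicit partial F / partial t
    return 0.0

def taylor_step(x, v, t, h, m=1.0):
    """One third-order Taylor step; the chain rule from the answer gives
    the third derivative of x in terms of x, v and t alone."""
    a = F(x, v, t) / m
    jerk = (dF_dx(x, v, t) * v        # (dF/dx) * dx/dt
            + dF_dv(x, v, t) * a      # (dF/dxdot) * dxdot/dt
            + dF_dt(x, v, t)) / m     # explicit time dependence
    x_new = x + v * h + a * h**2 / 2 + jerk * h**3 / 6
    v_new = v + a * h + jerk * h**2 / 2
    return x_new, v_new

# Integrate x(0) = 1, v(0) = 0; the exact solution is cos(t).
x, v, t, h = 1.0, 0.0, 0.0, 0.01
for _ in range(100):
    x, v = taylor_step(x, v, t, h)
    t += h
print(x, math.cos(t))  # agree to about six decimal places
```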

9

A power series expansion does not hold for all functions $x(t)$ or for all $t\in\mathbb{R}$, but only for real analytic functions and for $t$ within the radius of convergence. In particular, for a function in $C^2(\mathbb{R},\mathbb{R}^d)\smallsetminus C^3(\mathbb{R},\mathbb{R}^d)$ the expansion is not even well defined at some point, since the third derivative fails to exist there. Therefore it is not possible to define an arbitrary function by giving the countably many real numbers $(x^{(n)}(0))_{n\in\mathbb{N}}$.

In particular, Newton's equation may have solutions in $C^2(\mathbb{R},\mathbb{R}^d)\smallsetminus C^3(\mathbb{R},\mathbb{R}^d)$, which therefore do not admit a power series expansion, or, more generally, solutions that are not real analytic for all times and therefore do not always admit a Taylor expansion. Nonetheless, these functions are uniquely determined by two real numbers ($x(0)$ and $\dot{x}(0)$) together with the requirement that they solve Newton's equation; that is, they are also determined by $m$ and the functional form $F$ of the force, provided $F$ is regular enough (e.g. Lipschitz) for the solution to be unique.

In case a solution of Newton's equation is real analytic, the values of the higher-order derivatives at zero are determined uniquely by the solution itself, and thus they also depend only on $x(0)$, $\dot{x}(0)$, $m$ and $F$; no further knowledge is required.
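For instance (a standard textbook-style illustration, not taken from the answer itself): the explicitly time-dependent force $F(x,\dot{x},t)=m|t|$ gives $\ddot{x}(t)=|t|$, whose solution is

$$x(t)=x(0)+\dot{x}(0)\,t+\frac{t^{2}|t|}{6},$$

which is $C^2$ but not $C^3$ at $t=0$ (the third derivative would be $\operatorname{sgn}(t)$), so it admits no expansion of the form (1) around $t=0$; nevertheless it is completely fixed by the two numbers $x(0)$ and $\dot{x}(0)$.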

yuggib
4

Long story short, to get to the core of your question (I hope):

First, some functions don't correspond to their Taylor series at $0$. But let's ignore that for this answer.

But, more importantly: the Taylor series representation has more degrees of freedom simply because not all functions are solutions of equation (2)! This should be rather obvious if you think about it: if I throw a ball, then to someone who knew no physics and had no real-world experience, its path could be anything; it could fly to Mars and return to me, it could vibrate between two points, it could draw your name in the air. If you only use (1), you can't discard these possibilities. But once you realize the ball follows Newton's equations, the possible paths are very limited.

JiK
1

As an example, suppose we have Hooke's law, $F = -kx$. Writing the Taylor series (technically the Maclaurin series, since it is centered at zero) as

$$x(t) = \sum_{n=0}^{\infty}\frac{x^{(n)}(0)t^n}{n!},$$

where $x^{(n)}$ is the $n$th derivative of $x$, then

$$x^{(2)}(t) = \sum_{n=2}^{\infty}\frac{x^{(n)}(0)t^{n-2}}{(n-2)!}$$

Shifting the index, this can be written as

$$x^{(2)}(t) = \sum_{n=0}^{\infty}\frac{x^{(n+2)}(0)t^{n}}{n!}$$

We can then write Hooke's Law as

$$m\sum_{n=0}^{\infty}\frac{x^{(n+2)}(0)t^{n}}{n!} = -k \sum_{n=0}^{\infty}\frac{x^{(n)}(0)t^n}{n!}$$

Equating coefficients of like powers of $t$, we have

$$m \frac{x^{(n+2)}(0)}{n!} = -k \frac{x^{(n)}(0)}{n!}$$

or

$$x^{(n+2)}(0) = -\frac{k}{m}\,x^{(n)}(0)$$

So given any $n$, we can find the $(n+2)$th coefficient in terms of the $n$th coefficient. This means that the even coefficients are determined by the 0th coefficient and the odd coefficients by the 1st coefficient. (The even powers sum to a cosine, the odd powers sum to a sine, and the general solution is a linear combination of the two.) This is known as a power series solution of the ODE. In general it won't be as simple as this; however, since the left-hand side of Newton's equation contains only a second-order derivative while the right-hand side involves at most first derivatives, the $(n+2)$th coefficient can always be expressed in terms of lower-order coefficients, leaving the 0th and 1st coefficients as the initial conditions.

So the key is that for two polynomials to be equal, the coefficients of corresponding powers must be equal, and this extends to Taylor series. That gives a recurrence relation expressing each coefficient in terms of lower-order ones, and the infinite Taylor series collapses to being determined by just two coefficients, as in the sketch below.
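As a minimal numerical sketch of this collapse (Python; the particular values of $k$, $m$, $x_0$, $v_0$ below are arbitrary hypothetical choices), one can build the whole coefficient list from the recurrence and check it against the closed-form cosine/sine solution:

```python
import math

k, m = 2.0, 1.0       # hypothetical spring constant and mass
x0, v0 = 1.0, 0.5     # the only two free initial conditions

# Recurrence from equating coefficients: x^(n+2)(0) = -(k/m) * x^(n)(0).
N = 24
deriv = [0.0] * N
deriv[0], deriv[1] = x0, v0
for n in range(N - 2):
    deriv[n + 2] = -(k / m) * deriv[n]   # every higher coefficient is forced

# Partial Taylor sum at some time t ...
t = 0.7
series = sum(deriv[n] * t**n / math.factorial(n) for n in range(N))

# ... versus the exact solution x0*cos(w t) + (v0/w)*sin(w t).
w = math.sqrt(k / m)
exact = x0 * math.cos(w * t) + (v0 / w) * math.sin(w * t)
print(series, exact)  # agree to machine precision at this t
```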