
I have a wire that stretches from $x=0$ to $\infty$. The temperature at $x=0$ is given by the unknown function $f(t)$ for $t$ from $-\infty$ to now ($t=0$).

I can measure the temperature of the wire at each point now ($t=0$), $g(x)$.

Given the temperature of the wire now, $g(x)$, I would like to recover $f(t)$, the temperature at $x=0$, throughout its history.

The problem is likely "ill-posed", meaning that tiny errors in $g(x)$ lead to big errors in $f(t)$.

What is known about solving this problem? For example, is the degree of ill-posedness known? If it is mildly ill-posed, are there numerical techniques available? Can anyone point me to articles or a book that treats it?

Peter A

2 Answers


You are right, this is a classically ill-posed problem, and here is why.

If you measure the temperature $k$ of the wire at position $x$ and time $t$, there are any number of different earlier temperature distributions along that wire which would all evolve to the value $k$ at $(x,t)$; that is, the reconstruction is not unique for any given measurement.

This means you can predict what the temperature $k(x,t)$ will be at future times, but not recover what it was at past times.

Another way of thinking about this is as follows: heat spreads by diffusion, and at every stage the diffusive process tends to erase information about the temperature distribution at earlier times. That erasure is what makes it effectively impossible to reconstruct the initial conditions by back-calculation once the data contain even tiny errors.
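To put a number on that erasure, here is a minimal sketch (the diffusivity, time interval, and noise level are arbitrary illustrative values): for the 1-D heat equation $u_t = \alpha u_{xx}$, a Fourier mode $\sin(kx)$ is damped by $e^{-\alpha k^2 t}$ going forward in time, so running the same interval backwards multiplies any measurement error in that mode by $e^{+\alpha k^2 t}$.

```python
import numpy as np

# Illustrative values only: diffusivity, backward time interval, and the
# size of the measurement error in a single Fourier mode.
alpha = 1.0      # thermal diffusivity (arbitrary units)
t = 0.1          # time interval we try to run backwards over
noise = 1e-6     # tiny error in the measured amplitude of the mode

for k in (1, 5, 10, 20, 40):
    forward_factor = np.exp(-alpha * k**2 * t)   # damping going forward
    backward_factor = np.exp(alpha * k**2 * t)   # amplification going backward
    print(f"k = {k:3d}: forward damping {forward_factor:.2e}, "
          f"a {noise:.0e} error grows to {noise * backward_factor:.2e}")
```

Even a modest wavenumber turns a $10^{-6}$ error into something astronomically large, which is the quantitative face of the information loss described above.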

niels nielsen

There are indeed some numerical methods available that try to limit the impact of the ill-posedness.

Often they rely on the introduction of a regularization term. This is a penalty term that, roughly speaking, limits how strongly the reconstruction can amplify noise, and it is added to the minimization problem used to find your initial state (see the sketch below).
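As a rough sketch of what such a regularized minimization looks like in practice, here is a Tikhonov-style example on a toy linear inverse problem. The Gaussian blurring operator, the noise level, and the penalty weight `lam` below are all illustrative choices standing in for the real heat-conduction operator, not a scheme taken from any particular reference:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0.0, 1.0, n)

# Forward operator: blurring with a narrow Gaussian kernel, a toy stand-in
# for the smoothing action of heat diffusion.
sigma = 0.02
A = np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * sigma**2))
A /= A.sum(axis=1, keepdims=True)

f_true = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)  # unknown "true" state
g = A @ f_true + 1e-3 * rng.standard_normal(n)                # noisy measurement

# Naive inversion lets the amplified noise dominate ...
f_naive = np.linalg.solve(A, g)

# ... while the regularized reconstruction minimizes
#     ||A f - g||^2 + lam * ||f||^2,
# i.e. solves the normal equations (A^T A + lam I) f = A^T g.
lam = 1e-3
f_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)

print("error of naive inversion:   ", np.linalg.norm(f_naive - f_true))
print("error of regularized answer:", np.linalg.norm(f_reg - f_true))
```

How to choose the penalty weight is itself a standard topic in the inverse-problems literature (for example the discrepancy principle or the L-curve criterion).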

This is actually used in a well-known machine learning technique called Support Vector Machines, where you have a similar trade-off between finding a perfect fit for your data (which is very noise-sensitive) and limiting the complexity of the model. In machine learning this is more often described as the bias-variance tradeoff, but in essence these two concepts are very close to each other.

Since you requested some literature, here are some sources that I found useful during my master's research on these topics: