
Sorry if my question requires clarification. I am having trouble conveying exactly what my problem is.

I'm trying to code a ray tracer that works in curved spacetime. In principle, this just entails specifying the metric, calculating the Christoffel symbols, and solving the geodesic equation for the paths the rays of light trace from the camera. However, I'm running into a potential problem.
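The recipe above (metric in, Christoffel symbols, geodesic integration) can be sketched as follows; this is a minimal sketch, not a working ray tracer, using flat Minkowski space as a stand-in metric, Christoffel symbols by finite differences, and a classical RK4 integrator. All function names here are illustrative.

```python
import numpy as np

def metric(x):
    # Placeholder metric g_ab(x), signature (+,-,-,-); swap in any curved metric.
    return np.diag([1.0, -1.0, -1.0, -1.0])

def christoffel(g_fn, x, h=1e-5):
    # Gamma^a_bc = (1/2) g^{ad} (d_b g_dc + d_c g_db - d_d g_bc),
    # with partial derivatives taken by central finite differences.
    ginv = np.linalg.inv(g_fn(x))
    dg = np.zeros((4, 4, 4))              # dg[d, a, b] = partial_d g_ab
    for d in range(4):
        e = np.zeros(4); e[d] = h
        dg[d] = (g_fn(x + e) - g_fn(x - e)) / (2.0 * h)
    T = dg.transpose(1, 0, 2) + dg.transpose(1, 2, 0) - dg
    return 0.5 * np.einsum('ad,dbc->abc', ginv, T)

def geodesic_rhs(g_fn, x, k):
    # d x^a / dlam = k^a,   d k^a / dlam = -Gamma^a_bc k^b k^c
    G = christoffel(g_fn, x)
    return k, -np.einsum('abc,b,c->a', G, k, k)

def trace_ray(g_fn, x0, k0, lam, steps=100):
    # Integrate the geodesic equation with classical RK4.
    x, k = np.array(x0, float), np.array(k0, float)
    dl = lam / steps
    for _ in range(steps):
        dx1, dk1 = geodesic_rhs(g_fn, x, k)
        dx2, dk2 = geodesic_rhs(g_fn, x + 0.5*dl*dx1, k + 0.5*dl*dk1)
        dx3, dk3 = geodesic_rhs(g_fn, x + 0.5*dl*dx2, k + 0.5*dl*dk2)
        dx4, dk4 = geodesic_rhs(g_fn, x + dl*dx3, k + dl*dk3)
        x = x + dl*(dx1 + 2*dx2 + 2*dx3 + dx4)/6.0
        k = k + dl*(dk1 + 2*dk2 + 2*dk3 + dk4)/6.0
    return x, k
```

In flat spacetime the Christoffel symbols vanish, so a null ray comes out as the straight line $x(\lambda)=x_0+k_0\lambda$; that is a convenient sanity check before plugging in a curved metric.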

The issue is that we need to be in the camera's frame of reference before doing the ray tracing (I think), since we need to ensure that the rays of light are traced to the appropriate places on the viewing plane and at the appropriate times, according to what the camera would actually see: the times and relative locations at which photons hit the camera depend on the frame we are in, due to the relativity of simultaneity and length contraction of the viewing plane (in GR, time also runs differently in different parts of spacetime due to gravitational effects). For example, two photons that hit the camera simultaneously in one frame might, in another frame, hit at different times and separated by a different distance.

If we were working in special relativity, it would be as simple as Lorentz transforming into a frame moving with the camera before performing the raytracing, but in GR, I am not sure that there even is a corresponding coordinate transformation, since we lack a sense in which we can attach a coordinate system to a local frame of reference. So how can I ensure the image produced on the viewing plane is actually accurate to what the camera would see?

Am I creating a problem where there is none? Could I actually perform the raytracing in an arbitrary coordinate system and get the correct visualization? If yes, how can I be convinced of this?

Aidan Beecher

1 Answer


If we were working in special relativity, it would be as simple as Lorentz transforming into a frame moving with the camera before performing the raytracing, but in GR, I am not sure that there even is a corresponding coordinate transformation, since we lack a sense in which we can attach a coordinate system to a local frame of reference.

We can define the reference frame of the cameraman by a unit timelike vector field $u^a$ in some coordinate system (i.e. $u^au_a=1$ in the $(+,-,-,-)$ signature). We can think of $u^a$ as the tangent vector to the worldline of the observer (here the cameraman). Now, given such a family of observers (note that $u^a=u^a(t,\vec{x})$ varies from point to point, so we can imagine a family of worldlines, each with such a timelike tangent $u^a$, i.e. a timelike congruence of observers), we can define the plane of simultaneity of the cameraman as the hypersurface $\Sigma$ to which $u^a$ is orthogonal at each point of intersection. Given that the metric of the full spacetime is $g_{ab}$, we can define the induced metric on $\Sigma$ as
$$h_{ab} = g_{ab}-u_au_b$$ You can convince yourself that in Minkowski spacetime, a stationary observer ($u^a=(1,0,0,0)$) gives an induced three-metric $h_{ab}=\mathrm{diag}(-1,-1,-1)$, i.e. (minus) the three-dimensional Euclidean metric, and $\Sigma$ corresponds to the $t=\text{constant}$ hypersurfaces.
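As a quick numerical check of this claim, here is a sketch in numpy for the stationary Minkowski observer just mentioned:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)
u_up = np.array([1.0, 0.0, 0.0, 0.0])  # stationary observer, u^a u_a = +1
u_dn = g @ u_up                        # lower the index: u_a = g_ab u^b

h = g - np.outer(u_dn, u_dn)           # induced metric h_ab = g_ab - u_a u_b

# h annihilates u (it projects onto Sigma), and its spatial block is
# diag(-1, -1, -1), i.e. (minus) the three-dimensional Euclidean metric.
```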

Now suppose we have a family of light rays defined by a null vector field $k^a$. We can define "the relative velocity of the light rays as observed by the cameraman", $e^a$, through the relation $$k^a = \Gamma (u^a + e^a)$$ where $u^ae_a=0$; contracting with $u_a$ gives $\Gamma = u^ak_a$. If you contract both sides with $k_a$ and use the null condition $k^ak_a=0$, you get $e^ae_a=-u^au_a=-1$. Thus $e^a$ is a spacelike vector field whose magnitude equals the speed of light (in units with $c=1$).
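Numerically, the decomposition amounts to computing $\Gamma=u^ak_a$ and solving for $e^a$. A small sketch, using an observer boosted along $x$ (the boost velocity $v=0.5$ is an arbitrary illustrative choice):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])         # signature (+,-,-,-)
k_up = np.array([1.0, 1.0, 0.0, 0.0])        # null ray along +x: k^a k_a = 0

v = 0.5                                      # observer's boost velocity (arbitrary)
gamma = 1.0 / np.sqrt(1.0 - v*v)
u_up = gamma * np.array([1.0, v, 0.0, 0.0])  # boosted observer, u^a u_a = +1

Gamma = u_up @ g @ k_up                      # Gamma = u^a k_a
e_up = k_up / Gamma - u_up                   # solve k^a = Gamma (u^a + e^a) for e^a

# e^a is orthogonal to u^a and unit spacelike: e^a e_a = -1.
```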

Given such a family of rays with spatial direction $e^a$ in $\Sigma$, we can define their wavefront as the two-dimensional surface $N\subset \Sigma$ orthogonal to each ray at its point of intersection. The corresponding induced metric is $$s_{ab}=h_{ab}+e_ae_b=g_{ab}-u_au_b+e_ae_b$$ Let $X^a$ be the connecting vector between two neighbouring null rays; its projection onto the wavefront is $\hat{X}^a={s^a}_bX^b$. This connecting vector could represent the image of some source.
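The wavefront projection can be sketched in the simplest setting (stationary Minkowski observer, ray along $x$); the connecting vector $X^a$ below is an arbitrary illustrative choice:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])    # signature (+,-,-,-)
g_inv = np.linalg.inv(g)
u_up = np.array([1.0, 0.0, 0.0, 0.0])   # stationary observer
e_up = np.array([0.0, 1.0, 0.0, 0.0])   # ray direction, e^a e_a = -1
u_dn, e_dn = g @ u_up, g @ e_up

# wavefront metric s_ab = g_ab - u_a u_b + e_a e_b
s_dn = g - np.outer(u_dn, u_dn) + np.outer(e_dn, e_dn)
s_mixed = g_inv @ s_dn                  # s^a_b, the projector onto the wavefront

X_up = np.array([3.0, 2.0, 1.0, -1.0])  # arbitrary connecting vector
X_hat = s_mixed @ X_up                  # its projection onto the wavefront N
```

The projection kills the components along $u^a$ and $e^a$, leaving only the two transverse directions, which is what a flat image on the wavefront should be.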

Finally, if the screen of the camera has a unit normal vector $n^a$, then an incident light ray travelling with relative velocity $e^a$ makes an angle $\theta$ with the normal given by $$\cos \theta = -e^an_a$$ (the minus sign arises because $e^a$ and $n^a$ are both unit spacelike vectors in the $(+,-,-,-)$ signature). Similar to the definition of the wavefront, we can define the induced metric on the screen as $$q_{ab}=h_{ab}+n_an_b=g_{ab}-u_au_b+n_an_b$$ and the projection of the image $X^a$ onto the screen is $\tilde{X}^a = {q^a}_bX^b$. If $k^a$ is also geodesic, it may be shown that a change in the observer's velocity $u^a$ only shifts $\hat{X}^a$ by some multiple of $k^a$. Moreover, for any two connecting vectors $\hat{X},\hat{Y}$, the quantity $\hat{X}^a\hat{Y}_a$ is invariant under changes in $u^a$, i.e. the shape and size of the image on the wavefront of the light rays are independent of the observer's motion. What can you say about the shape and size of the projected image on the camera screen?
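The incidence angle and screen projection can be sketched the same way. Note that with signature $(+,-,-,-)$ the inner product of two unit spacelike vectors is minus their Euclidean dot product, hence the sign in the $\cos\theta$ line below; the 45° screen tilt is an arbitrary illustrative choice:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])    # signature (+,-,-,-)
g_inv = np.linalg.inv(g)
u_up = np.array([1.0, 0.0, 0.0, 0.0])   # stationary observer
e_up = np.array([0.0, 1.0, 0.0, 0.0])   # relative velocity of the incident ray

a = np.pi / 4                           # screen tilted 45 deg in the x-y plane
n_up = np.array([0.0, np.cos(a), np.sin(a), 0.0])  # unit normal, n^a n_a = -1

cos_theta = -(e_up @ g @ n_up)          # cos(theta) = -e^a n_a in (+,-,-,-)

u_dn, n_dn = g @ u_up, g @ n_up
q_dn = g - np.outer(u_dn, u_dn) + np.outer(n_dn, n_dn)  # screen metric q_ab
q_mixed = g_inv @ q_dn                  # q^a_b, the projector onto the screen

X_up = np.array([3.0, 2.0, 1.0, -1.0])  # arbitrary connecting vector
X_tilde = q_mixed @ X_up                # image vector on the camera screen
```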

KP99