After Albert Einstein’s proposal of the General Theory of Relativity, our understanding of how gravity works changed significantly, opening the possibility of mystifying objects such as black holes. Black holes are massive, compact objects resulting from unhalted gravitational collapse. Their gravity warps the surrounding spacetime to such an extreme degree that any nearby object, including light, has as its unavoidable future a path “falling” toward the black hole.

While the theory of general relativity was shown to be highly accurate by experiments such as the bending of starlight observed during an eclipse, or the gravitational time dilation experienced by clocks closer to massive objects, there was no direct observation of a black hole until a few years ago. The Event Horizon Telescope Collaboration combined the power of multiple telescopes around the Earth to capture an image of the supermassive black hole at the center of our galaxy, *Sagittarius A\** (EHT Collaboration).

However, a valid question arises from such an achievement: “How do we know what the image of a black hole should look like?” It turns out that the answer is both easier and harder than one expects: raytracing a black hole.

#### Concept

Just like CGI scenes in movies use raytracing to give objects a real-life look of lighting, we can use a similar technique to render what our eyes or a camera would see in the vicinity of a black hole.

First, we need to understand that light does not always travel in a straight line: in the presence of mass/energy, massive objects bend light around them. At the scale of our everyday lives, this deflection is so insignificant that it is undetectable to the naked eye, so the straight-line approximation used in path tracing algorithms is excellent for almost everything we usually want to render. However, at the astronomical scales of black holes, and for light passing through the extreme spacetime surrounding them, that approximation falls short. The main hurdle in raytracing a black hole is therefore the computation of the light path itself.
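To get a feel for just how small the deflection is at everyday scales, we can evaluate the weak-field formula α = 4GM/(c²b) for a ray grazing the Sun, the quantity Eddington’s famous 1919 eclipse expedition set out to measure (a quick sketch, using standard physical constants):

```python
import math

# Weak-field deflection of a light ray grazing a mass: alpha = 4GM / (c^2 * b).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m (impact parameter for a grazing ray)

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
alpha_arcsec = math.degrees(alpha_rad) * 3600

print(f"{alpha_arcsec:.2f} arcseconds")  # prints: 1.75 arcseconds
```

Less than two arcseconds even for a star, which is why straight-line rendering works so well everywhere except near compact objects.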

Let’s consider a simple scene of a black hole and its accretion disk that doesn’t require light bounces or any other complex interaction with the matter surrounding the black hole besides the gravitational effects of the black hole itself. For this scene, the black hole is the only object that will affect the spacetime surrounding it, thus, affecting the path of light. Meanwhile, the accretion disk and the background are the only light sources. With this, we can start raytracing the black hole using a very simple algorithm, ray casting.

Similar to how backward path tracing works, we start by shooting rays from the camera. For ray casting this simple scene, we shoot just one ray per pixel. The difference begins with the information we store for each ray: to compute how light travels through spacetime, we require an initial position 4-vector and velocity 4-vector. These 4-vectors combine the familiar 3D space vectors with a time component, which is necessary to describe trajectories in the extreme spacetime curvature near a black hole.
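As an illustrative sketch of this per-ray state (the field names and the simplified time components are assumptions for illustration, not the article’s actual code), the data carried by each ray might look like:

```python
from dataclasses import dataclass

@dataclass
class Ray:
    position: tuple   # position 4-vector (t, x, y, z)
    velocity: tuple   # velocity 4-vector dx^mu / dlambda

def make_camera_ray(cam_pos, pixel_dir):
    """One ray per pixel: the spatial direction points from the camera through
    the pixel; the time components are set so the ray moves forward in time.
    (In a full tracer, the time component of the velocity is fixed by the
    null condition g_mu_nu v^mu v^nu = 0 of the chosen metric.)"""
    x, y, z = cam_pos
    dx, dy, dz = pixel_dir
    return Ray(position=(0.0, x, y, z), velocity=(1.0, dx, dy, dz))

ray = make_camera_ray((0.0, 0.0, -20.0), (0.0, 0.0, 1.0))
print(ray.velocity)  # prints: (1.0, 0.0, 0.0, 1.0)
```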

To compute the path of light rays we require a metric. A metric is a mathematical description of the spacetime curvature. For a single black hole, there are different metrics depending on its physical properties: for a static black hole we have the Schwarzschild metric, for a rotating black hole the Kerr metric, and so on. The metric allows us to compute the geodesics, i.e., the paths light travels. While geodesics can be computed analytically for simple metrics, numerical methods are needed for more complex ones. In essence, we compute the “acceleration” of each component of the light ray and take a small step forward, updating its “velocity” and “position” until we trace out the complete path. You can see an example of how this light shooting and tracing looks for a camera in front of a black hole in the following figure:
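As a minimal illustration of this numerical stepping (a sketch, not the article’s full 3D integrator): in the equatorial plane of a Schwarzschild black hole, a photon’s orbit obeys d²u/dφ² = 3Mu² − u, where u = 1/r and we use geometric units G = c = 1. A fixed-step 4th-order Runge-Kutta integrator for it might look like:

```python
def geodesic_rhs(state, M):
    # state = (u, du/dphi) with u = 1/r; photon orbit in the equatorial plane
    # of a Schwarzschild black hole:  d^2u/dphi^2 = 3 M u^2 - u  (G = c = 1).
    u, up = state
    return (up, 3.0 * M * u * u - u)

def rk4_step(state, M, h):
    # One classical 4th-order Runge-Kutta step of size h in the angle phi.
    def nudge(s, k, f):
        return (s[0] + f * k[0], s[1] + f * k[1])
    k1 = geodesic_rhs(state, M)
    k2 = geodesic_rhs(nudge(state, k1, h / 2), M)
    k3 = geodesic_rhs(nudge(state, k2, h / 2), M)
    k4 = geodesic_rhs(nudge(state, k3, h), M)
    return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Sanity check: with M = 0 spacetime is flat and the exact solution is the
# straight line u(phi) = sin(phi) / b, where b is the impact parameter.
b = 10.0
state = (0.0, 1.0 / b)        # photon coming in from infinity (u = 0)
phi, h = 0.0, 1e-3
while phi < 3.14159265 / 2:
    state = rk4_step(state, 0.0, h)
    phi += h
print(round(state[0], 6))      # prints: 0.1  (closest approach, u = 1/b)
```

Setting M > 0 in the same loop bends the trajectory; for small M/b the total deflection approaches the weak-field value 4M/b.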

Computing the light paths is half the work, the remaining part is computing the color of each pixel. For this simple scene, we need to consider three things:

- Does the light ray fall inside the black hole?
- Does the light ray cross the accretion disk?
- What is the final direction the light ray points to in the background?
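These three checks can be sketched as a per-ray classification routine. The numbers and the model here are illustrative assumptions, not the article’s actual implementation: a flat disk in the z = 0 plane between two radii, a horizon radius of 1, and intersection tests done segment by segment along the traced path:

```python
# Illustrative constants: horizon radius and a thin disk lying in the z = 0
# plane between two radii (placeholder values, geometric units).
R_HORIZON = 1.0
DISK_INNER, DISK_OUTER = 3.0, 12.0

def shade(path):
    """Classify a traced ray from its list of (x, y, z) sample points."""
    for (x0, y0, z0), (x1, y1, z1) in zip(path, path[1:]):
        r = (x1 * x1 + y1 * y1 + z1 * z1) ** 0.5
        if r <= R_HORIZON:
            return "black"                    # fell into the black hole
        if z0 * z1 < 0:                       # segment crossed the disk plane
            t = z0 / (z0 - z1)
            xc, yc = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            if DISK_INNER <= (xc * xc + yc * yc) ** 0.5 <= DISK_OUTER:
                return "disk"                 # sample accretion disk emission
    # Escaped: the final direction indexes into the background sky image.
    (xa, ya, za), (xb, yb, zb) = path[-2], path[-1]
    return ("background", (xb - xa, yb - ya, zb - za))

print(shade([(0.0, 5.0, -20.0), (0.0, 5.0, 20.0)]))  # prints: disk
```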

If the light ray collides with the black hole, as discussed, it falls in, so the resulting pixel should be black; otherwise, we continue. If the light ray crosses the accretion disk, the pixel picks up some color contribution from the disk’s light. Finally, once a light ray has traveled far enough to more or less escape the black hole’s gravitational pull, the direction it points into the sky tells us which color to sample from the background image.

After going through this process of deciding what each light ray did, and what color should be attributed to it, we can collect all this information into a single image that renders how the scene would look to the camera.

#### Results

Implementing all these ideas concretely into code allows us to create a Black Hole Raytracer. The following images were generated using ArrayFire; the full code is linked at the end of this post. The image used as the background is Westerlund 2 from NASA, ESA, the Hubble Heritage Team (STScI/AURA), A. Nota (ESA/STScI), and the Westerlund 2 Science Team.

First, if we trace the scene traditionally with a normal raytracer, where light travels in straight lines, this is how our scene would look:

We can see that the black hole is just a black sphere, the halo is the accretion disk, and the background looks just as if you were looking at the sky. However, when we start raytracing the black hole with accurate general relativity, this is how it looks:

We can observe that the relativistic effects are clearly visible. The two most notable are the gravitational lensing of the background and the bending of light from the accretion disk. Gravitational lensing distorts the background into a circular pattern around the black hole, letting us see regions of the sky that would otherwise be hidden closer to the event horizon. Meanwhile, the accretion disk appears warped around the event horizon, forming a distorted halo in front of the black hole: light from the far side of the disk is bent so strongly that we can see the part of the disk behind the black hole. One notable effect missing from this model is the relativistic beaming of the accretion disk, which would make the side spinning toward the observer appear brighter.

Another property we can add to a black hole is rotation. Just as the Earth rotates around its axis, black holes can too. A rotating mass drags the surrounding spacetime along with its rotation, an effect known as frame-dragging; it is barely measurable for most objects, but in the extreme conditions near the event horizon of a rotating black hole it becomes significant. Frame-dragging distorts the shape of the black hole shadow on the side spinning toward a distant observer: light moving in the direction of rotation gets “boosted” while light moving against the rotation gets “decelerated”. This is shown in the following picture:

Instead of a black hole, we can also consider the theoretical Ellis wormhole. To understand how these sci-fi objects would behave, we can raytrace the spacetime around one:

One thing to note about the computation of the light paths is that rays passing close to the black hole are very sensitive to the step size of each iteration, due to the stiffness of the geodesic differential equations there. For this reason, an adaptive Runge-Kutta method is required for this simulation.
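One simple way to get adaptivity is step doubling: compare one full step against two half steps and shrink the step until they agree. This is a minimal sketch of the idea, not necessarily the integrator used for the images above (production tracers often use embedded pairs such as Runge-Kutta-Fehlberg instead):

```python
import math

def rk4(f, y, h):
    # One classical 4th-order Runge-Kutta step for y' = f(y), y a tuple.
    def nudge(y, k, s):
        return tuple(yi + s * ki for yi, ki in zip(y, k))
    k1 = f(y)
    k2 = f(nudge(y, k1, h / 2))
    k3 = f(nudge(y, k2, h / 2))
    k4 = f(nudge(y, k3, h))
    return tuple(yi + h / 6 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

def adaptive_step(f, y, h, tol=1e-9):
    """Step-doubling error control: compare a full step with two half steps,
    halving h until they agree to within tol; suggest a larger next step
    when the error is comfortably small."""
    while True:
        full = rk4(f, y, h)
        half = rk4(f, rk4(f, y, h / 2), h / 2)
        err = max(abs(a - b) for a, b in zip(full, half))
        if err < tol:
            grow = 2.0 if err < tol / 64 else 1.0
            return half, h, h * grow   # accepted value, step used, next step
        h /= 2                         # solution changing too fast: retry

# Toy stiff problem y' = -50 y: a fixed large step would blow up, but the
# controller shrinks h automatically where the solution changes quickly.
f = lambda y: (-50.0 * y[0],)
y, t, h = (1.0,), 0.0, 0.5
while t < 1.0 - 1e-12:
    y, used, h = adaptive_step(f, y, min(h, 1.0 - t))
    t += used
print(abs(y[0] - math.exp(-50.0)))  # small: the integrator tracked e^(-50 t)
```

Near the photon sphere the geodesic equations behave much like this toy problem: tiny steps are needed close in, while far from the hole the controller lets the step grow and saves a lot of work.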

#### Conclusion

By combining some basic ideas from raytracing, the mathematical foundations of spacetime curvature, and a few numerical methods, we can create a black hole raytracer and generate images that show what black holes do to spacetime by visualizing their effects on the light captured by a camera.

Black Hole Raytracing code: GitHub