Ray Tracing Renderer (Course Project)

Motivational Image

The scene that we want to render is a dimly lit, smoky bar. Rows of exquisite liquor bottles line the shelves, their labels veiled in the subtle play of light and shadow. An ashtray rests nearby, holding an unlit cigarette. A dim spotlight bathes the ashtray and the wine bottles. The scene aims to capture the essence of the bar's mystique, inviting viewers to let their imagination run wild and draw out the richness of the hidden stories within the rendering. The following two images are our motivation images.

Final Images

Based on the motivation images, we rendered our final images.

Features implemented

Images as Texture

Relevant Code

  • imagetexture.cpp

Implementation Details

To load textures from external files, I used stb_image to read the image data. I defined an ImageData class similar to the Bitmap class, derived from Eigen::Array, to store the image data. The loaded image data is usually in the sRGB color space, so the img_texture property provides a boolean field raw that indicates whether to keep the raw image color space or convert it to linear color space with the toLinearRGB() function. We also provide two different filter modes, nearest and bilinear.
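
The texel lookup roughly works as sketched below. This is a minimal illustration that only covers the bilinear path; it assumes stb_image has already decoded the file into a float array, and the names used here (ImageData, texel, evalBilinear, toLinear) are illustrative rather than the exact ones in imagetexture.cpp.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Color3 { float r, g, b; };

struct ImageData {
    int width = 0, height = 0;
    std::vector<float> pixels;          // width * height * 3 floats, row-major

    Color3 texel(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        const float *p = &pixels[3 * (y * width + x)];
        return {p[0], p[1], p[2]};
    }
};

// sRGB -> linear conversion, applied per channel when raw = false.
inline float toLinear(float c) {
    return c <= 0.04045f ? c / 12.92f
                         : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

// Bilinear filtering: blend the four neighbouring texels.
Color3 evalBilinear(const ImageData &img, float u, float v) {
    float x = u * img.width  - 0.5f;
    float y = v * img.height - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;
    auto lerp = [](const Color3 &a, const Color3 &b, float t) {
        return Color3{a.r + t * (b.r - a.r),
                      a.g + t * (b.g - a.g),
                      a.b + t * (b.b - a.b)};
    };
    Color3 top = lerp(img.texel(x0, y0),     img.texel(x0 + 1, y0),     fx);
    Color3 bot = lerp(img.texel(x0, y0 + 1), img.texel(x0 + 1, y0 + 1), fx);
    return lerp(top, bot, fy);
}
```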

Validation

To validate the results, we used a texture from the website Reference: UV Map Grids together with the scenes from the assignments for rendering. The results look right. We also compared our results with mitsuba. Since mitsuba uses a right-handed coordinate system while nori's is left-handed, the up-axis coordinate is inverted so that the scene representation is the same. The results are quite similar, although the mitsuba renders appear more vivid than ours; I think this may be due to mitsuba's use of mipmaps. The overall results are correct.

Normal Mapping

Relevant Code

  • mesh.h
  • mesh.cpp
  • shape.h
  • shape.cpp

Implementation Details

Normal mapping requires us to replace the mesh normal with the normal read from a normal texture. Since the shading frame normal is calculated in the setHitInformation() function in Shape, I decided to read the normal map and update the normal there. I added a field of type Texture<Color3f> named m_normalMap. The normal map can be loaded with the Images as Texture feature implemented above. After evaluating the texture, the value has to be remapped from [0, 1] to [-1, 1]. Currently, only Mesh supports computing the shading frame with normal mapping. In the setHitInformation() function of Mesh, the local normal read from the normal map needs to be transformed into a world-space normal using the local tangent frame. However, if the default frame construction is used, the result is discontinuous and clearly wrong, so the correct local frame has to be constructed first. I followed the formula from learnopengl.

\[ \begin{bmatrix}T_{x}&T_{y}&T_{z}\\ B_{x}&B_{y}&B_{z}\end{bmatrix}=\frac{1}{\Delta U_{1}\Delta V_{2}-\Delta U_{2}\Delta V_{1}}\begin{bmatrix}\Delta V_{2}&-\Delta V_{1} \\ -\Delta U_{2} & \Delta U_{1} \end{bmatrix}\begin{bmatrix}E_{1x}&E_{1y}&E_{1z}\\ E_{2x}&E_{2y}&E_{2z}\end{bmatrix} \]

where \(E_{1}\) and \(E_{2}\) are two edges of the triangle, and \(\Delta U, \Delta V\) are the corresponding uv offsets of the edges. \(T\) and \(B\) are the tangent and bitangent, i.e. dpdu and dpdv. With dpdu and dpdv we can then construct the local frame correctly.
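
A minimal sketch of this frame construction and the subsequent normal perturbation is shown below. It assumes Eigen for vector math (as nori does); the function and variable names are illustrative, and the orthonormalization step is one common choice rather than necessarily the exact one used in mesh.cpp.

```cpp
#include <Eigen/Dense>

using Vec3 = Eigen::Vector3f;
using Vec2 = Eigen::Vector2f;

// p0, p1, p2: triangle vertices; uv0, uv1, uv2: their UV coordinates;
// n: interpolated shading normal; texNormal: normal-map value in [0,1]^3.
Vec3 perturbNormal(const Vec3 &p0, const Vec3 &p1, const Vec3 &p2,
                   const Vec2 &uv0, const Vec2 &uv1, const Vec2 &uv2,
                   const Vec3 &n, const Vec3 &texNormal) {
    Vec3 e1 = p1 - p0, e2 = p2 - p0;          // triangle edges E1, E2
    Vec2 dUV1 = uv1 - uv0, dUV2 = uv2 - uv0;  // UV offsets

    float det = dUV1.x() * dUV2.y() - dUV2.x() * dUV1.y();
    float inv = 1.0f / det;
    Vec3 T = ( dUV2.y() * e1 - dUV1.y() * e2) * inv;   // dpdu
    Vec3 B = (-dUV2.x() * e1 + dUV1.x() * e2) * inv;   // dpdv

    // Gram-Schmidt so that {t, b, n} is an orthonormal frame.
    Vec3 t = (T - n * n.dot(T)).normalized();
    Vec3 b = n.cross(t);

    // Map the stored normal from [0, 1] to [-1, 1] and rotate to world space.
    Vec3 local = 2.0f * texNormal - Vec3::Ones();
    return (local.x() * t + local.y() * b + local.z() * n).normalized();
}
```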

Validation

To validate the results, the nori renders are compared to the mitsuba renders.

Emitter (Spotlight) (5pt)

Relevant Code

  • spotlight.cpp
  • path_mis.cpp
  • direct_mis.cpp

Implementation Details

The implementation of the spotlight follows the mitsuba design. It has three properties: intensity, maxAngle and beamAngle. intensity is the maximum radiant intensity of the emitter, attained at the center of the beam. All rays emitted within beamAngle carry the full intensity. Between beamAngle and maxAngle the emitted intensity attenuates linearly, and all rays outside maxAngle evaluate to zero, meaning no light is emitted in those directions. The FallOff function computes this attenuation given the sampled outgoing light direction. I also implemented the samplePhoton function in order to use the spotlight with the photon mapper. The sampled power is simply I * falloff / pdf, following the mitsuba implementation. The pdf of the sampled photon uses Warp::squareToUniformSphereCapPdf(cosMaxAngle), since the sampled ray can only lie in the cone constrained by maxAngle.
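
A minimal sketch of the attenuation logic is given below, assuming the attenuation is linear in the angle; whether the actual spotlight.cpp interpolates in angle or in cosine may differ, and the parameter names here are illustrative.

```cpp
#include <algorithm>
#include <cmath>

// cosTheta is the cosine between the spotlight axis and the sampled
// outgoing direction; beamAngle and maxAngle are in radians.
float fallOff(float cosTheta, float beamAngle, float maxAngle) {
    float theta = std::acos(std::clamp(cosTheta, -1.0f, 1.0f));
    if (theta <= beamAngle) return 1.0f;   // full intensity inside the beam
    if (theta >= maxAngle)  return 0.0f;   // no light outside the cone
    // Linear attenuation between beamAngle and maxAngle.
    return (maxAngle - theta) / (maxAngle - beamAngle);
}
```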

One problem is that the spotlight is a delta emitter, so the original implementation of multiple importance sampling does not work on it. I therefore added a flag isDeltaEmitter to EmitterRecord. Every time a delta emitter is sampled, the weight of the emitter sample should be 1 while the weight of the bsdf sample is 0. I modified direct_mis and path_mis according to this rule to fix the problem.
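
The resulting weight rule can be summarized by the small sketch below; the actual integration into direct_mis and path_mis computes these weights inline, so this is only an illustration.

```cpp
// MIS weights with the isDeltaEmitter flag described above.
// pdfEm and pdfBsdf are the solid-angle pdfs of the two strategies.
float weightEmitterSample(bool isDeltaEmitter, float pdfEm, float pdfBsdf) {
    if (isDeltaEmitter)
        return 1.0f;                    // BSDF sampling can never hit a delta light
    return pdfEm / (pdfEm + pdfBsdf);   // balance heuristic otherwise
}

float weightBsdfSample(float pdfBsdf, float pdfEm) {
    // A delta emitter is never reached by a BSDF-sampled ray, so that
    // strategy implicitly has weight 0 for such lights.
    return pdfBsdf / (pdfBsdf + pdfEm);
}
```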

Validation

To validate the correctness of the spotlight and the modifications to the integrators, I compared several scenes rendered with different integrators against mitsuba. The first scene is a simple floor with a spotlight shining from above, rendered with the direct integrator.

To validate the correctness in the direct, path and photonmapper integrators, I put a spotlight in the cbox scene. The noise level differs slightly from mitsuba, but the overall brightness is correct, which validates the correctness of the spotlight.

Stratified Sampling

Relevant Code

  • stratified.cpp
  • render.cpp

Implementation Details

The Stratified class is derived from Sampler and implements stratified sampling. The prepare() function initializes the sampler and is called once before the first sample starts. The generate() function prepares the sampling of the next image pixel. The advance() function is called every time before the next sample starts. next1D() and next2D() retrieve the next 1D or 2D samples.

To support sampling multiple dimensions, a dimensionIndex is maintained in the class. Similarly, sampleIndex and pixelIndex record the number of samples already drawn and the number of pixels processed. When a sample is requested, a seed is first computed from a base permutation seed, the dimension index and the pixel index. The permutation sequences are therefore identical for all samples sharing the same dimensionIndex and pixelIndex, while different dimensionIndex and pixelIndex values produce different sequences. The position of the current stratum is then retrieved with sampleIndex, and a random offset is added depending on whether jittering is enabled.

In the prepare() function, I initialize sampleIndex and the random number generators. A base permutation seed is generated randomly; it is then used to generate a permutation sequence of size sampleCount. generate() resets the dimension index and increments the pixelIndex. advance() resets the dimension index and increments the sample index.
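
A condensed sketch of how a 1D sample could be produced from these indices is given below. It assumes the pcg32 RNG bundled with nori; the way the seed is combined and the per-call shuffle are illustrative (the real stratified.cpp caches the permutation instead of rebuilding it for every sample).

```cpp
#include <cstdint>
#include <utility>
#include <vector>
#include <pcg32.h>   // RNG bundled with nori

float stratified1D(uint32_t sampleCount, uint32_t sampleIndex,
                   uint32_t dimensionIndex, uint32_t pixelIndex,
                   uint64_t baseSeed, pcg32 &rng, bool jitter) {
    // One permutation per (pixelIndex, dimensionIndex) pair.
    pcg32 perm(baseSeed, (uint64_t(pixelIndex) << 32) | dimensionIndex);

    // Fisher-Yates shuffle of the stratum indices.
    std::vector<uint32_t> strata(sampleCount);
    for (uint32_t i = 0; i < sampleCount; ++i) strata[i] = i;
    for (uint32_t i = sampleCount - 1; i > 0; --i)
        std::swap(strata[i], strata[perm.nextUInt(i + 1)]);

    // Pick the stratum for this sample and jitter inside it.
    uint32_t stratum = strata[sampleIndex % sampleCount];
    float offset = jitter ? rng.nextFloat() : 0.5f;
    return (stratum + offset) / float(sampleCount);
}
```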

Validation

In the warptest, I visualize the results of the stratified sampler and the grid sampler to show the difference between them. The result is shown below.

I also rendered a simple scene consisting of a plane and a sphere to see the actual effect of the stratified sampler. The result is shown below. The stratified sampler visibly reduces the noise level.

Disney BSDF (metallic, specular, roughness, specular tint, clearcoat)

Relevant Code

  • disney.cpp
  • warp.h
  • warp.cpp
  • warptest.cpp

Implementation Details

The Disney BSDF is implemented based on the paper Physically-Based Shading at Disney. The code largely follows the implementation in the Disney BRDF Explorer.

There are ten parameters in the Disney BRDF. Five of them, metallic, specular, roughness, specular tint and clearcoat, are graded. The Disney BRDF consists of three parts: diffuse, specular and clearcoat. Diffuse is influenced by metallic and roughness. Specular is influenced by specular, metallic, roughness and specular tint. Clearcoat is influenced by clearcoat. (Only the graded parameters are listed here.) The specular and clearcoat lobes are two microfacet models: the normal distribution of specular is GGX (GTR2), and the normal distribution of clearcoat is GTR1. squareToGTR2(), squareToGTR1() and their pdf functions use the formulas derived in the appendix of the paper; the pdfs are essentially the cosine-weighted pdfs of the normal distributions. The eval() function follows the BRDF Explorer implementation exactly, and the choice of the sampling lobe follows the implementation in mitsuba.
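
As an illustration of the sampling routines named above, a sketch of squareToGTR2() and its pdf following the formulas from the Disney paper appendix is given below. The half-vector is sampled around +z in the local frame; the exact signatures in warp.h may differ.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Sample a GTR2 (GGX) half-vector around +z from a uniform 2D sample.
Vec3 squareToGTR2(float u1, float u2, float alpha) {
    float phi      = 2.0f * float(M_PI) * u1;
    float cosTheta = std::sqrt((1.0f - u2) /
                               (1.0f + (alpha * alpha - 1.0f) * u2));
    float sinTheta = std::sqrt(std::max(0.0f, 1.0f - cosTheta * cosTheta));
    return { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
}

// pdf(h) = D_GTR2(h) * cos(theta_h): the cosine-weighted normal distribution.
float squareToGTR2Pdf(const Vec3 &h, float alpha) {
    if (h.z <= 0.0f) return 0.0f;
    float a2 = alpha * alpha;
    float t  = 1.0f + (a2 - 1.0f) * h.z * h.z;
    float D  = a2 / (float(M_PI) * t * t);
    return D * h.z;
}
```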

Validation

The GTR1 and GTR2 distributions are tested in the warptest. The results are shown below.

To validate the correctness of the Disney BSDF, I used the cbox scene for testing. For each test, only the parameter being tested is changed, with all other graded parameters fixed and all ungraded parameters set to 0. The results are shown below.

Metallic=0 vs Metallic=1

Roughness=0 vs Roughness=1

Specular=0.1 vs Specular=1

Specular tint=0 vs Specular tint=1

Clearcoat=0 vs Clearcoat=1

Progressive Photon Mapping

Relevant Code

  • progressivepm.cpp

Implementation Details

The implementation of progressive photon mapping is based on the paper Progressive Photon Mapping. The algorithm consists of two passes. In the first pass, we generate the visible hit points from the camera and store them. In the second pass, we repeatedly generate photon maps. After each photon map generation, we update the hit points with radius reduction and flux correction, and a new rendered image can be produced right after each photon map generation.
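
Concretely, with \(N\) the accumulated photon count of a hit point, \(M\) the number of photons gathered in the current pass, \(R\) the current radius, \(\tau\) the accumulated flux, \(\tau_M\) the flux of the newly gathered photons, and \(\alpha\) the radius reduction ratio, the update rules from the paper are

\[ N' = N + \alpha M, \qquad R'^{2} = R^{2}\,\frac{N + \alpha M}{N + M}, \qquad \tau' = (\tau + \tau_{M})\,\frac{N + \alpha M}{N + M} \]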

To make use of the existing multi-threaded rendering framework, the execution of the progressive photon mapper differs a little from the other integrators. In the process() function, we iterate over all image pixels, shoot sampleCount rays, perform path tracing for each ray, and record the hit points on diffuse surfaces. I also defined a function persamplePreprocess(), which is called every time a new sample starts. In this way, the sampleCount defined in the sampler is actually the number of photon map generation passes, while the actual number of samples per pixel is the sampleCount defined in the progressive photon mapper. In the persamplePreprocess() function, a new photon map is generated. A new image is then rendered in the Li() function. Since the pixel is now needed to look up the corresponding stored hit points, a new parameter pixel was added to the Li() function. First, the hit point information is updated with the new photon map; then, the image is rendered with the updated hit points. Doing these two steps together in the Li() function makes the best use of the rendering framework to accelerate rendering and is easy to implement, even though it is a little unintuitive.
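
A minimal sketch of the per-pass hit-point update applying the equations above is shown below; the struct and field names are illustrative, and nori's Color3f is assumed for the flux.

```cpp
#include <nori/color.h>
using nori::Color3f;

struct HitPoint {
    float   radius2 = 0.f;           // current gather radius squared (R^2)
    float   n       = 0.f;           // accumulated photon count (N)
    Color3f tau     = Color3f(0.f);  // accumulated (unnormalized) flux
};

// Radius reduction and flux correction applied after each photon pass.
// m is the number of photons gathered this pass, tauM their flux.
void updateHitPoint(HitPoint &hp, int m, const Color3f &tauM, float alpha) {
    if (m == 0)
        return;                          // nothing gathered this pass
    float newN  = hp.n + alpha * m;      // keep only a fraction of new photons
    float ratio = newN / (hp.n + m);
    hp.radius2 *= ratio;                 // R'^2 = R^2 * (N + aM) / (N + M)
    hp.tau      = (hp.tau + tauM) * ratio;
    hp.n        = newN;
}
```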

Validation

To validate the progressive photon mapper, I rendered cbox with the progressive photon mapper using 1 photon map generation pass and 1000 photon map generation passes. I also used the same settings in mitsuba for comparison. Even though the results are not exactly the same, they show that the progressive photon mapper converges to the correct result.

My results

Mitsuba results