Project 3: Weiqi Chen #21

Open · wants to merge 3 commits into master
69 changes: 63 additions & 6 deletions README.md
@@ -1,13 +1,70 @@
CUDA Path Tracer
# CUDA Path Tracer
================

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 3**

* (TODO) YOUR NAME HERE
* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
* Weiqi Chen
* [LinkedIn](https://www.linkedin.com/in/weiqi-ricky-chen-2b04b2ab/)
* Tested on: Windows 10, i7-8750H @ 2.20GHz, 16GB, GTX 1050 2GB

### (TODO: Your README)
![](img/cornell1.png)

*DO NOT* leave the README to the last minute! It is a crucial part of the
project, and we will not be able to grade you without a good README.

## Project Description
In this project we aim to render globally illuminated images using a CUDA-based path tracer.
Path tracing is a physically based rendering algorithm that simulates light bouncing around a scene. Many rays are shot from each pixel over many iterations; at each bounce, a ray scatters in a direction determined by the material it hits. We accumulate the result of every iteration for each pixel and average over the number of iterations.
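
As a concrete sketch of that accumulate-and-average step (illustrative names, not the project's exact kernels):

```cpp
#include <glm/glm.hpp>

// Hypothetical sketch: add this iteration's per-pixel radiance into a running
// sum, and expose the running average for display. `iter` counts completed
// iterations, starting at 1.
__global__ void accumulateAndAverage(int numPixels, int iter,
                                     const glm::vec3* frameRadiance,
                                     glm::vec3* accumImage,
                                     glm::vec3* displayImage) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;
    accumImage[i] += frameRadiance[i];               // accumulate effects
    displayImage[i] = accumImage[i] / (float)iter;   // average over iterations
}
```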

![](img/path-teaser.png)

## Features

### Depth of Field
Depth of field is the distance between the nearest and the farthest objects that are in acceptably sharp focus in an image. We create the blur by simulating a thin camera lens: rays are jittered across the lens aperture and refocused onto the focal plane, so objects away from the focal distance appear blurred (a sketch follows the comparison table below).

*FD = focal distance, LR = lens radius*

| Normal | Depth of Field (FD = 6, LR = 0.5) | Depth of Field (FD = 9, LR = 0.5)|
| -- | -- | -- |
| ![](img/dof1.png) | ![](img/dof2.png) | ![](img/dof3.png) |
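
Here is a thin-lens sketch of how the lens can be simulated; `Ray` is the project's ray struct, while `applyDepthOfField`, `camRight`, and `camUp` are hypothetical names introduced for illustration:

```cpp
#include <cmath>
#include <glm/glm.hpp>
#include <thrust/random.h>

// Jitter the ray origin across a lens of radius lensRadius and re-aim it at
// the focal plane at focalDistance. Points at the focal distance stay sharp;
// everything else blurs. camRight and camUp span the lens plane.
__host__ __device__
void applyDepthOfField(Ray& ray, float lensRadius, float focalDistance,
                       glm::vec3 camRight, glm::vec3 camUp,
                       thrust::default_random_engine& rng) {
    thrust::uniform_real_distribution<float> u01(0, 1);
    // uniformly sample a point on the lens disk
    float r = lensRadius * sqrtf(u01(rng));
    float theta = 2.0f * 3.1415926f * u01(rng);
    glm::vec3 lensOffset = r * (cosf(theta) * camRight + sinf(theta) * camUp);
    // the point on the focal plane that the unperturbed ray would hit
    glm::vec3 focalPoint = ray.origin + focalDistance * ray.direction;
    // move the origin onto the lens and re-aim at the focal point
    ray.origin += lensOffset;
    ray.direction = glm::normalize(focalPoint - ray.origin);
}
```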

### Diffusion, Reflection and Refraction
My path tracer can render diffuse, specular-reflective, and refractive materials, with Fresnel effects computed using Schlick's approximation (the formula is reproduced after the table below).

| Diffusion | Reflection | Transmission |
| -- | -- | -- |
| ![](img/diffuse.png) | ![](img/reflect.png) | ![](img/refract.png) |
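
For reference, Schlick's approximation to the Fresnel reflectance (a standard formula; θ is the angle between the incident ray and the surface normal, and n₁, n₂ are the refractive indices on either side of the interface):

```latex
R(\theta) = R_0 + (1 - R_0)\,(1 - \cos\theta)^5,
\qquad
R_0 = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^{2}
```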

### Motion Blur
Motion blur is achieved by randomly jittering the camera position with a Gaussian distribution each iteration, which in turn jitters the outgoing rays (a sketch follows the table below).

| Normal | MB in xz direction | MB in y direction |
| -- | -- | -- |
| ![](img/reflect.png) | ![](img/mbxz.png) | ![](img/mby.png) |
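
A minimal sketch of the jitter described above; `jitterCameraPosition`, `jitterAxes`, and `sigma` are hypothetical names (e.g. `jitterAxes = (1, 0, 1)` for the xz blur shown in the middle image):

```cpp
#include <glm/glm.hpp>
#include <thrust/random.h>

// Perturb the camera eye with Gaussian noise each iteration; averaging many
// jittered frames smears the geometry along the selected axes.
__host__ __device__
glm::vec3 jitterCameraPosition(glm::vec3 eye, glm::vec3 jitterAxes, float sigma,
                               thrust::default_random_engine& rng) {
    thrust::normal_distribution<float> gauss(0.0f, sigma);
    glm::vec3 noise(gauss(rng), gauss(rng), gauss(rng));
    return eye + jitterAxes * noise;  // component-wise mask selects blur axes
}
```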


## Performance Analysis
Rendering images with a path tracer is computationally expensive. Each ray may bounce an unpredictable number of times and hit various objects along its path, and computing these intersections is time-consuming. A few optimizations are used here to speed up rendering.

### Stream Compaction
Stream compaction removes terminated ray paths (those that hit a light source or hit nothing), keeping only the rays that are still traveling, so we can launch fewer threads after each bounce. This also reduces warp divergence. Below is a plot of active threads vs. number of bounces in one iteration for the cornell box scene; a compaction sketch follows the plot.

![](img/sc.png)
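
The compaction step itself can be as simple as the sketch below, assuming the starter code's `PathSegment` carries a `remainingBounces` counter (the helper name is illustrative):

```cpp
#include <thrust/partition.h>
#include <thrust/execution_policy.h>

// A path is alive while it still has bounces left.
struct IsAlive {
    __host__ __device__
    bool operator()(const PathSegment& p) const {
        return p.remainingBounces > 0;
    }
};

// After each bounce, gather live paths at the front of the buffer and return
// the new count, so the next bounce launches only that many threads.
int compactPaths(PathSegment* dev_paths, int numPaths) {
    PathSegment* newEnd = thrust::partition(thrust::device,
                                            dev_paths, dev_paths + numPaths,
                                            IsAlive());
    return (int)(newEnd - dev_paths);
}
```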

However, stream compaction is not very helpful in closed scenes. In open scenes rays can be removed when they hit nothing, while in a closed scene rays keep bouncing until they reach the maximum number of allowed bounces. Below is a plot of active threads vs. number of bounces in one iteration for a closed scene.

![](img/sc2.png)

Below is a bar graph comparing rendering time with and without stream compaction. On the left group, using the simple cornell box, stream compaction takes more time, probably because the overhead of the extra kernel launches outweighs the work saved. As more objects are added to the scene, the benefit of stream compaction quickly appears.

![](img/1.png)

### Caching First Bounces
The primary rays shot from the camera follow identical paths in every iteration of this project, so the first intersection of each ray never changes; rays only diverge between iterations from the second bounce onward. We can therefore cache the first bounce, recording which object is hit and at what angle, and reuse this data in subsequent iterations.

This optimization is incompatible with the motion blur and depth of field features, which jitter rays in time and across a lens respectively, so the first bounce is no longer deterministic. A sketch of the caching logic is shown below.
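
When the feature is enabled, the host-side logic can look like this sketch; `dev_firstBounceCache` and `cacheFirstBounce` are hypothetical names, and `ShadeableIntersection` is the starter code's intersection struct:

```cpp
// Depth 0 only: on the first iteration, save the freshly computed first
// intersections; on later iterations, restore them instead of re-tracing.
if (cacheFirstBounce && depth == 0) {
    if (iter == 1) {
        cudaMemcpy(dev_firstBounceCache, dev_intersections,
                   pixelcount * sizeof(ShadeableIntersection),
                   cudaMemcpyDeviceToDevice);
    } else {
        cudaMemcpy(dev_intersections, dev_firstBounceCache,
                   pixelcount * sizeof(ShadeableIntersection),
                   cudaMemcpyDeviceToDevice);
    }
}
```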

### Sorting by Materials

Each ray may hit a different object, and the cost of computing intersections and shading differs between material types. To reduce warp divergence, rays interacting with the same material are sorted to be contiguous in memory before shading, as sketched below.
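
A sketch of the sort, assuming the starter code's `ShadeableIntersection` stores a `materialId` and keeping each intersection paired with its path segment (the comparator and helper names are illustrative):

```cpp
#include <thrust/sort.h>
#include <thrust/execution_policy.h>

// Order intersections (keys) and path segments (values) by material ID so
// threads in a warp shade the same material.
struct CompareMaterialId {
    __host__ __device__
    bool operator()(const ShadeableIntersection& a,
                    const ShadeableIntersection& b) const {
        return a.materialId < b.materialId;
    }
};

void sortByMaterial(ShadeableIntersection* dev_intersections,
                    PathSegment* dev_paths, int numPaths) {
    thrust::sort_by_key(thrust::device,
                        dev_intersections, dev_intersections + numPaths,
                        dev_paths, CompareMaterialId());
}
```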

![](img/)
Binary file added img/1.png
Binary file added img/cornell1.png
Binary file added img/diffuse.png
Binary file added img/dof1.png
Binary file added img/dof2.png
Binary file added img/dof3.png
Binary file added img/mbxz.png
Binary file added img/mby.png
Binary file added img/path-teaser.png
Binary file added img/reflect.png
Binary file added img/refract.png
Binary file added img/sc.png
Binary file added img/sc2.png
2 changes: 1 addition & 1 deletion scenes/cornell.txt
@@ -114,4 +114,4 @@ sphere
material 4
TRANS -1 4 -1
ROTAT 0 0 0
SCALE 3 3 3
SCALE 3 3 3
163 changes: 163 additions & 0 deletions scenes/custom.txt
@@ -0,0 +1,163 @@
// Emissive material (light)
MATERIAL 0
RGB 1 1 1
SPECEX 0
SPECRGB 0 0 0
REFL 0
REFR 0
REFRIOR 0
EMITTANCE 5

// Diffuse white
MATERIAL 1
RGB .98 .98 .98
SPECEX 0
SPECRGB 0 0 0
REFL 0
REFR 0
REFRIOR 0
EMITTANCE 0

// Diffuse purple
MATERIAL 2
RGB .7 .5 .3
SPECEX 0
SPECRGB 0 0 0
REFL 0
REFR 0
REFRIOR 0
EMITTANCE 0

// Diffuse blue
MATERIAL 3
RGB .4 .7 .8
SPECEX 0
SPECRGB 0 0 0
REFL 0
REFR 0
REFRIOR 0
EMITTANCE 0

// Specular white
MATERIAL 4
RGB .98 .98 .98
SPECEX 0
SPECRGB .98 .98 .98
REFL 1
REFR 0
REFRIOR 0
EMITTANCE 0

// Refractive white
MATERIAL 5
RGB .98 .98 .98
SPECEX 0
SPECRGB .98 .98 .98
REFL 0
REFR 1
REFRIOR 2.4
EMITTANCE 0



// Camera
CAMERA
RES 800 800
FOVY 45
ITERATIONS 5000
DEPTH 8
FILE cornell
EYE 0.0 5 10.5
LOOKAT 0 5 0
UP 0 1 0
FOCAL 6
LENSRAD 0.5

// Ceiling light
OBJECT 0
cube
material 0
TRANS 0 10 0
ROTAT 0 0 0
SCALE 3 .3 3

// Floor
OBJECT 1
cube
material 1
TRANS 0 0 0
ROTAT 0 0 0
SCALE 10 .01 15

// Ceiling
OBJECT 2
cube
material 1
TRANS 0 10 0
ROTAT 0 0 90
SCALE .01 10 15

// Back wall
OBJECT 3
cube
material 1
TRANS 0 8.75 -5
ROTAT 20 90 0
SCALE .01 20 10


// Left wall
OBJECT 4
cube
material 2
TRANS -5 5 0
ROTAT 0 0 0
SCALE .01 10 15

// Right wall
OBJECT 5
cube
material 3
TRANS 5 5 0
ROTAT 0 0 0
SCALE .01 10 15

// Sphere 1
OBJECT 6
sphere
material 4
TRANS -2.5 2 4
ROTAT 0 0 0
SCALE 3 3 3

// Sphere 2
OBJECT 7
sphere
material 4
TRANS 3 6 0
ROTAT 0 0 0
SCALE 3 3 3

// back wall 2
OBJECT 8
cube
material 1
TRANS 0 1.25 -5
ROTAT 0 90 -20
SCALE .01 20 10

// cube 1
OBJECT 9
cube
material 4
TRANS 0 3 0
ROTAT 45 60 60
SCALE 3.5 3.5 3.5

// Sphere 3
OBJECT 10
sphere
material 4
TRANS -3 7 -4
ROTAT 0 0 0
SCALE 3 3 3
62 changes: 57 additions & 5 deletions src/interactions.h
@@ -50,7 +50,7 @@ glm::vec3 calculateRandomDirectionInHemisphere(
*
* The visual effect you want is to straight-up add the diffuse and specular
* components. You can do this in a few ways. This logic also applies to
* combining other types of materias (such as refractive).
* combining other types of materials (such as refractive).
*
* - Always take an even (50/50) split between a each effect (a diffuse bounce
* and a specular bounce), but divide the resulting color of either branch
@@ -66,14 +66,66 @@ glm::vec3 calculateRandomDirectionInHemisphere(
*
* You may need to change the parameter list for your purposes!
*/

__host__ __device__
void calculateRefract(PathSegment & pathSegment, glm::vec3 normal, const Material &m) {
    // Determine whether the ray is entering or exiting the medium, and pick
    // the oriented normal and index-of-refraction ratio accordingly.
    bool outside = glm::dot(pathSegment.ray.direction, normal) < 0.f;
    glm::vec3 n = outside ? normal : (-1.f * normal);
    float eta = outside ? 1.0f / m.indexOfRefraction : m.indexOfRefraction;
    glm::vec3 dir = glm::refract(pathSegment.ray.direction, n, eta);

    // glm::refract returns a zero vector on total internal reflection; fall
    // back to a mirror reflection and zero out this path's contribution.
    if (glm::length(dir) < 0.01f) {
        dir = glm::reflect(pathSegment.ray.direction, normal);
        pathSegment.color *= 0;
    }
    pathSegment.ray.direction = dir;
}


__host__ __device__
float schlickApprox(PathSegment & pathSegment, glm::vec3 normal, const Material &m) {
    // Schlick's approximation: R = R0 + (1 - R0) * (1 - cos(theta))^5, with
    // R0 the reflectance at normal incidence.
    bool outside = glm::dot(pathSegment.ray.direction, normal) < 0.f;
    float eta = outside ? 1.f / m.indexOfRefraction : m.indexOfRefraction;
    float r0 = powf((1.f - eta) / (1.f + eta), 2.f);
    return r0 + (1.f - r0) * powf(1.f - glm::abs(glm::dot(pathSegment.ray.direction, normal)), 5.0f);
}



__host__ __device__
void scatterRay(
PathSegment & pathSegment,
glm::vec3 intersect,
glm::vec3 normal,
const Material &m,
thrust::default_random_engine &rng) {
// TODO: implement this.
// A basic implementation of pure-diffuse shading will just call the
// calculateRandomDirectionInHemisphere defined above.
}
    thrust::uniform_real_distribution<float> u01(0, 1);
    float p = u01(rng);

    // refractive material: choose between reflection and refraction with
    // probability given by Schlick's Fresnel approximation
    if (m.hasRefractive > 0) {
        float schlick = schlickApprox(pathSegment, normal, m);
        if (schlick > u01(rng)) {
            pathSegment.ray.direction = glm::reflect(pathSegment.ray.direction, normal);
            pathSegment.color *= m.specular.color;
        }
        else {
            calculateRefract(pathSegment, normal, m);
            pathSegment.color *= m.color;
        }
    }
    // reflect
    else if (m.hasReflective > p) {
        pathSegment.ray.direction = glm::reflect(pathSegment.ray.direction, normal);
        pathSegment.color *= m.specular.color;
    }
    // diffuse
    else {
        pathSegment.ray.direction = calculateRandomDirectionInHemisphere(normal, rng);
        pathSegment.color *= m.color;
    }
    // glm::clamp returns its result rather than modifying in place; assign it
    // back so the clamp actually takes effect
    pathSegment.color = glm::clamp(pathSegment.color, glm::vec3(0.0f), glm::vec3(1.0f));
    // nudge the new origin along the outgoing ray to avoid self-intersection
    pathSegment.ray.origin = intersect + 0.001f * pathSegment.ray.direction;
}