submission #10

Open · wants to merge 2 commits into master
108 changes: 15 additions & 93 deletions README.md
@@ -3,113 +3,35 @@ CIS565: Project 2: CUDA Pathtracer
-------------------------------------------------------------------------------
Fall 2012
-------------------------------------------------------------------------------
Due Friday, 10/12/2012
Due Sunday, 10/12/2012
-------------------------------------------------------------------------------

-------------------------------------------------------------------------------
NOTE:
-------------------------------------------------------------------------------
This project requires an NVIDIA graphics card with CUDA capability! Any card after the Geforce 8xxx series will work. If you do not have an NVIDIA graphics card in the machine you are working on, feel free to use any machine in the SIG Lab or in Moore100 labs. All machines in the SIG Lab and Moore100 are equipped with CUDA capable NVIDIA graphics cards. If this too proves to be a problem, please contact Patrick or Karl as soon as possible.

-------------------------------------------------------------------------------
INTRODUCTION:
-------------------------------------------------------------------------------
In this project, you will extend your raytracer from Project 1 into a full CUDA based global illumination pathtracer.

For this project, you may either choose to continue working off of your codebase from Project 1, or you may choose to use the included basecode in this repository. The basecode for Project 2 is the same as the basecode for Project 1, but with some missing components you will need filled in, such as the intersection testing and camera raycasting methods.

How you choose to extend your raytracer into a pathtracer is a fairly open-ended problem; the supplied basecode is meant to serve as one possible set of guidelines for doing so, but you may choose any approach you want in your actual implementation, including completely scrapping the provided basecode in favor of your own from-scratch solution.
BLOG Link: http://seunghoon-cis565.blogspot.com/2012/10/project-2-cuda-pathtracer.html

-------------------------------------------------------------------------------
CONTENTS:
A brief description
-------------------------------------------------------------------------------
The Project2 root directory contains the following subdirectories:

* src/ contains the source code for the project. Both the Windows Visual Studio solution and the OSX makefile reference this folder for all source; the base source code compiles on OSX and Windows without modification.
* scenes/ contains an example scene description file.
* renders/ contains two example renders: the raytraced render from Project 1 (GI_no.bmp), and the same scene rendered with global illumination (GI_yes.bmp).
* PROJ1_WIN/ contains a Windows Visual Studio 2010 project and all dependencies needed for building and running on Windows 7.
* PROJ1_OSX/ contains a OSX makefile, run script, and all dependencies needed for building and running on Mac OSX 10.8.

The Windows and OSX versions of the project build and run exactly the same way as in Project0 and Project1.
The goal of this project is to implement a simple path tracing algorithm using CUDA.
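
At a high level, each sample traces a path bounce by bounce, multiplying a running throughput by each surface's color and collecting emittance when a light is hit. Below is an illustrative single-path sketch, not this repository's exact kernel structure; it reuses the helpers added later in this diff, and the material/geometry field names (`color`, `emittance`, `hasReflective`, `materialid`) and helper signatures are assumptions. Real code would also draw fresh random numbers on every bounce.

```cpp
// illustrative sketch only; field names and signatures are assumed
__host__ __device__ glm::vec3 tracePath(ray r, staticGeom* geoms, int numOfGeoms,
                                        material* materials, int maxDepth,
                                        float xi1, float xi2, float xi3) {
  glm::vec3 throughput(1.0f);                  // accumulated surface attenuation
  for (int depth = 0; depth < maxDepth; ++depth) {
    glm::vec3 p, n; float d;
    int hit = findClosestIntersection(geoms, numOfGeoms, r, &p, &n, &d);
    if (hit == -1) return glm::vec3(0.0f);     // escaped the scene: no light
    material m = materials[geoms[hit].materialid];
    if (m.emittance > 0.0f)                    // reached a light: terminate
      return throughput * m.color * m.emittance;
    throughput *= m.color;                     // albedo attenuates the path
    r.origin = p + 0.001f * n;                 // offset to avoid self-intersection
    if (decideDiffOrSpec(m.hasReflective, xi3) == 1)
      r.direction = calculateReflectionDirection(n, r.direction);
    else
      r.direction = calculateRandomDirectionInHemisphere(n, xi1, xi2);
  }
  return glm::vec3(0.0f);                      // path never reached a light
}
```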

-------------------------------------------------------------------------------
REQUIREMENTS:
Features
-------------------------------------------------------------------------------
In this project, you are given code for:

* All of the basecode from Project 1, plus:
* Intersection testing code for spheres and cubes
* Code for raycasting from the camera

You will need to implement the following features. A number of these required features you may have already implemented in Project 1. If you have, you are ahead of the curve and have less work to do!

* Full global illumination (including soft shadows, color bleeding, etc.) by pathtracing rays through the scene.
- Basic
* Full global illumination (including soft shadows, color bleeding, etc.)
* Properly accumulating emittance and colors to generate a final image
* Supersampled antialiasing
* Parallelization by ray instead of by pixel via string compaction (see the Physically-based shading and pathtracing lecture slides from 09/24 if you don't know what this refers to)
* Parallelization by ray instead of by pixel via stream compaction (see the sketch after this list)
* Perfect specular reflection
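
One possible way to realize the compaction step is sketched below using Thrust; this is a minimal sketch, not this repository's exact code, and the `rayState` struct and its field names are assumptions:

```cpp
#include <thrust/device_ptr.h>
#include <thrust/remove.h>

// hypothetical per-ray record; the real layout lives in the project source
struct rayState {
  ray r;                 // current ray segment (origin + direction)
  glm::vec3 throughput;  // color accumulated along the path so far
  int pixelIndex;        // pixel this path contributes to
  bool alive;            // false once the path terminates
};

// predicate selecting terminated rays for removal
struct isTerminated {
  __host__ __device__ bool operator()(const rayState& rs) const {
    return !rs.alive;
  }
};

// after each bounce kernel, compact the device-side pool so the next
// kernel launch covers only rays that are still bouncing
void compactRayPool(rayState* dev_rays, int& numActiveRays) {
  thrust::device_ptr<rayState> first(dev_rays);
  thrust::device_ptr<rayState> last =
      thrust::remove_if(first, first + numActiveRays, isTerminated());
  numActiveRays = static_cast<int>(last - first);
}
```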


You are also required to implement at least two of the following features. Some of these features you may have already implemented in Project 1. If you have, you may NOT resubmit those features and instead must pick two new ones to implement.

* Additional BRDF models, such as Cook-Torrance, Ward, etc. Each BRDF model may count as a separate feature.
* Texture mapping
* Bump mapping
* Translational motion blur
* Fresnel-based Refraction, i.e. glass
* OBJ Mesh loading and rendering without KD-Tree
* Interactive camera
* Integrate an existing stackless KD-Tree library, such as CUKD (https://github.com/unvirtual/cukd)
- Additional
* Interactive camera via keyboard and mouse
* Depth of field

Alternatively, implementing just one of the following features can satisfy the "pick two" feature requirement, since these are correspondingly more difficult problems:

* Physically based subsurface scattering and transmission
* Implement and integrate your own stackless KD-Tree from scratch.
* Displacement mapping
* Deformational motion blur

As yet another alternative, if you have a feature or features you really want to implement that are not on this list, let us know, and we'll probably say yes!

-------------------------------------------------------------------------------
NOTES ON GLM:
How to build
-------------------------------------------------------------------------------
This project uses GLM, the GL Math library, for linear algebra. You need to know two important points on how GLM is used in this project:

* In this project, indices in GLM vectors (such as vec3, vec4), are accessed via swizzling. So, instead of v[0], v.x is used, and instead of v[1], v.y is used, and so on and so forth.
* GLM Matrix operations work fine on NVIDIA Fermi cards and later, but pre-Fermi cards do not play nice with GLM matrices. As such, in this project, GLM matrices are replaced with a custom matrix struct, called a cudaMat4, found in cudaMat4.h. A custom function for multiplying glm::vec4s and cudaMat4s is provided as multiplyMV() in intersections.h.
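
The actual struct and multiplyMV() live in cudaMat4.h and intersections.h; a minimal sketch of the idea, assuming the matrix is stored as four row vectors, might look like:

```cpp
// sketch only; assumes cudaMat4 stores its rows x, y, z, w as glm::vec4s
struct cudaMat4 {
  glm::vec4 x, y, z, w;
};

// multiply a point/direction (as a vec4) by the matrix, returning a vec3;
// note the swizzled access (m.x rather than m[0]) described above
__host__ __device__ glm::vec3 multiplyMV(cudaMat4 m, glm::vec4 v) {
  return glm::vec3(glm::dot(m.x, v), glm::dot(m.y, v), glm::dot(m.z, v));
}
```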

-------------------------------------------------------------------------------
BLOG
-------------------------------------------------------------------------------
As mentioned in class, all students should have student blogs detailing progress on projects. If you already have a blog, you can use it; otherwise, please create a blog using www.blogger.com or any other tool, such as www.wordpress.org. Blog posts on your project are due on the SAME DAY as the project, and should include:

* A brief description of the project and the specific features you implemented.
* A link to your github repo if the code is open source.
* At least one screenshot of your project running.
* A 30 second or longer video of your project running. To create the video use http://www.microsoft.com/expression/products/Encoder4_Overview.aspx

-------------------------------------------------------------------------------
THIRD PARTY CODE POLICY
-------------------------------------------------------------------------------
* Use of any third-party code must be approved by asking on Piazza. If it is approved, all students are welcome to use it. Generally, we approve use of third-party code that is not a core part of the project. For example, for the ray tracer, we would approve using a third-party library for loading models, but would not approve copying and pasting a CUDA function for doing refraction.
* Third-party code must be credited in README.md.
* Using third-party code without its approval, including using another student's code, is an academic integrity violation, and will result in you receiving an F for the semester.

-------------------------------------------------------------------------------
SELF-GRADING
-------------------------------------------------------------------------------
* On the submission date, email your grade, on a scale of 0 to 100, to Karl, [email protected], with a one paragraph explanation. Be concise and realistic. Recall that we reserve 30 points as a sanity check to adjust your grade. Your actual grade will be (0.7 * your grade) + (0.3 * our grade). We hope to only use this in extreme cases when your grade does not realistically reflect your work - it is either too high or too low. In most cases, we plan to give you the exact grade you suggest.
* Projects are not weighted evenly, e.g., Project 0 doesn't count as much as the path tracer. We will determine the weighting at the end of the semester based on the size of each project.

-------------------------------------------------------------------------------
SUBMISSION
-------------------------------------------------------------------------------
As with the previous project, you should fork this project and work inside of your fork. Upon completion, commit your finished project back to your fork, and make a pull request to the master repository.
You should include a README.md file in the root directory detailing the following

* A brief description of the project and specific features you implemented
* At least one screenshot of your project running, and at least one screenshot of the final rendered output of your pathtracer
* Instructions for building and running your project if they differ from the base code
* A link to your blog post detailing the project
* A list of all third-party code used
I developed this project with Visual Studio 2010.
The solution file is located at "PROJ1_WIN/565Raytracer.sln".
You should be able to build it without modification.
Binary file added ScreenCapture_10-15-2012 2.58.42 PM.wmv
Binary file removed renders/GI_no.bmp
Binary file removed renders/GI_yes.bmp
52 changes: 44 additions & 8 deletions scenes/sampleScene.txt
@@ -1,7 +1,7 @@
MATERIAL 0 //white diffuse
RGB 1 1 1
SPECEX 0
SPECRGB 1 1 1
SPECRGB 0 0 0
REFL 0
REFR 0
REFRIOR 0
@@ -38,7 +38,7 @@ MATERIAL 3 //red glossy
RGB .63 .06 .04
SPECEX 0
SPECRGB 1 1 1
REFL 0
REFL 0.2
REFR 0
REFRIOR 2
SCATTER 0
@@ -47,10 +47,10 @@ RSCTCOEFF 0
EMITTANCE 0

MATERIAL 4 //white glossy
RGB 1 1 1
RGB 1 1 1
SPECEX 0
SPECRGB 1 1 1
REFL 0
SPECRGB 1 1 1
REFL 0.2
REFR 0
REFRIOR 2
SCATTER 0
@@ -74,7 +74,7 @@ MATERIAL 6 //green glossy
RGB .15 .48 .09
SPECEX 0
SPECRGB 1 1 1
REFL 0
REFL 0.2
REFR 0
REFRIOR 2.6
SCATTER 0
@@ -106,6 +106,18 @@ ABSCOEFF 0 0 0
RSCTCOEFF 0
EMITTANCE 15

MATERIAL 9 // perfect specular (mirror)
RGB 1 1 1
SPECEX 0
SPECRGB 1 1 1
REFL 1
REFR 0
REFRIOR 0
SCATTER 0
ABSCOEFF 0 0 0
RSCTCOEFF 0
EMITTANCE 0

CAMERA
RES 800 800
FOVY 25
@@ -182,7 +194,7 @@ SCALE .01 10 10

OBJECT 5
sphere
material 4
material 9
frame 0
TRANS 0 2 0
ROTAT 0 180 0
@@ -226,4 +238,28 @@ SCALE .3 3 3
frame 1
TRANS 0 10 0
ROTAT 0 0 90
SCALE .3 3 3
SCALE .3 3 3

OBJECT 9
cube
material 9
frame 0
TRANS -4.9 5 0
ROTAT 0 0 0
SCALE .01 5 5
frame 1
TRANS -4.9 5 0
ROTAT 0 0 0
SCALE .01 5 5

OBJECT 10
sphere
material 4
frame 0
TRANS -2 2 -2
ROTAT 0 0 0
SCALE 2 2 2
frame 1
TRANS -2 5 -2
ROTAT 0 180 0
SCALE 3 3 3
20 changes: 16 additions & 4 deletions src/interactions.h
@@ -7,6 +7,7 @@
#define INTERACTIONS_H

#include "intersections.h"
#include "glm/gtx/norm.hpp"

struct Fresnel {
float reflectionCoefficient;
@@ -44,9 +45,9 @@ __host__ __device__ glm::vec3 calculateTransmissionDirection(glm::vec3 normal, g
}

//TODO (OPTIONAL): IMPLEMENT THIS FUNCTION
__host__ __device__ glm::vec3 calculateReflectionDirection(glm::vec3 normal, glm::vec3 incident) {
__host__ __device__ glm::vec3 calculateReflectionDirection(glm::vec3 normal, glm::vec3 incident) {//TODO: inline?
//nothing fancy here
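// mirror reflection about the surface normal: R = I - 2 (N . I) N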
return glm::vec3(0,0,0);
return glm::normalize(incident - 2.0f * normal * glm::dot(normal, incident));
}

//TODO (OPTIONAL): IMPLEMENT THIS FUNCTION
@@ -90,7 +91,11 @@ __host__ __device__ glm::vec3 calculateRandomDirectionInHemisphere(glm::vec3 nor
//Now that you know how cosine weighted direction generation works, try implementing non-cosine (uniform) weighted random direction generation.
//This should be much easier than if you had to implement calculateRandomDirectionInHemisphere.
__host__ __device__ glm::vec3 getRandomDirectionInSphere(float xi1, float xi2) {
return glm::vec3(0,0,0);
// reference: Slide 7 in http://www.cs.sjsu.edu/~teoh/teaching/previous/cs116b_sp08/lectures/lecture16_raytracing.ppt
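// inverse-CDF sampling of the uniform sphere: the azimuth q is uniform on
// [0, 2*pi), while cos(f) must be uniform on [-1, 1] so that solid angle is
// covered evenly; hence f = acos(2*xi2 - 1) rather than f = pi*xi2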
float q = TWO_PI * xi1;
float f = acos(2.f*xi2 - 1);

return glm::normalize(glm::vec3(cos(q)*sin(f), sin(q)*sin(f), cos(f)));
}

//TODO (PARTIALLY OPTIONAL): IMPLEMENT THIS FUNCTION
@@ -102,5 +107,12 @@ __host__ __device__ int calculateBSDF(ray& r, glm::vec3 intersect, glm::vec3 nor
return 1;
};

// diffuse: 0, specular reflection 1
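// e.g. with this scene's glossy materials (REFL 0.2) and a uniform random
// number in [0,1), roughly one bounce in five takes the mirror direction
// and the rest sample the diffuse hemisphere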
__host__ __device__ inline int decideDiffOrSpec(float reflectivity, float randNumber)
{
if (randNumber <= reflectivity) return 1;
else return 0;
}

#endif

37 changes: 35 additions & 2 deletions src/intersections.h
@@ -159,7 +159,7 @@ __host__ __device__ float boxIntersectionTest(glm::vec3 boxMin, glm::vec3 boxMa



normal = multiplyMV(box.transform, glm::vec4(currentNormal,0.0));
normal = glm::normalize(multiplyMV(box.transform, glm::vec4(currentNormal,0.0)));
return glm::length(intersectionPoint-ro.origin);
}

@@ -285,4 +285,37 @@ __host__ __device__ glm::vec3 getRandomPointOnSphere(staticGeom sphere, float ra
return randPoint;
}

#endif
__host__ __device__ int findClosestIntersection(const staticGeom* geoms, int numOfGeoms, const ray& r,
glm::vec3* closestIntersection, glm::vec3* closestIntersectionNormal,
float* closestDistance) {
// returns the index of the closest geometry hit in front of the ray,
// or -1 if nothing is hit
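// typical call from the trace kernel (sketch; variable names are assumed):
//   glm::vec3 p, n; float d;
//   int hit = findClosestIntersection(geoms, numberOfGeoms, r, &p, &n, &d);
//   if (hit == -1) { /* ray left the scene */ }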
glm::vec3 intersectionPoint, normal;
float intersectionDistance;
*closestDistance = FLT_MAX;
int closestGeomInd = -1;

for (int i = 0; i < numOfGeoms; i++) { // for each object
if (geoms[i].type == SPHERE) {
intersectionDistance = sphereIntersectionTest(geoms[i], r, intersectionPoint, normal);
} else if (geoms[i].type == CUBE) {
intersectionDistance = boxIntersectionTest(geoms[i], r, intersectionPoint, normal);
} else { // unsupported object type
continue;
}

if (intersectionDistance < EPSILON) { // missed, or hit too close (self-intersection)
continue;
} else if (intersectionDistance < *closestDistance) { // found a closer hit
*closestDistance = intersectionDistance;
*closestIntersection = intersectionPoint;
*closestIntersectionNormal = normal;
closestGeomInd = i;
}
}

return closestGeomInd;
}


#endif