<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head><title>CS133 Speed(T)racer Final Report</title></head>

<body>
<h1>Speed(T)racer Project Writeup</h1>
<p>Team members: Gene Auyeung, Jeffrey Su, Daniel Quach, Albert Wang, Yajie (Jessica) Wang and David Woo</p>

<h2>Guide to Source Code</h2>

<h3>Source Code Compilation</h3>
<p>All of our source code compiles. To compile, type the command 'make' in the root directory. For more instructions and information, please refer to the README.txt document in the root directory.</p>

<h3>Source Code Completion</h3>
<p>We have completed 100% of the planned code work. As listed in our original proposal, we have completed work on the following features/sections:
<ul>
    <li>Refraction</li>
    <li>Planes</li>
    <li>Triangles</li>
    <li>Textures</li>
    <li>Anti-Aliasing</li>
    <li>Parallelization of code (OpenMP)</li>
</ul>
The only caveat is that textures are supported only on planes and spheres, not on triangles. This was due to unexpected delays in completing the triangle geometry code.</p>

<h3>Thread Code Location</h3>
<p>Since we are using OpenMP to parallelize our raytracer, the parallel pragmas are located on the following lines:
<ul>
    <li>raytrace.cpp: "#pragma omp parallel for schedule(dynamic)" line 83 </li>
    <li>raytrace.cpp: "#pragma omp parallel for schedule(dynamic)" line 100 </li>
</ul>
</p>



<!-- -->
<h2>Project breakdown</h2>

<h3>Why did you choose to parallelize the code in this way?</h3>
<p>Our team decided to parallelize the raytracer at the outermost loop level. The backward raytracing algorithm loops over each pixel of the viewport: a ray is shot from the camera through each pixel, and the color of whatever the ray hits is calculated, taking refraction, reflection, diffusion, etc. into account. The work for one pixel does not depend on the work for any other pixel, so if the work for each pixel is run in a thread, threads can run in parallel without having to wait for each other. We parallelize at the outermost level so that each thread has more work to do than if we parallelized an inner loop.</p>

<h3>Was synchronization needed (in the form of locks/mutexes, condition variables, critical sections, barriers, etc)? If so, where was it needed?</h3>

<p>To make the project more challenging, our team decided to implement anti-aliasing, which we achieved with supersampling. But instead of naively supersampling every pixel of the viewport, we optimized by supersampling only edges. This introduces a data dependency: a thread working on one pixel needs to know what object, if any, the neighboring rays hit.</p>

<p>To parallelize that solution, we chose to experiment with the simplest approach: raytracing in two passes. The first pass is normal raytracing without anti-aliasing, except that each ray records the object it first hits in a matrix that mirrors the screen. In the second pass, each pixel can then tell from that matrix whether supersampling is necessary. Essentially, this two-pass solution introduces a barrier that ensures all threads have finished the regular raytracing step before proceeding to refine the edges.</p>



<h3>Which sections of the source code did each team member work on?</h3>

<p>David Woo implemented raytracing for planes. He also cleaned up and fixed old bugs throughout the code base, since the raytracing code was originally his from the graphics course, and optimized here and there.</p>
<p>Gene Auyeung implemented refraction and fixed bugs here and there.</p>
<p>Albert Wang worked on triangles. He had trouble adapting example code from tutorials into our code base because he lacked experience with computer graphics. Gene Auyeung helped interpret the example code and adapt it. Albert Wang was able to partially implement triangles, but a bug made each triangle appear as an infinite plane. Yajie (Jessica) Wang fixed the bugs and completed triangles.</p>
<p>Yajie (Jessica) Wang implemented textures for planes and spheres.</p>
<p>Daniel Quach implemented anti-aliasing, initially with the help of Gene Auyeung and Jeffrey Su. A bug made anti-aliasing lopsided: only the edges on one side of an object were smoothed out, while the other side remained jagged. Yajie (Jessica) Wang found and fixed the bug.</p>
<p>Jeffrey Su parallelized the first and second passes of raytracing and fixed small bugs here and there. He also added a script for performance testing.</p>

<p>Mercurial commit log <a href="html_files/hg_log.txt">here</a>.</p>


<h3>What features of the implementation needed to be sacrificed? Which were completed and which were not?</h3>

<p>All features proposed in the project proposal were implemented, with the one exception noted above: texture mapping covers planes and spheres but not triangles.</p>


<!-- -->
<h3>What bugs did you run into, if any? Did your team get stuck on any part of the implementation?</h3>

<ul>
    <li>Not all group members had experience with raytracing concepts or computer graphics in general</li>
    <li>We used one member's solution code as the base, and most of us were not familiar with it</li>
    <li>The first implementation of supersampling was flawed: it supersampled points that were too far apart in world space. This was later fixed by using very small offsets between supersampled points</li>
    <li>Anti-aliasing was initially lopsided: only the rightmost or bottommost edge of each primitive was anti-aliased. This was later fixed by correcting the edge detection between primitives</li>
    <li>Not all group members had multi-core computers capable of multi-threading, necessitating the use of remote connections. This made both debugging and execution quite slow</li>
    <li>Tuning various parameters (refraction, transparency, etc.) to make a scene look realistic was difficult and done purely by trial and error; combined with the slow execution time, this caused large delays</li>
    <li>Comparisons between anti-aliased and aliased images were done subjectively (by inspection), as we had no algorithm to evaluate image quality, so testing was slow</li>
</ul>

<h3>What challenges existed in using this framework?</h3>
<p>OpenMP was straightforward to use. Originally Jeffrey Su used static scheduling for the OpenMP for pragma, but noticed that performance could be improved because the workload may differ from pixel to pixel. He changed it to dynamic scheduling and saw performance improvements.</p>

<h3>What have you learned about parallel programming from this project?</h3>
<p>Our team learned that certain frameworks make parallelizing code very easy, which lets developers focus their efforts on the application itself. Our Speed(T)racer project is an example of that: the team was able to add a significant number of features <em>and</em> achieve significant performance improvements.</p>


<!-- -->
<h2>Final Results</h2>
<p>Our parallelized raytracer outperformed the sequential version in all test cases. In the result graph below, the horizontal axis is the execution time of the sequential raytracer in seconds. The vertical axis is the speedup of the parallelized raytracer relative to the sequential one (sequential_time / parallel_time).</p>

<div><img src="html_files/performance_graph.png"></div>
<p>Graph: speedup versus sequential raytracer's execution time in seconds</p>

<p>For data points, see the Excel spreadsheet <a href="html_files/performance_sheet.xls">here</a>.</p>


<h3>Screenshots</h3>
<p>The following screenshots each show several features of our raytracer together; we did not create screenshots showing individual features. The captions highlight noteworthy features.</p>

<p><div><img src="html_files/presentation0.png"></div>
Notice that light is refracted through the glass sphere, and that the blue and red balls are reflected in the glass sphere.
</p>
<p><div><img src="html_files/presentation1.png"></div>
This shows a plane that has a texture mapped onto it. It is reflecting the red and blue spheres. The top right sphere has a different refractive index than the others, and is refracting the plane more than the other spheres.
</p>
<p><div><img src="html_files/presentation2.png"></div>
Two planes make the floor and ceiling, which are reflected by the elongated spheres.
</p>
<p><div><img src="html_files/presentation3.png"></div>
A specially made scene in the theme of Super Mario Bros.
</p>
<p><div><img src="html_files/presentation4.png"></div>
Textures mapped onto spheres.
</p>
<p><div><img src="html_files/testRandom10.png"></div>
Randomly generated scene with 10 of each supported shape: planes, spheres, and triangles.
</p>
<p><div><img src="html_files/testRandom20.png"></div>
Randomly generated scene with 20 of each supported shape.
</p>
<p><div><img src="html_files/testRandomClose10.png"></div>
Close-up of another randomly generated scene with 10 of each supported shape.
</p>
<p><div><img src="html_files/testRandomClose20.png"></div>
Close-up of another randomly generated scene with 20 of each supported shape.
</p>



</body>
</html>
