Two Minute Papers
https://www.youtube.com/watch?v=UcI-RnWzASk
TU Wien Rendering #30 - Dispersion and Spectral Rendering
Some materials, such as prisms, have a non-constant index of refraction, and therefore they reflect and refract different colors of light into different directions, creating rainbow-like effects. Yes, rainbows are also created by dispersion! This also requires a fully spectral renderer that traces rays in the continuum of the visible light frequency domain. PS: hope you also like The Dark Side of the Moon by Pink Floyd, yoohoo! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state of the art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Okay, so I don't think I started. I will show you some of the smallpaint contest assignments from previous years. Is this visible? Is this? Looking at this one, I think I should pull some of the curtains. Maybe I'll be honest with you. Just a thanks. How about that? So you can see that even with the small smallpaint program, you can make incredible, incredible scenes. So this is the thanks. So this is the state of the art lecture. Basically, what we are interested in, in this and the next lecture, is starting from the very first algorithm that was ever created to solve the rendering equation, the classical path tracer, up to the most sophisticated works, some of which came out less than a week ago. And I won't go into deep mathematical details for most of these techniques. What I would like you to know is the basic idea and intuition behind the method and why we are doing the things we do. The deep mathematical details will also be there in the form of links, where you can look behind the curtain and you will see what exactly is going on in there. Now, before we start with the state of the art part, there are a few things that we need to discuss. One: dispersion. So we have talked about indices of refraction. The index of refraction for different materials, what was it? It was a number. So in every code, in every program, in every theory, we use numbers. Well, in reality, indices of refraction are not numbers. They are, in fact, functions. What does this mean? They can be functions that depend on the wavelength of incoming light. And what it exactly means is that there are materials that refract incoming light of different colors into different directions. And that's quite profound, because you will see the beautiful effects of dispersion in a second. And there are also some supplementary videos that you should take a look at. This is a good example of it. This is a prism.
So you can see that the incoming light is white, and the prism breaks this white incoming light down into all the possible colors there are. Another good example of this is rainbows. So whenever you are on a family trip and they are asking what you're looking at, and if you accidentally don't say rainbows, you will maybe say dispersion. And they will put you in an asylum instead. But don't worry. You are correct, scientifically, and that's all that matters. You can also see a maybe not so beneficial and not so beautiful effect of dispersion. It is called chromatic aberration. This means that we have a camera lens that is possibly not of the highest quality, and it can introduce artifacts like this, because different colors of light are refracted into different directions. And you don't get the sharp image that you would be looking for. Now, this is dispersion rendered inside LuxRender. So you can see that with physically based rendering, you can actually capture this effect. And you can also render diamonds. So if you have a fiancée and you would like to buy a ring, but you are broke because you're a university student, then you can just render one. And you can also render one with dispersion. But it helps if you have a nerd girlfriend, because if so, then maybe she will be happy about it. And most people aren't. And I speak from experience. And you can also see this really beautiful effect on the old, old Pink Floyd album cover, The Dark Side of the Moon. There are also some videos about this on the internet rendered with LuxRender. Take a look. Now, the first question is: is the index of refraction of glass constant? Well, let's look it up. Obviously we may have glasses that are made and manufactured in different ways. Most of them are not completely clear. They are some kind of a mixture. So there are different kinds of glass. But let's just pick one randomly from a database that gives you indices of refraction.
And you can see that it is actually not flat. It is not a constant. There's something happening in the function. So this means that there are glass types that have dispersion effects, even if only slightly, because you can see that between the minimum and the maximum there is not such a large difference, but there is something. So you could say that at least this kind of glass introduces some degree of dispersion. So let's take a look. What do you think about this image? Does this caustic have any kind of dispersion effect or does it not? What do you think? Is it a bit more colorful around the edges or is it completely white? Looks exactly. Looks exactly. Looks a bit red. Could be. Give me one more opinion. What do you think? It's a little bit rough. It's a little bit more. It looks like a rainbow or something. It might be a rainbow, but it may be significantly smaller. So maybe you would have to zoom in really close to see the rainbow. So this is up for debate. We could see that the IOR seems to be non-constant, and therefore there should be a dispersion effect. Some artists claim that they can spot a difference between a physically based renderer, even for materials like that, and simple RGB rendering, where you cannot render these dispersion effects correctly. Whether you can see it is up for debate, but science says that yes, even if it is only a slight difference, there is a difference. If you'd like to know more about dispersion, there is this wonderful series called Cosmos: A Spacetime Odyssey. Have any of you heard of this before? Raise your hand. Okay. A few of you. So this is hosted by the magnificent Neil deGrasse Tyson. Yeah. Neil deGrasse Tyson. And you should absolutely watch it. So everyone who hasn't watched it yet, I'd like to hear your excuse, or at least I'd like to hear that you will go home and watch it. So there is an episode that is mostly about dispersion, and you will know all about dispersion if you watch it. Okay.
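To make the "index of refraction is a function of wavelength" point concrete, here is a small sketch using Cauchy's two-term equation, a common empirical dispersion model for glass. The coefficients below are the commonly quoted values for BK7 glass; treat them (and the function name) as illustrative, not as a calibrated material model from the lecture.

```cpp
#include <cmath>

// Cauchy's two-term approximation of a wavelength-dependent IOR:
//   n(lambda) = A + B / lambda^2,  with lambda in micrometers.
// A and B are commonly quoted values for BK7 glass (assumed here).
double cauchyIOR(double lambdaMicrometers) {
    const double A = 1.5046;
    const double B = 0.00420; // um^2
    return A + B / (lambdaMicrometers * lambdaMicrometers);
}
```

Blue light (around 0.4 um) sees a higher index than red (around 0.7 um), so it bends more at a prism face, which is exactly the dispersion effect shown on the slides.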
Now we have another question, because we have written an RGB renderer. So, if you look at the source code of smallpaint, everywhere you just see RGB, RGB, RGB. How do we write a correct physically based renderer? And even before that, how do we even represent light in the visible spectrum? Now, a good answer to this is to introduce a function that describes how much light is carried at different wavelengths. This would be a continuous function that we could call a spectral power distribution. And you can see that at these lower wavelengths there is not too much light carried; at the higher wavelengths, there is more. So you can put this representation into your renderer. And a naive solution for what you would do is to pick a randomly chosen wavelength, and you would trace it into the scene using this wavelength. And if you do this, you can actually do another kind of Monte Carlo integration, because you would also add one more dimension of integration, and this one more dimension would be over wavelengths. Because you would also be statistically taking random samples of the rendering equation for a given wavelength, in a given color. And then you would need to sum it up somehow to get a sensible solution. There is more about this in PBRT, Chapter 5.
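The naive spectral sampling idea above can be sketched as a one-dimensional Monte Carlo estimator over wavelength. All names here are illustrative (not from smallpaint), and a toy linearly increasing spectral power distribution stands in for a full spectral path tracer.

```cpp
#include <random>

// Visible range, in nanometers.
const double LAMBDA_MIN = 380.0, LAMBDA_MAX = 780.0;

// Toy SPD standing in for "trace the scene at this wavelength":
// radiance rises linearly with wavelength, like the plot in the lecture.
double radianceAt(double lambda) {
    return (lambda - LAMBDA_MIN) / (LAMBDA_MAX - LAMBDA_MIN);
}

// Pick wavelengths uniformly, weight each sample by 1/pdf, and average:
// a Monte Carlo estimate of the integral of radiance over wavelength.
double estimateSpectralRadiance(std::mt19937& rng, int samples) {
    std::uniform_real_distribution<double> u(LAMBDA_MIN, LAMBDA_MAX);
    const double pdf = 1.0 / (LAMBDA_MAX - LAMBDA_MIN); // uniform pdf
    double sum = 0.0;
    for (int i = 0; i < samples; ++i)
        sum += radianceAt(u(rng)) / pdf;
    return sum / samples;
}
```

For this toy SPD the true integral is 0.5 * 400 = 200, so the estimate should converge there as the sample count grows.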
[{"start": 0.0, "end": 3.4, "text": " Okay, so I don't think I started."}, {"start": 3.4, "end": 11.200000000000001, "text": " I will show you some of the small paint contest assignments from previous years."}, {"start": 11.200000000000001, "end": 13.700000000000001, "text": " Is this visible? Is this?"}, {"start": 13.700000000000001, "end": 16.7, "text": " Looking to the one, I think I should pull some of the curtains."}, {"start": 16.7, "end": 20.7, "text": " Maybe I'll be honest with you."}, {"start": 20.7, "end": 22.7, "text": " Just a thanks."}, {"start": 22.7, "end": 27.7, "text": " How about that?"}, {"start": 27.7, "end": 36.019999999999996, "text": " So you can see that even with the small paint program, you can make incredible, incredible"}, {"start": 36.019999999999996, "end": 40.0, "text": " scenes."}, {"start": 40.0, "end": 41.84, "text": " So this is the thanks."}, {"start": 41.84, "end": 44.099999999999994, "text": " So this is the state of the art lecture."}, {"start": 44.099999999999994, "end": 50.06, "text": " Basically what we are interested in, this and the next lecture, is starting from the very"}, {"start": 50.06, "end": 54.9, "text": " first algorithm that was ever created to solve the rendering equation, a classical path"}, {"start": 54.9, "end": 62.78, "text": " eraser, up to the most sophisticated works, some of them, which came out less than a week"}, {"start": 62.78, "end": 64.46, "text": " ago."}, {"start": 64.46, "end": 68.58, "text": " And I won't go into deep mathematical details for most of these techniques."}, {"start": 68.58, "end": 73.58, "text": " What I would like you to know is the basic idea and intuition behind the method and why"}, {"start": 73.58, "end": 75.3, "text": " we are doing the things we do."}, {"start": 75.3, "end": 83.46000000000001, "text": " So deep mathematical details will also be there in the form of links where you can look"}, {"start": 83.46, "end": 90.33999999999999, "text": " behind the curtain and you 
will see what is exactly going on in there."}, {"start": 90.33999999999999, "end": 94.58, "text": " Now before we start with the state of the art part, there's a few things that we need"}, {"start": 94.58, "end": 95.58, "text": " to discuss."}, {"start": 95.58, "end": 98.58, "text": " One, this person."}, {"start": 98.58, "end": 101.78, "text": " So we have talked about indices of refraction."}, {"start": 101.78, "end": 105.58, "text": " The index of refraction for different materials, what was it?"}, {"start": 105.58, "end": 108.53999999999999, "text": " It was a number."}, {"start": 108.54, "end": 113.78, "text": " So in every code, in every program, in every theory, we use numbers."}, {"start": 113.78, "end": 117.58000000000001, "text": " Well, in reality, indices of refractions are not numbers."}, {"start": 117.58000000000001, "end": 124.22, "text": " They are, in fact, functions."}, {"start": 124.22, "end": 125.7, "text": " What does it mean?"}, {"start": 125.7, "end": 129.82, "text": " They could be functions that depend on the wavelength of incoming light."}, {"start": 129.82, "end": 136.78, "text": " And what it exactly means is that there are materials that refract incoming light of different"}, {"start": 136.78, "end": 140.7, "text": " colors took different directions."}, {"start": 140.7, "end": 148.06, "text": " And that's quite profound because you will see the beautiful effects of dispersion in"}, {"start": 148.06, "end": 149.06, "text": " a second."}, {"start": 149.06, "end": 153.22, "text": " And there are also some supplementary videos that you should take a look at."}, {"start": 153.22, "end": 154.46, "text": " This is a good example of it."}, {"start": 154.46, "end": 155.46, "text": " This is a prism."}, {"start": 155.46, "end": 163.34, "text": " So you can see that the incoming light is white and the prism does break down to this"}, {"start": 163.34, "end": 169.3, "text": " white incoming light to all possible colors there are."}, {"start": 
169.3, "end": 172.1, "text": " Another good example of this is rainbows."}, {"start": 172.1, "end": 176.7, "text": " So whenever you are on a family trip and they are asking what you're looking at, and if"}, {"start": 176.7, "end": 182.62, "text": " you accidentally don't say rainbows, you will maybe say dispersion."}, {"start": 182.62, "end": 185.78, "text": " And they will put you in an asylum instead."}, {"start": 185.78, "end": 187.02, "text": " But don't worry."}, {"start": 187.02, "end": 191.34, "text": " You are correct, scientifically, and that's all that matters."}, {"start": 191.34, "end": 195.9, "text": " You can also see, and maybe not so beneficial and not so beautiful effect of dispersion."}, {"start": 195.9, "end": 198.38, "text": " It is called chromatic aberration."}, {"start": 198.38, "end": 205.86, "text": " This means that we have a camera lens that is possibly not the highest quality, and it"}, {"start": 205.86, "end": 212.9, "text": " can introduce artifacts like this because different colors of light are reflected into different"}, {"start": 212.9, "end": 213.9, "text": " directions."}, {"start": 213.9, "end": 218.3, "text": " And you don't get the sharp image that you would be looking for."}, {"start": 218.3, "end": 224.38000000000002, "text": " Now this is dispersion rendered inside LuxRender."}, {"start": 224.38000000000002, "end": 229.38000000000002, "text": " So you can see that with physically based rendering, you can actually capture this effect."}, {"start": 229.38000000000002, "end": 231.34, "text": " And you can also render diamonds."}, {"start": 231.34, "end": 238.38000000000002, "text": " So if you have a fiance and you would like to buy a ring, but you are broke because"}, {"start": 238.38000000000002, "end": 242.58, "text": " you're a university student, then you can just render one."}, {"start": 242.58, "end": 245.10000000000002, "text": " And you can also render one with dispersion."}, {"start": 245.1, "end": 251.06, "text": " 
But see if you have a nerd girlfriend because if so, then maybe she will be happy about it."}, {"start": 251.06, "end": 253.26, "text": " And most people aren't."}, {"start": 253.26, "end": 256.34, "text": " And I speak from experience."}, {"start": 256.34, "end": 263.34, "text": " And you can also see this really beautiful effect in the old, old pink-cold album cover"}, {"start": 263.34, "end": 267.18, "text": " called The Dark Side of the Moon."}, {"start": 267.18, "end": 271.78, "text": " There are also some videos about this in the internet rendered with LuxRender."}, {"start": 271.78, "end": 273.26, "text": " Take a look."}, {"start": 273.26, "end": 279.3, "text": " Now, the first question is, is the index of refraction of glass constant?"}, {"start": 279.3, "end": 281.74, "text": " Well, let's look it up."}, {"start": 281.74, "end": 288.09999999999997, "text": " Obviously we may have glasses that are made and manufactured in different ways."}, {"start": 288.09999999999997, "end": 291.98, "text": " There are most of them are not completely clear."}, {"start": 291.98, "end": 295.26, "text": " They are some kinds of a mixture."}, {"start": 295.26, "end": 297.18, "text": " So there are different kinds of glass."}, {"start": 297.18, "end": 303.14, "text": " But let's just pick one randomly from a database that gives you indices of refraction."}, {"start": 303.14, "end": 307.21999999999997, "text": " And you can see that it is actually not flat."}, {"start": 307.21999999999997, "end": 309.58, "text": " It is not a constant."}, {"start": 309.58, "end": 311.65999999999997, "text": " There's something happening in the function."}, {"start": 311.65999999999997, "end": 317.21999999999997, "text": " So this means that there are glass types that have dispersion effects."}, {"start": 317.21999999999997, "end": 322.78, "text": " And even only slightly because you can see that between the minimum and the maximum there"}, {"start": 322.78, "end": 326.09999999999997, "text": " 
is not such a large difference, but there is something."}, {"start": 326.09999999999997, "end": 333.06, "text": " So you could say that at least this kind of glass introduces some degree of dispersion."}, {"start": 333.06, "end": 336.46, "text": " So let's take a look."}, {"start": 336.46, "end": 339.22, "text": " What do you think about this image?"}, {"start": 339.22, "end": 343.98, "text": " Does this caustic have any kind of dispersion effect or does it not?"}, {"start": 343.98, "end": 348.18, "text": " What do you think?"}, {"start": 348.18, "end": 352.86, "text": " Is it a bit more colorful around the edges or is it completely white?"}, {"start": 352.86, "end": 354.86, "text": " Looks exactly."}, {"start": 354.86, "end": 355.86, "text": " Looks exactly."}, {"start": 355.86, "end": 356.86, "text": " Looks a bit red."}, {"start": 356.86, "end": 363.46000000000004, "text": " Could be."}, {"start": 363.46000000000004, "end": 366.62, "text": " Give me one more opinion."}, {"start": 366.62, "end": 369.62, "text": " What do you think?"}, {"start": 369.62, "end": 372.18, "text": " It's a little bit rough."}, {"start": 372.18, "end": 375.18, "text": " It's a little bit more."}, {"start": 375.18, "end": 378.42, "text": " It looks like a rainbow or something."}, {"start": 378.42, "end": 382.26, "text": " It might be a rainbow, but it may be significantly smaller."}, {"start": 382.26, "end": 386.26, "text": " So maybe you would have to zoom in really close to Cedar Rainbow."}, {"start": 386.26, "end": 388.02, "text": " So this is up for debate."}, {"start": 388.02, "end": 392.62, "text": " We could see that the IOR seems to be non-constant."}, {"start": 392.62, "end": 394.94, "text": " And therefore there should be a dispersion effect."}, {"start": 394.94, "end": 400.9, "text": " Some artists claim that they can spot a difference between a physically based renderer, even for"}, {"start": 400.9, "end": 407.94, "text": " materials like that, and simple RGB rendering, where 
you cannot render these dispersion effects"}, {"start": 407.94, "end": 409.53999999999996, "text": " correctly."}, {"start": 409.53999999999996, "end": 414.53999999999996, "text": " This is up for debate, whether you can see it, but science says that yes, there is, even"}, {"start": 414.54, "end": 418.74, "text": " if there is a slight difference, there is a difference."}, {"start": 418.74, "end": 424.98, "text": " If you'd like to know more about this person, there is this wonderful series called Cosmos,"}, {"start": 424.98, "end": 429.66, "text": " a space time on the C. Have any of you heard of this before?"}, {"start": 429.66, "end": 431.06, "text": " Raise your hand."}, {"start": 431.06, "end": 432.06, "text": " Okay."}, {"start": 432.06, "end": 433.06, "text": " Few of you."}, {"start": 433.06, "end": 439.22, "text": " So this is hosted by the magnificent Nile de Grasse Tyson."}, {"start": 439.22, "end": 440.22, "text": " Yeah."}, {"start": 440.22, "end": 442.90000000000003, "text": " Nile de Grasse Tyson."}, {"start": 442.9, "end": 444.46, "text": " And you should absolutely watch it."}, {"start": 444.46, "end": 450.26, "text": " So everyone who hasn't watched it yet, I'd like to hear your excuse, or at least I'd"}, {"start": 450.26, "end": 453.5, "text": " like to hear that you will go home and watch it."}, {"start": 453.5, "end": 459.9, "text": " So this episode is about that dispersion mostly, and you will know all about dispersion if"}, {"start": 459.9, "end": 461.38, "text": " you watch it."}, {"start": 461.38, "end": 462.38, "text": " Okay."}, {"start": 462.38, "end": 467.38, "text": " Now we have another question, because we have written an RGB renderer."}, {"start": 467.38, "end": 474.02, "text": " So we, if you look at the source code of small paint, everywhere you just see RGB, RGB,"}, {"start": 474.02, "end": 475.02, "text": " RGB."}, {"start": 475.02, "end": 479.5, "text": " How do we write a correct physically based renderer?"}, {"start": 479.5, 
"end": 486.7, "text": " And even before that, how do we even represent light in the visible spectrum?"}, {"start": 486.7, "end": 493.21999999999997, "text": " Now a good answer to this is to introduce a function that describes how much light is"}, {"start": 493.21999999999997, "end": 496.02, "text": " carried at different wavelengths."}, {"start": 496.02, "end": 502.18, "text": " Now this would be a continuous function that we could call spectral power distribution."}, {"start": 502.18, "end": 508.82, "text": " And you can see that at these lower wavelengths, there is not too much light carried on the"}, {"start": 508.82, "end": 510.82, "text": " higher wavelengths, there is more."}, {"start": 510.82, "end": 515.34, "text": " So you can put this representation into your renderer."}, {"start": 515.34, "end": 520.38, "text": " And what you would do is that you would just a naive solution, you would pick a randomly"}, {"start": 520.38, "end": 522.34, "text": " chosen wavelength."}, {"start": 522.34, "end": 527.62, "text": " And you would trace it into the scene using this wavelength."}, {"start": 527.62, "end": 534.7800000000001, "text": " And if you do this, you can actually do another kind of Monte Carlo integration, because"}, {"start": 534.7800000000001, "end": 539.4200000000001, "text": " you would also add one more dimension of integration, and this one more dimension would be over"}, {"start": 539.4200000000001, "end": 542.58, "text": " wavelengths."}, {"start": 542.58, "end": 546.82, "text": " Because you would also be statistically taking random samples of the rendering equation"}, {"start": 546.82, "end": 549.7800000000001, "text": " for a given wavelength in a given color."}, {"start": 549.78, "end": 553.3, "text": " And then you would need to sum it up somehow to get a sensible solution."}, {"start": 553.3, "end": 582.8599999999999, "text": " There is more about this in PBRT, Chapter 5."}]
Two Minute Papers
https://www.youtube.com/watch?v=cDi-uti2oLQ
TU Wien Rendering #29 - Path Tracing Implementation & Code Walkthrough
Now that we know how path tracing works, we put into code close to everything we've learned so far and will now implement a full global illumination path tracer from scratch in just 250 lines of C++ code. Imagine that all this knowledge we've amassed can be compressed into such a small program! The full implementation can be downloaded here: http://cg.tuwien.ac.at/~zsolnai/gfx/smallpaint/ About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state of the art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
What a wonderful day we have today, and what a wonderful time it is to write a path tracer. So why don't we get started? What we are going to be looking at is a program called smallpaint, which is a small path tracer in effectively 250 lines of code, and it contains everything that we have learned during this course. We're going to be able to compute soft shadows, anti-aliasing, Monte Carlo integration and even quasi-Monte Carlo sampling, which basically means low-discrepancy sampling. This version of the program is going to be able to compute refraction, color bleeding and caustics, and in the end the binary you compile from the code can be compressed into less than 4 kilobytes. This is how the end result looks: it has a beautiful painterly look, which actually comes from a bug, and you can also see that the light source up there, the whitish-looking sphere, is, you could say, perfectly anti-aliased. In order to achieve this with a recursive ray tracer and no global illumination algorithm, you would need to compute the very same image at a much larger resolution and then scale it down to a smaller image like this. This anti-aliasing effect you get for free if you compute Monte Carlo path tracing. The question is how exactly this is done, and now is the best time to put everything into use that we have learned so far. Let's get started. We have a completely standard vector class. It is a three-dimensional vector. It has its own constructor, the classical operators that you would expect, and we also have a dot product for the vector and a cross product for the vector. It is also obviously possible to compute the length of this vector, so nothing too exciting or important here, but we definitely need to build on a solid vector class.
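The vector class just described can be sketched like this. This is a minimal illustration in the spirit of smallpaint's vector class, not the original source: a constructor, the usual operators, dot and cross products, and a length.

```cpp
#include <cmath>

// Minimal 3D vector: constructor, arithmetic, dot, cross, length.
struct Vec {
    double x, y, z;
    Vec(double x0 = 0, double y0 = 0, double z0 = 0) : x(x0), y(y0), z(z0) {}
    Vec operator+(const Vec& b) const { return Vec(x + b.x, y + b.y, z + b.z); }
    Vec operator-(const Vec& b) const { return Vec(x - b.x, y - b.y, z - b.z); }
    Vec operator*(double s) const { return Vec(x * s, y * s, z * s); }
    double dot(const Vec& b) const { return x * b.x + y * b.y + z * b.z; }
    Vec cross(const Vec& b) const {
        return Vec(y * b.z - z * b.y, z * b.x - x * b.z, x * b.y - y * b.x);
    }
    double length() const { return std::sqrt(dot(*this)); }
    Vec normalized() const { double l = length(); return Vec(x / l, y / l, z / l); }
};
```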
Now, the representation of a ray: a ray has an origin and a direction, and if you take a close look at the constructor, you can see that when you initialize a ray with a direction, then this direction is normalized. The reason is that when we compute the dot products between these vectors, most of this information needs to be directional information, so we are not interested in the magnitude of the vector, only in the direction of the vector. A good way to get rid of problems where you would initialize your ray with directions that are not vectors of unit length is to normalize this input in the constructor, so you will never have headaches about incorrect results where you have no idea what is really happening. What is the representation of an object? Well, an object has a color. This is denoted by cl. This is admittedly very sloppy notation, and you can imagine this as the albedo, but it is not a double, so it's not a number; it is actually a vector. The reason for this is the fact that we need to define, through the albedo, how much light at different wavelengths is being reflected and how much is being absorbed by the given object. Now, objects may also have an emission; if they have some non-zero emission, then they are light sources. And by type we have an integer that specifies what kind of BRDF we have. It is also important to have an intersection routine and some other function that can compute the normal of the object. Now, these are of course virtual functions; we don't define them for an abstract object, but they have to be implemented in the classes that inherit from the object. Let's take a look at the sphere. So a sphere has this c and r: c is the center of the object and r is the radius. The constructor is trivial. We have the intersection function.
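The normalize-in-the-constructor idea can be shown in a few lines. This is a sketch, not the original smallpaint code; a tiny Vec is included only to make the example self-contained.

```cpp
#include <cmath>

// Minimal vector with just what the ray needs.
struct Vec {
    double x, y, z;
    Vec(double x0 = 0, double y0 = 0, double z0 = 0) : x(x0), y(y0), z(z0) {}
    double dot(const Vec& b) const { return x * b.x + y * b.y + z * b.z; }
    Vec normalized() const {
        double l = std::sqrt(dot(*this));
        return Vec(x / l, y / l, z / l);
    }
};

// No matter what length the caller passes in, d is stored with unit length,
// so every dot product downstream carries pure directional information.
struct Ray {
    Vec o, d; // origin and (always unit-length) direction
    Ray(const Vec& o0, const Vec& d0) : o(o0), d(d0.normalized()) {}
};
```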
Now, hopefully you remember all three of the coefficients of the quadratic equation that we need to solve, but if you take a good look, then you will see that a is missing from here, and the question is: why is that? The answer is that a is d dot d, the direction vector of the ray multiplied with itself, and since this vector is normalized, so it is of length 1, this a will always be 1. After that, we can deal with the discriminant. The discriminant is normally b squared minus 4ac; remember, a is 1 here, so it's omitted. It normally sits behind a square root, and this square root is completely omitted here, which looks like a mistake, but it is not. It is essentially an optimization step, because you will see that we are testing whether the discriminant is less than 0. If it's less than 0, then we don't need to deal with this equation, because the solutions for the quadratic equation are going to exist in the plane of complex numbers, and that's not a valid t; it's not a valid distance at which we would intersect the sphere. If this is not happening, then we can compute the square root of the discriminant. Why only after that? Because if the discriminant is bigger than 0, then taking the square root is not going to change that. So what we can essentially do is postpone the square root calculation until after the test. Note that square roots are really, really expensive, so we can save lots of computational time if we omit this calculation when we can. There is a really nice square root hack in the source code of Quake 3, which is, by the way, open source. Take a look at how people are trying to hack together functions in order to work better and faster than they should, because square roots are super expensive and there are some really interesting hacks in order to speed them up. We have the plus and minus term, and the division by 2a is again postponed. And that's also another interesting question. Why is this postponed?
So you can see that sol2 is divided by 2, and sol1 is also divided by 2, but only after the test. So it is possible that if solution 2 is bigger than epsilon, then we take the first expression after the question mark, but if it's not, then we look at the second expression after it, which is another test, and if the answer is no for that as well, then we return 0. This would mean that we don't have any hits, or the hits are behind us, and we are not interested in intersections that are behind our ray. There is a possibility that we encounter this, and in this case I don't want to waste my time by dividing these solutions by 2, because I'm not going to use them. Why am I splitting hairs here? That's an important question. Why do we need to optimize so much? Because if you grab a profiler, a program that is able to show you which functions you are spending most of your time in, then this profiler would show you that more than 90% of the execution time is spent in the intersection routines. So you have to have a really well optimized intersection routine. Some programs have even replaced these expressions with assembly in order to speed them up further. So how do we compute the normal of a sphere? Well, very simple. What we have here is p minus c. Now, what does it mean? If I have a minus b, then this means a vector that points from b to a. To see what this means, look at the figure here. If I had a circle, then this would mean a vector pointing from the center towards the given point on the sphere. Now, this is also divided by r, because you could imagine you have a sphere that is not of unit radius, and if it's not of unit radius, then this normal vector would not be of unit length. You could compute a normalization; we have a normalized function in our vector implementation, but it also has a square root in it, so it would be much better to have something that's faster.
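The optimized intersection routine described above can be sketched like this. This is an illustration, not the original smallpaint source: since the ray direction is unit length, a = d dot d = 1 and drops out of the quadratic, and both the square root and the division by 2 are postponed until the tests show they are actually needed. A minimal Vec and Ray are included for self-containment.

```cpp
#include <cmath>

struct Vec {
    double x, y, z;
    Vec(double x0 = 0, double y0 = 0, double z0 = 0) : x(x0), y(y0), z(z0) {}
    Vec operator-(const Vec& b) const { return Vec(x - b.x, y - b.y, z - b.z); }
    Vec operator*(double s) const { return Vec(x * s, y * s, z * s); }
    double dot(const Vec& b) const { return x * b.x + y * b.y + z * b.z; }
};
struct Ray { Vec o, d; }; // d is assumed to be normalized

struct Sphere {
    Vec c;     // center
    double r;  // radius
    // Closest positive hit distance along the ray, or 0 for a miss.
    double intersect(const Ray& ray) const {
        const double eps = 1e-6;
        Vec oc = ray.o - c;
        double b = 2.0 * oc.dot(ray.d);
        double disc = b * b - 4.0 * (oc.dot(oc) - r * r); // a == 1, omitted
        if (disc < 0) return 0;          // complex roots: no intersection
        disc = std::sqrt(disc);          // sqrt postponed until after the test
        double sol1 = -b - disc, sol2 = -b + disc;
        if (sol1 > eps) return sol1 / 2; // division by 2a postponed as well
        if (sol2 > eps) return sol2 / 2; // near root behind us: take far one
        return 0;                        // both hits behind the ray origin
    }
    // (p - c) / r: a unit normal without calling a sqrt-based normalize.
    Vec normal(const Vec& p) const { return (p - c) * (1.0 / r); }
};
```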
Well, if you just divide by the radius of the sphere, then you immediately get a vector of unit length. So in the end we get the correct result by simply dividing by r. Excellent. Now, we have a perspective camera here. Hopefully you remember it from the first lecture; we are basically just copy-pasting the expression here — we have derived it rigorously and analyzed how exactly it works. The simple intuition is that we have an x and a y as input, which basically means "give me the pixel with this displacement", and what it gives back is the world space coordinates of that pixel. Uniform sampling of a hemisphere — what is this for? If we encounter a diffuse object, what we would like to do is send a ray out on the hemisphere above this point, and we need to sample this hemisphere uniformly. Remember that the diffuse BRDF is one over pi, or rho over pi if you take the albedo into consideration, and you need some transformations in order to do this sampling — there is a reading behind this link. How it works, essentially, is drawing uniform samples on a plane, which is simple, and then projecting them onto the hemisphere; that's basically all there is to it. What about the trace function? As you can see here in the first line, this code says that there is a maximum depth. Now, clamping after a maximum depth value is not really optimal, because whatever number you put in there, the higher-order bounces are going to be completely omitted. The real solution is Russian roulette path termination, which we fortunately also have: after a depth of some arbitrary number, like five, you start the Russian roulette routine, which basically says that there is a probability of stopping the light path right there; we generate a random number and compare it to this probability. If we don't hit this probability, then we continue our path, but we multiply the contribution of this light path by the factor that we have specified in one of the previous lectures.
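The hemisphere sampling just mentioned can be sketched like this. This is a common construction rather than a quote of the SmallPaint source: pick the height z uniformly in [0, 1) and an angle uniformly in [0, 2π), then place the point on the unit sphere at that height. Because the area of a spherical slice depends only on its height (Archimedes' hat-box observation), a uniform z gives directions that are uniform over the hemisphere.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Uniform sample on the upper unit hemisphere (z is "up" here).
// u1, u2 are two uniform random numbers in [0, 1).
Vec3 uniformHemisphere(double u1, double u2) {
    const double pi = 3.14159265358979323846;
    const double r = std::sqrt(1.0 - u1 * u1);  // radius of the slice at height u1
    const double phi = 2.0 * pi * u2;
    return { std::cos(phi) * r, std::sin(phi) * r, u1 };
}
```

Whatever u1 and u2 you feed in, the result should be a unit-length vector with a non-negative z component.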
So this was implemented by Christian Mahacek — many thanks. And you can see that later we are going to use this RR factor to multiply the contribution of a ray. Now, what about the intersection routine? This is definitely not the best way to do it, but it is certainly the easiest. We specify this T, which is going to be the intersection distance: how far we are from the start of the ray when the intersection happens. ID is basically which object we hit. Then we iterate through all of the objects in the scene and call their intersection routines. Each returns a T — how far that intersection is — and what I am interested in is the smallest such number, because that means the closest intersection; it also has to be larger than epsilon, because if I tolerated zero, then self-intersections would be accepted, and every single object that I am standing on would be its own first intersection. I'm not interested in that: I know where I am, I just want to know where I am continuing my path. If there is no intersection, then we return — there is zero contribution. Where is the intersection in world space? We denote this by HP, which means hit point: we start from the origin of the ray and travel along its direction by this amount T. So this is where we ended up, and the origin of the new ray is going to be this hit point. What is the normal going to be? Well, we just grab the object that we intersected and take its normal with the given function. What is the returned radiance? We simply add the emission term — this is the emission on all three wavelengths. (There is a magic multiplier there; disregard that.) Then we continue evaluating the second part of the rendering equation: the first part is the emission, and the second is the reflected amount of light. Let's continue with the inside of the trace function.
If we hit an object of type one, then this is a diffuse BRDF. The next functions compute the next random numbers for the low-discrepancy Halton sampler, and the direction is going to be a completely uniform random sample on the hemisphere of this object. What we have here is this N plus the hemisphere function — this is intuition, not exactly what is happening; I have shortened the code slightly to simplify what is going on here. The code that you will download has the real deal in there. Then we compute the cosine term, very straightforward, and in TMP we instantiate a new vector that is going to hold the result of the recursion. So the radiance of the subsequent samples that we shoot out on the hemisphere is going to be accumulated in this TMP. Now is the time for recursion: we pass the ray and the scene to the trace function. The ray is actually not the current one, it's the new one — we set up the new hit point and the new direction, and this is what we pass to the trace function. We increment the depth variable, because we have computed one more bounce. TMP is the variable where we gather all this radiance, and we pass every other parameter that is needed to compute one more bounce. Now, the color is going to contain the cosine term and all the radiance collected from the recursion, and we multiply it by CL.xyz, which is basically the BRDF. So this is the right side of the rendering equation for a diffuse BRDF. It is multiplied by 0.1, which is just a magic constant. Now, what about a specular BRDF — what if we hit a mirror? Very simple. We compute the perfect reflection direction — you can see the ray dot D — and we again set up the variable to collect the radiance; we are not doing anything else, we just add the radiance as it gets reflected off of this mirror.
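The diffuse branch above — BRDF times incoming radiance times the cosine term — is a one-sample Monte Carlo estimator, and its textbook weighting can be written out explicitly. This is a sketch of that weighting, not the SmallPaint code itself (which, as noted, folds constants into a hand-tuned 0.1 multiplier): the Lambertian BRDF is ρ/π, the pdf of a uniform hemisphere direction is 1/(2π), and dividing by the pdf makes the π cancel.

```cpp
#include <cassert>
#include <cmath>

// One-sample estimate of reflected radiance for a diffuse surface with
// uniform hemisphere sampling:
//   Lo ~= brdf * Li * cos(theta) / pdf = 2 * albedo * Li * cos(theta)
double diffuseSample(double albedo, double incomingRadiance, double cosTheta) {
    const double pi = 3.14159265358979323846;
    const double brdf = albedo / pi;       // Lambertian BRDF, rho / pi
    const double pdf = 1.0 / (2.0 * pi);   // uniform hemisphere pdf
    return brdf * incomingRadiance * cosTheta / pdf;
}
```

For an albedo of 0.5, unit incoming radiance, and a sample straight along the normal (cos = 1), the estimate is exactly 1.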
Then we compute the subsequent bounces, and this is going to be stored in TMP — this is what we add to the radiance. What about a refractive material? Well, we have every bit of knowledge that we need for this, because this is essentially the vector version of Snell's law. What does that mean? The original Snell's law that we have computed is in 1D, so it only gives you one angle. But in 3D you are interested in angles in two different dimensions; this is nothing but the extension of the very same law into a higher dimension. Now, where is this implemented exactly? You can see the cosine of theta 2. Note that n1 and n2 are not handled separately, because one of these media is always going to be air, and therefore one of the indices of refraction is always going to be 1. The rest is just copy-paste. And again you can see that the square root is missing: we postpone it until after the test on cosine theta 2, because if that is not larger than 0, then we are not going to need this variable at all. Therefore we can, once again, postpone the square root until after the test. What about the direction of the outgoing ray? Well, this is just copy-paste from the formula that we have derived before — as simple as that. Obviously we again need the recursion, because if we go inside a glass sphere, then after computing the refraction we are going to be inside the sphere. What does that mean? One thing is that we have to invert the normal, because we are inside, so the normals are flipped. And again we call the trace function — the recursion — because we are also interested in the higher-order bounces. Onwards to Fresnel's law: what is the probability of reflection and refraction when rays bounce off refractive surfaces at different angles? This was implemented by Christian Hathner — a big thanks to him. It is very simple; you can see that it is exactly the same as what we have learned in the mathematics. So this is the R0 term.
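The vector form of Snell's law, with the postponed square root, can be sketched like this. The `V3` type and the `refract` signature are illustrative; the structure follows the transcript: compute cos²θ₂ first, and only take the square root if the test passes — when cos²θ₂ is not positive there is no refracted ray (total internal reflection) and the sqrt is never needed.

```cpp
#include <cassert>
#include <cmath>

struct V3 {
    double x, y, z;
    V3 operator*(double s) const { return {x * s, y * s, z * s}; }
    V3 operator+(const V3& b) const { return {x + b.x, y + b.y, z + b.z}; }
    double dot(const V3& b) const { return x * b.x + y * b.y + z * b.z; }
};

// Vector Snell's law: t = eta*d + (eta*cos1 - cos2)*n, with eta = n1/n2.
// d and n are assumed to be unit vectors, d pointing into the surface.
// Returns false on total internal reflection.
bool refract(const V3& d, const V3& n, double n1, double n2, V3& out) {
    const double eta = n1 / n2;
    const double cos1 = -n.dot(d);
    double cos2sq = 1.0 - eta * eta * (1.0 - cos1 * cos1);
    if (cos2sq <= 0) return false;          // total internal reflection
    const double cos2 = std::sqrt(cos2sq);  // sqrt postponed until after the test
    out = d * eta + n * (eta * cos1 - cos2);
    return true;
}
```

A ray hitting the surface head-on passes through undeflected, while a 45-degree ray trying to leave glass (n = 1.5) into air is totally internally reflected.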
This is the probability of reflection at normal incidence, and we are interested in the square of that term. Note that you don't see n1 and n2 here: this is because one of them is always going to be air or vacuum, so it has an index of refraction of one. Now, what about the final probability of reflection? It also comes from the formula — we have every bit of information we need, so we just put in this term with the cosine attenuation. How does the main function look? Well, we have some wizardry with C++11 lambda functions, but basically this is just a shortcut to be able to add a new sphere or a new plane to the scene in one line of code. Spheres are given by their radius, position, color — by color we obviously mean albedo — emission, and type. Type means what kind of BRDF we have: a diffuse, a specular, or a refractive BRDF. For planes we have position, normal, color, emission, and obviously type, so again what kind of material we have. So using just one line of code you can add a new object and specify every piece of information it needs. We also add the light source, specify the index of refraction for the refractive BRDFs, and specify how many samples per pixel we would like to compute. Onwards to the main loop: we have two for loops that iterate through the width and the height of the image plane. The vector C stands for color — again, very sloppy; what it actually means is the radiance that we compute. We instantiate a ray. What is going to be the origin of the ray? It is (0, 0, 0): this is where the camera is placed. What is going to be the direction of the ray? Well, we connect this origin to the camera plane: we specify which pixel we are computing with i and j, and then we add this weird random number to it. What this means is actually filtering. In recursive ray tracing, what you would do is send the ray only through the midpoint of a pixel, and that's it.
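The reflection probability described here is commonly written as Schlick's approximation of the Fresnel equations, which is what the sketch below assumes: R0 is the reflectance at normal incidence, and the "cosine attenuation" is the (1 − cosθ)⁵ falloff. Since one medium is always air or vacuum (index 1), only the other index appears.

```cpp
#include <cassert>
#include <cmath>

// Schlick's approximation of the Fresnel reflection probability.
// n is the index of refraction of the non-air medium; cosTheta is the
// cosine of the angle between the ray and the surface normal.
double fresnelSchlick(double n, double cosTheta) {
    double r0 = (1.0 - n) / (1.0 + n);
    r0 = r0 * r0;  // reflectance at normal incidence, R0
    return r0 + (1.0 - r0) * std::pow(1.0 - cosTheta, 5.0);
}
```

For glass (n = 1.5) this gives about 4% reflection at normal incidence and 100% at grazing angles, which matches the familiar behavior of windows.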
You would compute one sample per pixel. In Monte Carlo path tracing you compute many samples per pixel, and they don't have to go through the midpoint of the pixel: you sample the area of the pixel. This gives you anti-aliasing effects for free if you use it correctly. What is going to be the direction of the ray? Well, this is again the same a minus b: b is the origin of the ray and a is the camera plane coordinate, so it is a vector pointing from the origin to the camera plane. And we normalize this expression to have a ray direction of unit length. Then we obviously call the trace function: the number of bounces is 0, and we pass all the information that is needed to compute these bounces — the initial ray, the scene, and everything else. We also pass C, which is going to collect all the radiance there is in the subsequent bounces. After this recursion is done, we deposit all this energy, all this radiance, into the individual pixels, and then we divide by the number of samples — remember the one over n multiplier everywhere in Monte Carlo integration. If we didn't do this, then the more samples we computed, the brighter the image we would get, and that is obviously not what we're looking for. At the very end we create a file — this is the PPM file format, where you can easily write all your contributions. We also start a stopwatch to measure how long we have been tracing all these rays. Very simple, very trivial; when we are done, we close the file, which has the image in there, and we also write out how long the rendering algorithm has been running. And basically that's it. This is effectively 250 lines of code that can compute indirect illumination, caustics, and every global illumination effect. And it can compute images like this one — a student submission from previous years. Absolutely gorgeous.
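The per-pixel loop just described can be sketched as below. The names `renderPixel` and `radianceAt` are illustrative stand-ins (the latter for the recursive trace call): each sample is jittered randomly inside the pixel's area rather than going through its midpoint — which is what yields the free anti-aliasing — and the accumulated radiance is divided by the sample count, the one over n of the Monte Carlo estimator.

```cpp
#include <cassert>
#include <random>

// Average spp jittered samples over the area of pixel (i, j).
// radianceAt stands in for tracing a ray through the given image-plane point.
template <typename F>
double renderPixel(int i, int j, int spp, F radianceAt) {
    std::mt19937 rng(1234);  // fixed seed so the sketch is reproducible
    std::uniform_real_distribution<double> jitter(0.0, 1.0);
    double sum = 0.0;
    for (int s = 0; s < spp; ++s)
        sum += radianceAt(i + jitter(rng), j + jitter(rng));  // area sampling
    return sum / spp;  // divide by n, otherwise more samples = brighter image
}
```

A sanity check: if the scene returns a constant radiance everywhere, the averaged pixel value is that constant regardless of the sample count.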
This is the fixed version of SmallPaint, where there are no errors in the sampling. Another one from Michal Kama — I don't know if you are into the music band Boards of Canada, but this looks exactly like one of their album covers. Love it. Really cool. And also Sierpinski triangles from Christiane Kusla. You can find the link to the code in there, so take a crack at it. Just try it, build different scenes, try to understand what is going on in there, try to mess the code up — "I wonder what happens if I don't normalize this vector." Play with it. It's a really small, concise, and really understandable path tracer. So take your time and play with it. It's lots of fun, and you can create lots of beautiful, beautiful images with global illumination. Thank you.
[{"start": 0.0, "end": 6.4, "text": " What a wonderful day we have today and what a wonderful time it is to write a"}, {"start": 6.4, "end": 14.64, "text": " path tracer. So why don't we get started? What we are going to be looking at is a"}, {"start": 14.64, "end": 22.080000000000002, "text": " program called SmallPaint which is a small path tracer in effectively 250 lines of code"}, {"start": 22.080000000000002, "end": 28.88, "text": " and it contains everything that we have learned during this course. We're going to be able to compute"}, {"start": 28.88, "end": 36.0, "text": " soft shadows, anti-aliasing, Monte Carlo integration and even quasi Monte Carlo sampling which basically"}, {"start": 36.0, "end": 42.0, "text": " means low discrepancy sampling. This version of the program is going to be able to compute"}, {"start": 42.0, "end": 48.4, "text": " refraction, color breathing and caustics and in the end the binary you compile from the code"}, {"start": 48.4, "end": 55.120000000000005, "text": " can be compressed into less than 4 kilobytes. This is how the end result looks like it has a"}, {"start": 55.12, "end": 61.28, "text": " beautiful painterly look which actually comes from a bug and you can also see that the light source"}, {"start": 61.28, "end": 68.88, "text": " up there, the whitish looking sphere is you could say perfectly anti-aliased."}, {"start": 69.75999999999999, "end": 75.12, "text": " In order to achieve this with a recursive ray tracer and no global illumination algorithm you"}, {"start": 75.12, "end": 81.84, "text": " would need to compute the very same image on a much larger resolution and then scale it down to a"}, {"start": 81.84, "end": 88.56, "text": " smaller image like this. This anti-aliasing effect you get for free if you compute Monte Carlo path"}, {"start": 88.56, "end": 96.4, "text": " tracing. 
The question is how is this exactly done and now is the best time to put everything"}, {"start": 96.4, "end": 104.08000000000001, "text": " into use that we have learned so far. Let's get started. We have a completely standard vector class."}, {"start": 104.08000000000001, "end": 110.16, "text": " It is a three-dimensional vector. It has its own constructor, the classical operators that you would"}, {"start": 110.16, "end": 116.47999999999999, "text": " expect and we also have a dot product for the vector and a cross product for the vector. It is"}, {"start": 116.47999999999999, "end": 122.56, "text": " also obviously possible to compute the length of this vector so nothing too exciting or important"}, {"start": 122.56, "end": 128.72, "text": " here but we definitely need to build on a solid vector class. Now the representation of a ray,"}, {"start": 129.35999999999999, "end": 136.0, "text": " a ray has an origin and the direction and if you take a close look at the constructor you can see"}, {"start": 136.0, "end": 142.88, "text": " that when you initialize a ray with a direction then this direction is normed and the reason is"}, {"start": 142.88, "end": 150.4, "text": " that when we compute the dot products between these vectors most of these information needs to be"}, {"start": 150.4, "end": 155.52, "text": " directional information so we are not interested in the magnitude of the vector only interested"}, {"start": 155.52, "end": 160.96, "text": " in the direction of the vector. A good way to get rid of problems where you would initialize your"}, {"start": 160.96, "end": 167.36, "text": " ray with directions that are not vectors of unit length but you can do is to normalize this input"}, {"start": 167.36, "end": 173.12, "text": " in the constructor so you will never have headaches about incorrect results where you have no idea"}, {"start": 173.12, "end": 179.76000000000002, "text": " what is really happening. What is the representation of an object? 
Well an object has a color. This is"}, {"start": 179.76000000000002, "end": 186.72, "text": " denoted by CL. This is actually very sloppy notation and this you can imagine as Elbedo but it is not"}, {"start": 186.72, "end": 193.28, "text": " a double so it's not a number it is actually a vector and the reason for this is the fact that"}, {"start": 193.28, "end": 200.32, "text": " we need to define the Elbedo's how much light in different wavelengths is being reflected and how"}, {"start": 200.32, "end": 206.16, "text": " much is being absorbed by the given object. Now object may also have a mission if they have some"}, {"start": 206.16, "end": 212.0, "text": " non-zero emission then they are light sources and by type we have an integer that would specify"}, {"start": 212.0, "end": 218.24, "text": " what kind of BRDF we have. It is also important to have an intersection routine and some other"}, {"start": 218.24, "end": 224.72, "text": " function that can compute the normal of the object. Now these are of course virtual functions we don't"}, {"start": 224.72, "end": 230.8, "text": " define them or an abstract object but they would have to be implemented in other classes that"}, {"start": 230.8, "end": 241.76, "text": " inherit from the object. Let's take a look at the sphere. So sphere has this C and R"}, {"start": 241.76, "end": 248.64, "text": " C is the center of the objects and R is the radius. The constructor is trivial. We have the"}, {"start": 248.64, "end": 254.72, "text": " intersection function. Now if you hopefully remember all three of the elements of the quadratic"}, {"start": 254.72, "end": 259.52, "text": " function that we need to solve but if you take a good look then you will see that A is missing"}, {"start": 259.52, "end": 266.48, "text": " from here and the question is why is that? 
The answer is we are multiplying D with D"}, {"start": 266.48, "end": 272.40000000000003, "text": " the direction vector of array with itself and if it's a vector that is normed so it is of length"}, {"start": 272.40000000000003, "end": 283.44, "text": " 1 then this A will always be 1. After that we can deal with the discriminant. The discriminant"}, {"start": 283.44, "end": 292.24, "text": " is normally not B squared minus 4 AC. Remember A is 1 here so it's omitted but it is behind the"}, {"start": 292.24, "end": 297.92, "text": " square root and this square root is completely omitted here which looks like a mistake but it"}, {"start": 297.92, "end": 303.12, "text": " is not. It is essentially an optimization step because you will see that we are testing the"}, {"start": 303.12, "end": 308.72, "text": " discriminant if it's less than 0. If it's less than 0 then we don't need to deal with this"}, {"start": 308.72, "end": 314.96000000000004, "text": " equation because the solutions for the quadratic equation are going to exist in the plane of complex"}, {"start": 314.96000000000004, "end": 320.96000000000004, "text": " numbers and that's not a valid T. It's not a valid distance where we would intersect the sphere."}, {"start": 320.96, "end": 328.88, "text": " If this is not happening then we can compute the square root for the discriminant. Why after that?"}, {"start": 328.88, "end": 335.91999999999996, "text": " Because if the discriminant is bigger than 0 then the square root is not going to make a difference."}, {"start": 335.91999999999996, "end": 341.84, "text": " So what we can essentially do is to postpone the square root calculation after the test. Note that"}, {"start": 341.84, "end": 348.08, "text": " square roots are really really expensive so we can save up lots of computational time if you omit"}, {"start": 348.08, "end": 356.32, "text": " this calculation. 
There is a really nice square root hack in the source code of quick 3 which is"}, {"start": 356.32, "end": 362.08, "text": " by the way open source. Take a look at how people are trying to hack together functions in order to"}, {"start": 362.08, "end": 367.59999999999997, "text": " work better and faster than they should because square roots are super expensive and there are some"}, {"start": 367.59999999999997, "end": 376.08, "text": " really interesting hacks in order to speed them up. We have the plus and minus term and the division"}, {"start": 376.08, "end": 384.0, "text": " by 2a is again postponed. And that's also another interesting question. Why is this postponed? So you"}, {"start": 384.0, "end": 390.88, "text": " can see that the sol 2 is divided by 2 and the sol 1 is also divided by 2 but only after the test."}, {"start": 390.88, "end": 397.03999999999996, "text": " So it is possible that if we have the solution 2 if it is bigger than epsilon then we have the"}, {"start": 397.03999999999996, "end": 401.91999999999996, "text": " first expression after the question mark but if it's not then we are looking for the second"}, {"start": 401.92, "end": 406.40000000000003, "text": " expression after it which is another test and if the answer is no for that as well then we are"}, {"start": 406.40000000000003, "end": 412.96000000000004, "text": " going to return 0. This would mean that we don't have any hits or the hits are behind us and we"}, {"start": 412.96000000000004, "end": 418.24, "text": " are not interested in intersections that are behind our array. There is a possibility that we"}, {"start": 418.24, "end": 424.16, "text": " encounter this and in this case I don't want to waste my time by dividing these solutions by 2"}, {"start": 424.16, "end": 430.32, "text": " because I'm not going to use them. Why am I splitting hairs here? That's an important question."}, {"start": 430.32, "end": 437.04, "text": " Why do we need to optimize so much? 
Because if you grab a profiler a program that is able to show"}, {"start": 437.04, "end": 442.64, "text": " you in what functions are you spending most of your time in then this profiler would show you that"}, {"start": 442.64, "end": 449.84, "text": " more than 90% of the execution time is spent in the intersection routines. So you have to have a"}, {"start": 449.84, "end": 456.48, "text": " really well optimized intersection routine. Some programs have replaced these expressions with"}, {"start": 456.48, "end": 462.96000000000004, "text": " assembly in order to speed it up even more. So how do we compute the normal of a sphere?"}, {"start": 462.96000000000004, "end": 470.32, "text": " Well very simple. What we have here is p minus c. Now what does it mean? So if I have a minus p"}, {"start": 470.32, "end": 476.24, "text": " then this means a vector that points from b to a. So what this would mean look at the figure here."}, {"start": 476.24, "end": 481.84000000000003, "text": " If I would have a circle then this would mean that from the center it is pointing towards the"}, {"start": 481.84, "end": 489.35999999999996, "text": " given points of the sphere. Now this is also divided by r because you could imagine you have a"}, {"start": 489.35999999999996, "end": 494.79999999999995, "text": " sphere that is not of unit radius and if it's not of unit radius then this normal vector would have"}, {"start": 494.79999999999995, "end": 500.15999999999997, "text": " a length. You could compute a normalization we have a normalized function in our vector"}, {"start": 500.15999999999997, "end": 505.59999999999997, "text": " implementation but it also has a square root so it would be much better to have something that's"}, {"start": 505.6, "end": 511.68, "text": " faster. Well if you just divide by the radius of the sphere then you would immediately get a vector"}, {"start": 511.68, "end": 518.64, "text": " of unit length. 
So in the end we can get the correct result by just simply dividing by r."}, {"start": 520.16, "end": 527.0400000000001, "text": " Excellent. Now we have a perspective camera here. Hopefully you remember from the first lecture"}, {"start": 527.0400000000001, "end": 532.1600000000001, "text": " we are basically just copy-pasting this expression here we have derived them rigorously and"}, {"start": 532.16, "end": 537.68, "text": " analyzed how this exactly works. A simple intuition basically what we are doing we have as an input"}, {"start": 537.68, "end": 544.56, "text": " an x and a y. Basically this means give me a pixel with this displacement and what it would give"}, {"start": 544.56, "end": 552.64, "text": " you back is the world's base coordinates of these pixels. Uniform sampling of a hemisphere what is"}, {"start": 552.64, "end": 559.68, "text": " this for if we are encountering a diffuse object what we would like to do is to send a ray out on the"}, {"start": 559.68, "end": 565.4399999999999, "text": " hemisphere of this object. This we would need to uniform example this means that the diffuse"}, {"start": 565.4399999999999, "end": 572.3199999999999, "text": " PRDF is one over pi or row over pi if you take into consideration the elbows and you need"}, {"start": 572.3199999999999, "end": 578.0799999999999, "text": " transforms in order to do it there is a reading behind this link how it works exactly is"}, {"start": 578.88, "end": 585.12, "text": " drawing uniform samples on a plane which is simple and then we are projecting it on the hemisphere"}, {"start": 585.12, "end": 593.36, "text": " that's basically all there is. What about the trace function? As you can see here in the first line"}, {"start": 593.36, "end": 599.92, "text": " this code says that there is a maximum depth. 
Now clamping after a maximum depth value is not"}, {"start": 599.92, "end": 606.32, "text": " really optimal because whatever number you put in there the higher order bounces are going to be"}, {"start": 606.32, "end": 612.0, "text": " completely omitted. Now the real solution would be Russian-Rulet past termination which we"}, {"start": 612.0, "end": 619.68, "text": " fortunately also have after depth of an arbitrary number like five you start the Russian-Rulet routine"}, {"start": 619.68, "end": 626.96, "text": " which basically says there is a probability for stopping the light path right there and we generate"}, {"start": 626.96, "end": 631.84, "text": " a random number and compare to this probability. If we don't hit this probability then we will"}, {"start": 631.84, "end": 638.4, "text": " continue our path but we will multiply the output and the contribution of this light path by this"}, {"start": 638.4, "end": 643.1999999999999, "text": " given number that we have specified in one of the previous lectures. So this was implemented by"}, {"start": 643.1999999999999, "end": 655.76, "text": " Christian Mahacek and kind thanks. And you can see that later we are going to use this RR"}, {"start": 655.76, "end": 665.12, "text": " factor in order to multiply the contribution of a ray later. Now what about the intersection"}, {"start": 665.12, "end": 670.48, "text": " routine? This is definitely not the best way to do it but it's sure as hell the easiest way to"}, {"start": 670.48, "end": 677.68, "text": " do it. We specify this T which is going to be the intersection distance. How far we are from the"}, {"start": 677.68, "end": 683.76, "text": " start of the ray and how far is this intersection exactly? 
ID is basically which object did we hit"}, {"start": 683.76, "end": 688.48, "text": " and then we iterate through all of the objects in the scene and what we are interested in is"}, {"start": 688.48, "end": 693.6800000000001, "text": " calling the intersection routine. This will return a T how far is the intersection"}, {"start": 693.68, "end": 700.0799999999999, "text": " and what I am interested in an intersection that is the smallest number. This means the closest"}, {"start": 700.0799999999999, "end": 706.2399999999999, "text": " intersection and also something that is larger than epsilon because if I would tolerate zero"}, {"start": 706.2399999999999, "end": 712.0, "text": " then this would mean that self intersections are accepted. Therefore every single object that I"}, {"start": 712.0, "end": 717.92, "text": " am on is going to be the first intersection. I'm not interested in this. I know where I am. I just"}, {"start": 717.92, "end": 724.0799999999999, "text": " want to know where I am continuing my path. If there is no intersection then we return. There is"}, {"start": 724.0799999999999, "end": 732.64, "text": " zero contribution. Where is the intersection in world space? We denote this by HP. This means"}, {"start": 732.64, "end": 740.0799999999999, "text": " hit point and where we have started a ray we traveled along the direction of the ray with this T"}, {"start": 740.0799999999999, "end": 745.92, "text": " amount where the intersection is. So this is where we ended up and the origin of the new ray"}, {"start": 745.92, "end": 751.12, "text": " is going to be this hit point. What is the normal going to be? Well we just grabbed the object"}, {"start": 751.12, "end": 756.4799999999999, "text": " that we intersected and we are taking the normal with the given function. What is the return"}, {"start": 756.4799999999999, "end": 761.92, "text": " radiance? We simply add the emission term. 
This is the emission term on all three wavelengths."}, {"start": 761.92, "end": 768.0, "text": " There is a magic multiplier disregard that and then we continue evaluating the second part"}, {"start": 768.0, "end": 772.48, "text": " of the rendering equation. The first part is the emission and the second is the reflected amount"}, {"start": 772.48, "end": 782.0, "text": " of light. Let's continue with the inside of the trace function. If we hit an object with a type"}, {"start": 782.0, "end": 787.76, "text": " of one then this is a diffuse BRDF. The next functions compute the next random number for the"}, {"start": 787.76, "end": 794.96, "text": " low discrepancy halton sampler and the direction is going to be a random sample in the hemisphere,"}, {"start": 794.96, "end": 801.2, "text": " a completely uniform random sample in the hemisphere of this object. What we have here is this N"}, {"start": 801.2, "end": 806.5600000000001, "text": " plus the hemisphere function. This is intuition. This is not exactly what is happening. I have just"}, {"start": 806.5600000000001, "end": 811.9200000000001, "text": " shortened the code slightly in order to simplify what is going on here. The code that you will"}, {"start": 811.9200000000001, "end": 818.72, "text": " download will have the real deal in there. Now then we compute the cosine term very straightforward"}, {"start": 818.72, "end": 825.76, "text": " and on the TMP we are going to instantiate a new vector and this is going to hold the recursion."}, {"start": 825.76, "end": 831.12, "text": " So the subsequent samples that we shoot out from the hemisphere are going to be added and added"}, {"start": 831.12, "end": 838.5600000000001, "text": " to this TMP. Now is the time for recursion. We pass the ray and the scene to the trace function."}, {"start": 838.5600000000001, "end": 843.84, "text": " The ray is actually not the current one, it's the new one. 
So basically we set up the new"}, {"start": 843.84, "end": 848.32, "text": " hit point and the new direction of the ray and this is what we are going to pass to the trace"}, {"start": 848.32, "end": 853.92, "text": " function. We increment the depth variable because we have computed the bounds. The TMP is going to"}, {"start": 853.92, "end": 859.84, "text": " be a variable where we gather all these radians and we put every other parameter that is needed to"}, {"start": 859.84, "end": 866.72, "text": " compute one more bounds. Now the color is going to contain the cosine term and all these radians"}, {"start": 866.72, "end": 873.6800000000001, "text": " that is collected from the recursion and we multiply it with the CL.xyz which is basically the BRDF."}, {"start": 874.64, "end": 880.4, "text": " So this is the right side of the rendering equation for a diffuse BRDF. This is multiplied by"}, {"start": 880.4, "end": 890.0799999999999, "text": " 0.1. This is just a magic constant. Now what about a specular BRDF? What if we hit a mirror?"}, {"start": 890.0799999999999, "end": 895.6, "text": " Well very simple. We compute the perfect reflection direction. You can see the ray dot D"}, {"start": 896.16, "end": 902.72, "text": " and we again set up this variable to collect the radians in there and we are not doing anything."}, {"start": 902.72, "end": 908.0799999999999, "text": " We are just going to add the radians as we get reflected off of this mirror. Then we are going to"}, {"start": 908.08, "end": 914.0, "text": " compute subsequent bounces and this is going to be stored on this TMP. So this is what we are going"}, {"start": 914.0, "end": 923.44, "text": " to add to this radians. What about a refractive material? Well we have every bit of knowledge that"}, {"start": 923.44, "end": 932.32, "text": " we need for this because essentially this is the vector version of Snals Law. 
What does it mean?"}, {"start": 932.32, "end": 939.5200000000001, "text": " Well, the original Snell's law that we have computed is in 1D. So it only gives you one angle."}, {"start": 939.5200000000001, "end": 945.2800000000001, "text": " But if you are in 3D you are interested in angles in two different dimensions. This is nothing"}, {"start": 945.2800000000001, "end": 949.2, "text": " but the extension of the very same law into higher dimensions."}, {"start": 951.36, "end": 958.0, "text": " Now where is this implemented exactly? You can see the cosine of theta 2. Note that n1 and n2"}, {"start": 958.0, "end": 964.88, "text": " are treated differently because one of these media is always going to be air. Therefore one of"}, {"start": 964.88, "end": 972.56, "text": " the indices of refraction is always going to be 1. The rest is just copy-paste. And again you can"}, {"start": 972.56, "end": 979.2, "text": " see that the square root is missing and we are going to postpone this until after the test of cosine theta 2,"}, {"start": 979.84, "end": 986.4, "text": " because if it is actually not larger than 0 then we are not going to need this variable at all."}, {"start": 986.4, "end": 989.12, "text": " Therefore we can postpone this until after the test again."}, {"start": 994.72, "end": 998.72, "text": " What about the direction of the outgoing ray? Well, this is just copy-paste from the formula"}, {"start": 998.72, "end": 1006.56, "text": " that we have derived before. As simple as that. Obviously we again need the recursion because if we go"}, {"start": 1006.56, "end": 1012.96, "text": " inside a glass sphere then we are going to compute the refraction. So we are going to be inside"}, {"start": 1012.96, "end": 1018.24, "text": " of the sphere. What does it mean? One, that we have to invert the normal because we are inside,"}, {"start": 1018.24, "end": 1024.24, "text": " so the normals are flipped. 
And again we need to call the trace function, which is the recursion."}, {"start": 1024.24, "end": 1031.52, "text": " So we are also interested in the higher order bounces. Onwards to Fresnel's law. What is the"}, {"start": 1031.52, "end": 1037.3600000000001, "text": " probability of reflection and refraction when rays are bouncing off in different directions,"}, {"start": 1037.36, "end": 1043.84, "text": " at different angles, off refractive surfaces? Implemented by Christian Hathner. So a big thanks to him."}, {"start": 1043.84, "end": 1049.36, "text": " It is very simple. You can see that it is exactly the same as what we have learned in mathematics."}, {"start": 1049.36, "end": 1055.36, "text": " So this is the R0 term. This is the probability of reflection at normal incidence. And we are"}, {"start": 1055.36, "end": 1061.52, "text": " interested in the square of that. And note that you don't see the n1 and n2. This is because one"}, {"start": 1061.52, "end": 1067.28, "text": " of them is always going to be air or vacuum. So it is going to have an index of refraction of one."}, {"start": 1068.4, "end": 1074.32, "text": " Now what about the final probability of reflection? It is also coming from the formula. We have"}, {"start": 1074.32, "end": 1080.8, "text": " every bit of information we need. So we just put in there this term with the cosine attenuation."}, {"start": 1082.0, "end": 1089.76, "text": " What does the main function look like? Well, we have some wizardry with C++11 lambda functions."}, {"start": 1089.76, "end": 1096.4, "text": " But basically this is just a shortcut in order to be able to add a new sphere or a new plane to"}, {"start": 1096.4, "end": 1103.68, "text": " the scene in one line of code. Spheres are given by their radius, position, color (by color we obviously"}, {"start": 1103.68, "end": 1110.48, "text": " mean albedo), emission and type. Type means what kind of BRDF we have? 
A diffuse, a specular, or"}, {"start": 1110.48, "end": 1117.76, "text": " a refractive BRDF. Now for planes we have position, normal, color, emission and obviously type. So what"}, {"start": 1117.76, "end": 1123.52, "text": " kind of material we have. So using just one line of code you can add a new object and specify"}, {"start": 1123.52, "end": 1129.6, "text": " every piece of information that you would need for it. Now we also add the light source and we"}, {"start": 1129.6, "end": 1136.8799999999999, "text": " specify the index of refraction for the refractive BRDFs. And we also specify how many samples per"}, {"start": 1136.8799999999999, "end": 1146.72, "text": " pixel we would like to compute. Onwards to the main loop: we have two for loops that iterate through"}, {"start": 1146.72, "end": 1154.32, "text": " the width and the height of the image plane. Now, the vector C is the color. It's again very sloppy naming."}, {"start": 1154.32, "end": 1159.76, "text": " What it means is actually the radiance that we compute. We instantiate a ray. What is going to be"}, {"start": 1159.76, "end": 1165.3600000000001, "text": " the origin of the ray? This is going to be at 0, 0, 0. So this is where the camera is placed."}, {"start": 1166.24, "end": 1171.6000000000001, "text": " What is going to be the direction of the ray? Well, we connect this ray to the camera plane."}, {"start": 1171.6, "end": 1178.1599999999999, "text": " And we specify which pixel we are computing with i and j and then we add this weird random number"}, {"start": 1178.1599999999999, "end": 1184.8799999999999, "text": " to it. Now what this means is actually filtering. In recursive ray tracing what you would do is you"}, {"start": 1184.8799999999999, "end": 1191.52, "text": " would only send the ray through the midpoint of a pixel and that's it. You would compute one"}, {"start": 1191.52, "end": 1197.6, "text": " sample per pixel. 
In Monte Carlo path tracing you're computing many samples per pixel and they"}, {"start": 1197.6, "end": 1203.4399999999998, "text": " don't have to go through the midpoint of the pixel. You would sample the area of the pixel."}, {"start": 1203.4399999999998, "end": 1208.08, "text": " And this gives you anti-aliasing effects for free if you use it correctly."}, {"start": 1209.36, "end": 1214.9599999999998, "text": " What is going to be the direction of the ray? Well, this is again the same a minus b. The b is"}, {"start": 1214.9599999999998, "end": 1220.6399999999999, "text": " the origin of the ray and a is the camera coordinate. So what does it mean? That it is pointing"}, {"start": 1220.6399999999999, "end": 1226.56, "text": " from 0 to the camera plane. And we normalize this expression to have a ray of unit length."}, {"start": 1226.56, "end": 1233.76, "text": " Now we obviously call the trace function. The number of bounces is 0 and we pass every piece of information"}, {"start": 1233.76, "end": 1238.96, "text": " that needs to be known in order to compute these bounces. So we provide this initial ray"}, {"start": 1238.96, "end": 1244.6399999999999, "text": " and the scene and everything else. Obviously we also pass the C and this is going to collect all"}, {"start": 1244.6399999999999, "end": 1252.0, "text": " the radiance there is in the subsequent bounces. And then after this recursion is done we deposit"}, {"start": 1252.0, "end": 1257.84, "text": " all this energy, all this radiance, to the individual pixels. And then we divide by the number"}, {"start": 1257.84, "end": 1264.8, "text": " of samples, because if we didn't do this, then, you remember, the one over n multiplier everywhere"}, {"start": 1264.8, "end": 1269.84, "text": " in Monte Carlo integration. If you didn't do this, then the more samples you compute, the brighter"}, {"start": 1269.84, "end": 1288.6399999999999, "text": " image you would get. 
And this is obviously not what we're looking for."}, {"start": 1300.3999999999999, "end": 1306.3999999999999, "text": " At the very end we create a file. This is the PPM file format where you can easily write all your"}, {"start": 1306.3999999999999, "end": 1312.56, "text": " contributions in there. We also start a stopwatch in order to measure how long we have been tracing"}, {"start": 1312.56, "end": 1319.12, "text": " all these rays. So very simple, very trivial, and when we are done we close the file. It has the"}, {"start": 1319.12, "end": 1323.36, "text": " image in there and we then write how long the rendering algorithm has been running for."}, {"start": 1323.36, "end": 1332.1599999999999, "text": " And basically that's it. That's it. This is effectively 250 lines of code that can compute"}, {"start": 1332.1599999999999, "end": 1340.08, "text": " indirect illumination, caustics and every global illumination effect. And it can compute images"}, {"start": 1340.08, "end": 1345.6, "text": " like this. This is one student submission from previous years. Absolutely gorgeous. This is the"}, {"start": 1345.6, "end": 1354.08, "text": " fixed version of smallpaint where there are no errors in the sampling. Another one from Michal Kama."}, {"start": 1354.08, "end": 1359.76, "text": " This actually looks like... I don't know if you are into the music band Boards of Canada, but this"}, {"start": 1359.76, "end": 1370.0, "text": " looks exactly like one of their album covers. Love it. Really cool. And also Sierpinski triangles from"}, {"start": 1370.0, "end": 1379.2, "text": " Christiane Kusla. You can find the link for the code in there and take a crack at it. Just try"}, {"start": 1379.2, "end": 1385.12, "text": " it, build different scenes, try to understand what is going on in there, try to mess the code up."}, {"start": 1385.12, "end": 1391.04, "text": " I wonder what happens if I would not normalize this vector. Play with it. 
It's a really small,"}, {"start": 1391.04, "end": 1400.0, "text": " concise and really understandable path tracer. So take your time and play with it. It's lots of fun"}, {"start": 1400.0, "end": 1429.84, "text": " and you can create lots of beautiful, beautiful images with global illumination. Thank you."}]
Two Minute Papers
https://www.youtube.com/watch?v=z9p2nis3amM
TU Wien Rendering #28 - Assignment 3
The assignment file is available under the assignments section, around the last slide in the linked ppt: https://www.cg.tuwien.ac.at/courses/Rendering/VU.SS2019.html There will be lots of fun to be had with Assignment 3! Yes, you've seen it right: assignment 2 is going to be shown in a later video with Thomas Auzinger. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Let's talk about assignment 3. This is the fun stuff. There's not a lot of writing to do in there. Assignment 2 is where you need to do quite a bit of mathematics, and here there's only the fun stuff. And the deadline for this is going to be way after the previous assignment, so it's not like I'm going to put the deadline for this right next to the previous assignment. So after you have completed that, you will still have enough time to do it. So, lots of blah, blah, blah in there. One: please download this file, and there you can see the implementation of my simple Monte Carlo integrator that we have played with. Put together a function, any function that you like, and integrate it with the Monte Carlo integrator. First, write it down in some way in LaTeX and put it in a PDF or PNG file or whatever, and say that this is the integral that will be computed. And then be the mathematician, or be the engineer: do the actual calculation through analytic integration, or just punch it into Wolfram Alpha and let it do the hard work. And then be the Monte Carlo guy and do the actual Monte Carlo sampling for this function, and see if you get the same results. So this is how you can prove that your calculations are correct. Now, the pro version of the same thing is to modify the code to be suitable for higher dimensional integration. And then also create any function, polynomials, cosines, exponentials, whatever function that you would like, and do multi-dimensional Monte Carlo integration. This is literally a one-line change in the code, and then you can also integrate higher dimensional functions with this very small C++ program. And if you feel like a pro, you can also evaluate the speed of convergence. So how does it look? How far am I from the solution after 10 samples, 1,000 samples, 10,000 samples? Just plot the result.
And if you feel adventurous, you can also snatch my code from this rendering program called smallpaint. This is the one with the painted bowl. You can snatch my code for the Halton low-discrepancy series, and you can sample a one or two or whatever dimensional function with the Halton series, and you can see how well it stratifies the samples. Second part: this is the even more fun part. There's going to be a scene contest. So we have this Monte Carlo path tracer program implemented in 250 lines. Read through the code, try to understand what's going on, but we're going to talk through this anyway. And just render an interesting scene. Just put together a cool scene with a given number of objects. See what you can do with this. And if you go to this website, you will be able to see a gallery of the results from previous years. And some of these guys and girls have made amazing, amazing artistic works in the past. So there's going to be a contest: submit your result, and make sure to generate completely converged images. So no noisy images, converged images. Don't try this in the last five minutes; this takes time. And as in the previous years, we will make a gallery of the submissions this year as well. So you can be proud of your own work after you're done with this course. You can show your friends how well you have done. Then obviously the email subject is going to be the same, only with the number of the assignment changed. I am very excited to see your results, even in the middle of the night. So I just met one of my former students at a conference, and he immediately told me that he has the fondest memories of the rendering course, because he was one time late with his assignment. And he thought: I need to work on this all night to be done. So I got an email from him with the results at 3 AM. And he was very surprised five minutes later: so five minutes after 3 AM, he got an answer with something like "passed". So that's great.
And he thought: oh my god, did I mess up, did I mis-send the mail or something? Because who answers email at that hour? So he went back five minutes later, re-read the mail, checked it out: no, okay. Wonderful. Wonderful. Okay, that's basically it, and you can send it to this address. Yes.
[{"start": 0.0, "end": 6.42, "text": " Let's talk about assignment 3. This is the fun stuff. There's a lot of writing that"}, {"start": 6.42, "end": 12.26, "text": " don't is there. Assignment 2 is where you need to do quite a bit of mathematics."}, {"start": 12.26, "end": 20.34, "text": " And down, there's only the fun stuff. So this is one of that. And that line for this is going to be"}, {"start": 20.34, "end": 33.34, "text": " way after the previous assignment. So it's not like I'm going to add that line of this next to the previous assignment."}, {"start": 33.34, "end": 38.34, "text": " So after you have completed that, then you will still have enough time to do it."}, {"start": 38.34, "end": 49.34, "text": " So lots of blah, blah, blah in there. One. Please, download this in file. And there you can see the implementation of my simple one."}, {"start": 49.34, "end": 58.34, "text": " So you want the Monte Carlo integrator that we have played with. Put together a function, any function that you like, and integrate it with the Monte Carlo integrator."}, {"start": 58.34, "end": 68.34, "text": " First, write it down in some way in Lottec and put it in a PNG file over PNF or whatever, and say that this is being enrolled by the light of the computer."}, {"start": 68.34, "end": 77.34, "text": " And that be the mathematician or be the engineer. So do the actual calculation through analytic integration"}, {"start": 77.34, "end": 88.34, "text": " or just open it from alpha and let it do the hard work. And then we the Monte Carlo guide and do the actual Monte Carlo something for this function."}, {"start": 88.34, "end": 97.34, "text": " And see if you will get the same results. So this is how you can prove that your calculations are correct."}, {"start": 97.34, "end": 115.34, "text": " Now, the pro version of the same thing is what they find the code to be suitable for higher dimensional integration. 
And then also create any function, any where he found polynomial cosine, whatever function that you would like, exponentials, whatever."}, {"start": 115.34, "end": 129.34, "text": " And do multi-dimensional Monte Carlo integration. This is literally one line of code of the change in the code. And then you can also integrate higher dimensional functions with this very small C++ program."}, {"start": 129.34, "end": 146.34, "text": " And if you feel like a pro, you can also evaluate the speed of convergence. So how does it look like? How far on the solution after 10 samples, 10,000 samples, 1,000 samples, just plot the result?"}, {"start": 146.34, "end": 161.34, "text": " And if you feel adventurous, we can also snatch my code in this rendering program code small paint. This is the one with the painted both. You can snatch my code for the halton, low disc frequency series."}, {"start": 161.34, "end": 173.34, "text": " And you can sample a one or two or whatever dimensional function with halton series. And you can see that how the last ratify a sample."}, {"start": 173.34, "end": 187.34, "text": " Second part, this is the even more fun part. There's going to be a scene contest. So we have this Monte Carlo pass pressure program implemented in 250 lines."}, {"start": 187.34, "end": 201.34, "text": " We threw the code, try to understand what's going on, but we're going to talk through this anyway. And just render an interesting scene. Just put together a full scene with a given number of objects. See what you can do with this."}, {"start": 201.34, "end": 217.34, "text": " And if you go to this website, you will be able to see a gallery of the results from previous years. And some of these guys and girls have made amazing, amazing artistic works in the power."}, {"start": 217.34, "end": 235.34, "text": " So they're going to be a contest subject your result and make sure to generate complete converges. So no noise images, converges. Don't try this in the last five minutes. 
This takes time, but it's insane."}, {"start": 235.34, "end": 251.34, "text": " And as in the previous years, we will make this in this year a gallery of the submissions. So you can be proud of your own work after you're done with this course. You can show your friends how well you have done."}, {"start": 251.34, "end": 273.34000000000003, "text": " Then obviously the subject is going to be the same only in the number of the assignments. And the seventh class I am very excited to see your results even in the middle of the night."}, {"start": 273.34, "end": 287.34, "text": " So I just met one of my former students at conference. And he immediately told me that he has the fondest memories from the rendering course because he was one time late with his assignment."}, {"start": 287.34, "end": 305.34, "text": " And he thought that I need to work on this all night to be done. So I got a email from him with the results 3 am. And that he was very delighted to see that five minutes later."}, {"start": 305.34, "end": 319.34, "text": " So five minutes after 3 am, he got an answer with something like past. So it's great. And he thought that oh my god, I messed up. It's the mail or even because who answers the email?"}, {"start": 319.34, "end": 337.34, "text": " So he went to my five minutes later. Spirited the mail. Checked it out. No, okay. Wonderful. Wonderful. Okay. That's basically it. And I can go to your address."}, {"start": 337.34, "end": 351.34, "text": " Yes."}]
Two Minute Papers
https://www.youtube.com/watch?v=vPwiqXjDgeo
TU Wien Rendering #27 - Russian Roulette Path Termination
To be faithful to mother nature, we would need a to trace an infinite number of bounces for every ray (of course this depends on the attenuation of the energy of the photon bounce after bounce). Since we only have finite resources, this sounds impossible. To our surprise, this is possible to solve with a statistical trick call Russian roulette path termination. We can easily prove that the estimator converges to the right quantity, though it has its own variance that recedes over time as more samples are added. Warning: it will boggle your mind! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinement are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Teschnische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Quick question: when we discussed the recursion, were we really done? When do we stop tracing the ray? Well, what we did so far is we used a cutoff, so we said that there's a maximum number of bounces that I'm going to compute, and the rest of it I'm not interested in. The problem is that this is a biased solution and we are missing some energy in the image, because if I computed many more subsequent bounces, I would accumulate more radiance, so I would get a perhaps brighter image that is more faithful to reality. We can do much better than that. Once more, there is a technique that can compute an infinite number of bounces, an infinite depth, and that boggles the mind, because what is really happening is that if I want to be able to compute physical reality, I would need to have a maximum depth of infinity. I couldn't even compute one sample per pixel, because I would have to bounce the ray indefinitely. But there is a mathematical technique that can give you the same results as if you computed an infinite number of bounces. And this is again statistics and probability, where it is usually very difficult to wrap one's mind around what is happening, but we can actually solve this problem. So how can we overcome this? What we're looking for is an estimator that converges to the expected value of the integral. Okay, that's fine. I'm looking for the expected value part. But having the right expected value is one thing; there's not only the expected value, there is also the variance. So I may have multiple estimators, and what I'm looking for is the one that has the lowest possible variance. What I can do here is, after each bounce, I decide whether I will terminate the path, so I would stop it right there, or I would continue. But if I continue, I multiply the collected radiance with something, and the question is what this something should be.
Now, this I would like to relate to, for instance, Fresnel's law. With Fresnel's law, we could compute what is the probability of reflection and what is the probability of refraction. Light hits the glass window, and with some probability it will continue to propagate through the window, and with some probability it will get reflected. Now, what I can do is run many samples and add them together, or what I can also do is not run many samples: I hit the window, and I compute that there's an 80% chance of refraction and 20% for reflection. And I will send out only one ray in each direction, but I will multiply each by the respective probability of the event. So I'm not tracing 10,000 rays; I will send out one, and I will multiply it by 0.8 in one direction and 0.2 in the other direction. And if I do this, yes, I will compute more and more samples, but statistically this is sound, so what this means is that this converges to the expected value of the integral. And Russian roulette does the exact same thing, but it gives you an infinite number of bounces. So, with a given probability I stop, and with a given probability I continue, but I will multiply the collected radiance with a factor. And this factor, in the Fresnel example, is the probability of reflection. What does the algorithm look like? I choose a random variable, let's call it xi, on [0, 1]. And with a given probability p, I continue the light path after hitting something. So at every bounce, I roll a die, and if I hit this probability, I will continue my light path, but I will multiply the collected radiance with something. And instead of giving you the end result as you would see it in a textbook, I'll try to show you the thought process of how someone can put this together. I will need to multiply by something; I don't know yet what this something should be. We will find out together.
And if I don't hit this probability, then I will terminate the light path, so we could imagine this as if I continued the light path but multiplied all the gathered radiance by 0. I spoiled the 0 for the second question mark, so that's a shame, but you would have found this out in a second anyway. So I'm looking for an expected value of something: the L_i with the hat is an estimator. And on the right side, this is the actual L_i. So whatever happens in the middle is some magic, but the constraint is that the expected value of the estimator should be the same as the original incoming radiance. There is a probability of continuation, and if I don't hit this probability, then I will stop. The stopping part is trivial: if I stop, then I multiply this term with 0. So imagine that I continue my light path; I would trace it until infinity, but it will be multiplied by 0. Now the question is, what is the other question mark? What I know is that on the right side, I want to get L_i. So forget the right term: what do I need to do with this expression on the left in order to get L_i on the right side? Raise your hand if you know the answer. I want you to take a few seconds and think about it. What do I need to do to get L_i from this expression? The rest is multiplied by 0, so this doesn't matter. Raise your hand if you know. Maybe, maybe. Yes, please. Yes, we have an answer: L_i divided by p. Yes, that's it. I kill the p, because I don't want to see a p here. So there is going to be a fraction, and the denominator is going to be p. So I killed this guy, but there is no one in the numerator, and I want someone in there, and that someone is the L_i. So I kill the p with my fraction, and in the numerator there is going to be L_i. So if I do this, then what I am doing is going to be statistically sound. And I'll try to give you the intuition again. This takes time to wrap your head around.
It is almost like the Fresnel example: what you could do is send out 800 rays in one direction and sum them up, or what you could do instead is send out only one ray and multiply it by 800. And no, I would not get the same result, but I would get the same expected value. And over time, the variance around this expected value would shrink if I do this many times. So this is the intuition behind the whole thing. What is a good choice for p? Because this has been a parameter so far. What should I put in there? Well, with a little reflection, I could say it doesn't matter: you could put many sensible choices in there, and it would work. But quickly, let's review the cases where it would not work. Well, obviously, there are two very bad options. If you put p equals zero, then this would mean that you would never continue your path. You would always stop. So this is obviously not great. What if I say p equals one? Well, this means that I would always continue; I would never stop. You can say that mathematically this is sound, but you could never compute even one sample per pixel. A theoretician would be fine with a machine that never stops, but it doesn't make too much sense if you're looking for a practical solution. Now, anything in between the two is completely fine. The only difference, because I've shown you that the expected value is the same as the actual quantity that I'm looking for, is the variance. So it is oscillating around the very same number, but the magnitude of the oscillation depends on this choice. And what you can prove, but what is actually very easy to visualize, is that a good choice for p would usually be something that samples brighter paths longer, and darker paths I would want to terminate faster. Because this is the same as matching the green function with the blue bars.
I would want to reconstruct the brighter regions more faithfully than the darker regions, because this is what this means: smaller error. So what you can plug in there is, for example, the albedo of the material. So if you have a really bright, white wall, then you would want to continue the path with a very high probability. But if you have a really dark object, like the curtains on either side of the room, you would want to stop with a much larger probability. So this is how Russian roulette works. We will also code this, so in the next lecture you will see the whole thing that we studied in code.
Two Minute Papers
https://www.youtube.com/watch?v=1ziudxJT884
TU Wien Rendering #26 - Low Discrepancy Sequences
In this segment we explore a subset of Quasi-Monte Carlo methods called low discrepancy series. Examples of this are the Halton and Van der Corput series. These are deterministically generated sample sequences that stratify well even in high dimensional Euclidean spaces. Surprisingly, randomly generated samples don't have this desirable property! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Let's talk about low discrepancy series. What we have been doing so far is random sampling. This means that I have a random number generator, it generates samples, and these are the samples that I'm going to use. And many had the intuition that we could perhaps do much better than that. Because what I am looking at is the hemisphere, and I am shooting samples on the surface of this hemisphere. And I can do this deterministically. What if I have an algorithm that doesn't generate random numbers, but makes sure that on this hemisphere, if I have 100 samples, the samples are well distributed? And if you do this, you may get much better convergence and much better looking results. So below, you can see samples generated by a random number generator in 2D. And up there, the Halton sequence, which is a so-called low discrepancy series; what this means is that it's not completely random, but it tries to fill the space reasonably. Now this is not trivial. What you would think at first is that you could just use a grid. It would be very simple to build a grid and just put points on the grid, and then you would have samples that are really well distributed. And this you can do in 2D; you can do it in one line, just space the points evenly. But there are mathematical proofs that tell you that this is absolutely terrible in higher dimensions. So if you have higher dimensional spaces, then a grid is not well stratified. The most commonly used sequences are the Halton series, the Sobol series and the Van der Corput series. You can do this low discrepancy sampling in many different ways. And what this gives you is an even distribution of the noise, because you are sampling these hemispheres in a reasonably stratified way.
So it cannot really be that one side of the hemisphere is sampled almost exhaustively and the other one is completely neglected. So you would get images with a noise distribution that's better for you. That's a plus. But what is even more important is that this is deterministic. If you're rendering an animation, imagine completely random sampling. On frame number one, you distribute your samples. Then comes frame number two, and you distribute your samples in a completely different way. So the noise could look like this on frame one and completely different on frame two. Until you have converged perfectly, you will have these issues that we call temporal flickering, or temporal aliasing issues, because the noise looks like this on frame one and like that on frame two. And if you take 25 of these frames every second, the noise is probably incredibly different in each of them; you have computed different things in every frame. And the Sobol series and all low discrepancy series help you with that. People love to use them in rendering for this reason: in subsequent frames, you will compute the very same thing. Those are the advantages. OK? Disadvantages? Well, the disadvantages are also significant. It's often not trivial to implement such a thing. If you take a look at this image, these walls are not textured. This is one solid green, and this is one solid red, if you will. This should not be rendered like this at all. This is a buggy image. I have implemented the Halton sampler, and the problem that I encountered is called correlating dimensions. And this is a serious problem that you can encounter. I will not go into the details, but you just mess up one small detail and you can get an image like that. Well, this is actually a delightful way of failing. I don't know about you, but most of my programming errors in these calculations are quite like this. They tend to look like this.
So I usually get renderings like this, or like that. So if you make a mistake in global illumination rendering, even your errors look better than in other fields.
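The Van der Corput and Halton constructions mentioned above can be sketched in a few lines (a hedged sketch; the function names are mine): the base-b digits of the sample index are mirrored around the radix point, which deterministically fills the unit interval ever more finely.

```python
def radical_inverse(n, base):
    """Van der Corput radical inverse: mirror the base-`base` digits
    of n around the radix point, e.g. n = 6 = 110 in base 2 becomes
    0.011 in base 2, which is 0.375."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton_2d(n):
    """n-th point of the 2D Halton sequence (bases 2 and 3).
    Successive points fill the unit square far more evenly than
    uniformly random points do."""
    return radical_inverse(n, 2), radical_inverse(n, 3)
```

Using different prime bases per dimension is what keeps the dimensions decorrelated; reusing the same base in two dimensions is exactly the kind of "correlating dimensions" bug described above.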
Two Minute Papers
https://www.youtube.com/watch?v=i6KDgYk5Nzg
TU Wien Rendering #25 - Path Tracing, Next Event Estimation
We finally have every tool in our hands to solve the Holy Rendering Equation! Furthermore, we extend it with next event estimation (in other words, explicit light sampling) to handle occluded and point light sources well. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Let's take a look at what the exact algorithm looks like. I have a recursive function where what I check first is whether I have reached the maximum depth that I would like to render. This means that if I say that I will trace 5 bounces, then this is going to be 5. Did I reach this number of bounces? Yes, okay, then just stop and return black. Then what I'm looking for is the nearest intersection. You remember from the previous lecture that this means parametric equations which I solve for t. What I'm interested in is that I am intersecting a lot of objects, and I only care about the very first intersection. And if I didn't hit anything, then I will return a black color, because there's no energy coming in along this ray. Now, if I have the intersection with the object, I will be interested in the emission and the material of this object. The emission means that if this is a light source, then it's going to have emission, and the material can be, for instance, diffuse, glossy or some complicated multi-layer material. This I'm going to store. What's up next? Well, I would like to construct a new ray, because I will trace the next ray. This will start wherever I hit this object. So if I hit the table, I will create a new ray that starts from the table, and I will set the outgoing direction according to some rule that we still have to find out. Let's say that what we have here is a random unit vector on the hemisphere above the point where the object was hit, and this sounds like a diffuse case to me. So I generate a random unit vector on this hemisphere and this is going to be the outgoing direction. Now let's add together the elements of the rendering equation. I have the cosine theta which is the light attenuation. I have the BRDF term, and in the BRDF term it seems that they have also included this cosine theta, the light attenuation. This is the albedo of the material.
How much light is absorbed and how much is reflected. And then what I would like to do is call the very same function that you see in line of code number one. So this is a recursive function. I will start the same process again with a new ray, a new starting point and a new direction. And in the end, if I have traced a sufficient number of rays, I will exit this recursion and collect the result in this variable that's called reflected. And in the end this is the elegant representation of the rendering equation: the emission, the L_e, plus the integrated function, which is the BRDF times this reflected term, which is all the recursive terms. So this means that I shoot out this ray into the hemisphere, there are going to be many subsequent bounces, and I add up all this energy into the reflected incoming light. So this is the pseudocode. This is not something that you should try to compile or anything like that, but this is what we will code during the next lecture. This is just a schematic overview of what is happening exactly. It's actually very clear: we shoot the ray, we bounce it around the scene, and then we hopefully hit the light source at some point. And even if we hit the light source we continue, but hitting the light source is important because this is where the emission term comes from. Let me show you what's going on if we don't hit light sources. So this L_e is the emission term on the left side here. We add this to the end result at every recursion step. And the fundamental question is: what if we have a long light path that doesn't hit the light source? We are using completely random sampling, or maybe some smart importance sampling, and we never pick up this emission term. What does this mean? The radiance that we get from the program is going to be zero. So the corollary of this is that you will get an output only from light paths that hit the light source.
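The recursion just described, stop at a depth limit, find the nearest hit, then return emission plus BRDF times the recursively gathered light, can be sketched as below. The `FurnaceScene` stand-in and all names are illustrative assumptions (every ray hits the same diffuse surface, and the cosine term and sampling pdf are folded into the albedo), not the lecture's actual code:

```python
from dataclasses import dataclass

MAX_DEPTH = 5  # maximum number of bounces, as in the lecture

@dataclass
class Hit:
    emission: float  # L_e: nonzero only for light sources
    albedo: float    # how much light the diffuse surface reflects

class FurnaceScene:
    """Toy stand-in for a real scene: every ray hits the same
    diffuse surface, so intersect() never returns None."""
    def __init__(self, emission, albedo):
        self._hit = Hit(emission, albedo)

    def intersect(self, ray):
        return self._hit  # a real version returns the nearest hit or None

def trace(scene, ray, depth):
    if depth >= MAX_DEPTH:   # reached the bounce limit: stop, return black
        return 0.0
    hit = scene.intersect(ray)
    if hit is None:          # missed everything: no energy along this ray
        return 0.0
    # a real tracer samples a random hemisphere direction and builds a
    # new ray from the hit point; geometry is abstracted away here, with
    # the cosine term and sampling pdf folded into the albedo
    reflected = trace(scene, ray, depth + 1)
    return hit.emission + hit.albedo * reflected  # rendering equation

# 5 bounces off a 0.5-albedo emissive surface: 1 + 1/2 + 1/4 + 1/8 + 1/16
print(trace(FurnaceScene(emission=1.0, albedo=0.5), ray=None, depth=0))  # 1.9375
```

Note how capping the recursion at `MAX_DEPTH` truncates the geometric series: that is exactly the bias the Russian roulette technique removes.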
If you don't hit the light source, you don't know where the light is coming from, so you will return a black pixel. And this is obviously a really bad thing, because you're computing samples and samples and samples, perhaps on your GPU, but they don't return you anything. So it's a very peculiar fact about simple path tracing that if we have a small light source, the convergence of your final result is going to be slower. Why? Small light sources, more variance, slower convergence. Because we need a random ray to hit the light source, and if it's small, then we rarely hit it. Exactly. So the relative probability of hitting the light source is going to be less for a small light source. And this goes up to the extreme where we have a point light source. And if we have a point light source, we will see that we are in trouble, because what I would expect from my path tracer is to return something like this image, where we imagine a point light source in here. But this is not what we will end up with. So I would expect it to return the correct result. Many people have reported on many forums on the internet: hey, I implemented it, but this is what I got. This doesn't work at all. All these Fresnel equations, Snell's law, total internal reflection, Monte Carlo integration, for a black image. I mean, I could generate this with five lines of C++. Why do we even bother? We will get nothing. Why is that? Point light source, black image. Why? Yes. Exactly. Exactly. So a point represents a location in mathematics. It does not have area. Hitting a point light source is impossible, because this is the same as what you would study in statistics: if you have one number on a continuous scale, what is the probability of hitting exactly this number? Zero, because it's a point. It has no surface area. It's infinitely small. We cannot hit it. So this is the reason for your black image.
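The "small light source, slower convergence" argument can be checked numerically. This hypothetical experiment shoots uniform random directions and counts how often they land inside a spherical cap of a given angular radius, standing in for a light source of that apparent size:

```python
import math, random

def hit_probability(light_angular_radius, n_samples=200_000, seed=0):
    """Fraction of uniform random directions that fall inside a spherical
    cap of the given angular radius (our stand-in for a light source).
    The analytic answer is (1 - cos(radius)) / 2."""
    rng = random.Random(seed)
    cos_cap = math.cos(light_angular_radius)
    hits = 0
    for _ in range(n_samples):
        z = rng.uniform(-1.0, 1.0)   # uniform direction: z = cos(theta)
        if z > cos_cap:              # direction lands inside the cap
            hits += 1
    return hits / n_samples
```

`hit_probability(0.5)` is about six percent, `hit_probability(0.05)` is a small fraction of a percent, and `hit_probability(0.0)`, the point light source, is exactly zero: no random ray ever hits it, which is precisely the black-image problem.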
If you read the forums on the internet, you will find plenty of this. Now, we could also sum up our findings in internet meme style, if you will. So if you would like to compute an image with a point light source, without the technique that is called next event estimation, then you would usually expect a wonderful image. But this is what you're actually getting. Now, the question is obviously: how do we work around this? What we can do is that every time we hit some object in the scene, anything that is not a light source, we compute the direct effect of the light source on this point in the scene. So this is a schematic to show what is going on. I start from the viewer, I hit this sphere, and I don't just continue tracing the new ray outwards, but I also connect this point to the light source, and I compute the direct illumination. This is the schematic for path tracing without next event estimation, and this is with next event estimation. So at every intersection, I connect to the light source. In this case, this connection is occluded. In this case, it is unoccluded, and in the third bounce, you will get some contribution from the light source. The question is, how do we do this exactly? Well, this was the topic of Assignment 0. The formula that you see in Assignment 0 is exactly the same thing as what you should be using. What was in there? Well, there was a term with the 4 pi, because if you have a light source that's a sphere, then what we were interested in is how much radiance is emitted in one direction. So you will need to divide by the area of the surface, which is a division by 4 pi, and there's going to be the attenuation term, which is the distance squared. Same as in the law of gravitation, or in the law of electric fields. It means that the further away I am from the light source, the less light is going to arrive.
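The Assignment 0 formula described here, division by 4 pi and by the squared distance, plus the cosine term, can be sketched as follows (the function name and calling conventions are made up for illustration):

```python
import math

def direct_light(total_power, light_pos, point, normal):
    """Direct contribution of a small spherical light at a surface point:
    the emitted power is spread over the whole sphere of directions
    (the 4*pi term), attenuated by the squared distance, and weighted
    by cos(theta) between the surface normal and the light direction."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist2 = sum(v * v for v in to_light)
    dist = math.sqrt(dist2)
    direction = [v / dist for v in to_light]
    cos_theta = max(0.0, sum(d * n for d, n in zip(direction, normal)))
    return total_power / (4.0 * math.pi * dist2) * cos_theta
```

The inverse-square behavior is easy to verify: doubling the distance to the light quarters the contribution, just like in the gravitational law.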
This is a really good technique for multiple reasons. One of the reasons is that you get contributions from every bounce along the computed light path. Before I proceed, I would like to point out that here we are talking about this Le, the emission term. We are adding parts of this emission term at every bounce. So if I hit P1, I add some contribution. If I hit P2, I add some contribution. If I hit P3, then I also add some contribution. But when I hit the light source itself, I don't add the emission term anymore, because I would be adding it twice. So this one Le that you would add when you hit the light source by chance is now distributed into individual contributions. Why is this great? One, you can render point light sources, because the direct effect you can actually compute, even though you cannot hit the light source itself. Two, you will have less variance, because it's not like I either hit the light source or I don't: I statistically always hit the light source, unless there are occluders. So I'm adding many samples with small variance, not one sample in a lottery where you either win or you get nothing. So I can lower the variance, which means that my images will converge faster. And the other thing is that because there are contributions from every bounce, I can separate direct and indirect illumination. A lot of people do this in the industry, because the movie industry is nowadays using path tracing. I cannot state that as an all-encompassing statement, but this movie, for instance, was made using global illumination. Why do you think they moved to path tracing? Why? Because it looks insanely good and it is very simple. And it took more than 20 years for them to replace their old systems, which they really liked. But now they are using global illumination. It has taken a long time, but the benefits of global illumination are now there for everyone to see.
And what they are finding is that they get a physically based result, but this is not always what the artist is looking for. If you have worked together with artists, they will say: okay, you have computed a beautiful image, but I would like the shadows to be a bit brighter. The engineers say: well, this is not possible; I can compute what would happen in physical reality, and that's it. But the artists are not interested in physical reality. They are interested in their own thoughts and their own artistic vision, and they would like to change the shadows. So you could technically make one of the light sources brighter, and then the shadows would get brighter. But then the artist says: hey, don't change anything else in the scene, just the shadows. And then you could pull out your knowledge of the rendering equation and see that the radiance coming out of one point affects its whole surroundings. So you cannot just make something brighter without the nearby things also getting brighter; you cannot circumvent that. What you can do with next event estimation is that you generate an image from the first bounce. So you get one image in which you deposit the radiance that you measured at P1. That's one image. And then you create another image which only contains the second bounce, P2, and upwards. So you have multiple images, and you could technically just add up all these images with simple addition, and you would get physical reality. But if the artist says, I want stronger indirect illumination, then you grab this buffer, this image that holds the second and higher-order bounces, and you can do some Photoshop magic, or whatever you want, without touching any of the others. So you have a nice separation of direct and indirect illumination. The movie industry loves this; they are using it all the time.
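The buffer trick can be illustrated with a toy 2x2 "image" (all numbers are made up): the direct and indirect buffers sum to physical reality, and scaling only the indirect buffer implements the artist's request without touching direct light.

```python
# Hypothetical per-bounce radiance buffers for a tiny 2x2 image.
# 'direct' holds the first-bounce radiance (measured at P1),
# 'indirect' holds everything from the second bounce upwards.
direct   = [[0.8, 0.1], [0.0, 0.5]]
indirect = [[0.1, 0.2], [0.3, 0.1]]

def combine(direct, indirect, indirect_gain=1.0):
    """Adding the buffers gives physical reality (gain = 1.0); an artist
    can scale only the indirect part without changing direct lighting."""
    return [[d + indirect_gain * i for d, i in zip(dr, ir)]
            for dr, ir in zip(direct, indirect)]

physical = combine(direct, indirect)        # the physically based sum
boosted = combine(direct, indirect, 1.5)    # "stronger indirect illumination"
```

Note that in `boosted` the direct contribution of every pixel is untouched; only the second-and-higher-bounce energy changed, which is exactly the separation the artists are asking for.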
And later you will see some algorithms that behave differently on the direct illumination and on the indirect illumination. You can only do that if you separate these terms. So let's see path tracing now with next event estimation. I have the very first bounce, and before I continue my ray, I will send the classical, super classical shadow ray to the light source. I'm going to choose a point on the light source, and I will add this direct contribution of the light source at this point. And then I continue. Let's go back to the terms. Sorry, we use many terms for the very same thing. This is why I write out all of these terms: if you read the forums, if you read the papers, you will see these terms, and they all mean the same. So explicit light sampling and next event estimation are the very same thing. So I continue my ray, and I again connect to the light source with a shadow ray. And then I continue on and on and on. And imagine that this third one is an outgoing ray that actually hits the light source. If it does, I don't add the emission term there, because I already did it in the previous bounces. This is very important. Now you have seen the results for the point light source: nothing, versus something that's pretty good. But even if you have a reasonably big light source, like this large area light source, I told you that you get this variance suppression effect as well. So this is some number of samples per pixel; I think it's two, maybe three samples per pixel. This means that I grab one pixel and I send two or three rays through it. Now, this you can do in two different ways, because if you start to use renderers, then you will see how this exactly happens. Some renderers render tiles. What they do is that they start with some pixels, and if you say, I want 1000 samples per pixel, then it will take one, or four, or whatever number of threads you have on your machine.
It will take that many pixels and shoot more and more samples through them, and after it got to 1000 samples, it will move on and show you a really well-converged pixel. And what we call progressive rendering is the opposite. You pick one pixel, you shoot a ray through it, but only one. And then you go to the next, and then you go to the next. And then you will see an image that has some amount of noise, and progressively you will get less and less noise. So what you see here is a progressive render. Now, this is without next event estimation. So we only get contributions if we hit the light source; if we don't, we get a black sample. Now, look closely. This is with next event estimation. So there's a huge difference. Such a simple technique can speed up the rendering of these scenes by orders of magnitude. You can also play with this program, by the way. It is implemented on Shadertoy, so when you read this at home, just click on the link and play with it. It's amazing.
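Progressive rendering, as described above, boils down to keeping a running average of one-sample passes over a pixel. A hypothetical single-pixel sketch (the sampling function is a stand-in for one traced ray):

```python
import random

def progressive_estimate(sample_fn, n_passes, rng):
    """Progressive rendering for a single pixel: one sample per pass,
    keeping a running average that gets less and less noisy."""
    mean = 0.0
    for k in range(1, n_passes + 1):
        # incremental update of the mean; after pass k this equals
        # the average of the first k samples
        mean += (sample_fn(rng) - mean) / k
    return mean
```

For a noisy stand-in pixel whose true value is 0.5 (a uniform random sample), the running average converges toward 0.5 as the number of passes grows, which is the "progressively less noise" behavior from the slide.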
Two Minute Papers
https://www.youtube.com/watch?v=zZZ4xW0WaY0
TU Wien Rendering #24 - Importance Sampling
Monte Carlo integration is a fantastic tool, but it's not necessarily efficient if we don't do it right! Solving the rendering equation requires a lot of computational resources, so we had better use our math kung-fu to squeeze every drop of performance from the renderer. By drawing samples from our function with a probability proportional to their function value (importance sampling), we can substantially improve our convergence speed. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Let's talk about importance sampling, because so far we have always been talking about uniform distributions. We will see that it's usually not a great idea to sample just any function with a uniform distribution. What I'm usually looking at is that I have a function that I would like to reconstruct, and I have a fixed sample budget. From this budget, like x samples, or x samples per pixel, I would like to get the best estimation possible. Now, we have already written the formula for importance sampling. This was when I divided by the p of x, because I don't only have the f, I also take into consideration the p. And there I can plug in more or less any distribution: it can be a uniform distribution, it can be a Gaussian distribution, it can be many things. Now, take a look at this. I would like to integrate this function, which is the blue line. It's a spiky function, and imagine that the green bars are the actual sampling distribution that I use. It doesn't look like a good idea. Can anyone tell me why? Because the green bars are too high on the right side and too low in the middle. Why is this a problem? It's not representing the actual function. Exactly. So it should represent the actual function that we would like to sample. Why? Well, let's go a few slides ahead. If the function takes large values in some regions, this means that if I miss out on the reconstruction of such a region, then my error is going to be high. So what you can say is that if there is a Gaussian-like or spiky function, I want to put more samples where the spike is, because that's where a large error can be made. If I can reconstruct this large contribution better, I'm doing much better than if I were sampling the parts that have very small values, the flat regions that are almost zero. So let's put more samples into the regions where the function is actually larger. And if we do this correctly, then what we're doing is called importance sampling.
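A minimal numerical illustration of this idea (the integrand and the pdf are made up for the demo): integrating f(x) = 3x^2 on [0, 1], whose exact value is 1, with a uniform pdf versus with p(x) = 2x, which roughly follows the shape of f.

```python
import math, random

def f(x):
    # simple test integrand: the integral of 3x^2 on [0, 1] is exactly 1
    return 3.0 * x * x

def uniform_estimate(n, rng):
    # uniform sampling, p(x) = 1: just average f at random points
    return sum(f(rng.random()) for _ in range(n)) / n

def importance_estimate(n, rng):
    # importance sampling with p(x) = 2x, proportional-ish to f;
    # inverse-CDF sampling: x = sqrt(u).  Each sample is weighted
    # by f(x) / p(x), which here collapses to 1.5 * x.
    total = 0.0
    for _ in range(n):
        x = math.sqrt(1.0 - rng.random())   # in (0, 1], avoids x = 0
        total += f(x) / (2.0 * x)
    return total / n
```

Both estimators are unbiased, but the per-sample variance drops from 0.8 (uniform) to 0.125 (importance sampled), so for the same sample budget the importance-sampled estimate is considerably less noisy.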
So, what we're looking for is that these green bars should match the blue function. If I sample it, I am looking for the expected value of f over p. So, I divided and multiplied by p(x) in the expected value formula. And the question is, what should be the p that I apply here? It can be the uniform distribution, or it can be an arbitrary distribution. What would be a good distribution? One that is proportional to the function: where the function is large, the sampling distribution has to sample that region often, so it should also be large. Where the function is small, the distribution should also be small. And we also said that if there are regions where the function is zero, I don't want to put any samples there at all, because there is nothing to reconstruct: the area under the function is zero there. We will deal with constructing distributions like that later, but for now, imagine that I have in my hand a sampling distribution that represents the function well. Now, we have talked a bit about why this matters. This should give me quite a bit of an advantage, because otherwise it's not worth the effort. So, this is a rendered image with no importance sampling. If you look closely, you will now see results with importance sampling; it was running for the same amount of time. And this is the difference that you get with simple importance sampling. It means that wherever there is more light, I will put more samples, and in the darker regions I will be more economical with my samples. Let's take a look at another example. You can see how noisy this region is next to the car. With importance sampling, this is accounted for much better. Now, we are finally at the moment where we can attempt to solve the rendering equation. This infinite-dimensional singular integral equation, this problem child, is so difficult that at first it seems no one should ever be able to solve it.
But now, it seems that we have every tool we need in order to solve it. So, just again, the intuition: the first term after the equality sign means that there are objects that are light sources, if you will, and this I have to account for. But this is not the only source of light. As these objects emit light, there will be other objects that reflect this light. This means that I may emit an amount of light, and I also reflect an amount of light, taking into consideration light attenuation and the BRDF, which describes the material properties of the object that I have at hand. So, let's relate this to Monte Carlo integration. Again, the formula: I am sampling f over p, and this approximates the integral of f(x) from a to b. Now, what is f? F is what you see up here on the right, this whole thing; sorry, just the integral part. And p will be something we choose. So, I just substitute the very same thing here on the right side: incoming light times the BRDF times the light attenuation factor. And then there is the p, which is now a sampling probability for the outgoing direction. This means that I hit an object and I have to make a choice: which outgoing direction should I go for? Where should I continue this ray? So, this is going to be one direction on the hemisphere. This is the Monte Carlo estimator for the actual integral. Now, let's imagine that we are trying to solve this for a diffuse object. A diffuse BRDF is rho over pi. Previously it was 1 over pi; why? How can a BRDF be just a number? A perfectly diffuse material means that all possible outgoing directions have the same probability. If I hit this table (we talked about the fact that it is actually glossy), but if it were perfectly diffuse, the light would hit it somewhere and the outgoing direction could be anywhere on the illumination hemisphere. They all have the same probability.
What does rho mean? Rho is the albedo of the material. Because if I just say 1 over pi, this means that every ray that comes in will have an outgoing ray. So, this object would be completely reflective; it wouldn't absorb anything. Most objects are not like that. This absorption is also wavelength dependent, and we can represent it with rho. Now, how does the equation look? I just substituted rho over pi for the BRDF. So, it seems that we know everything in there except the incoming radiance. So, what do we do with the sampling distribution? When we hit this diffuse object, we send out samples and try to collect the incoming radiance, the L_i, with this sampling distribution. And the question is, for this case, what would be a good sampling probability? Which function should we use to sample this diffuse surface? Now, what we said is that this p, the denominator, should be proportional to the numerator. Now, L_i we don't know; this is the part that we cannot really estimate. I would have to send many samples out on this hemisphere to know exactly how much light is coming in, but by the time I know how much light is coming in, I have already done the sampling. At that point I am not interested in the sampling distribution anymore, because I have already computed the result. So this part we will leave out of the importance sampling; this we cannot know as of now. But this rho over pi times cosine of theta, we can deal with. So, let's imagine a sampling distribution which is cosine of theta over pi. The goal of this is that these terms will cancel each other: I have a cosine theta in the numerator and the denominator, and the same with pi. So only the remaining part will stay there. I could technically also put the albedo of the given material into the sampling distribution, but let's stay general for now. So, in the end, I have this simple equation. Look at this.
This is what is going to be the solution of this infinite-dimensional integral. What it says is that I'm going to send samples out on this hemisphere and I'm going to average them. That's it. And then, if you do something like this and evaluate it at every pixel, you can render images like the ones we are going to see.
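The simplified diffuse estimator from this lecture can be sketched as follows (a minimal toy, not the course renderer; the constant "sky" radiance is a made-up test case). Samples are drawn with probability cos(theta)/pi, so the cosine and the pi cancel and each sample contributes just rho times the incoming radiance:

```python
import math
import random

def cosine_sample_hemisphere(rng):
    # Malley's method: sample the unit disk uniformly, project up to the
    # hemisphere; the resulting directions have pdf cos(theta) / pi.
    u1, u2 = rng.random(), rng.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - u1)))

def diffuse_radiance(rho, incoming_radiance, n, rng):
    # For a diffuse BRDF rho/pi sampled with p = cos(theta)/pi, the estimator
    # collapses to rho * average of the incoming radiance samples.
    total = 0.0
    for _ in range(n):
        w = cosine_sample_hemisphere(rng)
        total += incoming_radiance(w)
    return rho * total / n

# Made-up test case: a constant sky of radiance 1.0 from every direction.
L_i = lambda w: 1.0
rng = random.Random(0)
print(diffuse_radiance(0.5, L_i, 10000, rng))  # 0.5, with zero variance here
```

For a constant sky the analytic answer is rho (the cosine integrates to pi over the hemisphere), and since every sample contributes exactly rho times L_i, the estimate is exact.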
Two Minute Papers
https://www.youtube.com/watch?v=Su6mJp6NYY4
TU Wien Rendering #23 - Monte Carlo Integration: The Solution
In segment #17, we encountered the problem with Monte Carlo integration: in some cases, it seemed to work well, not so much in others. What went wrong? Instead of just giving you the answer, I'll try to shed light on the nature of the problem from multiple angles and we will then successfully crack this nut together. The important lesson here is not only the solution to this problem, but you can learn how to approach and formalize a problem that you only have an intuitive understanding of. I hope this will help you along your journey of expanding your knowledge. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
We have covered some problems. We wanted to integrate this function, two times sine squared of x, from 0 to pi. And through engineering, or through mathematics, we realized that the result should be pi. What we did is write code that would integrate this through Monte Carlo integration, and we got one instead. So there is some problem; there is some insufficient knowledge that we have to remedy in some way. So why don't we take a look at another example, which will reveal what we are missing. Let's integrate this unbelievably difficult function, f(x) = x, from 1 to 5. Obviously the antiderivative is x squared over 2, and the brackets show that we have to substitute from 1 to 5. What we get in the end is 12. Now, let's do Monte Carlo integration. Let's pretend that we don't know how to integrate this function analytically. I take three completely random samples of this function. What does it mean? I evaluate f(x) at 1; if I evaluate it at 1, I get 1. I evaluate it at 3 and I get 3, and at 5 I get 5. So I have three samples now, and what I do is simply average them: 1 plus 3 plus 5 over 3. The end result is 3. But it shouldn't be that, right? Because the result through analytic integration is exactly 4 times that. So something is definitely wrong with this Monte Carlo integration scheme. What we know is that 3 is exactly one quarter of 12. So we see that there is a difference of a factor of 4. And if you take a closer look at the integration domain, you will see that 4 is exactly the size of the integration domain: we are integrating from 1 to 5. So, just empirically: this is one angle to look at the problem, and you will see multiple angles; this is more like the engineering way of solving things. You don't know how to derive the full and correct solution, but you see that there is a factor of 4, and 4 is the size of the integration domain.
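The miscalculation above is easy to reproduce (a direct sketch of the lecture's three-sample example):

```python
# f(x) = x on [1, 5]; the true integral is x^2/2 evaluated from 1 to 5, i.e. 12.
f = lambda x: x

samples = [1.0, 3.0, 5.0]                  # the three "random" samples
naive = sum(f(x) for x in samples) / len(samples)
print(naive)                               # 3.0, a quarter of the true answer

domain = 5.0 - 1.0                         # the missing factor: the domain size
print(domain * naive)                      # 12.0
```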
Well, why don't we multiply by that and see what happens? And it works: if we multiply by the size of the integration domain, we get the result that we are looking for. So let's change the code. I multiply the previous function by the size of the integration domain, which goes from 0 to pi. And now I get pi as the result, which is the correct solution for the previous integral. Now, this is great, this looks very simple, and the technique seems to work. But we still don't really know what is happening here. So we should use some black magic, or mathematics if you will, to see what is going on under the hood. Imagine that we sample a function with the uniform distribution on 0 to pi. What does it mean? I have an interval from 0 to pi and I generate random numbers on it, and every single number has the same probability. So this distribution looks like 1 over pi, regardless of the x parameter, because it doesn't matter which part of the domain I choose; it has the same probability of being chosen. Now, what we are essentially doing is integrating a function f(x) multiplied by this sampling probability. Why? Because imagine that some regions of the function would have zero probability to be sampled. Imagine that I'm integrating from 0 to pi, but I only take samples from 0 to 2. Then there is a region of the function that I'm never going to visit, and I don't integrate that part. So that's one intuition. The other intuition is that if I draw samples not with a uniform distribution but with a different distribution, then in the average that I compute, some regions of the function will be overrepresented, because I have a higher chance of sampling those. So what we are doing is multiplying this f(x) with the sampling probability p(x). Now, this p(x) is, in this case, 1 over pi, the uniform distribution, which is obviously a constant, so we can pull it out of the integral.
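A minimal sketch of the corrected routine (assumed Python, not the lecture's exact code): averaging f over uniform samples and multiplying by the domain size, which is the same as dividing by the uniform pdf 1/(b - a), recovers pi for the earlier integral.

```python
import math
import random

def mc_integrate(f, a, b, n, rng):
    # Average f over uniform samples on [a, b], then multiply by the domain
    # size (b - a); equivalently, divide each sample by the uniform pdf.
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

f = lambda x: 2.0 * math.sin(x) ** 2
rng = random.Random(0)
print(mc_integrate(f, 0.0, math.pi, 100000, rng))  # close to pi
```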
And in the end, we have the integral of the function, divided by pi. But this is not what I'm looking for; I just want to integrate the function itself. So I need to make this pi disappear. I have this 1 over pi multiplier; what do I need to multiply by to get only the function? What should the question mark be? Pi. Excellent. Exactly. So I just cancel this 1 over pi multiplier, which comes from the sampling distribution. And if you take a look at it, yes, this is also the size of the integration domain. So this is a slightly more rigorous way to understand what is going on: through a derivation, not just empirical stuff. We know a bit more about what is happening: I have a sampling distribution that I need to cancel out. If I keep the 1 over pi multiplier, I get the wrong result, and if I use this scalar multiplier that I'm looking for, then I get the correct solution. Let's examine the whole thing from different angles; I would like to show you how to solve the same problem from multiple different angles. So, a super quick probability theory recap. We have an expected value; this is what we're looking for. What is an expected value? An expected value means that there are values of something, and there are probabilities of getting these values. So let's take the expected value of a dice roll. How does it work? I can roll from 1 to 6, and all rolls have the same probability, 1/6. So the values are 1, 2, up to 6, and the probabilities are all the same, 1/6. And if I sum this up, it says that the expected value of the dice roll is 3.5. This means that if I need to guess what the next dice roll will be, then this is the best value in order to minimize the error from the expected outcome.
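The dice computation written out, using exact fractions:

```python
from fractions import Fraction

# Expected value of a fair die: sum of each value times its probability.
values = [1, 2, 3, 4, 5, 6]
prob = Fraction(1, 6)
expected = sum(v * prob for v in values)
print(expected)         # 7/2
print(float(expected))  # 3.5
```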
Now, if we would like to compute the expected value of something, this means that I take the values that this something can take, and I multiply them with the probabilities of these events. For instance, it is impossible to roll a seven with a die. Theoretically, you could put a seven in there as the "something", but it would have zero probability; therefore, it would not show up in the sum. And this is the discrete case. For the continuous case, we don't really need to do anything very serious; we just change the summation to integration. So we are not using a discrete sum, but we are integrating continuous functions, and we are using continuous probability distributions. Now, let's introduce this notation. What I'm looking for is the expected value of this function f(x) after n samples, because in Monte Carlo integration, you add more and more samples to get a more faithful representation of the integral. What this means is that f is the "something" and p is the sampling distribution. What we can do is create a discrete sum that takes samples of this function and then multiplies by the size of the domain. And obviously, since we are taking a sum, we need to divide by n, because the more samples we take from the function, the larger this number gets; so this is the averaging part. Now, you always have to keep looking at the relevant quantities. The expected value of this f(x) means that in the integration, I multiply it by the sampling probability. And on the right side, in the Monte Carlo estimator, I will have the same quantity as on the left side: if I'm looking for the expected value of f(x), then I will sample f(x). Now, if you take a look at it, you can see that this is just an approximation. This is not exactly the integral that we're looking for.
But there is a multitude of theorems that show that if you could use an infinite amount of samples, then you would approach the actual integral. Most courses on Monte Carlo integration show you different ways of proving this, but this is not what we are interested in; we will just believe that this is what is happening. It's actually very intuitive why this happens: remember the sine wave that we sampled with all those random points. You could see that if you have a lot of samples, you get a good estimation of the area under the curve. Now, let's try to use different sampling distributions; in a few minutes, you will see why this is a good idea in some cases. So, I would like to integrate this f(x). I now apply a transformation that is essentially the identity: I didn't do anything to my f(x), I multiplied by p(x) and then I divided by it. This is almost like multiplying by a scalar and then dividing by the same number; I get the very same thing. But if I want to write this as the expected value of something, then it looks a bit different, because f over p is the "something" and p(x) is the sampling probability. So what we have now is the expected value of f over p. And the question is, what is the Monte Carlo estimator for this? What we concluded in the previous slides is that we should sample the very same quantity as what we see in the expected value. So I will sample f over p. I am not only sampling f; I am sampling f over the arbitrarily chosen probability distribution. Now, there are some good readings on how to do this well and why this is useful. If you would like to know more, please read some of these documents. They are really well written, and that's a rare thing nowadays, because I have seen lots of not-so-well-written guides on Monte Carlo integration.
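As an illustration of why the choice of p matters (a toy sketch; in practice we could never use this particular p, because a distribution exactly proportional to f requires already knowing the integral we are after): if p(x) = f(x)/pi for the integrand 2 sin^2(x) on [0, pi], then f/p equals pi for every single sample, and the estimator has zero variance.

```python
import math
import random

f = lambda x: 2.0 * math.sin(x) ** 2  # integrand on [0, pi]; its integral is pi
p = lambda x: f(x) / math.pi          # pdf exactly proportional to f

def sample_p(rng):
    # Rejection sampling from p: propose uniformly on [0, pi] and accept with
    # probability p(x) / c, where c = 2/pi is the maximum of p.
    while True:
        x = rng.uniform(0.0, math.pi)
        if rng.random() < p(x) / (2.0 / math.pi):
            return x

def mc_fp(f, p, sample, n, rng):
    # Estimator for E[f/p]: average f(x) / p(x) over samples drawn from p.
    total = 0.0
    for _ in range(n):
        x = sample(rng)
        total += f(x) / p(x)
    return total / n

rng = random.Random(0)
print(mc_fp(f, p, sample_p, 100, rng))  # pi, up to floating-point rounding
```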
It took me a very long time to find something with the quality that I am happy to give out for you to study. Now let's solve the actual example that we had previously with this formula: f over p times p, so I am still integrating only f. The function that we wanted to integrate was two times sine squared of x, and the sampling probability is 1 over pi, the uniform distribution over 0 to pi. And this indeed gives the integral of the original function. So I am looking for the expected value of f over p, and therefore I am going to sample f over p in my code. Let's put this in source code. If you look here, I now divide by the sampling distribution, which is 1 over (b minus a); here b is pi, and this a should have been 0 in this case, so I apologize for the difference in the code. I put the 2.5 in there because if a is always 0, then you may write code that works for integration from 0 to something, but not from 1 to something; this is a good thing to check in your implementation. So I apologize, this a should be 0. But if you compute the actual result that you are looking for, then you will get pi. So this is the f, the first term in the sampling on line 36, and after the division we have it. Wonderful. So this works, and from multiple angles we now understand how exactly this thing is working. Now, if you write a good Monte Carlo integration routine and you solve the rendering equation with it, what you will see is that as you add more samples, you first get a really noisy image, and then, as you add more and more samples, this noise will slowly clean up. And if you think back to my previous lecture, we talked about over- and underestimations of the integral, and this is exactly what shows up in images as well. When we are sampling a function here, the quantity I am interested in is the radiance.
But as I add more and more samples, before I converge, I will get values that are larger than the actual intensities, and I will get values that are smaller. So this is what shows up visually as noise. What you are always looking for is this samples per pixel metric. When you have a noisy image, you need to know how many samples have been used per pixel, and if it is still noisy, then you need to add more samples. Here is also some visualization of the evolution of the image after hundreds and then 100,000 samples. Depending on the algorithm, there are multiple ways of solving the rendering equation. You could have smarter algorithms that take longer to compute one sample because they are doing some smart magic, and this would mean that you need fewer samples per pixel to get the final image. The first algorithm that you will study is actually the naive algorithm, path tracing. It usually takes a tremendous amount of samples to compute an image, but since it is a simple algorithm, you can use your GPU or CPU to dish out a lot of samples per pixel every second. Now, a bit of a beauty break: this is what we can get if we implement such a path tracer. This one was rendered with LuxRender. And here is a recent example. Does everyone know who this is? Just raise your hand. Okay, quite a few people. Excellent. So this is actually a character render from Game of Thrones. And no spoilers, please, I am not up to date. This is a timely example because Game of Thrones is running at the moment. Obviously, we all love the show. And there is also skin being rendered, so there is tons of interesting stuff in this image. And you can render this with a simple path tracer. We will put together the theoretical part in the second half of this lecture, and then we will implement it in the next lecture. So when I see renders like this, what I feel is only comparable to religious, spiritual wonder.
It is absolutely amazing that we can compute something like this using only mathematics, these very simple things that I have shown you. And the other really cool thing is that we are writing these algorithms, we are creating products that use these algorithms, and these are given to world-class artists who are just as good at art as we are at engineering. They are also giving it their best to create more and more beautiful models, and we can work together to create stuff like this. So this is absolutely amazing.
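The noise behavior described in this lecture, over- and underestimates shrinking as samples per pixel accumulate, can be shown numerically. This is an illustrative Python sketch (not renderer code), with a simple integrand standing in for the per-pixel radiance integral:

```python
import math
import random

random.seed(2)

# Stand-in for the per-pixel radiance integrand; its true integral is pi.
f = lambda x: 2.0 * math.sin(x) ** 2

def estimate(n):
    """One noisy 'pixel': a Monte Carlo estimate using n uniform samples."""
    return math.pi / n * sum(f(random.uniform(0.0, math.pi)) for _ in range(n))

def rms_error(n, trials=300):
    """Average deviation from the true value over many independent pixels."""
    total = sum((estimate(n) - math.pi) ** 2 for _ in range(trials))
    return math.sqrt(total / trials)

errors = {n: rms_error(n) for n in (16, 64, 256)}
# Quadrupling the samples per pixel roughly halves the RMS noise: O(1/sqrt(N)).
```

This is the visual convergence the lecture describes: the image is unbiased at every sample count, but the pixel-to-pixel scatter around the true radiance only dies off with the square root of the sample count.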
[{"start": 0.0, "end": 3.0, "text": " We have covered some problems."}, {"start": 3.0, "end": 6.0, "text": " So we wanted to integrate this function"}, {"start": 6.0, "end": 10.0, "text": " two times sine squared of x from 0 to pi."}, {"start": 10.0, "end": 12.0, "text": " And through engineering or through mathematics,"}, {"start": 12.0, "end": 15.0, "text": " we realized that this should be pi."}, {"start": 15.0, "end": 18.0, "text": " And what we did is that we ramped the code"}, {"start": 18.0, "end": 22.0, "text": " that would integrate this through multi-hound iteration."}, {"start": 22.0, "end": 25.0, "text": " And we got one instead."}, {"start": 25.0, "end": 27.0, "text": " So there is some problem."}, {"start": 27.0, "end": 30.0, "text": " There is some insufficient knowledge that we have"}, {"start": 30.0, "end": 32.0, "text": " that we have to remedy in some way."}, {"start": 32.0, "end": 34.0, "text": " So why don't we take a look at another example"}, {"start": 34.0, "end": 37.0, "text": " which will reveal what we need for."}, {"start": 37.0, "end": 40.0, "text": " So let's integrate this unbelievably difficult function"}, {"start": 40.0, "end": 41.0, "text": " from 1 to 0."}, {"start": 41.0, "end": 43.0, "text": " Obviously this is x squared over 2."}, {"start": 43.0, "end": 46.0, "text": " And the brackets show you that we have to substitute this"}, {"start": 46.0, "end": 48.0, "text": " for 1 to 5."}, {"start": 48.0, "end": 51.0, "text": " What we get in mean n is 12."}, {"start": 51.0, "end": 54.0, "text": " Now, let's do Monte Carlo integration."}, {"start": 54.0, "end": 57.0, "text": " Let's pretend that we have integrate this function"}, {"start": 57.0, "end": 59.0, "text": " as we need like 2, analytically."}, {"start": 59.0, "end": 63.0, "text": " So I take three completely random samples of this function."}, {"start": 63.0, "end": 64.0, "text": " What does it mean?"}, {"start": 64.0, "end": 67.0, "text": " That I evaluate f of x at 1."}, 
{"start": 67.0, "end": 71.0, "text": " Now, if I evaluate this x at 1, I have to use the get 1."}, {"start": 71.0, "end": 74.0, "text": " I evaluate it at 3 and I also get that 3."}, {"start": 74.0, "end": 76.0, "text": " So I have three samples now."}, {"start": 76.0, "end": 79.0, "text": " And what I do is I simply average that."}, {"start": 79.0, "end": 83.0, "text": " So this is 1 plus 3 plus 5 over 3."}, {"start": 83.0, "end": 85.0, "text": " The end result is 3."}, {"start": 85.0, "end": 87.0, "text": " But it shouldn't be that, right?"}, {"start": 87.0, "end": 90.0, "text": " Because the end result through analytic integration"}, {"start": 90.0, "end": 93.0, "text": " is exactly 4 times that."}, {"start": 93.0, "end": 97.0, "text": " So something is definitely wrong with this Monte Carlo integration scheme."}, {"start": 97.0, "end": 104.0, "text": " So what we know is that 3 is exactly 1, 4 of 12."}, {"start": 104.0, "end": 109.0, "text": " So we see that there is a difference of the factor of 4."}, {"start": 109.0, "end": 112.0, "text": " And if you take a closer look at the integration domain"}, {"start": 112.0, "end": 116.0, "text": " then you will see that 4 is exactly the size of the integration domain."}, {"start": 116.0, "end": 118.0, "text": " You are integrating from 1 to 5."}, {"start": 118.0, "end": 124.0, "text": " So just empirically, if we don't,"}, {"start": 124.0, "end": 128.0, "text": " this is 1 angle to look at the problem and to form it."}, {"start": 128.0, "end": 129.0, "text": " You will see multiple angles."}, {"start": 129.0, "end": 132.0, "text": " This is more like the engineering way of solving things."}, {"start": 132.0, "end": 137.0, "text": " You don't know how to derive the full and correct solution."}, {"start": 137.0, "end": 140.0, "text": " But you see that there is a factor of 4."}, {"start": 140.0, "end": 142.0, "text": " 4 in the size of the integration domain."}, {"start": 142.0, "end": 145.0, "text": " Well why don't we 
multiply with that and see what happens?"}, {"start": 145.0, "end": 146.0, "text": " And obviously it works."}, {"start": 146.0, "end": 150.0, "text": " So if we multiply with the size of the integration domain,"}, {"start": 150.0, "end": 152.0, "text": " we get the result that we are looking for."}, {"start": 152.0, "end": 154.0, "text": " So let's change the code."}, {"start": 154.0, "end": 160.0, "text": " I multiply the previous function with the integration domain,"}, {"start": 160.0, "end": 162.0, "text": " which is from 0 to pi."}, {"start": 162.0, "end": 165.0, "text": " And this is what I want to apply."}, {"start": 165.0, "end": 168.0, "text": " And obviously I will get pi as an result,"}, {"start": 168.0, "end": 171.0, "text": " which is the correct solution for the previous integral."}, {"start": 171.0, "end": 176.0, "text": " Now this is great and this looks very simple."}, {"start": 176.0, "end": 179.0, "text": " And apparently this technique seems to work."}, {"start": 179.0, "end": 182.0, "text": " But we still don't really know what is happening here."}, {"start": 182.0, "end": 185.0, "text": " So we should use some black magic or mathematics,"}, {"start": 185.0, "end": 189.0, "text": " if you do, to see what is going on in the road."}, {"start": 189.0, "end": 192.0, "text": " So imagine that we sample a function"}, {"start": 192.0, "end": 195.0, "text": " with the uniform distribution on 0 power."}, {"start": 195.0, "end": 198.0, "text": " What does it mean?"}, {"start": 198.0, "end": 201.0, "text": " I have an interval from 0 to pi."}, {"start": 201.0, "end": 204.0, "text": " And I generate random numbers on it."}, {"start": 204.0, "end": 208.0, "text": " And every single number has the same probability."}, {"start": 208.0, "end": 212.0, "text": " So this function would look like one of our pi,"}, {"start": 212.0, "end": 216.0, "text": " regardless of x in the parameter."}, {"start": 216.0, "end": 220.0, "text": " Because it doesn't matter which 
part of the domain I choose,"}, {"start": 220.0, "end": 222.0, "text": " it will have the same probability with the future."}, {"start": 222.0, "end": 228.0, "text": " Now, what we are essentially doing is integrating a function"}, {"start": 228.0, "end": 232.0, "text": " f of x multiplied by this something probability."}, {"start": 232.0, "end": 233.0, "text": " Why?"}, {"start": 233.0, "end": 237.0, "text": " Because imagine that some regions of the function"}, {"start": 237.0, "end": 240.0, "text": " would have 0 probability to be sample."}, {"start": 240.0, "end": 244.0, "text": " So imagine that I'm integrating from 0 to pi."}, {"start": 244.0, "end": 247.0, "text": " But I will only take samples from 0 to 2."}, {"start": 247.0, "end": 249.0, "text": " So there is a region in the function"}, {"start": 249.0, "end": 251.0, "text": " that I'm never going to visit."}, {"start": 251.0, "end": 254.0, "text": " And I don't integrate this part."}, {"start": 254.0, "end": 255.0, "text": " So tip that's one intuition."}, {"start": 255.0, "end": 258.0, "text": " The other intuition is that if I draw samples"}, {"start": 258.0, "end": 261.0, "text": " not with uniform distribution, but with a different distribution,"}, {"start": 261.0, "end": 264.0, "text": " that in the average that I compute some regions"}, {"start": 264.0, "end": 266.0, "text": " of the function would be over exactly."}, {"start": 266.0, "end": 270.0, "text": " Because I have a higher chance of sampling those."}, {"start": 270.0, "end": 272.0, "text": " So what we are doing is,"}, {"start": 272.0, "end": 276.0, "text": " multiplying this f of x with a sampling probability of x."}, {"start": 276.0, "end": 278.0, "text": " Now, this p of x is, in this case,"}, {"start": 278.0, "end": 281.0, "text": " to 1 over pi, the uniform distribution,"}, {"start": 281.0, "end": 283.0, "text": " which is obviously a constant."}, {"start": 283.0, "end": 286.0, "text": " So get out of mind the role."}, {"start": 
286.0, "end": 291.0, "text": " And in the end, we have the integral of the function over pi."}, {"start": 291.0, "end": 293.0, "text": " But this is not what I'm looking for."}, {"start": 293.0, "end": 296.0, "text": " I just want to integrate the function itself."}, {"start": 296.0, "end": 298.0, "text": " So I need to make this pi disappear."}, {"start": 298.0, "end": 301.0, "text": " So I have this 1 over pi multiplier."}, {"start": 301.0, "end": 304.0, "text": " What do I need to multiply with to get only this function?"}, {"start": 304.0, "end": 307.0, "text": " What should the question mark be?"}, {"start": 307.0, "end": 310.0, "text": " The flower."}, {"start": 310.0, "end": 312.0, "text": " Excellent."}, {"start": 312.0, "end": 313.0, "text": " Exactly."}, {"start": 313.0, "end": 316.0, "text": " So I just killed this 1 over pi multiplier,"}, {"start": 316.0, "end": 320.0, "text": " which is this f of x sampling distribution."}, {"start": 320.0, "end": 322.0, "text": " And if you take a look at it,"}, {"start": 322.0, "end": 326.0, "text": " yes, this is also the size of the integration for me."}, {"start": 326.0, "end": 329.0, "text": " So this is a bit more rigorous."}, {"start": 329.0, "end": 331.0, "text": " A bit more rigorous way to understand what is going on."}, {"start": 331.0, "end": 332.0, "text": " This is through a derivation."}, {"start": 332.0, "end": 334.0, "text": " Not just empirical stuff."}, {"start": 334.0, "end": 335.0, "text": " What should I multiply with?"}, {"start": 335.0, "end": 338.0, "text": " We know a bit more about what is happening."}, {"start": 338.0, "end": 341.0, "text": " I have a sampling distribution that I need to get it on."}, {"start": 341.0, "end": 344.0, "text": " So if I have to 1 over pi multiplier,"}, {"start": 344.0, "end": 346.0, "text": " I got the 1 incorrectly."}, {"start": 346.0, "end": 350.0, "text": " And if I use this scalar multiplier that I'm looking for,"}, {"start": 350.0, "end": 353.0, "text": " 
then I will get to the correct solution."}, {"start": 356.0, "end": 358.0, "text": " Let's examine the whole thing."}, {"start": 358.0, "end": 359.0, "text": " A bit 40, please."}, {"start": 359.0, "end": 360.0, "text": " Different angles."}, {"start": 360.0, "end": 364.0, "text": " I would like to show you how to solve the same problem in multiple different angles."}, {"start": 364.0, "end": 369.0, "text": " So the super quick probability theory we can."}, {"start": 369.0, "end": 371.0, "text": " We have an expected value."}, {"start": 371.0, "end": 373.0, "text": " This is what we're looking for."}, {"start": 373.0, "end": 374.0, "text": " What is an expected value?"}, {"start": 374.0, "end": 377.0, "text": " An expected value means that there is a value of something"}, {"start": 377.0, "end": 380.0, "text": " and there's a probability of getting these values."}, {"start": 380.0, "end": 383.0, "text": " So let's take the expected value of the large score."}, {"start": 383.0, "end": 384.0, "text": " How does it work?"}, {"start": 384.0, "end": 386.0, "text": " I can roll from 1 to 6."}, {"start": 386.0, "end": 388.0, "text": " And they all have the same probability."}, {"start": 388.0, "end": 390.0, "text": " All roles have the same probability."}, {"start": 390.0, "end": 392.0, "text": " 1, 6."}, {"start": 392.0, "end": 395.0, "text": " So the values are 1, 2, up to 6."}, {"start": 395.0, "end": 399.0, "text": " And the probability is all the same, 1, 6."}, {"start": 399.0, "end": 406.0, "text": " And if I have this up, then this says that the expected value of the large score is 3.5."}, {"start": 406.0, "end": 414.0, "text": " Well, this means that if I need to guess what the next large score would be,"}, {"start": 414.0, "end": 421.0, "text": " then this would be the best value in order to minimize the error from the expected outcome."}, {"start": 421.0, "end": 427.0, "text": " Now, if we would like to compute the expected value of something,"}, {"start": 
427.0, "end": 431.0, "text": " then this means that I take the values that this something can take"}, {"start": 431.0, "end": 435.0, "text": " and I multiply it with the probability for this event."}, {"start": 435.0, "end": 439.0, "text": " For instance, it is impossible to roll seven with the dice."}, {"start": 439.0, "end": 442.0, "text": " So theoretically, you could put as the something a seven in there,"}, {"start": 442.0, "end": 444.0, "text": " but it would have zero probability."}, {"start": 444.0, "end": 447.0, "text": " Therefore, you could not show up in the sum."}, {"start": 447.0, "end": 453.0, "text": " And this is the discrete case. For the continuous case, we don't really need to do anything very serious."}, {"start": 453.0, "end": 456.0, "text": " We just changed the summation to integration."}, {"start": 456.0, "end": 459.0, "text": " So we are not using the discrete sum."}, {"start": 459.0, "end": 467.0, "text": " But we are integrating continuous functions and we're using continuous subway distributions."}, {"start": 467.0, "end": 470.0, "text": " Now, let's introduce this notation."}, {"start": 470.0, "end": 474.0, "text": " What I'm looking for is the expected value of this function f of x"}, {"start": 474.0, "end": 480.0, "text": " after an n amount of samples. Because in multicore, you need to add more and more samples"}, {"start": 480.0, "end": 485.0, "text": " to get a more f whole representation of t."}, {"start": 485.0, "end": 492.0, "text": " Now, what this means is f is the something and p is the something distribution."}, {"start": 492.0, "end": 499.0, "text": " What we can do is that we can create a discrete sum that takes samples of this function"}, {"start": 499.0, "end": 504.0, "text": " and then multiplies with the size of the domain. And obviously, since we are taking the sum,"}, {"start": 504.0, "end": 508.0, "text": " we need to divide it by f. 
Because the more sample and the number of samples,"}, {"start": 508.0, "end": 512.0, "text": " the more samples we take from the function, the larger the number you get."}, {"start": 512.0, "end": 516.0, "text": " So this is the averaging part."}, {"start": 516.0, "end": 522.0, "text": " Now, you have to take a look at always the keep looking at the relevant quantities."}, {"start": 522.0, "end": 526.0, "text": " So the expected value of this f of x"}, {"start": 526.0, "end": 531.0, "text": " does mean that in the integration, I won't apply it with this something probability."}, {"start": 531.0, "end": 537.0, "text": " And on the right side in the Monte Carlo estimate, I will have the same quantity as on the left side."}, {"start": 537.0, "end": 544.0, "text": " So if I'm looking for the expected value of x, then I will sample f of x."}, {"start": 544.0, "end": 550.0, "text": " Now, if you take a look at that, you can see that this is just an approximation."}, {"start": 550.0, "end": 553.0, "text": " This is not exactly the interval that we're looking for."}, {"start": 553.0, "end": 558.0, "text": " But there is a multitude of theorems in computer science that show you that"}, {"start": 558.0, "end": 564.0, "text": " if you could use an infinite amount of samples, then you wouldn't approach the actual interval."}, {"start": 564.0, "end": 571.0, "text": " And most courses on Monte Carlo integration show you different ways of proving this."}, {"start": 571.0, "end": 577.0, "text": " But this is not what we are interested in. We would just believe that this is what is happening."}, {"start": 577.0, "end": 584.0, "text": " It's actually very intuitive. Why this is happening? You remember seeing this sign wave that we sample"}, {"start": 584.0, "end": 588.0, "text": " with all these two and that ball. 
So you could see that if you have a lot of samples,"}, {"start": 588.0, "end": 593.0, "text": " you will get a good estimation of the error under the curve."}, {"start": 593.0, "end": 596.0, "text": " Now, let's try to use different sample distributions."}, {"start": 596.0, "end": 601.0, "text": " I mean, a few minutes, you will see why this would be a good idea in some cases."}, {"start": 601.0, "end": 604.0, "text": " So I would like to integrate this f of x."}, {"start": 604.0, "end": 609.0, "text": " I am now doing the transformation that is the identity acceleration."}, {"start": 609.0, "end": 614.0, "text": " I didn't do anything to my f of x. I multiplied by p of x and then I divided by."}, {"start": 614.0, "end": 619.0, "text": " So this is almost like a scalar multiplier and then I divided the same number."}, {"start": 619.0, "end": 626.0, "text": " I get the very same thing. But if I would like to write that this is the expected value of something,"}, {"start": 626.0, "end": 633.0, "text": " then this will look a bit different because f over p is the something and p of x is the sample problem."}, {"start": 633.0, "end": 640.0, "text": " So what we have now is the expected value of f over p."}, {"start": 640.0, "end": 645.0, "text": " And the question is, what is the Monte Carlo estimator for this?"}, {"start": 645.0, "end": 650.0, "text": " And what we concluded in the previous slides that this should be the very same quantity"}, {"start": 650.0, "end": 655.0, "text": " as what I see in the expected value. So I will be something f over p."}, {"start": 655.0, "end": 665.0, "text": " So I am not only something f. 
I am something f over the arbitrary chosen probability distribution."}, {"start": 665.0, "end": 670.0, "text": " Now there are some good readings on how to do this well and why this is useful."}, {"start": 670.0, "end": 674.0, "text": " So if you would like to know more about this, please read some of these documents."}, {"start": 674.0, "end": 683.0, "text": " They are really well written and that's a rare thing nowadays because I have seen lots of not so well written guys on Monte Carlo integration."}, {"start": 683.0, "end": 692.0, "text": " I need you to do a very long time to find something that has the quality that I should give out rather than to study."}, {"start": 692.0, "end": 697.0, "text": " Now let's solve the actual example that we have previously with this formula."}, {"start": 697.0, "end": 704.0, "text": " So f over p times p. So I am still integrating only f."}, {"start": 704.0, "end": 709.0, "text": " And the sampling distribution was this two times sine square x."}, {"start": 709.0, "end": 716.0, "text": " This was the function that we wanted to integrate and one over pi is the sampling distribution probability, sorry,"}, {"start": 716.0, "end": 719.0, "text": " uniform distribution over 1 to pi."}, {"start": 719.0, "end": 725.0, "text": " So and yet in fact the integral of the original function."}, {"start": 725.0, "end": 730.0, "text": " So I am looking for the expected value that's f over p."}, {"start": 730.0, "end": 733.0, "text": " So I am going to sample in my code f over p."}, {"start": 733.0, "end": 740.0, "text": " Let's put this in source code. If you look here, I now divide by the sampling distribution."}, {"start": 740.0, "end": 746.0, "text": " So it's 1 over v minus a. So this means 1 over pi b and a."}, {"start": 746.0, "end": 752.0, "text": " This a should have been 0 in this case. 
So I apologize for that differences in the code."}, {"start": 752.0, "end": 759.0, "text": " I put the 2.5 in there because if you always a is always 0,"}, {"start": 759.0, "end": 766.0, "text": " then you may write code that works for integration from 0 to something but not 1 to something."}, {"start": 766.0, "end": 771.0, "text": " So this is a cool thing to check if you have disappointed."}, {"start": 771.0, "end": 773.0, "text": " So I apologize this a should be 0."}, {"start": 773.0, "end": 778.0, "text": " But if you compute the actual result that you would be looking for, then you will get your pi."}, {"start": 778.0, "end": 782.0, "text": " So this is the f."}, {"start": 782.0, "end": 785.0, "text": " The first term in the sampling line 36."}, {"start": 785.0, "end": 789.0, "text": " And after the division we have it."}, {"start": 789.0, "end": 796.0, "text": " Wonderful. So this works. And from multiple angles we now understand how exactly this thing is working."}, {"start": 796.0, "end": 805.0, "text": " Now if you write the good one to power integration routine and you solve the rendering equation with this."}, {"start": 805.0, "end": 813.0, "text": " What you want to see is that as you add more samples, you will see first the really noisy image."}, {"start": 813.0, "end": 817.0, "text": " And then as you add more and more samples this noise will slowly clean up."}, {"start": 817.0, "end": 827.0, "text": " And if you think back in the previous lecture of mine, we have talked about over and under estimations of the integral."}, {"start": 827.0, "end": 831.0, "text": " And this is exactly what shows up also in images."}, {"start": 831.0, "end": 836.0, "text": " If we are trying to sample a function, I would like to be interested in the radiance."}, {"start": 836.0, "end": 842.0, "text": " But as I add more and more samples, before I converge, I will get values that are larger than the actual intensities."}, {"start": 842.0, "end": 850.0, "text": " And I 
will have values that are smaller. So this is what shows up visually as noise."}, {"start": 850.0, "end": 856.0, "text": " So what you are looking for is always this samples per pixel metric."}, {"start": 856.0, "end": 862.0, "text": " And when you have a noisy image, you would need to know how many samples I have used per pixel."}, {"start": 862.0, "end": 865.0, "text": " And if it's still noisy, then you would need to add more samples."}, {"start": 865.0, "end": 874.0, "text": " This is also some visualization on the evolution of the image after hundreds and then 100,000 samples."}, {"start": 874.0, "end": 880.0, "text": " Depending on the algorithm, there are multiple ways of solving the rendering equation."}, {"start": 880.0, "end": 887.0, "text": " You could have smarter algorithms that take longer to compute one sample because they are doing some smart magic."}, {"start": 887.0, "end": 893.0, "text": " That this would mean that you would need less samples per pixel to get the first image."}, {"start": 893.0, "end": 899.0, "text": " And the first algorithm that you use to study is actually the naive algorithm for hard tracing."}, {"start": 899.0, "end": 904.0, "text": " And usually it is a tremendous amount of samples to compute an image."}, {"start": 904.0, "end": 914.0, "text": " But since it is a simple algorithm, you can use your GPU or CPU to dish out a lot of samples per pixels in every second."}, {"start": 914.0, "end": 923.0, "text": " Now, a bit of a beauty break, this is what we can get if we implement such a hard tracing."}, {"start": 923.0, "end": 928.0, "text": " This was rather a bit luckscrutter. And some recent example."}, {"start": 928.0, "end": 932.0, "text": " That's everyone who this is. Just raise your hand."}, {"start": 932.0, "end": 937.0, "text": " Okay, how often people? Okay, excellent."}, {"start": 937.0, "end": 944.0, "text": " So this is actually a margarine material from the Game of Thrones. 
And anyone has me and spoilers."}, {"start": 944.0, "end": 948.0, "text": " I will be on that page. Okay. So please."}, {"start": 948.0, "end": 953.0, "text": " And this is actuality because the Game of Thrones is running."}, {"start": 953.0, "end": 957.0, "text": " Obviously, we all love the show. And there's also skin being rendered."}, {"start": 957.0, "end": 960.0, "text": " So there's tons of stuff. So this is kind of."}, {"start": 960.0, "end": 963.0, "text": " And you can solve this with a simple part."}, {"start": 963.0, "end": 969.0, "text": " So that we will put together the theoretical part in the second half of this lecture."}, {"start": 969.0, "end": 972.0, "text": " And then we will implement the next lecture."}, {"start": 972.0, "end": 982.0, "text": " So when I see renders like this, what I feel is only comparable to religious spiritual wonder."}, {"start": 982.0, "end": 989.0, "text": " It is absolutely amazing that we can compute something like this using only mathematics."}, {"start": 989.0, "end": 994.0, "text": " These very simple things that I have shown you."}, {"start": 994.0, "end": 998.0, "text": " And the other really cool thing is that we are writing these algorithms."}, {"start": 998.0, "end": 1001.0, "text": " We are creating products that use these algorithms."}, {"start": 1001.0, "end": 1009.0, "text": " And these are given to world class artists who are just as good as an artist as we are engineers."}, {"start": 1009.0, "end": 1014.0, "text": " And they are also giving it their best to create more and more free, cool models."}, {"start": 1014.0, "end": 1020.0, "text": " And we can work together to create stuff like that. So this is absolutely amazing."}]
Two Minute Papers
https://www.youtube.com/watch?v=Ash4Q06ZcHU
TU Wien Rendering #19 - Space Partitioning 1
This lecture is held by Thomas Auzinger. Space partitioning helps us to alleviate the problem of intersecting a ray of light against every object in the scene. It turns out that we can often throw away half of the objects with every intersection test! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Okay, so welcome to today's rendering lecture. This is going to be unit 4. We will have two parts in it: the first part will be spatial acceleration structures and the second part will be tone mapping. I will hold the next three lectures, so this one and two more, and then Károly takes over again. So, spatial acceleration structures. Where are we? The rendering pipeline, as it was shown in the first lecture: we start with a 3D scene, perform some kind of light simulation, and generate an image out of it that is going to be displayed. Spatial acceleration structures are central to the light simulation because they increase the efficiency of ray shooting. As you heard last time, ray-based methodologies from geometric optics are mainly employed to enable photorealistic rendering, and in this work you have to shoot a lot of rays, usually on the order of millions to billions. If you can cut down the computational cost of this procedure, you will gain significant speed-ups. So, to summarize: generally, the Monte Carlo method uses ray shooting to sample the integrand of the rendering equation, as shown last time. Usually you have to compute the closest intersection with the scene, which is equivalent to computing the local visibility: how far does the ray travel through the scene before it hits its first object? And this is usually very expensive for a large amount of scene objects, because if you start with one ray and you want to check whether it intersects any of your scene's triangles, then if you have millions of triangles, each ray has to check all the millions of triangles to find which one is the first it intersects. If you have millions of rays, you see that this is a quadratic explosion, and you will not converge in any reasonable time to a high-quality image. So the naive approach would be just to determine the intersection with each object.
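A sketch of that naive approach (illustrative Python, not the course's code; the `Slab` toy object is made up for the example, a real scene would hold triangles):

```python
class Slab:
    """Toy stand-in for a scene object: reports a fixed hit distance."""
    def __init__(self, t):
        self.t = t
    def intersect(self, ray):
        return self.t  # a real triangle or patch test would use the ray here

def closest_intersection(ray, objects):
    """O(n) closest-hit query: every ray tests every object in the scene."""
    best_t, best_obj = None, None
    for obj in objects:                # linear scan: no way to skip anything
        t = obj.intersect(ray)         # hit distance, or None for a miss
        if t is not None and (best_t is None or t < best_t):
            best_t, best_obj = t, obj
    return best_t, best_obj
```

With millions of objects and millions of rays, this inner loop is exactly the quadratic explosion mentioned above; the hierarchies that follow exist to shrink it.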
The objects are usually triangles, but they can also be non-linear surface patches or whatever you want to use. If you just go through all the objects, one after the other, and check which one is the closest, you have to visit all the objects, so it is a linear approach: the complexity is O(n). A better approach is to reorganize all the objects in your scene, say the triangles, into some kind of spatial hierarchy, so that I know that, say, in the left half of this room are these triangles and in the right half those triangles. And then, if I have a ray that I know only travels through one half of the room, I can immediately discard half of the triangles in my scene and don't need to intersect against them. This approach, I mean it's a bit more sophisticated than that, leads to a sub-linear complexity; eventually it gets close to logarithmic. Now, this is a very old topic. It popped up very soon after ray tracing came into use, so there are many methodologies that were looked into, and two main techniques are considered the state of the art: one are KD-trees and the other are bounding volume hierarchies. A KD-tree subdivides the space itself. Your scene is situated in a surrounding three-dimensional space, and you cut this space into pieces, as can be seen in this example on the right-hand side. Here the space in which the objects reside is just a square, and each object is just a point. As you see, with recursive subdivision of the space you group the objects together into spatially local volumes. The right side gives you the subdivision of the space itself and where the objects lie in it, and each split of the space can be seen as one step in the construction of a binary tree. You start off with the root node, which is the whole space, and then you try to find some kind of good cut through the space so that approximately half of the objects are in one half and half of the objects in the other.
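The recursive median cut just described can be sketched like this (an illustrative Python sketch over 2D points, matching the slide's point example; a production builder would typically use a cost heuristic such as the surface area heuristic rather than the plain median):

```python
# Recursively split a set of 2D points so that each half of every cut
# contains about half of the objects, yielding a binary tree (a KD-tree).
def build_kdtree(points, depth=0):
    if len(points) <= 1:                 # leaf: at most one object per volume
        return {"leaf": points}
    axis = depth % 2                     # 2D example; use depth % 3 for 3D
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {
        "axis": axis,
        "split": pts[mid][axis],         # cut plane through the median object
        "left": build_kdtree(pts[:mid], depth + 1),
        "right": build_kdtree(pts[mid:], depth + 1),
    }
```

Each inner node records one cut; a ray that provably stays on one side of a cut never has to look at the other subtree, which is where the logarithmic behavior comes from.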
So it doesn't make sense to start off with the whole volume and then separate a very small part from it, because every ray has to start its traversal of the tree at the root node. And then it faces the decision: am I in the big volume, or do I have to check the small volume too? If you have a lot of small volumes, then this is inefficient again. So what you want to do is to place the criteria that let you discard a lot of triangles as far up in the tree as possible. In this example, the first cut, the vertical cut through the whole space, subdivides the objects approximately in half, so that half of the objects are left of the cut and half of the objects are right of the cut. In a volume this would be a cut plane, but it's the same procedure. And then you recursively subdivide the parts, so the two sub-volumes that you generated with the first cut. Again you try to have half of the objects here, half of the objects there, and you continue with this procedure until you have one object per volume. Of course, you can also terminate earlier: if you are okay with having 100 triangles in each leaf node of the tree, then you have to check all these 100 triangles when you enter that subspace. But the main advantage you gain with this is that if you have some ray through this volume, then you can do very quick checks against the subspaces. All the subspaces here are rectangles; in a volume they would be boxes, and you can do very quick intersection tests against boxes. And if you know that you're not going to intersect a box, which is one test, but there are thousands of triangles in this box, then you can immediately discard all these triangles for your ray intersection test. So you only have to check the triangle intersections in those boxes that you verified beforehand that you intersect.
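The quick ray-box check mentioned here is usually done with the slab method. A minimal sketch, assuming an axis-aligned box given by its min/max corners and a precomputed reciprocal direction (my own version, not code from the course):

```python
def ray_hits_box(origin, inv_dir, box_min, box_max):
    """Slab method: intersect the ray with the three axis-aligned slabs
    and check that the entry/exit intervals overlap."""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        if t1 > t2:
            t1, t2 = t2, t1
        t_near = max(t_near, t1)   # latest entry into any slab
        t_far = min(t_far, t2)     # earliest exit from any slab
    return t_near <= t_far         # interval overlap means the box is hit

origin = (0.0, 0.0, 0.0)
inv_dir = (1.0, 1.0, 1.0)          # 1/d per component for direction (1, 1, 1)
print(ray_hits_box(origin, inv_dir, (2, 2, 2), (3, 3, 3)))   # True
print(ray_hits_box(origin, inv_dir, (2, 5, 2), (3, 6, 3)))   # False
```

One such cheap test can reject a box containing thousands of triangles, which is exactly the discard the lecture is after. (Production code also handles zero direction components, which make `inv_dir` infinite.)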
And you can imagine that if you have huge areas that you don't intersect, you gain a lot of speed because you don't do unnecessary work. So KD trees subdivide the space, and the objects live in the sub-volumes. Another approach are bounding volume hierarchies: there you group the objects together. You start with the triangles, and then you say: I put close triangles into groups. And then you again build up a tree structure, but this tree structure now depends on the triangles. The fundamental unit there is a triangle, not a subspace of your whole scene volume. Now, both have advantages and disadvantages, otherwise you would only ever take the better one. KD trees are usually faster for traversal on the CPU (here I mean multi-core CPUs), but they usually have a larger number of nodes, and they have duplicate references. Because, if we go back to this example: here we have points, and a point does not have a spatial extent. But if you imagine that you have triangles and you cut through the whole volume, then it could be that you cut through triangles. And then you have two possibilities. Either you just add the triangle to both volumes, so you get duplicate references; that means your triangle discard is less efficient, because you have to check against this triangle whether you're in the left or in the right half. Or you cut the triangle itself and add one half here, one half there. But cutting a lot of scene content is computationally expensive, so this would then degrade the performance of the KD tree generation. Bounding volume hierarchies are very popular for GPUs and many-core architectures, so the Xeon Phi for example. They got more attention in recent research, because most of the current work tries to implement the spatial hierarchy generation on GPUs or other highly parallel architectures. They are also easier to update, because imagine you have a moving object inside your scene.
A KD tree cuts the whole volume apart, and then if you have an object moving from one sub-volume to another, you would have to update the whole KD tree, because you don't really have a grasp on at which level you have to edit it. Bounding volume hierarchies, on the other side, group objects together. So there you have the option of ignoring the dynamic complication. Because, say, you have two objects A and B that are close together. You generate your bounding volume hierarchy, so they are grouped together at some level of the tree. And if they then move apart, the grouping is not influenced; the only thing that happens is that the bounding volume that holds both gets larger and larger. So what happens is that your spatial hierarchy gets more inefficient, because, say, a lot of empty space is generated in between objects A and B. So rays that travel exactly through the gap between them would still have to check A and B. If you would then update your bounding volume hierarchy to acknowledge that they are spatially separated, then they would be put into different branches of the tree at a different level. But you don't have to do that. So in bounding volume hierarchies, dynamic scenes just degrade your performance, but don't invalidate your whole hierarchy. With KD trees, if an object moves from one sub-volume to another, you have to update this in the whole tree, and this can be quite complicated, because traversing the tree hierarchy for a dynamic scene can be very costly. Another advantage of bounding volume hierarchies is that every object is only in one tree leaf; this follows naturally from how the tree is constructed. But a negative point is that the nodes can spatially overlap. If you put two triangles that are close to each other into different nodes of the bounding volume hierarchy, then you still generate a box around each of them to do a fast intersection test.
But if the triangles are, say, right next to each other, then the simple boxes will have some overlap. So the bounding volume hierarchy can be inefficient if you generate a lot of boxes whose content overlaps to a large extent.
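The "just let the boxes grow" update for dynamic scenes can be sketched as a bottom-up refit: the grouping (tree topology) is left alone and only the bounding volumes are recomputed. A 2D sketch with a hypothetical dictionary node layout of my own, not the course's code:

```python
def refit(node):
    """Recompute bounding boxes bottom-up after objects have moved.
    The BVH topology stays untouched; boxes just grow or shrink
    to enclose their children again."""
    if "points" in node:  # leaf: box tightly encloses its (moved) objects
        xs = [p[0] for p in node["points"]]
        ys = [p[1] for p in node["points"]]
        node["box"] = ((min(xs), min(ys)), (max(xs), max(ys)))
    else:                 # inner node: union of the children's boxes
        refit(node["left"])
        refit(node["right"])
        (l_lo, l_hi), (r_lo, r_hi) = node["left"]["box"], node["right"]["box"]
        node["box"] = (
            (min(l_lo[0], r_lo[0]), min(l_lo[1], r_lo[1])),
            (max(l_hi[0], r_hi[0]), max(l_hi[1], r_hi[1])),
        )
    return node["box"]

bvh = {"left": {"points": [(0, 0), (1, 1)]},
       "right": {"points": [(4, 4), (5, 5)]}}
refit(bvh)
bvh["right"]["points"] = [(9, 9), (10, 10)]  # object B moves away...
print(refit(bvh))  # ...the root box just grows: ((0, 0), (10, 10))
```

This is why dynamic scenes only degrade a BVH's quality instead of invalidating it: the refit propagates upward, while a KD tree would need the moved object relocated across the whole tree.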
[{"start": 0.0, "end": 9.0, "text": " Okay, so welcome to today's rendering lecture. This is going to be unit 4. We will have two parts in it."}, {"start": 9.0, "end": 16.0, "text": " So one, the first part will be spatial acceleration structures and the next part will be tone mapping."}, {"start": 16.0, "end": 25.0, "text": " I will hold the next three lectures. So this one and two more and then I'll take it over again."}, {"start": 25.0, "end": 32.0, "text": " So spatial acceleration structures. So where are we?"}, {"start": 32.0, "end": 39.0, "text": " The rendering pipeline, as it was shown in the first lecture, so we start with a 3D scene,"}, {"start": 39.0, "end": 46.0, "text": " performs some kind of light simulation, generate an image out of it that's going to be displayed."}, {"start": 46.0, "end": 57.0, "text": " So spatial acceleration structures are central to the light simulation because they increase the efficiency of ray shooting."}, {"start": 57.0, "end": 68.0, "text": " So as you heard last time, ray-based methodologies, so from geometric optics, are mainly employed to enable photorealistic rendering."}, {"start": 68.0, "end": 77.0, "text": " And in this work, you have to shoot a lot of rays. So usually in the order of million to billion."}, {"start": 77.0, "end": 86.0, "text": " And if you can cut down the computational cost of this procedure, then you will gain significant speed ups."}, {"start": 86.0, "end": 98.0, "text": " So to summarize, so generally the Monte Carlo method uses ray shooting to sample the integrand of the rendering equation, as shown in the last time."}, {"start": 98.0, "end": 103.0, "text": " So usually you have to compute the closest intersection with the scene."}, {"start": 103.0, "end": 115.0, "text": " So this is equivalent to computing the local visibility. 
So how far does it travel through the scene before it hits its first object?"}, {"start": 115.0, "end": 135.0, "text": " And this is usually very expensive for a large amount of scene objects, because if you start with one ray and you want to check, does it intersect any of my scene set triangles, then if you have millions of triangles, then each ray has to check all the millions of triangles, which one is the first that intersect."}, {"start": 135.0, "end": 147.0, "text": " If you have millions of rays, you see that this is a quadratic explosion, and you will not converge in any reasonable time to a high quality image."}, {"start": 147.0, "end": 154.0, "text": " So the naive approach would be just to determine the intersection with each object."}, {"start": 154.0, "end": 165.0, "text": " So the object could now be usually triangles can also be non-linear surface patches or whatever you want to use."}, {"start": 165.0, "end": 177.0, "text": " So if you just go through all the objects, one after the other and check which one is the closest, you have to go through all the objects, so it's a linear approach."}, {"start": 177.0, "end": 188.0, "text": " So the complexity is over n. 
A better approach would be to reorganize all the objects in your scene, say the triangles, in some kind of spatial hierarchy."}, {"start": 188.0, "end": 197.0, "text": " So that I know that say in the left half of this room are these triangles in the right half of these triangles."}, {"start": 197.0, "end": 212.0, "text": " And then if I have a ray that I know that it only travels through one half of the room, then it can, evidably, discard half of the triangles in my scene and don't need to intersect against them."}, {"start": 212.0, "end": 223.0, "text": " So this approach, I mean it's a bit more sophisticated than that, leads then to a sub-linear complexity."}, {"start": 223.0, "end": 228.0, "text": " So eventually it gets close to the rhythmic."}, {"start": 228.0, "end": 238.0, "text": " So, I mean this is a very old topic. So this popped up very soon when ray tracing was used."}, {"start": 238.0, "end": 249.0, "text": " So there are many methodologies that were looked into and two main techniques, say, are considered the state of the art."}, {"start": 249.0, "end": 256.0, "text": " So there are KD trees and the other are bounding volume hierarchies."}, {"start": 256.0, "end": 268.0, "text": " So a KD tree subdivides the space itself. So your scene is situated in a surrounding space, three-dimensional."}, {"start": 268.0, "end": 282.0, "text": " You then cut this space into pieces. 
As can be seen in this example on the right hand side, here the space in which the object resides is just a square."}, {"start": 282.0, "end": 286.0, "text": " And each object is just a point."}, {"start": 286.0, "end": 301.0, "text": " And as you see with recursive subdivision of the space, you group the objects together in a spatially local volumes."}, {"start": 301.0, "end": 320.0, "text": " And if you the right side gives you the subdivision of the space itself and where the object's lying those, but each split of the space can be seen as a construction of a binary tree."}, {"start": 320.0, "end": 337.0, "text": " So you start off, have the root node, which is the whole space, then you try to find some kind of good cut through the space so that approximately half of the objects are in one half and half of the objects and the other."}, {"start": 337.0, "end": 351.0, "text": " So it doesn't make sense to start off with the whole volume and then separate a very small part from it. Because every ray has to start its traversal of the tree at the root node."}, {"start": 351.0, "end": 357.0, "text": " And then it has the decision, am I in the peak volume or do I have to check the small volume too?"}, {"start": 357.0, "end": 374.0, "text": " If you have a lot of small volumes, then this is inefficient again. So what you want to do is to place or to get the criteria that you are not going to have to intersect a lot of triangles as far up in the three years possible."}, {"start": 374.0, "end": 391.0, "text": " So in this example, the first cut, the vertical cut through the whole space, subdivides the objects approximately in half. So that half of the objects are left of the cut, half of the objects are right of the cut."}, {"start": 391.0, "end": 410.0, "text": " So this would be a cut plane through the volume, but it's the same procedure. 
And then you recursively subdivide the parts, so the two sub volumes that you generated with the first cut."}, {"start": 410.0, "end": 424.0, "text": " Also try again to have half of the objects there, half of the objects there. And you continue with this procedure until you have one object per volume."}, {"start": 424.0, "end": 439.0, "text": " I mean, of course, you can also terminate earlier. So if you are okay with having 100 triangles in each leaf node of the tree, then you have to check through all these 100 triangles if you enter the subspace."}, {"start": 439.0, "end": 453.0, "text": " But the main advantage you gain with this is that if you have some ray through this volume, then you can do very quick checks against the subspaces."}, {"start": 453.0, "end": 480.0, "text": " So you know that all the subspace here are rectangles in a volume. They would be boxes and you can do very quick intersection test against boxes. And if you know that you're not going to intersect the box, which is one test, but there are thousands of triangles in this box, then you can immediately discard all these triangles for your real intersection test."}, {"start": 480.0, "end": 490.0, "text": " So you only have to check the triangle intersections in those boxes that you checked beforehand that you intersect."}, {"start": 490.0, "end": 506.0, "text": " And you can imagine that if you have huge areas that you don't intersect, you gain a lot of speed because you don't do unnecessary work."}, {"start": 506.0, "end": 516.0, "text": " So KT3s subdivide the space. And then you have to, and then in the sub-volumes, the objects life."}, {"start": 516.0, "end": 533.0, "text": " Another approach, a bounding volume hierarchies, there you group the objects together. 
So you take, you start with the triangles, and then you say, I put close triangles into groups."}, {"start": 533.0, "end": 552.0, "text": " And then you again build up a tree structure, but this tree structure now depends on the triangles. So the fundamental unit there is a triangle, not a subspace of your whole scene volume."}, {"start": 552.0, "end": 566.0, "text": " So now they have advantages and disadvantages, otherwise you would only take the better one. So KT3s, they are usually faster for traversal on the CPU."}, {"start": 566.0, "end": 578.0, "text": " Here I mean a multi-core CPUs, but they have usually a larger amount of nodes, and they have duplicate references."}, {"start": 578.0, "end": 588.0, "text": " Because if we go back to this example, here we have points. Okay, a point can, does not have a spatial extent."}, {"start": 588.0, "end": 597.0, "text": " But if you imagine that you have triangles and you cut through the whole volume, then it could be that you cut through triangles."}, {"start": 597.0, "end": 615.0, "text": " And then you have two possibilities. Either you just add the triangle to both volumes, so you get duplicate references. That means your triangle discard is less efficient, because you have to check against this triangle if you're in the left or in the right half."}, {"start": 615.0, "end": 635.0, "text": " Or you cut the triangle itself and add one half there, one half there. But cutting a lot of scene content is computationally expensive. 
So this would then degrade the performance of the KT3 generation."}, {"start": 635.0, "end": 646.0, "text": " Bounding volume hierarchies are very popular for GPUs and multi-core architectures, so the CUNE-Fi for example."}, {"start": 646.0, "end": 658.0, "text": " So they got more attention in recent research, because most of the current work tried to implement."}, {"start": 658.0, "end": 668.0, "text": " So the spatial hierarchy generation on GPUs or other highly parallel architectures."}, {"start": 668.0, "end": 675.0, "text": " They are also easier to update, because imagine you have a moving object inside your scene."}, {"start": 675.0, "end": 694.0, "text": " A KT3 cuts the whole volume apart, and then if you have an object moving from one sub volume to another, you would have to update the whole KT3, because you don't really have a grasp on at which level you have to edit it."}, {"start": 694.0, "end": 713.0, "text": " Bounding volume hierarchies on the other side, they group objects together. So there you can just, you have the option of ignoring dynamic complication, because say you have two objects and B that are close together."}, {"start": 713.0, "end": 726.0, "text": " So you generate your Bounding volume hierarchy, so they are grouped together at some level of the three. 
And if they then move apart, the grouping is not influenced."}, {"start": 726.0, "end": 734.0, "text": " The only thing that happens is that the Bounding volume that holds both groups gets larger and larger."}, {"start": 734.0, "end": 745.0, "text": " So what happens is that your spatial hierarchy gets more inefficient, because say a lot of empty spaces generated in between object and B."}, {"start": 745.0, "end": 753.0, "text": " So raise the travel exactly through the gap between them, they would still have to check A and B."}, {"start": 753.0, "end": 767.0, "text": " If you would then update your Bounding volume hierarchy to acknowledge that they are spatially separated, then they would be cut at a, they would be put into different branches of the three at a different level."}, {"start": 767.0, "end": 780.0, "text": " But you don't have to do that. So in Bounding volume hierarchies, dynamic scenes just degrade your performance, but don't invalidate your whole hierarchy."}, {"start": 780.0, "end": 788.0, "text": " Because in KT trees, if you move from one sub volume to the other, you have to update this in the whole tree."}, {"start": 788.0, "end": 800.0, "text": " And this could be quite complicated, because traversing the tree for higher than the next scene can be very costly."}, {"start": 800.0, "end": 813.0, "text": " And another advantage for Bounding volume hierarchies is that every object is only in one tree leaf. 
I mean, this is naturally because it's constructed that tree."}, {"start": 813.0, "end": 821.0, "text": " But a negative point of them are that the nodes can spatially overlap."}, {"start": 821.0, "end": 834.0, "text": " So if you put two triangles that are close by each other into different nodes of the Bounding volume hierarchy, then you still generate the box around them to do a fast intersection test."}, {"start": 834.0, "end": 842.0, "text": " But if the triangles are, say, right next to each other, then a simple box will have some overlap."}, {"start": 842.0, "end": 857.0, "text": " So the Bounding volume hierarchy can be inefficient if you generate a lot of boxes with content in it that overlap to a large extent."}]
Two Minute Papers
https://www.youtube.com/watch?v=ua8Aaf-XIO8
TU Wien Rendering #20 - Space Partitioning 2
This lecture is held by Thomas Auzinger. Space partitioning helps us to alleviate the problem of intersecting a ray of light against every object in the scene. It turns out that we can often throw away half of the objects with every intersection test! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Some detail on bounding volume hierarchies. You take the objects, say the triangles, and group them somehow together. There are a lot of ways to do that; it is a combinatorial explosion, so you cannot just test each possibility and check which one is the best. And usually this is scene dependent: you don't know beforehand which bounding volume hierarchy would give you the best performance. Because it could be that, through the light propagation, light very seldomly enters, say, one half of the room, because there is a wall with only a small hole. If you do not know that, then you would treat both rooms with the same priority and put them very high up in the tree hierarchy. But if light only travels very seldomly to half of the room, then you could make one huge node for that half of the room and spend all your detail on the part where actually something happens. So the scene usually dictates what kind of hierarchy is optimal. But it doesn't make too much sense to take this into account, because if you had to run the light simulation to know what the most optimal spatial hierarchy is, then you would have done the light simulation already. So you need to use some kind of heuristic that works for general scenes and build a hierarchy that optimizes this heuristic. The most popular one is the surface area heuristic, where you compute a cost for the whole hierarchy and try to find the one with the lowest cost. Here, just to quickly show the formula (you can read this in detail in the references I provide on the last slide of the lecture): you sum up two components, the costs of the inner nodes of the tree and the costs of the leaf nodes. Because, as we already know, the objects, so the triangles, are in the leaf nodes of the tree. All the intermediate nodes are just different groupings, from fine to coarse, but they do not contain content.
They just say: if I hit the bounding box of some intermediate node, then it tells me, here, my next level are these two bounding boxes, continue with them. Then you check the next two bounding boxes in this volume and continue recursively until you hit all the leaf nodes that are appropriate, so the ones that lie along your ray. And the costs: there is an inner cost associated with getting from one bounding box to the bounding boxes that lie in it, and there is the cost of the leaf nodes, which is also the intersection cost of the triangles themselves. So C in this formula is the cost of checking which bounding boxes are appropriate for continuing through the tree, and the cost for the leaf nodes works the same way; T_N is the cost for the triangle intersections. And now the heuristic enters via the surface areas of objects, because the main assumption in this heuristic is that rays lie randomly in your scene. You don't know beforehand in which direction light will travel, so you just assume a random ray distribution, and then you check how probable it is that a ray hits certain objects. Objects with a small surface area are less probable, larger ones are more probable. And what you want is to find very good groupings, so groupings that have a high chance that you actually hit something in them, or that let you exclude a lot of the scene. This is here shown as a ratio of the surface areas: A_N is the surface area of node N. The node is a volume, the bounding box, which has a certain surface, and this also dictates how probable it is that I hit this bounding box. And then you have the surface area of the node above: I have my own bounding box at a certain level of the tree, and the node that contains me is a larger bounding box that at least has the extent of my current one.
But what you want is to minimize this cost, so you want a large surface area for the containing node but a small surface area for your current bounding box, because this means that you can exclude a lot of volume in the space. If you're going through a huge bounding box and you want to decide where to continue, then the smaller the continuation is, the more descriptive it is of where the scene content is. And in the end you want small bounding boxes for the final leaf nodes where the triangles are. Now you try to build a whole hierarchy that optimizes this cost. So at every level you decide what is the best ratio I can achieve, and this then tells you how your grouping has to be done. There are different heuristics in the recent literature that take more information about the scene into account. For example, the surface area heuristic not only assumes a random ray distribution in the scene, but also assumes that the rays are infinitely long, so that they just travel through the whole scene and are not blocked by objects. This is taken into account by more sophisticated heuristics, and there are references on this. A question from the audience: so if I have a dynamic scene, then the bounding box can get larger? Yes, say one object moves away; then a leaf's box can grow larger than its parent's, because the objects move apart and that leaf grows. Now you have to account for that: you would have to propagate this information up the tree. Otherwise it would fail, so to say, because you might not hit the enclosing node anymore. This you have to propagate up, but the way of the propagation is clear: it's just the grouping upwards, until you have contained, even with the dynamic update, what's happening. But with KD trees, this is not so easy, because the space itself is subdivided.
So you have to somehow determine where the object is moving to, into which other part of the tree, which is not simple, because it could be that it moves into another leaf node, but the split between the two leaves could already be at the very top level of the tree. To find the other leaf node that your object moves into, you have to go up and down the whole tree. This is much more costly, much more complicated; with a BVH you just propagate upwards until it's okay. So, I mean, the surface area heuristic is just that, a heuristic, but it's still expensive to compute the optimal tree for it. There is not necessarily a unique solution with the minimal surface area heuristic cost, but there is at least one. And since this is expensive, there are also methods that approximate it: they do not deliver the hierarchy with the optimal cost, but one that's good enough for the purpose. And usually this is a trade-off: the more time you invest to build your spatial hierarchy, the better its quality gets, and in turn, the more efficient the light simulation is. If you don't spend a lot of time to build your hierarchy, you get bad quality and inefficient ray traversal during the global illumination simulation, so your actual rendering takes longer. But if you spend more time on the hierarchy, then it has better properties for ray propagation, so your lighting simulation is more efficient and faster. You see that there is some kind of trade-off, and it usually depends on how complex your light simulation is. If I want to trace, say, 1000 rays, then the cost of this is very, very low, so I can live with a very approximate hierarchy. The hierarchy quality can be very bad, but because I shoot so few rays, I will not feel the difference too much. But if my ray counts are in the billions, then even a small increase in the optimality of the hierarchy will give you significant gains in your rendering time.
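The cost the surface area heuristic assigns to one candidate split, as described above, can be sketched as follows. This is a simplified illustration with my own made-up boxes and unit costs (real builders sweep many candidate split planes per axis and recurse on the winner):

```python
def area(box):
    """Surface area of an axis-aligned box ((x0, y0, z0), (x1, y1, z1))."""
    (x0, y0, z0), (x1, y1, z1) = box
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def sah_cost(parent, left, right, n_left, n_right, c_trav=1.0, c_isect=1.0):
    """Expected cost of a split under the surface area heuristic:
    traversal cost plus each child's intersection work, weighted by the
    probability (area ratio) that a random ray hitting the parent box
    also hits that child box."""
    a = area(parent)
    return (c_trav
            + area(left) / a * n_left * c_isect
            + area(right) / a * n_right * c_isect)

parent = ((0, 0, 0), (4, 1, 1))
# Balanced mid-split: 4 triangles on each side.
tight = sah_cost(parent, ((0, 0, 0), (2, 1, 1)), ((2, 0, 0), (4, 1, 1)), 4, 4)
# Skewed split: a huge child holding 7 triangles, a sliver holding 1.
skew = sah_cost(parent, ((0, 0, 0), (3.9, 1, 1)), ((3.9, 0, 0), (4, 1, 1)), 7, 1)
print(tight < skew)  # the balanced split has the lower expected cost: True
```

This captures the intuition from the lecture: a tiny separated volume high up in the tree buys you almost nothing, because a random ray rarely enters it while most rays still pay for the big sibling.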
What you see here in this graph, I just show so that you get a feeling of what different methods are out there; I put the reference to the actual paper this is from right next to it. What you see are different methods for generating bounding volume hierarchies with the surface area heuristic. The blue line is SBVH. Or, to start again: on the x-axis you see the number of rays that you will shoot in your lighting simulation. That means the further you go to the right, the more complex the light simulation is, the more quality you want in the final rendering, the deeper you go into reflection and refraction levels, things like that. On the y-axis, you see how many rays the lighting simulation can trace per second, so the higher you go up the y-axis, the faster your lighting simulation is. And now you have to find some trade-off. SBVH constructs very good spatial hierarchies, but it's also very slow. That means that for lighting simulations that only use a few million rays, the performance is very bad, because most of the time is taken up by building the spatial hierarchy. At that end, it takes longer to build the hierarchy than to do the actual rendering, which doesn't make too much sense. But if you go into the billions of rays for computing your final image, then it starts to pay off, because you have a very high performance: you can trace, in this example on this hardware, 400 million rays per second. HLBVH, on the other hand, is a method to quickly get a spatial hierarchy that's not very optimal. So you see that for simulations with only a few million rays, you already get close to its final performance, 200 million rays per second, and you are much faster than SBVH here. But the more rays you shoot, the more you are hurt by the missing optimality of your hierarchy.
And there is some kind of sweet spot around 10 giga-rays where SBVH actually gets better than HLBVH. In this paper, they propose another method that is faster to construct; you see it as the green dotted line. It already gives a significant performance increase even for smaller simulations (already at 100 million rays you are better than HLBVH), and you quickly get close to the performance of SBVH. So this paper shows that they found a very good intermediate method that's only a bit less optimal than the state of the art before. I advise you to look into this paper; you see a lot of interesting things there, such as how to port BVH construction to the GPU, parallelization issues, and other smart tweaks. So, for literature: in PBRT, it's chapter four. And since this is inherently a geometric problem (you want to know where the triangles in the scene are), the same hierarchies can also be used for collision detection. Because for collision detection, if you want to know whether two objects could collide, they have to be spatially near each other. So if I already know through the bounding boxes of the tree that they are far apart, then I can ignore this pair and not compute the exact intersection between them. And there are several papers here: the work of Ingo Wald more or less started this whole business in his thesis, and then I also give some recent papers that usually look into how to do this fast on the GPU. This is more or less the current trend now. There are also upcoming works to do the same on the Intel many-core architecture, the Xeon Phi. Good. This concludes the first part of this lecture. Are there any questions? If not, then I continue with something completely different now. I mean, this is a very technical topic.
If you want to implement it, then you have to look into the papers anyhow, because I cannot lay out all the coding issues here — I mean, it would be super boring. On the other hand, the surface area heuristic in itself has proved worthwhile, but there are a lot of different approaches: an approximation of this small part, an approximation of that small part. There are many papers that focus on different partial problems within the whole research problem. Going through a lot of literature is also suboptimal, because due to the rapidly increasing hardware capabilities, the turnover is also quite fast. Things that were super smart approaches, say, four years ago do not cut it anymore, because GPUs now have completely different functionality and can do certain aspects more efficiently. So this has been a rapidly developing topic for years already. If you want to implement that, have a look at the current literature. There are a few standard papers, like the one of Ingo Wald, which have lasting contributions, but mostly in between are small optimizations that are focused on things that are perhaps not relevant anymore. Okay, good.
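To make the surface area heuristic from this part of the lecture concrete, here is a minimal sketch of evaluating the SAH cost of one candidate split of a set of bounding boxes. This is an illustration only, not the construction code from any of the referenced papers; the constants `traversal_cost` and `intersection_cost` are assumed placeholder values.

```python
# Sketch: SAH cost of splitting boxes into two child groups. The cost is the
# traversal cost plus, per child, the probability of a random ray hitting it
# (surface-area ratio w.r.t. the parent) times its triangle-intersection cost.

def surface_area(bmin, bmax):
    """Surface area of an axis-aligned bounding box given as two 3-tuples."""
    dx = bmax[0] - bmin[0]
    dy = bmax[1] - bmin[1]
    dz = bmax[2] - bmin[2]
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def union(boxes):
    """Bounding box enclosing a list of (bmin, bmax) boxes."""
    bmin = tuple(min(b[0][i] for b in boxes) for i in range(3))
    bmax = tuple(max(b[1][i] for b in boxes) for i in range(3))
    return bmin, bmax

def sah_cost(left, right, traversal_cost=1.0, intersection_cost=2.0):
    """SAH cost of putting `left` and `right` into two child nodes."""
    sa_parent = surface_area(*union(left + right))
    sa_left = surface_area(*union(left))
    sa_right = surface_area(*union(right))
    return (traversal_cost
            + (sa_left / sa_parent) * len(left) * intersection_cost
            + (sa_right / sa_parent) * len(right) * intersection_cost)

# Four unit boxes in a row: the spatially coherent split is cheaper than an
# interleaved one, because its children have smaller surface areas.
boxes = [((float(i), 0.0, 0.0), (float(i + 1), 1.0, 1.0)) for i in range(4)]
print(sah_cost(boxes[:2], boxes[2:]))                       # coherent split
print(sah_cost([boxes[0], boxes[2]], [boxes[1], boxes[3]])) # interleaved split
```

This is exactly the trade-off the lecture describes: a builder that searches harder for low-cost splits (like SBVH) pays more construction time for a cheaper-to-traverse tree.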
[{"start": 0.0, "end": 3.0, "text": " Some detail on bonding volume hierarchy."}, {"start": 3.0, "end": 8.0, "text": " So, I mean, you take the object, say the triangles and group them somehow together."}, {"start": 8.0, "end": 11.0, "text": " I mean, there are a lot of ways to do that."}, {"start": 11.0, "end": 14.0, "text": " This is a combinatorial explosion."}, {"start": 14.0, "end": 19.0, "text": " So you cannot just say, test each possibility and check which is the best one."}, {"start": 19.0, "end": 22.0, "text": " So usually, this is seen dependent."}, {"start": 22.0, "end": 29.0, "text": " So you don't know in beforehand which bonding volume hierarchy would give you the best performance."}, {"start": 29.0, "end": 38.0, "text": " Because it could be that through the light propagation, light very seldomly enters, say, one half of the room."}, {"start": 38.0, "end": 41.0, "text": " Because there is a wall with only a small hole."}, {"start": 41.0, "end": 50.0, "text": " If you do not know that, then you would treat both rooms with the same priority and put them very high up in the tree hierarchy."}, {"start": 50.0, "end": 58.0, "text": " But if light only travels very seldomly to half of the room, then you could make one huge note for only half of the room."}, {"start": 58.0, "end": 63.0, "text": " And spend all your detail on the thing where actually something happens."}, {"start": 63.0, "end": 69.0, "text": " So the same usually dictates what kind of hierarchies optimal."}, {"start": 69.0, "end": 75.0, "text": " But this doesn't make too much sense to take this into account."}, {"start": 75.0, "end": 84.0, "text": " Because if you have to run the light simulation to know what is the most optimal spatial hierarchy, then you have done the light simulation already."}, {"start": 84.0, "end": 95.0, "text": " So you need to use some kind of heuristic that works for general scenes and build a hierarchy that optimizes this heuristic."}, {"start": 95.0, "end": 
107.0, "text": " And the most popular one is the surface area of heuristic where you compute a cost for the whole hierarchy and try to find the one with the lowest cost."}, {"start": 107.0, "end": 118.0, "text": " And here, just to quickly show the formula, you can read this in detail in the references I provide on the last slide of the lecture."}, {"start": 118.0, "end": 122.0, "text": " But here you sum up two components."}, {"start": 122.0, "end": 128.0, "text": " So the costs of the inner nodes of the trees and the costs of the leaf nodes."}, {"start": 128.0, "end": 134.0, "text": " Because as we already know, the object, so the triangles are in the leaf node of the tree."}, {"start": 134.0, "end": 139.0, "text": " So all the intermediate nodes are just different groupings."}, {"start": 139.0, "end": 142.0, "text": " So from fine to coarse."}, {"start": 142.0, "end": 144.0, "text": " But they do not contain content."}, {"start": 144.0, "end": 158.0, "text": " So they just say if I hit a bounding box of some intermediate node, then it tells me year my next level are these two bounding boxes continue with them."}, {"start": 158.0, "end": 167.0, "text": " Then you check the next two bounding boxes in this volume and continue recursively until you hit all the leaf nodes that are appropriate."}, {"start": 167.0, "end": 170.0, "text": " So that lie along your ray."}, {"start": 170.0, "end": 172.0, "text": " And the costs."}, {"start": 172.0, "end": 183.0, "text": " So there is a inner cost associated with getting from one bounding box to the bounding boxes that lie in it."}, {"start": 183.0, "end": 192.0, "text": " And the cost of the leaf nodes, which are also the sexual costs of the triangles themselves."}, {"start": 192.0, "end": 205.0, "text": " So see in this formula is the cost of checking which bounding boxes are appropriate for continuing through the tree."}, {"start": 205.0, "end": 215.0, "text": " And the cost for the leaf node is the same for it. 
The TN is the cost for the triangle intersections."}, {"start": 215.0, "end": 223.0, "text": " And now the heuristic enters via the surface areas of objects."}, {"start": 223.0, "end": 232.0, "text": " Because the main assumption in this heuristic is that you have that race lie randomly in your scene."}, {"start": 232.0, "end": 238.0, "text": " So you don't know in beforehand in which direction light will travel."}, {"start": 238.0, "end": 242.0, "text": " So you just assume a random rate is tribution."}, {"start": 242.0, "end": 250.0, "text": " And then you check how probable is it that a hit certain objects."}, {"start": 250.0, "end": 259.0, "text": " So objects with a small surface area are less probable, larger triangles are more probable."}, {"start": 259.0, "end": 266.0, "text": " And what you want to do is that you give very good groupings."}, {"start": 266.0, "end": 276.0, "text": " As a groupings that have a high chance that you actually hit something in them or that you can exclude a lot of this."}, {"start": 276.0, "end": 282.0, "text": " And this is here shown as a ratio of the surface areas."}, {"start": 282.0, "end": 292.0, "text": " So AN is the surface area of node N. 
This is the volume of the bounding box which has a certain surface."}, {"start": 292.0, "end": 298.0, "text": " And it's also dictates how probable is it that I hit this bounding box."}, {"start": 298.0, "end": 304.0, "text": " And then you have the surface area of the root, so the level above."}, {"start": 304.0, "end": 318.0, "text": " So I have my own bounding box at a certain level of the tree and my root that contains me is a larger bounding box that at least has the extent of my current one."}, {"start": 318.0, "end": 332.0, "text": " But what you want to do is that you want to minimize this cost so that you want to have a large surface area of the root but a small surface area of your current bounding box."}, {"start": 332.0, "end": 338.0, "text": " Because this means that you can exclude a lot of volume in the space."}, {"start": 338.0, "end": 355.0, "text": " So if you're going through a huge bounding box and you want to decide where do I have to continue, then the smaller the continuation is, the more descriptive it is where the same content is."}, {"start": 355.0, "end": 362.0, "text": " And then you want to have a small bounding box that is the same as the one for the final leaf nodes where the triangles are."}, {"start": 362.0, "end": 368.0, "text": " And now you try to build a whole hierarchy that optimizes this cost."}, {"start": 368.0, "end": 378.0, "text": " So this is not that you can decide at every level or at every level you decide what is the best ratio here that I can achieve."}, {"start": 378.0, "end": 383.0, "text": " And then this gives you how your grouping has to be done."}, {"start": 383.0, "end": 392.0, "text": " And there are different heuristics in the recent literature that take some more information of the scene into account."}, {"start": 392.0, "end": 403.0, "text": " So for example, the surface area heuristic not only assumes random redistribution in the scene, but also assumes that they are infinitely long."}, {"start": 403.0, 
"end": 419.0, "text": " So that they just travel through the whole scene and are not blocked by objects. This is taken into account with more sophisticated heuristics and there are references on this."}, {"start": 419.0, "end": 433.0, "text": " And so, yeah, I have a dynamic scene, then the bound box can get larger. Yes, one object moves away."}, {"start": 433.0, "end": 442.0, "text": " So a leaf node can be larger than its root because the two objects move from the bottom and so that leaf gets larger."}, {"start": 442.0, "end": 450.0, "text": " Now you have to account for that. So you would have to update. You would have to propagate this information up the tree."}, {"start": 450.0, "end": 457.0, "text": " Otherwise, it would fail, so to say, because if you do not hit the root node, yeah."}, {"start": 457.0, "end": 474.0, "text": " I mean, this you have to propagate up, but it's the way of the propagation is clear. So it's just a grouping upwards till you have contained even with the dynamic update what's happening."}, {"start": 474.0, "end": 492.0, "text": " But with KD trees, this is not so easy because the space itself is activated. So you have to somehow determine where is the object moving to in which other part of the tree, which is not simple because it could be that it moves into another leaf node."}, {"start": 492.0, "end": 506.0, "text": " But the leaf node could be split already at the very top level of the tree. So to find the other leaf node where your KD tree object moves into, you have to go up and down the whole tree."}, {"start": 506.0, "end": 515.0, "text": " So this is much more costly, much more complicated. 
Here you just propagated upwards till it's okay."}, {"start": 515.0, "end": 530.0, "text": " So, I mean, the surface area of heuristic is just that, a heuristic, but it's still expensive to compute the optimal tree for that."}, {"start": 530.0, "end": 539.0, "text": " So there is not necessarily a unique solution with the minimal surface area of heuristic, but there is one."}, {"start": 539.0, "end": 556.0, "text": " And since this is expensive, there are also methods how to approximate. So not to develop a hierarchy with the optimal cost, but with one that's good enough for the purpose."}, {"start": 556.0, "end": 569.0, "text": " And usually this is a trade off. So the more time you invest to build your spatial hierarchy, the better its quality gets."}, {"start": 569.0, "end": 584.0, "text": " And in turn, the more efficient the light simulations. So if you don't spend a lot of time to build your hierarchy, you have bad quality, inefficient rate or virtual during the global illumination simulation."}, {"start": 584.0, "end": 595.0, "text": " You are actual rendering takes longer. But if you spend more time on the hierarchy, then it has better properties for a propagation."}, {"start": 595.0, "end": 603.0, "text": " So your lighting simulation is more efficient and it's faster. But you see that there is some kind of trade off."}, {"start": 603.0, "end": 618.0, "text": " So I mean, if I and usually this is encoded and or start again, usually depends on how complex your light simulation is."}, {"start": 618.0, "end": 635.0, "text": " So if I want to trace say 1000 rays, then the cost of this is very, very low. So I just need a, I can live with a very approximated hierarchy."}, {"start": 635.0, "end": 660.0, "text": " So the hierarchy quality can be very bad, but because I shoot so less rays, I will not feel the difference too much. 
But if I shoot rate counts in the billions, then even a small increase in optimality of the hierarchy will give you significant gains in your rendering time."}, {"start": 660.0, "end": 669.0, "text": " So what you see here in this graph, this I just showed that you get a feeling what are different methods there."}, {"start": 669.0, "end": 676.0, "text": " I put the reference to the actual method to the actual paper where this is from right next to it."}, {"start": 676.0, "end": 686.0, "text": " So what you see here are different methods on how to generate bounding volume hierarchies with the surface area heuristic."}, {"start": 686.0, "end": 703.0, "text": " So as you see the blue line, the SPVH has very low call or start again."}, {"start": 703.0, "end": 732.0, "text": " So what you see here is on the x axis, the number of rays that you will shoot in your light thing simulation. So that means that the more you go to the right, the more complex the light simulation is, the more quality you want of the final rendering, the deeper you go into reflection and reflection levels, things like that."}, {"start": 732.0, "end": 750.0, "text": " On the other hand, on the y axis, you see how many rays the lighting simulation can trace per second. So that means that the higher you go up the y axis, the faster your lighting simulations."}, {"start": 750.0, "end": 774.0, "text": " And now you have to find some trade off. So SPVH constructs very good spatial hierarchies, but it's also very slow. That means that for lighting simulation that only use a few million rays, the performance is very bad."}, {"start": 774.0, "end": 787.0, "text": " Because most of the time is taken to build a spatial hierarchy. 
So the press, the edge, it takes longer to build the hierarchy than to do the actual rendering, which doesn't make too much sense."}, {"start": 787.0, "end": 810.0, "text": " But if you go into the into one, the other is for computing your final image, then it starts to pay off because you have a very high performance. So you can trace in this example on the hardware, 400 million rays per second."}, {"start": 810.0, "end": 833.0, "text": " BL, BVH, HL, BVH, on the other hand, is a method to quickly get a spatial hierarchy that's not very optimal. So you see that for scenes with only a few million rays, you already get close to the final performance."}, {"start": 833.0, "end": 849.0, "text": " 200 million rays per second, and you are much faster than SPVH here. But the more rays you shoot, the more you are heard by the missing optimality of your hierarchy."}, {"start": 849.0, "end": 867.0, "text": " And it's that there is some kind of sweet spot around 10 giga rays where SPVH gets actually better than HL, BVH. And in this paper, they propose another method that is faster to construct."}, {"start": 867.0, "end": 888.0, "text": " So you see it in the green dotted line. So you quickly, it already gives a significant performance, increasements, even for smaller simulation. So already at 100 million rays, you are better than HL."}, {"start": 888.0, "end": 906.0, "text": " And you get, but you get quickly close to the performance of SPVH. So this is in this paper, this shows that, yeah, they found a very good intermediate method that's only a bit less optimal than the state of the art before."}, {"start": 906.0, "end": 922.0, "text": " So I advise you to look into this paper. You see a lot of interesting things there. So how to port BVH, a construction on the GPU, parallelization issues, and other smart tweaks."}, {"start": 922.0, "end": 943.0, "text": " So I give you a literature. So in PBRD, it's the chapter four. 
And since this is inherently a geometrical problem, so you want to know where are triangles in the scene, the same hierarchies can also be used for collision detection."}, {"start": 943.0, "end": 964.0, "text": " Because for collision detection, if you want to know, could two objects collide, then you have to be spatially near to each other. So if I know that they are far apart already in the, through the bounding boxes of the tree, then I can ignore this and not compute the exact intersection between them."}, {"start": 964.0, "end": 982.0, "text": " And there are several papers here. So the work of IngoVide, more or less, started this whole business in this thesis. And then I also give some recent papers that usually looking to how to do this fast in the GPU."}, {"start": 982.0, "end": 997.0, "text": " So this is more or less the current trend now. There are also upcoming works to do the same on this Intel, many core architecture, so the Xeon file."}, {"start": 997.0, "end": 1012.0, "text": " Good. This concludes the first part of this lecture. Are there any questions? If not, then I continue with something completely different now. I mean, this is a very technical topic."}, {"start": 1012.0, "end": 1037.0, "text": " If you want to implement it, then you have to look into the papers anyhow, because I cannot layout here all the issues with coding. I mean, it would be super boring. And on the other hand, it's also the surface area heuristic in itself has proved worthful."}, {"start": 1037.0, "end": 1054.0, "text": " But I mean, there are a lot of different approaches. So approximation of this small part, approximation of this small part. So there are many papers that focus on different partial problems in the whole in the whole research problem."}, {"start": 1054.0, "end": 1075.0, "text": " So going through a lot of literature is also suboptimal because due to the rapidly increasing hardware capabilities, the tunnel is also quite fast. 
So things that were super smart approaches say four years ago,"}, {"start": 1075.0, "end": 1092.0, "text": " do not cut it anymore because GPUs now have completely different functionality and can do certain aspects more efficiently. So this is a rapidly developing topic since years already."}, {"start": 1092.0, "end": 1113.0, "text": " So if you want to implement that, have a look at the current literature. There are a few standard papers like the one of the of IngoVide, which have lasting contributions, but mostly in between are small optimizations that are focused on things that are perhaps not relevant anymore."}, {"start": 1113.0, "end": 1125.0, "text": " Okay, good. Let's put this."}]
Two Minute Papers
https://www.youtube.com/watch?v=s6i8AV-m4W8
TU Wien Rendering #21 - Tone Mapping Basics
This lecture is held by Thomas Auzinger. In the first lecture, we discussed that we're trying to simulate light transport and measure radiance. That sounds indeed wonderful, but we can't display radiance on our display device, can we? We have to convert it to RGB somehow. It turns out that it's not such a trivial problem! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinement are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Now to the second part. It's quite different — it's tone mapping. Before, we had optimizations for the light simulation; now we will look at the very end of our rendering pipeline, namely the issue of showing the image output on a display. The problem is that a light simulation outputs radiance — that is, how much light travels along one direction when it comes from a small surface patch. So you collect radiance with your camera and record it, but somehow your display expects RGB values. Now, the radiance can carry some color information. You could, for example, trace the R, the G and the B channels independently. Or you can do something with more fidelity, like spectral ray tracing, where you trace rays that carry not just radiance, but spectral radiance — the radiance at a certain wavelength of light. You can see this effect, for example, in prisms, where white light is split into rainbow colors. If you do not perform spectral ray tracing there and just assume white radiance, then you cannot simulate this effect, because the refraction — the ray geometry — changes with the wavelength associated with the ray. The refraction angles differ, and this causes the split into the rainbow colors. But one way or another, you have radiance as output, and to show it on displays or to print it, you need to convert your images somehow from radiance to RGB. And there is an inherent problem with that, because the radiance values of a light simulation have a huge range, since the simulation tries to reproduce real-world physics, and in the real world you have a huge difference between dark and bright. For example, if you take a surface at ground level in the Earth's atmosphere that is illuminated either by the sun or by the moon, the ratio between the two is a factor of 800,000. And then imagine that the patch on the ground is either white or black.
This causes another difference in the reflected radiance quantities of approximately a factor of 100. So if you want to do a general light simulation, and illumination by sun and moon should be in there, and white and black surfaces should also be in there, you have to somehow cope with a ratio between the darkest and the brightest values of 80 million. So is it relevant to do that? Can people even see such differences? Yes, they can. These are highly relevant features of our world — differentiating dark from bright, especially under very bright and very dark conditions. Imagine a caveman in the woods at night: it is very useful to see small contrast differences that could indicate predators. So there was an evolutionary pressure to develop a visual system that can take advantage of this huge dynamic range of radiance values. How does this build up? The receptors in our eyes undergo chemical bleaching when they are hit by light particles, and this can be regulated biochemically to enable adaptation over a range of two orders of magnitude. So just by regulating the biochemical properties of the photoreceptors, you can adapt the eye to a dark-to-bright difference of 100. The pupil size gives you another order of magnitude, a factor of 10, and neural adaptation — more or less the signal processing, so what to actually do with the changing signal as your receptors get bleached — provides the rest. So the dynamic range of human vision is approximately 100 million. That means we can in fact perceive the dynamic range of realistic conditions in our atmosphere. But then the output of a light simulation, if it takes this into account, can have a difference of a factor of 80 million between dark and bright, and you want to show this on a display. The technology of a standard display gives you a dark-to-bright ratio of approximately 1000.
And if you use 8-bit encoding of your values, then you only have 256 values per 8-bit channel. That means that our technology is immensely inadequate for showing realistic scenarios. In some way this is also good, because you do not get blinded by your display, but we have to somehow account for it. Just taking the radiance output and converting it to images will carry some problems with it. A question: why 8 bits, when image files usually carry 24 bits of information? Well, usually you split them into the different color channels: 8 bits for R, 8 bits for G, 8 bits for B. Roughly speaking, each gives you how much red, green or blue light is at this pixel, and there you go up to 256 values. Of course, for the whole color gamut you have more values, but if you just look at the dark-to-bright ratio of a single color, this is approximately what you have. So now, tone mapping is the set of methods that was developed to address this problem. The output of a light simulation, as already said, has a high dynamic range because of the real-world dynamic range of brightness values, and display devices usually have a low dynamic range. What you need to do is compress the range of the output somehow, and this is referred to as either tone mapping or tone reproduction — these are the names that you find in the literature. There are two sub-issues here. One is range compression: how to convert high-dynamic-range luminance to low-dynamic-range luminance. This is the content of this lecture. And then you still have luminances, but usually there are standardized color spaces in which images are stored. So you don't store your own home-brewed formats, which take certain wavelengths and then give the luminance values for those; there are standardized ways to do that. This is covered in a different lecture about color — you see the lecture number here — so I refer you to that lecture if you want to know more about color.
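The mismatch described above is easy to quantify; here is a back-of-the-envelope check using only the numbers quoted in the lecture:

```python
# Dynamic range arithmetic from the lecture.
sun_vs_moon = 800_000        # illumination ratio, sunlit vs. moonlit ground
white_vs_black = 100         # reflected radiance ratio, white vs. black patch
scene_range = sun_vs_moon * white_vs_black

display_contrast = 1_000     # typical standard display, brightest : darkest
levels = 2 ** 8              # 8-bit channel -> 256 representable values

print(scene_range)                      # 80000000, i.e. 80 million : 1
print(scene_range // display_contrast)  # 80000x gap the tone mapper must close
```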
Here now I only explain range compression, to some extent. So now a graphic example. Here we have a single bright light source with no ambient light in the scene. That means that all contrast comes only from this one light source, via global illumination. If you were to photograph the scene with a short exposure, this is what you get: all the dimly illuminated parts completely disappear into black, but you see details in regions that were previously too bright, which would have led to overexposure of your camera. For example, here you now see the outline — the silhouette — of the bulb. A very long exposure leads to overexposure of most of the image: you do not see any detail anymore close to the lamp or on directly illuminated surfaces. But the previously dimly illuminated objects in the background are now well perceivable. So this is what you get if you take only small parts of your dynamic range and map those to an image. And what could you do to combine all this information into one image? The most trivial thing you can imagine is: I just divide by the maximum of the scene. This gives me nice luminance values from 0 to 1, and these I can map to whatever range I want. The problem with this is that usually the maximum is not indicative of the whole scene — the maximum is usually extremely bright. So when you divide by the maximum, the only thing you see is a single reflection close to the light bulb. If you clamp instead — say I have a range of luminances that I'm interested in; everything below it is clamped to 0, everything above the maximum I decided on becomes completely white — then this is exactly the problem that becomes obvious with high dynamic range: parts are underexposed or parts are overexposed. So clamping doesn't get you anywhere either. You have to take the whole range; you cannot just ignore certain parts. One approach that gives you nice results is exponential mapping.
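The two naive approaches just described can be sketched in a few lines. This is an illustration on a list of luminance values, not production imaging code; the sample scene values are made up:

```python
# Naive range compression: divide-by-maximum and clamping.

def normalize_by_max(luminances):
    """Divide by the scene maximum: everything lands in [0, 1], but a single
    very bright value (e.g. a light source) pushes the rest toward black."""
    peak = max(luminances)
    return [l / peak for l in luminances]

def clamp(luminances, lo=0.0, hi=1.0):
    """Clamp to a chosen range: everything below `lo` becomes black,
    everything above `hi` becomes white -- under-/overexposure by design."""
    return [min(max((l - lo) / (hi - lo), 0.0), 1.0) for l in luminances]

scene = [0.01, 0.5, 2.0, 80000.0]   # hypothetical radiance values; the last
                                    # one is a directly visible light source
print(normalize_by_max(scene))      # all but the light source are nearly 0
print(clamp(scene))                 # the two brightest values both saturate
```

Both operators use only a slice of the information, which is exactly why the lecture moves on to exponential mapping and the Reinhard operator.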
There, you assume an exponential distribution of your luminance values and then rescale this exponential function into a linear one. That means very bright values get scaled down and low values get scaled up, but you account for the very bright spots — these are the things that get scaled the most. So the bright values at the bulb come out as some reasonable white, but they do not dominate the whole scene. A more sophisticated approach is the Reinhard tone mapper, developed by Reinhard, and this is the one example that we present in this lecture. As you can imagine, there are many tone mappers — everyone tried some approach that works very well for a certain subset of scenes. Some methods were optimized for parallel hardware or for certain hardware architectures to give more speed; some presented rough approximations that work in real time. So there is already a quite wide bunch of tone mappers to choose from, and the Reinhard tone mapper is one of the most common. If you look at an open-source or commercial rendering software, this is usually one of the ones that is implemented. And there is also some additional context: there is an approach in digital photography to take multiple exposure images — to combine exposures of the same scene that are taken with different exposure times into one image. This is called high dynamic range photography, and it usually uses similar methodologies; the Reinhard tone mapper can also be used for HDR photography. But if you put "HDR photography" into Google, you usually end up with images like this one, and this is not what tone mapping is about. Tone mapping is a perceptually and physically validated approach that gives you realistic impressions of the scene; it is not an artistic effect. It's not that you apply tone mapping, look at the image, decide you would like it to have more contrast, and then change your tone mapper.
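As a minimal sketch of the global Reinhard operator mentioned here: luminances are scaled by a "key" relative to the log-average luminance of the image, then compressed with L/(1+L) so that very bright values approach white asymptotically instead of dominating everything. This is my paraphrase of the published operator, not code from the lecture slides; the key value 0.18 is the commonly used default.

```python
import math

def reinhard_global(luminances, key=0.18, eps=1e-6):
    """Global Reinhard tone mapping on a flat list of luminance values."""
    n = len(luminances)
    # Log-average ("geometric mean") luminance of the scene.
    log_avg = math.exp(sum(math.log(eps + l) for l in luminances) / n)
    out = []
    for l in luminances:
        scaled = key * l / log_avg       # map the scene onto a middle-grey key
        out.append(scaled / (1.0 + scaled))  # compress into [0, 1)
    return out
```

Because one curve is applied to every pixel independently, this is a global tone mapper in the sense defined later in the lecture, and it parallelizes trivially.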
I mean, this is not its intended use, and the HDR photography community to a large extent overuses this capability of the system. For example, this photograph here is a completely botched use of HDR range compression. Things that you see that are not correct: the halos around the balloons. Why is the sky suddenly brighter around the silhouettes of the balloons? This is an effect that is present in the visual system to some extent, but not to this extent. Some people think that this is nice HDR photography, but it is just wrong, simple as that. If you like that your images look like that, then this is an artistic decision, not tone mapping — you can call it contrast enhancement. Also, the colors in this image get screwed up, because they are now oversaturated. But this is just a warning: the tone mapping that you encounter for light simulations should not look like that, because then something has gone wrong in your application. A question: was this photo taken with a single exposure? No, it is a combination of different exposures, and it was processed with a tone mapping approach — you have to do the combination somehow. As said before, you could divide by the maximum and combine them, or clamp, but usually tone mapping gives you a more realistic result. And why is it obvious that multiple exposures were already combined here? Because at the very left of the left balloon, you can still see some detail in the wood behind it, while at the same time you can more or less directly look into the sun without it having blurred half the screen. That means that a huge dynamic range was already compressed here. But it should not look like that; it should look like this if it's done correctly.
And this would be the correct processing for this photograph. It looks somehow realistic, it doesn't over-saturate the colors, and it only has a very light halo around the silhouettes, which is also an effect that's present in the visual system. So this is what you should aim for. Now about tone mapping itself. There are two large classes: the global tone mappers and the local tone mappers. The global tone mappers use a mapping function that converts the radiance at a certain pixel to, say, an RGB value if you select that color space. And this mapping function is uniform — what I mean here is not that it produces a uniform output value, so you don't get an image with a single color; rather, the function just takes as input the radiance at a certain pixel and outputs an RGB value, and this same function is then used for all pixels of the image. More complex methodologies are the local tone mappers, which not only take a single pixel into account, but also its neighbors. And this is perceptually motivated, because, as you have seen before, the contrast or brightness adaptation of your eyes happens locally — in the photoreceptors, for example, where a single photoreceptor adapts to its own brightness. So tone mapping in the human eye is a local behavior. But there are reasons to employ both of them. The global tone mappers are fast, because they have a single mapping function — you can execute this function in parallel on each pixel, which makes it perfectly usable for GPU approaches, for example. But you incur some loss of detail, because you cannot locally ask: is this already a dark patch of my scene, so I can enhance the contrast more there? Or is it at a dark-bright boundary, where I should not do that as much? The local tone mappers allow this.
So they allow a local contrast enhancement, but they are slow, because you not only have to look at the pixel, but also at its neighborhood. The neighborhood grows quadratically if you enlarge it, so there you incur a different complexity in your problem.
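To make the global/local distinction concrete, here is a minimal Python sketch. It is not from the lecture: the mapping functions passed in and the border handling are illustrative assumptions. A global operator applies one function independently per pixel; a local one also feeds in a neighborhood average, whose cost grows quadratically with the radius.

```python
def tonemap_global(img, f):
    # Global operator: one mapping function, applied independently to
    # every pixel, so it parallelizes trivially (e.g. on a GPU).
    return [[f(p) for p in row] for row in img]

def tonemap_local(img, f, radius=1):
    # Local operator: the mapping also sees a neighborhood average.
    # The window holds (2*radius+1)^2 pixels, so the cost grows
    # quadratically with the radius.
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            row.append(f(img[y][x], sum(vals) / len(vals)))
        out.append(row)
    return out
```

The two signatures differ on purpose: the global callback sees only the pixel, the local one also receives the neighborhood average, which is exactly the extra information that makes local operators both more powerful and slower.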
[{"start": 0.0, "end": 2.0, "text": " Now to the second part."}, {"start": 4.12, "end": 6.88, "text": " It's quite different, so it's tone mapping."}, {"start": 8.0, "end": 11.64, "text": " This is concerned in our so before we had"}, {"start": 12.48, "end": 19.12, "text": " optimization for the light simulation. Now we will look at the very end of our rendering pipeline."}, {"start": 19.240000000000002, "end": 22.92, "text": " So the issue of showing the"}, {"start": 23.76, "end": 26.240000000000002, "text": " image output on a display."}, {"start": 26.24, "end": 33.16, "text": " Because the problem is that a light simulation outputs radians."}, {"start": 33.56, "end": 40.8, "text": " So it's the how much light travels along one direction when it comes from a small surface patch."}, {"start": 41.239999999999995, "end": 44.28, "text": " So you collect radians with your camera,"}, {"start": 46.519999999999996, "end": 48.519999999999996, "text": " recorded,"}, {"start": 48.519999999999996, "end": 52.239999999999995, "text": " but somehow your display expects RGB values."}, {"start": 52.24, "end": 56.24, "text": " So the radians can carry some color information."}, {"start": 56.800000000000004, "end": 58.800000000000004, "text": " So you could, for example,"}, {"start": 59.480000000000004, "end": 62.800000000000004, "text": " trace the R, the G, the B value,"}, {"start": 64.16, "end": 73.12, "text": " Genuo independently. You can do something with more fidelity like spectral ray tracing. So your trace raised that"}, {"start": 74.36, "end": 79.76, "text": " where not only radians is carried along the ray, but spectral radians."}, {"start": 79.76, "end": 84.04, "text": " So radians of a certain wavelength of the light."}, {"start": 84.76, "end": 88.4, "text": " This, for example, you can see this effect in"}, {"start": 89.0, "end": 94.60000000000001, "text": " Prisons where you split white light into rainbow colors. 
So if you do not"}, {"start": 95.64, "end": 98.48, "text": " perform spectral ray tracing there and"}, {"start": 98.96000000000001, "end": 103.68, "text": " just assume we have white radians, then you cannot simulate this effect."}, {"start": 104.24000000000001, "end": 106.76, "text": " Because then the refraction,"}, {"start": 106.76, "end": 113.52000000000001, "text": " so the ray geometry changes with the wavelength that's associated to it."}, {"start": 113.52000000000001, "end": 119.16000000000001, "text": " So your reflection angles get different and this causes the split into the rainbow colors."}, {"start": 120.36, "end": 126.2, "text": " So, but in some way you have radians as output."}, {"start": 127.08000000000001, "end": 133.0, "text": " So to show them on displays or to bring them,"}, {"start": 133.0, "end": 136.6, "text": " you need to convert your images"}, {"start": 137.24, "end": 139.56, "text": " somehow from radians to RGB."}, {"start": 140.2, "end": 147.0, "text": " And there is an inherent problem with that because radians of"}, {"start": 148.16, "end": 154.76, "text": " light simulation have a huge range because they try to simulate real world physics."}, {"start": 155.4, "end": 157.4, "text": " And in real world,"}, {"start": 157.4, "end": 163.32, "text": " you have a huge difference between dark and bright."}, {"start": 164.0, "end": 166.0, "text": " So for example,"}, {"start": 166.68, "end": 174.12, "text": " if you take the ratio between the surface at"}, {"start": 174.56, "end": 181.92000000000002, "text": " say ground level in the Earth atmosphere that is either illuminated by"}, {"start": 181.92, "end": 187.16, "text": " the sun or the moon, the difference is a factor of 800,000."}, {"start": 188.07999999999998, "end": 193.2, "text": " And then imagine that the patch on the ground is either white or black."}, {"start": 193.79999999999998, "end": 202.0, "text": " This causes another difference in the reflected radians quantities of 
approximately a factor 100."}, {"start": 202.32, "end": 209.27999999999997, "text": " So if you want to do say a general light simulation and you can expect that"}, {"start": 209.28, "end": 216.0, "text": " illumination by a sun and moon should be in there, white and black surfaces should also be in there."}, {"start": 216.32, "end": 224.08, "text": " You have to somehow cope with a ratio between the darkest and the brightest values of 80 million."}, {"start": 229.2, "end": 231.68, "text": " So is it relevant to do that?"}, {"start": 232.24, "end": 236.08, "text": " So can people even see the difference between that?"}, {"start": 236.08, "end": 238.56, "text": " And yes, they can."}, {"start": 238.8, "end": 241.84, "text": " So since these are highly relevant"}, {"start": 244.08, "end": 246.96, "text": " features of our world to differentiate"}, {"start": 248.08, "end": 253.60000000000002, "text": " dark from bright, especially under very bright and dark conditions."}, {"start": 253.92000000000002, "end": 257.2, "text": " So imagine a caveman in the woods at night."}, {"start": 257.44, "end": 263.84000000000003, "text": " It will be very good to see small contrast differences that could indicate predators."}, {"start": 263.84, "end": 279.84, "text": " So there was a evolutionary forcing to develop a visual system that also can take advantage of this huge dynamic range of radians values."}, {"start": 281.03999999999996, "end": 284.0, "text": " So for example, how does this build up?"}, {"start": 284.0, "end": 293.2, "text": " So for the receptors in our eyes, they have chemical bleaching when they are hit by light particles."}, {"start": 293.2, "end": 303.2, "text": " So this can be regulated biochemically to enable adaptation in a range of two orders of magnitude."}, {"start": 303.92, "end": 315.91999999999996, "text": " So just by regulating the biochemical properties of photoreceptors, you can adapt the eye to a different in dark and bright of 100."}, {"start": 
315.92, "end": 323.92, "text": " The pupil size gives you another order of magnitude, so a factor of 10, and the neural adaptation is more or less the single processing."}, {"start": 323.92, "end": 329.92, "text": " So what to do actually with the changing signal because your receptors get bleached."}, {"start": 330.64000000000004, "end": 334.64, "text": " So the dynamic range of human vision is approximately 100 million."}, {"start": 334.64, "end": 350.64, "text": " So that means that we can perceive, in fact, the dynamic range of realistic conditions in our atmosphere."}, {"start": 350.64, "end": 370.64, "text": " But then you have, so the output of a light simulation, if it takes this into account, can be, I can have a difference of 80 million, a factor of 80 million between dark and bright, and then you want to show this on the display."}, {"start": 370.64, "end": 379.84, "text": " So the technology for a standard display gives you a dark to bright ratio of approximately 1000."}, {"start": 379.84, "end": 391.44, "text": " And if you use 8-bit encoding of your values, then you only have 265 values for 8-bit channels."}, {"start": 391.44, "end": 401.44, "text": " That means that somehow our technology is immensely inadequate to show realistic scenarios."}, {"start": 403.44, "end": 414.64, "text": " In some way, this is also good because you get not blinded by your display, but we have to somehow account for that."}, {"start": 414.64, "end": 423.03999999999996, "text": " So just taking the radians output and converting it to images will carry some problems with it."}, {"start": 425.03999999999996, "end": 425.53999999999996, "text": " Yeah."}, {"start": 425.53999999999996, "end": 433.03999999999996, "text": " But why 8-bit image files for visually witnessed files are 24 bits of information?"}, {"start": 433.04, "end": 450.24, "text": " But usually you split them into the different color values. So you have 8 bits for R, 8 bits for G, 8 bits for B. 
And somehow imagine that this gives you how much red light is at this pixel."}, {"start": 450.24, "end": 453.04, "text": " And yeah, you go to 266."}, {"start": 453.04, "end": 463.44, "text": " Of course, for the whole color gamut, you have more values, but if you just look at the dark bright ratio of a single color, this is approximately what you have."}, {"start": 463.44, "end": 474.64000000000004, "text": " So now tone mapping is the methods that was developed to convert this problem."}, {"start": 474.64, "end": 485.84, "text": " So the output of a light simulation, as already said, high dynamic range because of the real world dynamic range of brightness values."}, {"start": 485.84, "end": 490.24, "text": " And display devices usually have a low dynamic range."}, {"start": 490.24, "end": 495.44, "text": " So what you need to do is to compress the range of our output somehow."}, {"start": 495.44, "end": 499.44, "text": " And this is referred to as either tone mapping or tone reproduction."}, {"start": 499.44, "end": 505.04, "text": " These are the names that you find in literature."}, {"start": 505.04, "end": 510.04, "text": " So there are two sub-issues here."}, {"start": 510.04, "end": 511.84, "text": " One is range compression."}, {"start": 511.84, "end": 519.24, "text": " So how to convert high dynamic range luminance to low dynamic range luminance."}, {"start": 519.24, "end": 521.64, "text": " This is the content of this lecture."}, {"start": 521.64, "end": 531.04, "text": " And then you still have luminances, but usually there are standardized color spaces in which images are stored."}, {"start": 531.04, "end": 542.4399999999999, "text": " So you don't store your own whole-brune formats, which take certain wavelength and then gives the luminance values for those."}, {"start": 542.4399999999999, "end": 545.64, "text": " But there are standardized ways to do that."}, {"start": 545.64, "end": 553.64, "text": " And this is covered in a different lecture about 
color. You see the lecture number here."}, {"start": 553.64, "end": 558.04, "text": " So I refer you to this lecture if you want to know more about color."}, {"start": 558.04, "end": 566.04, "text": " And here now I only explain range compression to some extent."}, {"start": 566.04, "end": 573.04, "text": " So now a graphic example."}, {"start": 573.04, "end": 581.24, "text": " So here we have a bright, a single bright light source with no ambient light in the scene."}, {"start": 581.24, "end": 592.24, "text": " That means that all contrast comes only from this one light source via global illumination."}, {"start": 592.24, "end": 600.4399999999999, "text": " So if you take, if you would photograph the scene and take a short exposure, this is what you get."}, {"start": 600.44, "end": 608.44, "text": " So all the dimly illuminated parts completely disappear in black."}, {"start": 608.44, "end": 617.24, "text": " And but you see details in previously two bright scenes."}, {"start": 617.24, "end": 620.24, "text": " So that led to overexposure of your camera."}, {"start": 620.24, "end": 629.84, "text": " So for example here, you now see the outline, so the silhouette of the bulb, for example."}, {"start": 629.84, "end": 635.24, "text": " A very long exposure leads to overexposure of most of the image."}, {"start": 635.24, "end": 643.44, "text": " So you do not see any detail anymore close to the lamp or at directly illuminated surfaces."}, {"start": 643.44, "end": 654.0400000000001, "text": " But what you see is that previously dimly illuminated objects in the background now are well perceivable."}, {"start": 654.04, "end": 665.64, "text": " So this is what you get if you take only small parts of your dynamic range and map those to an image."}, {"start": 665.64, "end": 670.4399999999999, "text": " And what could you do to combine all this information into one image?"}, {"start": 670.4399999999999, "end": 676.4399999999999, "text": " So the most trivial thing you can 
imagine is I just divide by the maximum of the scene."}, {"start": 676.4399999999999, "end": 681.4399999999999, "text": " This gives me a nice luminance values from 0 to 1."}, {"start": 681.44, "end": 684.44, "text": " And this I can map to whatever range I want."}, {"start": 684.44, "end": 692.0400000000001, "text": " The problem with this is that usually the maximum is not indicative of the whole scene."}, {"start": 692.0400000000001, "end": 694.84, "text": " The maximum is usually extremely bright."}, {"start": 694.84, "end": 705.24, "text": " So when you're divide by the maximum, the only thing you see is a single reflection in the close to the light bulb."}, {"start": 705.24, "end": 716.44, "text": " If you clamp it, so let's say I have a range of luminances that I'm interested in, everything below is clamped by a 0."}, {"start": 716.44, "end": 723.44, "text": " Everything above my maximum that I decide to have is completely white."}, {"start": 723.44, "end": 733.64, "text": " This is exactly the problem that becomes obvious with high dynamic range that parts are undexposed or the parts are always exposed."}, {"start": 733.64, "end": 735.84, "text": " So clamping also doesn't get you something."}, {"start": 735.84, "end": 738.04, "text": " You have to take the whole range."}, {"start": 738.04, "end": 742.64, "text": " You cannot just ignore certain parts."}, {"start": 742.64, "end": 748.4399999999999, "text": " So one approach that gives you nice results is exponential mapping."}, {"start": 748.4399999999999, "end": 760.4399999999999, "text": " There, you assume exponential distribution in your luminance values and then rescale this exponential function into a linear one."}, {"start": 760.44, "end": 766.6400000000001, "text": " That means that very bright values get scaled down."}, {"start": 766.6400000000001, "end": 768.84, "text": " Low values get scaled up."}, {"start": 768.84, "end": 773.6400000000001, "text": " But you account for the very bright 
spots."}, {"start": 773.6400000000001, "end": 776.84, "text": " So these are the things that get scaled the most."}, {"start": 776.84, "end": 787.24, "text": " So the bright values at the bar get some reasonable white, but they do not dominate the whole scene."}, {"start": 787.24, "end": 795.64, "text": " A more sophisticated approach is the Reinhardt tool map developed by Reinhardt."}, {"start": 795.64, "end": 802.64, "text": " And this is the one example that we present in this lecture."}, {"start": 802.64, "end": 805.04, "text": " As you can imagine, there are many tool map."}, {"start": 805.04, "end": 817.04, "text": " So everyone tried some approach that works very well for a certain subset of scenes."}, {"start": 817.04, "end": 829.64, "text": " Some methods were optimized for parallel hardware or for certain hardware architectures to give more speed there."}, {"start": 829.64, "end": 833.8399999999999, "text": " Some presented rough approximation that work in real time."}, {"start": 833.8399999999999, "end": 840.8399999999999, "text": " So there is already a quite wide bunch of tool map to choose from."}, {"start": 840.8399999999999, "end": 844.64, "text": " And Reinhardt tool map is one of the most common."}, {"start": 844.64, "end": 852.64, "text": " So if you look into some, at an open source of commercial rendering software,"}, {"start": 852.64, "end": 857.64, "text": " this is usually one of the things that is implemented."}, {"start": 857.64, "end": 868.64, "text": " So and there is also some additional information that put this into context."}, {"start": 868.64, "end": 882.64, "text": " There is some approach in digital photography to take multiple exposure images."}, {"start": 882.64, "end": 896.04, "text": " So to combine exposures of the same scene that are taken with different exposure times into one image."}, {"start": 896.04, "end": 899.8399999999999, "text": " So this is called hydonomic range photography."}, {"start": 899.8399999999999, "end": 
904.04, "text": " And usually they use similar methodologies."}, {"start": 904.04, "end": 911.4399999999999, "text": " So the Reinhardt tool map can also be used to combine or for HDR photography."}, {"start": 911.4399999999999, "end": 921.8399999999999, "text": " But if you put HDR photography into Google, you usually end up with images like that."}, {"start": 921.8399999999999, "end": 924.04, "text": " And this is not what tool mapping is about."}, {"start": 924.04, "end": 932.04, "text": " Tool mapping is a perceptually and physically validated approach"}, {"start": 932.04, "end": 938.4399999999999, "text": " that gives you realistic impressions of the scene and is not an artistic effect."}, {"start": 938.4399999999999, "end": 942.8399999999999, "text": " So it's not that you take tone mapping, look at the image and say,"}, {"start": 942.8399999999999, "end": 946.24, "text": " I would like to have it to have more contrast in there."}, {"start": 946.24, "end": 948.04, "text": " And then you change your tone map."}, {"start": 948.04, "end": 957.4399999999999, "text": " I mean, this is not it didn't stand that use and the community of HDR photography to a large extent"}, {"start": 957.4399999999999, "end": 961.64, "text": " over uses this capability of the system."}, {"start": 961.64, "end": 973.64, "text": " For example, this photograph here is a completely botched implementation or use of HDR range compression."}, {"start": 973.64, "end": 979.24, "text": " For example, things that you see that are not correct."}, {"start": 979.24, "end": 982.24, "text": " The haloes around the balloons."}, {"start": 982.24, "end": 990.64, "text": " So suddenly, why the sky is brighter around the silhouettes of the balloons?"}, {"start": 990.64, "end": 1001.64, "text": " This is an effect that is present in the visual system to some extent, but not to this extent."}, {"start": 1001.64, "end": 1010.84, "text": " So some people think that this is nice HDR photography, but this is 
just wrong, simply a stat."}, {"start": 1010.84, "end": 1018.84, "text": " If you like that your images look like that, then this is an artist's decision, not tone mapping."}, {"start": 1018.84, "end": 1023.04, "text": " This is just contrast enhancement, you can call it like that."}, {"start": 1023.04, "end": 1030.44, "text": " Also, the colors in this image get screwed up because they are now oversaturated."}, {"start": 1030.44, "end": 1032.04, "text": " But this is just a warning."}, {"start": 1032.04, "end": 1043.44, "text": " The tone mapping that you encounter for light simulations should not look like that because then you have a run your application."}, {"start": 1043.44, "end": 1048.44, "text": " So if you would take this photo with this one exposure?"}, {"start": 1048.44, "end": 1051.44, "text": " I mean, this is a combination of different exposure."}, {"start": 1051.44, "end": 1055.44, "text": " But also some mind-blowing."}, {"start": 1055.44, "end": 1060.44, "text": " It's not just a combination of different exposure, but also some..."}, {"start": 1060.44, "end": 1068.44, "text": " No, it is a combination of different exposures and it's taken with a tone mapping approach."}, {"start": 1068.44, "end": 1070.44, "text": " You have to do the combination somehow."}, {"start": 1070.44, "end": 1076.44, "text": " As said before, you can add a device by the maximum and combine them, clamping."}, {"start": 1076.44, "end": 1088.44, "text": " But usually the tone mapping give you a more realistic result, and for example, why is it obvious that here already multiple exposures were combined?"}, {"start": 1088.44, "end": 1100.44, "text": " Because in the very left of the left balloon, you see that you still see some kind of detail in the wood in behind."}, {"start": 1100.44, "end": 1107.44, "text": " But at the same time, you can all less directly look into the sun without it having blurred half the screen."}, {"start": 1107.44, "end": 1111.44, "text": " So that 
means that already a huge dynamic range was compressed here."}, {"start": 1111.44, "end": 1114.44, "text": " But it should not look like that."}, {"start": 1114.44, "end": 1121.44, "text": " It should look like that if it's done correctly."}, {"start": 1121.44, "end": 1126.44, "text": " And this would be the correct application for this photograph."}, {"start": 1126.44, "end": 1131.44, "text": " So it looks somehow realistic. It doesn't over-saturate the colors."}, {"start": 1131.44, "end": 1144.44, "text": " It only has a very light halo around the silhouettes, which is also an effect that's present in the visual system."}, {"start": 1144.44, "end": 1148.44, "text": " So this is what you should aim for."}, {"start": 1148.44, "end": 1153.44, "text": " So now about tone mapping itself."}, {"start": 1153.44, "end": 1157.44, "text": " There are two large different classes."}, {"start": 1157.44, "end": 1161.44, "text": " So one are the global tone mapas and the other are the local."}, {"start": 1161.44, "end": 1173.44, "text": " So the global tone mapas use a mapping function that converts radians at a certain pixel to say RGB value if you select this color space."}, {"start": 1173.44, "end": 1176.44, "text": " And this mapping function is uniform."}, {"start": 1176.44, "end": 1183.44, "text": " What I mean here is not that it's uniform. 
It produces the uniform output value."}, {"start": 1183.44, "end": 1186.44, "text": " So you don't get an image with a single color value."}, {"start": 1186.44, "end": 1195.44, "text": " But the function itself just takes as input the radians at a certain pixel and outputs RGB value."}, {"start": 1195.44, "end": 1199.44, "text": " And this function is then used for all pixels of the image."}, {"start": 1199.44, "end": 1222.44, "text": " More complex methodologies are local tone mapas, which not only take a single pixel into account, but also its neighbor's."}, {"start": 1222.44, "end": 1237.44, "text": " And this is perceptually motivated because as you have seen before, this contrast or brightness adaptation of your eyes."}, {"start": 1237.44, "end": 1241.44, "text": " In the photo receptors, for example, this is done locally."}, {"start": 1241.44, "end": 1245.44, "text": " So a single photo receptor adapts to different brightnesses."}, {"start": 1245.44, "end": 1256.44, "text": " So that means that you tone mapping in the human eye is a local behavior."}, {"start": 1256.44, "end": 1262.44, "text": " But there are there are reasons to employ both of them."}, {"start": 1262.44, "end": 1267.44, "text": " So the global tone mapas, they are fast because they have a single mapping function."}, {"start": 1267.44, "end": 1273.44, "text": " So you take you can execute this function parallel on each pixel."}, {"start": 1273.44, "end": 1278.44, "text": " So this makes it perfectly usable for GPU approaches, for example."}, {"start": 1278.44, "end": 1288.44, "text": " But you'll incur some loss of detail because you cannot locally look as if this is already a dark patch of my scene."}, {"start": 1288.44, "end": 1292.44, "text": " So I can enhance the contrast more there."}, {"start": 1292.44, "end": 1299.44, "text": " Or is it at a dark bright boundary where I do not do it that much."}, {"start": 1299.44, "end": 1304.44, "text": " And the local tone mapas allow this."}, 
{"start": 1304.44, "end": 1313.44, "text": " So they allow a local contrast enhancement, but they are slow because you not only have to say, look at the pixel, but also its neighborhood."}, {"start": 1313.44, "end": 1319.44, "text": " The neighborhood grows quadratically if you enlarge it."}, {"start": 1319.44, "end": 1329.44, "text": " So I'm there you will incur a different complexity in your problem."}]
Two Minute Papers
https://www.youtube.com/watch?v=E69UxMz2Q9A
TU Wien Rendering #22 - Reinhard's Tone Mapper
This lecture is held by Thomas Auzinger. In the first lecture, we discussed that we're trying to simulate light transport and measure radiance. That sounds indeed wonderful, but we can't display radiance on our display device, can we? We have to convert it to RGB somehow. It turns out that it's not such a trivial problem! Reinhard et al.'s tone mapping algorithm is used in many of the major renderers out there to take care of the problem. Both the local and global versions of the algorithm are discussed. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger. Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
So, as I said before, the tone mapper we will look at was developed by Reinhard et al. and presented at SIGGRAPH 2002. So it already had some time to prove itself, so to say. It's widely used, so most light simulation products and also HDR photography tools have it, and it has both a global and a local variant; they were both presented. Here you see the difference between a global and a local approach. In some way... I mean, usually convolution is done with a fixed kernel, so your kernel function is the same everywhere. Here this could be a non-linear kernel. So it's not linear convolution in the sense that you have one fixed kernel that does it, but a non-linear mapping. In this sense it's not really convolution, but the implementations can be similar in spirit, so if you want to have it fast, parallelization strategies that work for convolution also work for this. Usually (I will go into the local approach later) you have to adapt the feature size, so either you do multiple passes with different discretizations of your kernel size, or you take a huge kernel and vary the kernel function. Okay, here the difference between a global and a local approach. I'm not 100% sure how visible it is with this projector, but for example you see that the local approach has much more contrast, for example in the mosaics of the church. The global one already works very well and is faster, but doesn't give you all the possibilities that your eye would give you. So the steps. This is just a short outline; I will explain what is meant with these formulas. Of course, if you want in-depth details on this, the reference is at the end of the lecture. So here we present the global version. You now have one mapping function for the whole image, or rather for the luminance values of the whole image, to a color space. So the first thing you do is compute some kind of average of the image. 
So, since you do not know whether this is inherently a very dark image or globally a very bright image, you somehow have to set the baseline, so to say. And this is done with the log average. So you don't just take the plain average: you don't simply sum up and divide, because this would give a disproportionate weight to the large values, since they are somehow exponentially distributed, you can imagine. The log average takes care of that: you average in log space and then convert back with the exponential. And then you have some kind of average of your scene, the approximate average brightness of the image I'm looking at. And then you map this to a middle gray value. So a is now what you define as middle gray. This you can vary; it also depends on what range you have available. And this now sets the bar: everything above this middle gray will be brighter, everything below will be darker. Then you take the middle gray value and scale the input accordingly, so now you don't have an input in some arbitrary range, but you have it relative to the middle gray value. And then what you do is compress the high luminances. As you remember from before, the division by the maximum gave you just a single bright spot in the image. This is because the high luminances usually dominate the scene, but are only of very small extent in the scene itself. Imagine looking at the sun: the sun is very small compared to the whole sky dome, but it would dominate it, as it has immensely higher brightness values than the surrounding blue of the sky. So what you do here is map your scaled luminance values to the final output by compressing the higher luminances. In the lower right you see the mapping function. You see that a huge range, say from value 4 to 10 on the x-axis, is mapped to a rather small part of the y-axis. 
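The log-average and middle-gray steps described above can be sketched in a few lines of Python. The small epsilon that guards log(0) and the key value a = 0.18 are common conventions assumed here, not values read off the slides:

```python
import math

def log_average(luminances, eps=1e-6):
    # Average in log space, then go back with exp: a geometric-style mean,
    # so a few huge values do not dominate the estimate.
    n = len(luminances)
    return math.exp(sum(math.log(eps + lum) for lum in luminances) / n)

def scale_to_middle_gray(luminances, a=0.18):
    # Express every luminance relative to the scene's log average, so that
    # the average maps to the chosen middle gray a.
    lw = log_average(luminances)
    return [a * lum / lw for lum in luminances]
```

For a list with one enormous outlier, the log average stays close to the bulk of the values while the arithmetic mean is dragged far upward, which is exactly why the baseline is set this way.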
That means that a lot of large luminances are just compressed into a small range in, say, the RGB values. And what you also have is that the small luminances in the input image are mapped to a larger extent on the y-axis. You can see that the slope of the function starts out steeply; at 45 degrees you have a one-to-one mapping, which happens at approximately 2, and then you have compression. So what this mapping does is compress the high luminances and enhance the low luminances. This gives you the mapping, and you can convert this into... so, I go back. The output luminances then have a predefined range from 0 to 1 and can then be mapped to whatever color space you like. So the local version, the one that takes the neighbors into account, starts off similarly. You compute the log average to get an estimate of the average brightness of the scene. Then you map the values to the middle gray; that means that the value of 1 is now the middle gray that I defined. But now I do not compress the luminance with one uniform function over the whole image; instead I locally adapt this function by looking at the neighbors. So the local average, uppercase V, now depends not only on the coordinates in the image, x and y, so on the pixel that I'm at right now, but also on some kind of scale. The scale means: how uniform is the brightness distribution there? Am I at, say, a silhouette where half of it is very dark and half of it is very bright, or am I in a homogeneous region? Because in homogeneous regions your eye would adapt, to a larger extent, to this brightness, and once it is accommodated to this brightness it can differentiate details better. You have perhaps already experienced this when you look at the moon: if you look quickly at the moon it's just a bright spot, but the longer you look at it, the more details you can discern in it. 
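The compression curve described above, steep near zero and flattening for high values, is the familiar Reinhard sigmoid. The transcript does not spell out the formula, so take this Python sketch as the standard form from the paper as I recall it, including the optional white-point extension that maps everything at or above l_white to pure white:

```python
def reinhard(l_scaled, l_white=None):
    # Basic sigmoid L / (1 + L): low luminances pass almost unchanged
    # (slope ~1 near 0), high luminances are squashed to stay below 1.
    if l_white is None:
        return l_scaled / (1.0 + l_scaled)
    # Extended form: a luminance equal to l_white maps exactly to 1.
    return l_scaled * (1.0 + l_scaled / (l_white * l_white)) / (1.0 + l_scaled)
```

Note how a scaled luminance of 1 (the middle gray) lands at 0.5, and arbitrarily bright inputs never reach 1 in the basic form, which is exactly the "compress the high luminances" behavior of the plotted curve.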
This is an adaptation of your eye to it, and this happens also at local spots on the retina, and this is what's simulated here. So you compute for every pixel position x, y a local scale, and with this local scale you compute a local average V, and this now tells you how much range compression is going to happen at this location. So how to get this scale? The scale is the extent of an area around this pixel where the brightness does not change too much. In this example, where you have a colorful church window that is illuminated from outside, you see three different examples of how you can place the influence region of your scale. The Reinhard tone mapper finds it by computing two radii and then looking for the scale at which they show a certain behavior. At the very top, you see the pixel in the center of two concentric circles. So this is the pixel that I want to tone map now, and if I put two small concentric circles around it, then I see that the center area, the smaller circle, is quite smooth in its distribution of brightnesses, but the outer circle is very smooth too. So there is no significant difference between the small and the large circle; that means that my scale is too small, and I could enlarge my circles even more and still retain smoothness. In the center example, you see the correct application: the center disk has smooth values, so they are approximately of the same brightness, but if you now take the outer circle, which intersects the window, you see that suddenly very large brightness values appear in this footprint. So the small circle is smooth, but the surrounding area is not, and this is what you want to achieve when you determine the local scale. The larger the local scale is, the more you are looking at a large patch of equal brightness, and your eye would adapt to that. 
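The circle-growing idea just described can be sketched in Python. A real implementation uses Gaussian-blurred images at each scale; the box averages standing in for Gaussians, the 5% threshold, and the factor of 2 between the two radii are illustrative assumptions here:

```python
def box_average(img, x, y, radius):
    # Mean over a square window, clamped at the image borders.
    h, w = len(img), len(img[0])
    vals = [img[j][i]
            for j in range(max(0, y - radius), min(h, y + radius + 1))
            for i in range(max(0, x - radius), min(w, x + radius + 1))]
    return sum(vals) / len(vals)

def pick_scale(img, x, y, scales, threshold=0.05, eps=1e-6):
    # Grow the scale as long as the center disk and the (twice as large)
    # surround still agree; stop when the surround hits a brightness edge.
    best = scales[0]
    for s in scales:
        center = box_average(img, x, y, s)
        surround = box_average(img, x, y, 2 * s)
        if abs(center - surround) / (eps + center) > threshold:
            break  # the outer disk crossed a discontinuity: keep the previous scale
        best = s
    return best
```

Near a dim-to-bright boundary the loop stops early (small scale, little contrast enhancement), while deep inside a uniform region it runs to the largest scale, mirroring the three diagrammed cases.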
That means that for this pixel you can improve the contrast, because your eye would do the same. Another failure case, something you should not do, is making the disks too large, as shown in the bottom example, because then the bright values are already inside the inner disk; that scale would be too large. Once you know this scale, you know how large the reach is over which you can compute your local average, and this then allows you to compress your range locally: where the neighborhood is very smooth you compress less, and where you are next to discontinuities in the brightness of your image, at the verge of traversing from a bright to a dim region or vice versa, you do the opposite. [Student question: is the ratio between the larger and the smaller radius constant?] Exactly, that is the approach taken there. The authors also show how this can be implemented efficiently, because trying all possible circles or disks is not feasible, so you need some kind of approximation that gives you a good estimate. [Question about pixels near the image border:] I do not know exactly how they handle this, but I would guess they simply ignore everything outside the image; you have to cope with what you have there. And this is natural: in the corners of your image you have less neighborhood, so you get less information about how the surroundings look, and the tone mapping there will be, say, less optimal. Here is another difference to the image from before: as you can see, the figures on the wall now have much higher contrast, and this is because the brightness values there are relatively constant. If you compare this with, say, the window next to it, the wall has a very uniform brightness; even if its colors differ, this is still a low range. As you saw before, between sun and moon the ratio was about 800,000, while between black and white surfaces it was only 100 or 1000.
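A minimal sketch of this concentric-neighborhood scale search in plain NumPy. It uses simple box averages instead of the Gaussians of the original paper; the threshold eps, the fixed factor of two between the two radii, and all names are simplifications of mine:

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded (crude box filter)."""
    h, w = img.shape
    p = np.pad(img, r, mode='edge')
    out = np.zeros_like(img)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def local_average(lum, radii=(1, 2, 4, 8), eps=0.05):
    """Per pixel, grow two concentric neighborhoods until the surround
    average stops matching the center average; return that center average V."""
    V = box_mean(lum, radii[-1])          # fallback: largest scale everywhere
    found = np.zeros(lum.shape, dtype=bool)
    for r in radii:
        center = box_mean(lum, r)
        surround = box_mean(lum, 2 * r)   # fixed ratio between the two radii
        # Where center and surround first disagree, the scale just fits:
        stop = (np.abs(center - surround) / (center + 1e-6) > eps) & ~found
        V[stop] = center[stop]
        found |= stop
    return V
```

The local operator then uses this V in the compression at each pixel, dividing the scaled luminance by 1 + V instead of applying one global curve.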
So whether illumination is direct or indirect counts far more than mere color variations of the surface. Here the Reinhard tone mapper in its local version detects that the brightness values are quite uniform in this region, so it enhances the contrast. This concludes the Reinhard tone mapper, but there are other tone mapping approaches as well. SIGGRAPH 2002 was the year of the tone mappers, because three tone mapping approaches were presented there. One of them was based on the bilateral filter; perhaps you have already heard of bilateral filtering in computer vision. You can imagine it as a Gaussian smoothing that does not smooth over edges: where the image has a large difference, it does not smooth across it. You can also call it edge-preserving smoothing. And you can imagine that this is conceptually similar to the scale estimation of Reinhard, because the scale estimation looks for places where the brightness values differ a lot, which are, so to say, the brightness edges of the image, and it does not propagate contrast enhancement over such an edge; the bilateral filter does the same thing by simply stopping the filtering process at edges. Then another approach, by Fattal: gradient-domain processing. If you have only a luminance channel of a single color, you can imagine the image as a height function, where bright spots correspond to high peaks and dim regions to low values. What they do in this work is look at the gradients, that is, how steep the slopes in the image are, and then compress the gradient range. This lets them preserve the low gradients, because those small slopes are the details in the dimly illuminated regions, while the very high slopes are the ramps up to the bright spots. So if you reduce just the high gradients, you get a similar behavior: you only reduce the range in the non-uniform regions.
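To make the edge-stopping idea concrete, here is a brute-force bilateral filter sketch in plain NumPy; the sigma values, the radius, and the function name are illustrative choices of mine, not taken from the papers:

```python
import numpy as np

def bilateral_smooth(img, sigma_s=2.0, sigma_r=0.4, radius=4):
    """Bilateral filter: each neighbor is weighted by a Gaussian in *space*
    AND a Gaussian in *intensity difference*, so the averaging effectively
    stops at large brightness edges instead of smearing across them."""
    h, w = img.shape
    p = np.pad(img, radius, mode='edge')
    acc = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = p[radius + dy:radius + dy + h,
                        radius + dx:radius + dx + w]
            w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            w_r = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))
            acc += w_s * w_r * shifted
            norm += w_s * w_r
    return acc / norm
```

In the Durand and Dorsey operator, a filter like this is applied to the log luminance to obtain a smooth base layer; only the base layer is range-compressed, and the untouched detail layer is added back, which is what keeps the edges intact.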
So these are three approaches that do approximately the same thing; they share the same underlying idea but take different routes. Reinhard's goes more in the direction of what is used in photography, with dodging-and-burning-like exposure control. Bilateral filtering uses a well-established filtering paradigm from signal processing, but it is most likely sub-optimal from a perceptual point of view. Gradient processing allows a very fast implementation, and perceptually, from what I have been told, it was the best of these three works. The speedups that you perhaps needed 13 years ago are not really relevant anymore, so you no longer have to build in the approximations that were highly relevant back then; you can just use something less efficient that gives you a nice solution out of the box. If you check Wikipedia, the German version has a long list; they list approximately 20 different variants. As for the literature: Reinhard not only started this line of work but also continued researching it, and he is seen as one of the experts to go to where tone mapping is concerned. He wrote a book about it, High Dynamic Range Imaging, and you can also look at these three papers. If you want to know how the field developed, look up, for example, the Reinhard paper in the ACM Digital Library. You can do this from here, from the university, which gives you immediate access to the whole ACM Digital Library; there you can click on 'cited by' to get the long list of papers that referenced Reinhard's original work, sort them by year, and see what different approaches were used. Good, that concludes the tone mapping part. Are there any questions?
[Student question about whether tone mapping can be reversed.] Only to some extent, because you are inherently losing information. The case where you lose precision for sure is when you start with, say, 24-bit luminance values and encode them into 8-bit RGB values: then you quantize your brightness values, and that information is unrecoverable, simply lost. The other issue concerns local approaches, which take neighborhood information into account and do different things in different regions of the image: reversing the process can be ambiguous, because the original data may have had large brightness values and a small scale there, hence strong range compression, or the range may already have been small and little compression was applied. You do not have the full information about how the local approach adapted to each part of the image, so you would have to infer it somehow; I would guess with something like an optimization approach, requiring that the scale you assume during reconstruction is locally consistent, and then you could recover your signal. But with local approaches you will not be able to recover it exactly; with global approaches it should be possible. [Question about combining the global and the local operator, as Reinhard does:] If you look here, the local version has this expression where the scaled luminance is divided by a term containing the local average. In the global version the first two steps are identical, but it uses the scaled value at the very center in place of that local average, so the global version is a special case of the local version; there is no separate combination step to implement. No, the luminance output is what you get: for each pixel location you get a scaled luminance between 0 and 1, by the global approach and by the local approach alike, and in both cases this is immediately the output luminance, so there is nothing to combine. Any further questions? Okay, then thank you for your attention; this concludes the rendering lecture.
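Returning briefly to the question above about reversing tone mapping: the unrecoverable quantization step can be shown with a tiny numeric sketch. The L/(1+L) curve and the 8-bit encoding are assumptions for illustration:

```python
import numpy as np

def display_code(lum):
    """Tone-map one scene luminance with L/(1+L), then quantize to 8 bits."""
    ld = lum / (1.0 + lum)
    return int(np.round(ld * 255))

# Two scene luminances differing by 40% land in the same 8-bit code, so
# the distinction between them is unrecoverable after encoding:
print(display_code(1000.0), display_code(1400.0))  # prints: 255 255
```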
Two Minute Papers
https://www.youtube.com/watch?v=Oo9oOuC2zOo
TU Wien Rendering #18 - Coming Up Next: BVH, Tone Mapping, SSS
We now know a lot, but there are still lots of exciting things coming up next! If we implement subsurface scattering in our renderer, we can render translucent objects - these objects are not treated as surfaces, but volumes, in which photons can scatter or get absorbed. Many of these materials look absolutely mesmerizing, so we should definitely learn how to do this. Space partitioning techniques help us to alleviate the problem of intersecting against every object in the scene, and tone mapping will help us in translating the simulated radiance to RGB values that we can display on our monitors. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Next time, what you will see is something that was missing from many of the submitted implementations of the assignments. What does the complexity of the ray tracing algorithm depend on? It depends on the resolution: the bigger the image, the longer it takes, got it? It is exponential with respect to the depth. At least this implementation is: if you shoot out two rays, there is always a branching, so this is going to be exponential. So we have taken into consideration resolution, we have taken into consideration depth. But we haven't taken into consideration how many objects there are in the scene. And if you start running the same ray tracer on a huge scene, because you don't want to see spheres, you want to do ray tracing like real men do, then what you do is implement a function that can load triangle meshes. And then you just grab a nice triangle mesh you have seen somewhere, load it into your ray tracer, and you are very excited, you run the ray tracer, and you don't get anything in your whole lifetime. If you load something with millions of triangles, and that's not much nowadays. Why? Someone help me out. That's true, but why does it take so long? Because you have to do a lot of intersection tests. Exactly. So if I have one million objects, I have to do one million intersection tests every single time. That's too much. It's just way too much. So what we can do is some kind of space partitioning. There are simple optimizations I can do: for instance, I really don't care what is behind me, because I'm going to intersect something that's in front of me. So whenever something is behind me, I can immediately throw all of those polygons out. That's immediately half of it. And if you use smart tricks and smart data structures, you can go from linear, where one million objects means one million intersection tests. So that's linear complexity.
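The jump from linear to logarithmic search promised here can be illustrated with a deliberately simplified 1D stand-in. This is not a BVH or kd-tree, just a hypothetical sketch of why a sorted spatial structure turns a million tests into about twenty; the slab setup and all names are illustrative assumptions.

```python
import bisect

# A simplified 1D stand-in for space partitioning: one million
# "objects", each occupying the disjoint slab [i, i + 1) along the
# ray. Real scenes need a BVH or kd-tree, but the complexity
# argument is the same.
n = 1_000_000
boundaries = list(range(n + 1))   # sorted slab boundaries

def hit_linear(t):
    """Brute force: test every object until one contains t."""
    tests = 0
    for i in range(n):
        tests += 1
        if boundaries[i] <= t < boundaries[i + 1]:
            return i, tests
    return None, tests

def hit_partitioned(t):
    """Binary search over the sorted boundaries: about
    log2(1e6) ~ 20 comparisons instead of up to 1e6 tests."""
    return bisect.bisect_right(boundaries, t) - 1

t = 987_654.5
print(hit_linear(t))       # (987654, 987655): linear in the object count
print(hit_partitioned(t))  # 987654, found with ~20 comparisons
```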
You can go to logarithmic complexity, which is amazing, because the logarithm after a point doesn't really increase too much. And you will learn about techniques that will make you able to compute this intersection with one million objects with about four or five intersection tests on average. Obviously it depends on the distribution of the triangles and all of that, but on average you can do it in four or five intersection tests instead of one million. So it's a huge, huge speedup. This is going to be in the next lecture. And again, it seems that I have been lying to you all along regarding this as well, because I told you that we are measuring radiance for the rendering equation. Now, radiance I cannot really display on my monitor. What can I display on my monitor? RGB values. So there has to be some transformation that converts radiance to RGB in a meaningful way. This process is called tone mapping, and Thomas is going to tell you all about tone mapping as well. You can do it in a number of different ways, and a good tone mapping method really breathes life into your rendered images. Now, we haven't talked about filtering. This is a bit more sophisticated. In recursive ray tracing, you shoot one sample through the midpoint of the pixel into the scene, and you compute the radiance there. With Monte Carlo integration, we are going to have many samples. So we are going to have a metric that's called samples per pixel. And these samples will not go through the midpoint of the pixel; they are going to go through the surface of the pixel, like random samples over the surface of the pixel. And we are going to integrate radiance over the whole surface. Now, you can do this differently, because you have many samples over the pixel surface, and you can take them into consideration in different ways. This is what we call filtering.
Different filtering methods will give you different results. And the interesting part is that you will get anti-aliasing for free if you do filtering well. Because in a ray tracer, you shoot one ray through the midpoint of the pixel, and your images, unless they are super high resolution, are going to be aliased. A completely straight line is going to be pixelated; the edges are going to be pixelated. What can you do? Trivial things like supersampling: let's split one pixel into four smaller pixels, compute the rays through all of them, and average. That's the trivial method. That gives you anti-aliasing by supersampling. But this is super expensive. I mean, you have HD resolutions, and you have to bump this up by four times or even more. Too much. There are better solutions. You can get this practically for free if you do filtering. So this is what filtering is about. Thomas is also going to talk about participating media, and this is not one lecture, this is the next three lectures. What is this about? Well, in our simulation so far, we have taken into consideration that rays of light only bounce off of surfaces. But in real life, there are not only surfaces. There are volumes. There is smoke. Haze. Many of these effects, where a ray of light does not really hit an object, but just smoke, and gets scattered. And if you do your simulation in a way that it supports such a participating medium, then you can get volumetric caustics. And that's amazing, because I have just shown you the ring, and whatever other kinds of caustics you look at, you will think of those as some 2D things that I see on the table, this diffuse material that diffuses this radiance back to me. So you would think that caustics and shadows are planar, that they are 2D things. But they are, in fact, volumes. So shadows exist not only on a plane, they have a volume.
Because the set of points that are occluded from the light source is not on a plane; these points are in 3D. And you can get volumetric caustics and volumetric shadows with participating media, because there will be a medium in there in which light can scatter, and therefore you will see these boundaries. You can also get god rays, a beautiful phenomenon in nature, if you compute participating media. You can also get something like this. This is an actual photograph, just to make sure that you see the difference: the first ray is traversing air or vacuum, and the next ones traverse a participating medium, which can give you this scattering effect. And another example of god rays, where apparently we have this "do not disturb" piece of paper. So there is something going on in this room. You better not enter. Who knows what you would see? And you can get not necessarily such pronounced effects, but also more subtle effects. You can feel that there is some haze in this image, but it's not so pronounced. Now, we don't stop there, because you shouldn't just think of smoke and atmosphere. You can just look at your own skin if you would like to see some participating media. Now, this is a phenomenon called subsurface scattering. And this means that some of the things that you would think are objects, are surfaces, are in fact volumes. This is your skin, for instance. Light goes through your skin, a portion of the light. And we don't simulate that, because when we hit the surface of the object, we bounce the ray back. And if we write a simulation that makes us able to go inside these objects, then we have a simulation with subsurface scattering, and we can account for beautiful, beautiful effects like this. These are some simulations. For instance, on the left side, you can see probably marble. There is subsurface scattering in marble. It seems heavily exaggerated to me, or there is really, really strong backlighting. But this is not a surface anymore.
You can see the nose of the lady: light, lots of the radiance, actually gets through the nose. This is one more example. This is not so pronounced, not so exaggerated, but you can see this jade dragon clearly has some subsurface scattering. Look at the optically thin parts, like the end of the tail. You can see that it's much lighter, and this is because some of the light is going through it. And the optically thick parts, like the body of the dragon, have less subsurface scattering, so you can see that this is a bit darker. It's a beautiful phenomenon, and we can also simulate this. And look at this one. Absolutely amazing. That doesn't just look amazing; this is incredibly awesome. We can write computer programs that compute this in a reasonable amount of time. An absolutely beautiful phenomenon. Let's look at this as well. This is a fractal with subsurface scattering. I mean, how cool can something get? It's fractals and subsurface scattering. It's like two of the best foods mixed together. It has to be something awesome. And another example of a beautiful jade dragon, with just a bit of subsurface scattering. So that's going to be it for today. The next three lectures are going to be with Thomas; these are all the exciting things that are going to be discussed. And then we will complete Monte Carlo integration. I will tell you exactly how to use it, and how to use mathematics to see through these problems. And then we will write our global illumination program. Thank you.
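The samples-per-pixel idea from the filtering discussion above can be sketched as follows. This is a hypothetical toy example, assuming a hard black/white diagonal edge as the "scene" and plain averaging (a box filter) over the pixel's surface; the function names are illustrative.

```python
import random

def scene(x, y):
    """A hard black/white edge: radiance 1 above the diagonal, 0 below."""
    return 1.0 if y > x else 0.0

def render_pixel(px, py, spp, rng):
    """Average `spp` random samples taken over the pixel's whole
    surface instead of shooting one ray through its midpoint."""
    total = 0.0
    for _ in range(spp):
        total += scene(px + rng.random(), py + rng.random())
    return total / spp

rng = random.Random(0)
# One ray through the midpoint is either fully black or fully white,
# which is exactly what makes edges look jagged (aliased).
print(scene(3 + 0.5, 3 + 0.5))
# Many samples per pixel give a gray value near 0.5 on an edge pixel,
# which is the anti-aliased result.
print(render_pixel(3, 3, 256, rng))
```

Weighting the samples non-uniformly (e.g., favoring those near the pixel center) instead of plain averaging is what the different filtering methods in the lecture are about.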
[{"start": 0.0, "end": 6.84, "text": " Next time, what you will see is something that was missing from many of the bigger"}, {"start": 6.84, "end": 13.84, "text": " mutations of many assignments. What is the complexity of the ray tracing algorithm"}, {"start": 13.84, "end": 17.02, "text": " depend on? What depends on the resolution? The bigger the image, the more"}, {"start": 17.02, "end": 22.44, "text": " it takes, got it? It is exponential with respect to the depth. At least this"}, {"start": 22.44, "end": 28.14, "text": " implementation is, if you shoot out two rays, there is always a branching. Then this"}, {"start": 28.14, "end": 32.34, "text": " is going to be exponential. So we have taken into consideration resolution, we"}, {"start": 32.34, "end": 38.3, "text": " have taken into consideration depth. But we haven't taken into consideration"}, {"start": 38.3, "end": 44.38, "text": " how many objects there are in the scene. And if you start running the same"}, {"start": 44.38, "end": 49.900000000000006, "text": " ray tracer on huge scene because you don't want to see spheres, you want to"}, {"start": 49.900000000000006, "end": 56.86, "text": " do ray tracing like real men do, then what you do is you implement a function"}, {"start": 56.86, "end": 61.06, "text": " that can load you triangle meshes. And then you just grab a nice triangle"}, {"start": 61.06, "end": 65.22, "text": " meshes and I've seen from somewhere loaded to your ray tracer and you're very"}, {"start": 65.22, "end": 70.98, "text": " excited the run the ray tracer and you don't get anything in your whole lifetime."}, {"start": 70.98, "end": 76.86, "text": " If you load something with millions of hours, it's not much nowadays. Why?"}, {"start": 76.86, "end": 82.74, "text": " Someone help you out. Just face to wrong. That's true, but why does it take a lot?"}, {"start": 82.74, "end": 87.97999999999999, "text": " Because you have to do a lot of intersection tests. Exactly. 
So I have, if I have"}, {"start": 87.97999999999999, "end": 92.38, "text": " one million objects, I have to do one million intersections every single time."}, {"start": 92.38, "end": 98.33999999999999, "text": " That's too much. It's just way too much. So what we can do is that we can do some"}, {"start": 98.33999999999999, "end": 105.69999999999999, "text": " kind of space partitioning, which means that simple optimizations I can do."}, {"start": 105.69999999999999, "end": 109.89999999999999, "text": " For instance, I really don't care what is behind me because I'm going to"}, {"start": 109.9, "end": 113.62, "text": " intersect something that's in front of me. So whenever it's behind me, I can"}, {"start": 113.62, "end": 118.86000000000001, "text": " immediately throw this all of those polygons out. That's immediately half of it."}, {"start": 118.86000000000001, "end": 124.66000000000001, "text": " And if you use smart tricks and key details, smart tricks, smart data"}, {"start": 124.66000000000001, "end": 130.18, "text": " structures, you can go from linear, while in here, one million objects, one million"}, {"start": 130.18, "end": 135.22, "text": " intersections. So that's linear complexity. You can go to logarithmic complexity,"}, {"start": 135.22, "end": 139.38, "text": " which is amazing because the logarithm after a point doesn't really increase too much."}, {"start": 139.38, "end": 147.9, "text": " And you will learn about techniques that will make you able to compute this intersection"}, {"start": 147.9, "end": 154.7, "text": " with one million objects with about four or five intersections on average."}, {"start": 154.7, "end": 160.78, "text": " Obviously, obviously it depends on the distribution of the triangles and all of that."}, {"start": 160.78, "end": 166.38, "text": " But on average, you can do it in four or five intersections instead of one million."}, {"start": 166.38, "end": 172.46, "text": " So it's a huge, huge speed up. 
This is going to be on the next lecture."}, {"start": 172.46, "end": 177.66, "text": " And again, it seems that I have been lying to you all along regarding this as well,"}, {"start": 177.66, "end": 184.74, "text": " because I told you that we are measuring radians for the random equation."}, {"start": 184.74, "end": 189.46, "text": " Now, radians, I cannot really display on my monitor. What can I display on my monitor?"}, {"start": 189.46, "end": 194.94, "text": " RGB values. So there has to be some transformation that comes from radians and"}, {"start": 194.94, "end": 200.42, "text": " conversing to RGB in a meaningful way. This process is called tone mapping."}, {"start": 200.42, "end": 203.62, "text": " And Thomas is going to tell you all about tone mapping as well."}, {"start": 203.62, "end": 207.3, "text": " You can do it in a number of different ways. It's heavily going to be built."}, {"start": 207.3, "end": 215.34, "text": " And a good tone mapping method really breathes life into your render images."}, {"start": 215.34, "end": 219.7, "text": " Now, we haven't talked about filtering. This is a bit more sophisticated."}, {"start": 219.7, "end": 224.9, "text": " Recursivariating, you should one sample through the midpoint of the pixels."}, {"start": 224.9, "end": 227.86, "text": " For the scene, you computed this year down."}, {"start": 227.86, "end": 231.62, "text": " With Monte Carlo integration, we are going to have many samples."}, {"start": 231.62, "end": 236.54000000000002, "text": " So we are going to have a metric that's called samples for pixel."}, {"start": 236.54000000000002, "end": 241.42000000000002, "text": " And these samples will not go through the midpoint of the pixel."}, {"start": 241.42000000000002, "end": 247.5, "text": " These are going to go through the surface of the pixel, like a random samples"}, {"start": 247.5, "end": 252.94, "text": " over the surface of the pixel. 
And we are going to integrate radians over the whole surface."}, {"start": 252.94, "end": 258.1, "text": " Now, you can do this differently because you have many samples over the pixel surface."}, {"start": 258.1, "end": 263.18, "text": " And you can take into consideration them into consideration in different ways."}, {"start": 263.18, "end": 266.54, "text": " And you can see that different filtering methods, this is what we both filtering."}, {"start": 266.54, "end": 270.1, "text": " A different filtering methods will give you different results."}, {"start": 270.1, "end": 277.65999999999997, "text": " And the interesting part is that you will get anti aliasing for free if you do filtering well."}, {"start": 277.65999999999997, "end": 282.38, "text": " Because in a ray tracer, you will shoot one ray through the midpoint of the pixel."}, {"start": 282.38, "end": 286.65999999999997, "text": " Your images, unless they are super high resolution, they are going to be elious."}, {"start": 286.65999999999997, "end": 290.7, "text": " The completely straight line is going to be pixelated."}, {"start": 290.7, "end": 293.3, "text": " The edges are going to be pixelated."}, {"start": 293.3, "end": 294.58, "text": " What can you do?"}, {"start": 294.58, "end": 296.38, "text": " Fibial things like supersampling."}, {"start": 296.38, "end": 301.1, "text": " Let's split one pixel into four other pixels, or smaller pixels,"}, {"start": 301.1, "end": 304.42, "text": " and compute the rays through all of them and average."}, {"start": 304.42, "end": 306.7, "text": " That's the Fibial method."}, {"start": 306.7, "end": 309.9, "text": " That gives you anti aliasing by supersampling."}, {"start": 309.9, "end": 311.65999999999997, "text": " But this is super expensive."}, {"start": 311.66, "end": 319.26000000000005, "text": " I mean, you have HD resolutions, and you have to bump this up by even four times."}, {"start": 319.26000000000005, "end": 320.06, "text": " Too much."}, {"start": 
320.06, "end": 321.18, "text": " There's better solutions."}, {"start": 321.18, "end": 324.94000000000005, "text": " You can get this for free in your preliminary examination if you do filter."}, {"start": 324.94000000000005, "end": 328.18, "text": " So this is what filtering is about."}, {"start": 328.18, "end": 329.86, "text": " Thomas is also going to talk."}, {"start": 329.86, "end": 330.94000000000005, "text": " This is not one lecture."}, {"start": 330.94000000000005, "end": 333.1, "text": " This is the next three lectures."}, {"start": 333.1, "end": 335.74, "text": " It's going to talk about participating in media."}, {"start": 335.74, "end": 337.82000000000005, "text": " What is this about?"}, {"start": 337.82, "end": 343.26, "text": " Well, in our simulation so far, we have taken into consideration that"}, {"start": 343.26, "end": 346.98, "text": " rays of light only bounce off of surfaces."}, {"start": 346.98, "end": 349.3, "text": " But in real life, there's not only surfaces."}, {"start": 349.3, "end": 350.18, "text": " There's volumes."}, {"start": 350.18, "end": 351.14, "text": " There's smoke."}, {"start": 351.14, "end": 352.21999999999997, "text": " Hayes."}, {"start": 352.21999999999997, "end": 356.74, "text": " Many of these effects, where ray of light can not really hit an object,"}, {"start": 356.74, "end": 360.14, "text": " but just a smoke, and gets carried."}, {"start": 360.14, "end": 366.98, "text": " And if you do your simulation in a way that it supports such a participating"}, {"start": 366.98, "end": 371.5, "text": " medium, then you can get volume costings."}, {"start": 371.5, "end": 374.62, "text": " And that's amazing because I just have shown you the ring."}, {"start": 374.62, "end": 378.90000000000003, "text": " And whatever else kind of costings you will look at,"}, {"start": 378.90000000000003, "end": 386.22, "text": " you will think of those as some 2D things that I see it on the table."}, {"start": 386.22, "end": 391.54, "text": 
" This diffuse material that diffuses this radiance back to me."}, {"start": 391.54, "end": 396.26, "text": " So you would think that costings and shadows are play hard."}, {"start": 396.26, "end": 398.5, "text": " They are 2D things."}, {"start": 398.5, "end": 401.06, "text": " But they are, in fact, volumes."}, {"start": 401.06, "end": 404.5, "text": " So the shadows exist not only the plane,"}, {"start": 404.5, "end": 408.34, "text": " but they have a volume."}, {"start": 408.34, "end": 413.09999999999997, "text": " Because the set of points that are extracted from the light source"}, {"start": 413.09999999999997, "end": 414.02, "text": " are not on the plane."}, {"start": 414.02, "end": 417.26, "text": " They are in 3D."}, {"start": 417.26, "end": 420.26, "text": " And you can get volumetric costings and volume"}, {"start": 420.26, "end": 423.21999999999997, "text": " shadows with participating in media."}, {"start": 423.22, "end": 429.34000000000003, "text": " Because there will be a media in there of which light can scatter."}, {"start": 429.34000000000003, "end": 431.54, "text": " So therefore, you will see these boundaries."}, {"start": 434.70000000000005, "end": 441.26000000000005, "text": " You can also get god rays, beautiful phenomena"}, {"start": 441.26000000000005, "end": 446.70000000000005, "text": " in the nature if you compute the participants."}, {"start": 446.70000000000005, "end": 448.22, "text": " You can also get something like this."}, {"start": 448.22, "end": 451.34000000000003, "text": " This is an actual photograph just to make sure"}, {"start": 451.34, "end": 459.34, "text": " that you see the difference that the first ray is traversing air or vacuum."}, {"start": 459.34, "end": 462.34, "text": " And the next ones have a participating video, which"}, {"start": 462.34, "end": 467.53999999999996, "text": " can give you this effect, this scattering effect."}, {"start": 467.53999999999996, "end": 470.94, "text": " And another example of god 
rays, while apparently we"}, {"start": 470.94, "end": 476.85999999999996, "text": " have this do not disturb a piece of paper."}, {"start": 476.85999999999996, "end": 480.09999999999997, "text": " So there is some luxur under and then going on in this room."}, {"start": 480.1, "end": 482.18, "text": " You better not enter."}, {"start": 482.18, "end": 485.54, "text": " Who knows what you will see?"}, {"start": 485.54, "end": 489.70000000000005, "text": " And you can get not necessarily set for now's effects,"}, {"start": 489.70000000000005, "end": 491.90000000000003, "text": " but the more subtle effect."}, {"start": 491.90000000000003, "end": 495.3, "text": " You can feel that there is some haze in this image."}, {"start": 495.3, "end": 499.18, "text": " But it's not so perfect."}, {"start": 499.18, "end": 506.1, "text": " Now, we don't stop there because don't just think of smoke"}, {"start": 506.1, "end": 508.02000000000004, "text": " and atmosphere."}, {"start": 508.02, "end": 510.41999999999996, "text": " You can just look at your own skin if you"}, {"start": 510.41999999999996, "end": 514.86, "text": " would like to see some participating video."}, {"start": 514.86, "end": 518.34, "text": " Now, this is a phenomenon called subsurface scattering."}, {"start": 518.34, "end": 520.38, "text": " And this means that some of the things"}, {"start": 520.38, "end": 525.86, "text": " that you would think are objects, are surfaces, are in fact volumes."}, {"start": 525.86, "end": 527.54, "text": " This is your skin, for instance."}, {"start": 527.54, "end": 531.34, "text": " Light goes through your skin, the portion of light."}, {"start": 531.34, "end": 533.18, "text": " And we don't simulate that because when"}, {"start": 533.18, "end": 536.6999999999999, "text": " we hit the surface of the object, we bounce during the back."}, {"start": 536.7, "end": 540.1, "text": " And if we write a simulation that makes us"}, {"start": 540.1, "end": 543.26, "text": " able to go 
inside these objects, then"}, {"start": 543.26, "end": 546.5, "text": " we have a simulation with subsurface scattering."}, {"start": 546.5, "end": 549.6600000000001, "text": " And we can account for beautiful, beautiful effects like this."}, {"start": 555.26, "end": 557.46, "text": " These are some simulations."}, {"start": 557.46, "end": 559.26, "text": " So for instance, on the left side,"}, {"start": 559.26, "end": 560.98, "text": " you can see probably marble."}, {"start": 560.98, "end": 562.9000000000001, "text": " There is subsurface scattering in Marwood."}, {"start": 562.9000000000001, "end": 564.7800000000001, "text": " It seems heavily exaggerated to me."}, {"start": 564.78, "end": 568.4599999999999, "text": " Or either there is a really, really strong backlight thing."}, {"start": 568.4599999999999, "end": 570.18, "text": " But this is not a surface anymore."}, {"start": 570.18, "end": 575.38, "text": " You can see the nose of the lady light, lots of the radiance,"}, {"start": 575.38, "end": 579.14, "text": " actually gets through the nose."}, {"start": 579.14, "end": 580.66, "text": " This is one more example."}, {"start": 580.66, "end": 582.42, "text": " This is not so pronounced."}, {"start": 582.42, "end": 585.14, "text": " This is not so exaggerated."}, {"start": 585.14, "end": 587.8199999999999, "text": " But you can see this j-dragum clearly"}, {"start": 587.8199999999999, "end": 589.26, "text": " has some subsurface scattering."}, {"start": 589.26, "end": 591.9, "text": " Look at the optically thin parts."}, {"start": 591.9, "end": 593.38, "text": " Like the end of the tail."}, {"start": 593.38, "end": 595.58, "text": " You can see that it's much lighter."}, {"start": 595.58, "end": 599.18, "text": " And this is because some of the light is going through it."}, {"start": 599.18, "end": 602.9399999999999, "text": " And the optically thick parts, like the body of the dragon,"}, {"start": 602.9399999999999, "end": 604.86, "text": " have less 
subsurface scattering."}, {"start": 604.86, "end": 607.06, "text": " So you can see that this is a bit darker."}, {"start": 607.06, "end": 608.38, "text": " It's a beautiful phenomenon."}, {"start": 608.38, "end": 612.26, "text": " And we can also simulate this."}, {"start": 612.26, "end": 615.5, "text": " And look at this one."}, {"start": 615.5, "end": 616.82, "text": " Absolutely amazing."}, {"start": 616.82, "end": 618.58, "text": " That doesn't just look amazing."}, {"start": 618.58, "end": 621.38, "text": " This is incredibly awesome."}, {"start": 621.38, "end": 625.26, "text": " We can write computer programs if we compute this."}, {"start": 625.26, "end": 626.9, "text": " It's reasonably well known this time."}, {"start": 626.9, "end": 629.98, "text": " So absolutely beautiful phenomenon."}, {"start": 629.98, "end": 631.9399999999999, "text": " Let's look at this as well."}, {"start": 631.9399999999999, "end": 634.62, "text": " This is a fractal with subsurface scattering."}, {"start": 634.62, "end": 638.82, "text": " I mean, how cool can someone get its fractals"}, {"start": 638.82, "end": 640.1, "text": " and subsurface scattering?"}, {"start": 640.1, "end": 643.5, "text": " It's like two of the best foods mixed together."}, {"start": 643.5, "end": 645.66, "text": " It has to be something also."}, {"start": 645.66, "end": 652.66, "text": " And another example of a beautiful j-crag."}, {"start": 656.2199999999999, "end": 658.4599999999999, "text": " Well, just a bit of subsurface scattering."}, {"start": 664.74, "end": 666.78, "text": " So that's going to be it for today."}, {"start": 666.78, "end": 671.8199999999999, "text": " And there is going to be the next three lectures with Thomas."}, {"start": 671.8199999999999, "end": 675.3, "text": " These are all the exciting things that are going to be discussed."}, {"start": 675.3, "end": 679.26, "text": " And then we will complete the Monte Carlo integration."}, {"start": 679.26, "end": 682.9799999999999, 
"text": " I will tell you how I like to use exactly"}, {"start": 682.9799999999999, "end": 686.5, "text": " and how to use mathematics to see through these lines."}, {"start": 686.5, "end": 689.2199999999999, "text": " And then we will write our global illumination program."}, {"start": 689.22, "end": 706.22, "text": " Thank you."}]
Two Minute Papers
https://www.youtube.com/watch?v=HZWwaLVATA8
TU Wien Rendering #17 - Monte Carlo Integration: Sample Mean & An Important Lesson
It is now time to implement a simple Monte Carlo integration scheme, the sample mean. It is indeed quite simple and seems to work quite well in some cases, but it apparently breaks down in others. This is a very important lesson: intuition is tremendously useful to get a good visual understanding of complicated theories, but there are times when we hit the wall. If this happens, we have to proceed using formal mathematics. Let's figure out what went wrong and fix it together soon! (There will be a few segments with Thomas Auzinger, then the solution is presented in segment #23) About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Excellent. So this was the hit or miss method. Why hit or miss? Because the ball that I throw is either below or above the function. Now, what we will actually use is the sample mean. The sample mean is different. I would like to integrate this function, and I can take samples of it. Samples here mean that I have f(x), I can substitute a number, and I can evaluate the function there. I don't know the integral of the function, it's too complicated, but I can evaluate it. I can evaluate it at 0, at 0.15, at 2, anywhere I like. How do I compute the actual integral from these samples? Well, we will take a look through an extremely difficult example, which is integrating x from 0 to 1. Let's solve this with multiple different methods. What does the mathematician do? Find a primitive function. What is the primitive function of x? x squared over 2. All we have to do is substitute 1 and 0, and therefore we get one half. So I know that I am looking for 0.5. What does the engineer do? The engineer knows that this is a linear function, therefore this is going to be the area of a triangle. What are the lengths of the triangle? The base is 1, because I am integrating from 0 to 1. The height is also 1, because if I go 1 to the right, I go 1 upwards as well, because this is f(x) = x. So the area of the triangle is the base times the height over 2. So this is 0.5 again. Now we have the mathematician and the engineer. What does the Monte Carlo guy do? The Monte Carlo guy didn't study mathematics at all, so he cannot do any of these. What the Monte Carlo guy is going to do is take samples of this function. So I evaluate f(x) at 0.2. How much is it at 0.2? Well, obviously 0.2. The simplest possible example. What about 0.4? Well, at 0.4 this is 0.4. And so on. So I have taken four randomly chosen samples from this function, and this is called the sample mean. This means averaging. So let's take the average of all of these.
So the average of all of these is almost exactly 0.5. So this gives me the actual perfect result for an integral that I could otherwise not solve. Now we can code this very easily, in just a few lines of code, and most of those are already excess lines because of printing and whatnot. But you can see how small this is. The actual function I'm interested in is the double f, and f(x) = x, so it's not really that difficult. What is the output of this program? After many samples I approach very close to 0.5, up to quite a few digits. So this works really, really well. But there's something really interesting about this. If I draw one sample from this integral, then I have an overestimation of the result. Why? Because I'm looking for 0.5 and I have 0.87. What about 10 samples? Is this an overestimation or an underestimation? 10 samples. I wasn't paying attention to these samples because I was thinking about the 1 million samples. Damn it. Okay. So the question is: is 0.61 more than 0.5? To a good approximation. Is 0.61 more than what? More than 0.5. Yeah. Exactly. So this is an overestimation. Excellent. What about 100 samples? It's an underestimation, it's 0.3. An underestimation. Perfect, that's an underestimation. Okay, what about 1000 samples? It's an underestimation. Yeah, exactly. Okay, so we agree that this is an underestimation. And this is a weird behavior, right? Because I have overestimations and underestimations of this integral, but in the end it seems that the deviations are going to be less and less. Okay, so this almost looks like a sine. If you like algebra, the convergence is something like sin(x) times x. Is it? No, because that would get large; it is sin(x) over x. So this is like a sine that starts out with large deviations and large amplitude.
Then it gets smaller and smaller. This is how the convergence of Monte Carlo estimators goes. And this we call, by the way, stochastic convergence. It means that the estimate can be over and under the integral, but as we add more samples, it is expected to get closer. Let's have another example. Let's integrate this function, 2 times sine squared of x. [A question from the audience about what happens if the random samples keep hitting the same region.] There is a probability of such behavior, and you could say that yes, this can happen. But it has very low probability, because why would you hit the same region over and over again? And you can also do smart things like putting a grid on the function and sampling that. So that's one thing. But what you will see later is that we will have unbiased estimators, and this means that you can expect the error to shrink in time. But this will be a couple of lectures down the line. Is everything fine with this? That was a pretty remarkable question, and that's exactly how it goes. Okay, what does the mathematician guy do? Look for primitive functions. Excellent. What is the primitive function of the sine squared of x? It is one half of (x minus the sine times the cosine). Let's do the actual substitution, and we have our well-earned pi together. What does the engineer do? Well, these are not triangles anymore, so you'd better look it up on Wolfram Alpha. And you will get something like this, and the result is again pi. So wonderful, engineering works. Okay, what does the Monte Carlo guy do? The Monte Carlo guy doesn't know Wolfram Alpha, doesn't know mathematics, doesn't know anything. But he has his 20-something-line C++ program. Let's take samples of this. What are we looking for? What was the end result? It was pi. Okay, so let's substitute this function, where this double f is now the sine squared of x.
And I also have this multiplier of 2 in line 35. So on the right side, you see that this is what I was looking for; this is what we have changed. Now, just one more time: what am I looking for? What would be the perfect result? Pi. Okay, excellent. And we run this program, and it starts out maybe pretty well, 3.6. Okay. And as I add more samples, I will get one. Not pi. I get one. Okay. So I have been lying to you. I have been lying to you all along. This doesn't work at all. And we don't have the slightest idea why this doesn't work. That's one of the most important lessons during this course. Not because of this one thing, who cares? You'd have to study this thing and sort it out. But if you have a difficult problem, you start out trying to understand it with your intuition. You don't start throwing multi-dimensional integrals everywhere. You start out thinking about what is going on. There is a diffuse interaction. There is scattering in the atmosphere. What does it look like? You use your intuition. And your intuition can get you very far. So in the integration of this f(x), the intuition of the sample mean could get us the perfect solution. But there may be more complicated cases where your intuition fails. And this doesn't mean that intuition is not useful, but it means that it can only take you so far. So if you hit barriers like this, which you cannot get through with intuition, then that is the point when you start using mathematics. You start to evaluate what is going on; you start to look at the details. So use intuition to get an idea of what's going on, and then, if you run into obstacles, use mathematics to sort out the details. That's one of the most important lessons out there for you, for when you go out there and try to study really complicated theories. So this doesn't work. I have been lying to you all along. How can we sort it out? Well, after the commercials, we will know a bit more. The commercial will come in the form of Thomas.
Because he's going to travel to Japan for a half-a-year-long research project. So he has a few lectures left, three of them in particular, and he has to hold them now because he's going to take the plane afterwards. So the next three lectures are going to be held by Thomas. And I mean, the timing is a bit suboptimal, because I have to cut this lecture in half. But at least you know how Monte Carlo integration works, and he is going to tell you more about this. And then we will complete this unit: after the three lectures from Thomas, I come back and we complete this lecture. We will know how to write a global illumination program. So this is exactly what we're going to do. I have implemented a whole global illumination program, and I think it is beautiful. It can compute beautiful indirect illumination, caustics. I think it's in 250 lines. It's readable, it's understandable, and many, many people have learned how to do global illumination from this program. So after the three lectures from Thomas, I finish this; that's one lecture. The next lecture after that is going to be a code walkthrough. So we are going to look through the code that I have written: how this works, how Fresnel's law is inserted here, where I use Fresnel's law, how I do all these things. You will see everything in code. It's going to be very practical.
Two Minute Papers
https://www.youtube.com/watch?v=Tb6-JfI0HA0
TU Wien Rendering #16 - Monte Carlo Integration: Hit or Miss
Monte Carlo integration is one of the most powerful techniques in all mathematics. If explained well, it is a simple technique that opens up the possibility of computing many definite integrals by taking random samples of the function. We'll also implement this in just a few lines of code soon!
Let's go to Monte Carlo integration. I promise you something. If you learn what Monte Carlo integration is, you will never, ever in your life be afraid of integrals anymore. Never, I promise you, I give you my word. This is a simple method to approximate integrals. Basically, we would like to integrate a function, and we can take samples of this function. What does that mean? We will check it out in a second. We will take samples of this function and we would like to reconstruct the integral from them. If we do this, this is what is called Monte Carlo integration. This was invented during the Second World War by Stanisław Ulam and his co-workers during the Manhattan Project. This was the atomic bomb project. They had unbelievably difficult integrals to solve, and they had to come up with a numerical solution in order to at least approximate them. This is what they came up with. There are two different kinds of Monte Carlo integration techniques. I have this function f(x) and I would like to integrate it from a to b; this is a definite integral. What I can do is hit-or-miss Monte Carlo or sample-mean Monte Carlo. In 99.9% of the cases we use the sample mean, but just for the intuition, and to visualize what is going on, I will show you the hit or miss as well. We can see how we can take samples of this function. Let's take a look at this. This is the recipe for a wonderful Wiener Schnitzel. This is the recipe for Monte Carlo integration. You draw this function that you have on a paper. You enclose it in a box that you know the size of, and let the size of the box be V. You throw lots of random points on this paper, and for every single point you determine if it is above or below this function. Then you have a magical formula. You use this formula and you will get the integral. The more points you have on the paper, the better. I compute the ratio of hits, the points below the curve of the function, compared to all the samples that I have.
What does it look like? It looks more or less like this. This immediately gives you the intuition: the reds are above the function, the blues are below the function. I would like to know the ratio of blues to all samples, because this gives you exactly what the integral means, the area below this curve. If I were on a summer holiday, I could have some beers and get a crazy idea: I would go on top of my house and imagine that I have a pool of water, and I would start throwing beach balls into this pool. After doing this for long enough, I could approximate the value of pi. It sounds like black magic. Provided that the balls are small enough and I am patient enough, this can happen. What is the recipe? Let's go through it. Let's draw a unit square somewhere. The area of this square is going to be one. Let's draw a quarter of a unit circle inside this box; this is also of unit radius. Now we start throwing these points and we compute the ratio: how much is inside and how much is outside? We multiply the result by four, and then we get pi. Now how is this? This doesn't sound like it makes any sense. This is black magic, and yet it works. Let's take a closer look at why this works. I would like to compute the integral, the area below this function. This is one quarter of a circle. What is the area of the circle? r squared times pi. r is one, so the area is pi, and one quarter of it is pi over four. So what we are approximating here is pi over four. When I solve this integral, the result I get is pi over four. What do we need to do with this in order to get pi? Multiply it by four. Sheldon Cooper would be proud of all of us. What if we have a function that is not 2D? This also works for multidimensional functions. You will actually compute such a thing in the next assignment, and it will be absolutely trivial. This is even better, because the rendering equation is infinite dimensional.
It has to take care of high-dimensional functions somehow.
[{"start": 0.0, "end": 17.0, "text": " Let's go to Monte Carlo integration. I promise you something. If you learn what Monte Carlo integration is, you will never, ever in your life, will have to be loaded anymore integrals."}, {"start": 17.0, "end": 31.0, "text": " Never, I promise to you, I give you my word. This is a simple method to approach that integrals."}, {"start": 31.0, "end": 42.0, "text": " Basically, what we are looking for is we would like to integrate the function and we can take samples of this function. What does it mean? We will check it out in a second."}, {"start": 42.0, "end": 49.0, "text": " We will take samples of this function and we would like to reconstruct the integral. If we do this, this is what is called Monte Carlo integration."}, {"start": 49.0, "end": 61.0, "text": " This was founded during the Second World War by Stanislav Lom and his co-workers during the Manhattan Project. This was the Antonin von project."}, {"start": 61.0, "end": 74.0, "text": " They had unbelievably difficult integrals to solve. They had to come up with a numerical solution in order to at least approximate."}, {"start": 74.0, "end": 82.0, "text": " This is what they came up with. There are two different kinds of Monte Carlo integration, the keys."}, {"start": 82.0, "end": 94.0, "text": " I have this function f of x and I would like to integrate this for me to be. This is a definite integral. What I can do is hit our miss Monte Carlo or sample mean Monte Carlo."}, {"start": 94.0, "end": 108.0, "text": " 99.9% of the case we use the sample mean, but just for the intuition and to visualize what is going on, I will show you the hidden miss as well."}, {"start": 108.0, "end": 120.0, "text": " We can see how we can take samples of this function. Let's take a look at this. This is the recipe for a wonderful Viennese Nizzo."}, {"start": 120.0, "end": 132.0, "text": " This is the recipe for Monte Carlo integration. You draw this function that you have on a paper. 
You close it in a box that you know the size of and let the size of the box be vague."}, {"start": 132.0, "end": 142.0, "text": " You throw lots of random points on this paper and for every single point you have in determine if it is above or below this function."}, {"start": 142.0, "end": 153.0, "text": " Then you have a magical formula. You use this formula and you will get the integral. The more points you have on the paper, the better."}, {"start": 153.0, "end": 165.0, "text": " I compute the ratio of hits the points below the curve of the function compared to all the samples that I have. How does it look like? This looks more or less like this."}, {"start": 165.0, "end": 172.0, "text": " This immediately gives you the intuition that the reds are above the function. The blues are below the function."}, {"start": 172.0, "end": 189.0, "text": " I would like to know the ratio of blues to all samples because this gives you exactly what the integrals mean, the area below this curve."}, {"start": 189.0, "end": 208.0, "text": " If I would be on a summer holiday, I could have some beers and get a crazy idea that I would go on top of my house and imagine that I have a pool of water."}, {"start": 208.0, "end": 221.0, "text": " I would start throwing beach balls in this pool and after doing this for long enough, I could approximate the value of pi. It sounds like black magic."}, {"start": 221.0, "end": 231.0, "text": " Provided that the balls are small enough and I am patient enough that this can happen. What is the recipe? Let's go through it."}, {"start": 231.0, "end": 246.0, "text": " Let's draw a unit square somewhere. The area of this square is going to be one. Let's draw a quarter of a unit circle inside this box. This is also of unit radius."}, {"start": 246.0, "end": 259.0, "text": " Now we start throwing these points. We compute the ratio. What is inside and how much is outside? 
We multiply the result by four and then we get pi."}, {"start": 259.0, "end": 278.0, "text": " Now how is this? This doesn't sound like that it makes any sense. This is black magic and this works. Let's take a closer look at why this works."}, {"start": 278.0, "end": 290.0, "text": " I would like to compute the integral. The integral would be below this function. This is the one quarter of a sphere of a circle."}, {"start": 290.0, "end": 307.0, "text": " What is the area of the sphere? R square times pi. R is one. It's pi over four. What we are approximately approximating here is pi over four."}, {"start": 307.0, "end": 322.0, "text": " When I solve this integral, what I get is a result is pi over four. What we need to do with this in order to get pi multiplied by lambda."}, {"start": 322.0, "end": 329.0, "text": " Shadow Cooper would be proud for all of us. This is due to a lot of this."}, {"start": 329.0, "end": 345.0, "text": " What if we have a surface not a 2D? This also works for multidimensional functions. You will actually compute such a thing in the next assignment."}, {"start": 345.0, "end": 362.0, "text": " It will be absolutely trivial. This is trivial. It is better because the rendering equation is infinite dimensional. It has to take care of high-dimensional functions somehow."}]
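The hit-or-miss recipe described in the lecture (draw a quarter of a unit circle inside a unit square, throw random points, compute the ratio of hits, multiply by four) can be sketched in a few lines of Python. This is only an illustrative sketch; the function name and sample counts are made up for the example.

```python
import random

def estimate_pi(num_samples: int, seed: int = 0) -> float:
    """Hit-or-miss Monte Carlo: throw random points into the unit square,
    count how many land inside the quarter unit circle (area pi/4, while
    the square has area 1), and multiply the hit ratio by 4."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # below the curve y = sqrt(1 - x^2)
            hits += 1
    return 4.0 * hits / num_samples

print(estimate_pi(1_000_000))  # approaches 3.14159... as the sample count grows
```

The more points thrown on the paper, the better the estimate, exactly as the lecture says: the error shrinks roughly with one over the square root of the number of samples.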
Two Minute Papers
https://www.youtube.com/watch?v=sg0pAwOSNGw
TU Wien Rendering #15 - Rendering Equation Properties
Equipped with the knowledge of BRDFs used for the two most common materials, we get a step closer to solving the Holy Rendering Equation. Owing to its infinite-dimensional and singular properties, it is immensely difficult to solve, but to our surprise, with a few more tricks up our sleeve, we can make it happen. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models, are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
And of course, if we have some super long light paths that are combinations of these, then obviously the ray tracer, or recursive ray tracer, cannot take this into account. Why is that? That's the big question. Let's go back to the illumination equation and imagine that I hit a diffuse surface. What do I do? I tried to emphasize this earlier, but I will emphasize it again: I take the perfect reflection direction. It doesn't matter if it's diffuse or specular, I take the perfect reflection direction. But if I do this, I have no idea about the surroundings of the object. I have no idea what is, for instance, above this diffuse plane. If there is some red object, I don't shoot a ray there in order to get some indirect illumination. So I will have no idea about the surroundings of this object. Now, if I switch to global illumination, however, there is this integration, and part of the integrand is the incoming light, the incoming radiance. And the way I can integrate this over the hemisphere is basically by sending samples out in every direction in this hemisphere. Now, if I do this, then I will know about the surroundings of the object. If there is a red wall or a red object nearby, then I will have samples of the incoming light, and therefore it will appear in the color of the object. This is fundamental. This is very important for understanding why ray tracers are missing these effects. Now, let's talk about the real deal, the real, physically based BRDF models. What does the diffuse BRDF look like? It looks like this. So f_r is the BRDF; omega and omega prime are our incoming and outgoing directions; x is a point on this object. These are probabilities. Now, this is weird, because I am used to formulas with variables, and this is just a number. What do I do with this number? It's 1 over pi, but does this even make sense?
Can someone help me out with this? This bugged me for quite a while. So, this 1 over pi means that this probability distribution is a scalar; remember that this is the distribution of the possible outgoing directions. So, imagine this scenario up here where you have an incoming direction. And if I have a completely diffuse material, it means that it will diffuse the incoming light in every direction. So, all possible outgoing directions on the hemisphere have the very same probability. And if they have the very same probability, then this should be a number. The whole BRDF should be a number, because whatever directions I specify here, I will get the same probability. And I can scale this 1 over pi with a rho, which is the albedo of the material, because not all materials reflect all light. In fact, most, if not all, of the materials we know absorb some amount of light. So, this is again a number. This can be wavelength dependent, because it depends how much you absorb on the red channel and how much you absorb on the blue channel. But this can potentially be 0, and then you have a black body, something that absorbs everything. So, you could say this determines the color of the object (forgive me if I'm not using the right terms, but that's the intuition). So, the albedo is going to give the color of the object, and this we can specify per color channel. Okay, the next question is: is this a probability distribution function? Of course it is. Why? Because it integrates to 1. (There are some other requirements of probability distribution functions that we're going to disregard here.) Does this integrate to 1? Why? What does the engineer say? Well, 1 over pi integrated from 0 to pi: what does that mean? I have a rectangle that has a height of 1 over pi and a width of pi. What is the area of the rectangle? Let's multiply the two sides. So, it's a times b. a is pi, b is 1 over pi; just multiply the two, and you get 1.
So, this is indeed a probability distribution function. Good to go. What about specular BRDFs? These are what describe mirrors. How can I write such a BRDF? It's a bit trickier, because it is fundamentally different from diffuse materials. Why? They don't diffuse incoming light in all possible directions. Only one outgoing direction is possible. I see only one thing in the mirror, not a mixture of everything, like on the walls. So, this means that one outgoing direction is going to have a probability of 1, and every single other choice has zero probability. And this is indeed a probabilistic model that can be described by a delta distribution. A delta distribution means that one guy has a probability of 1 and everyone else has zero. So, it's like elections in a dictatorship. Is this a probability distribution function? It is, but with a caveat; I'm going to talk a bit more about this later. Let's say for now that it is, because it is one for one outgoing direction and zero for everything else, so it sums to the one that we're looking for. And there are also glossy BRDFs. We haven't really been talking about these. In my first lecture there was some BRDF which was called "spread" on one of the images, but I asked you to forget that term immediately. Glossy is a mixture of the two. So, it is not like a mirror, but it's not like a completely diffuse material either. So, there is some view dependence. Diffuse materials are completely view independent; mirrors are completely view dependent. Glossy is like a mixture of the two. It is possible that there are some glossy materials in this scene. Can you find them? Raise your hand if you see at least one. Many of you. Okay. Yes? How about the cupboard? The cupboard. Excellent. Yes. Anything else? Yes? Do you mean this? No. The floor? No, the glass of the cooking plate. The stovetop? Oh, yeah. Exactly.
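The rectangle argument above (1/pi times a width of pi gives 1) is the back-of-the-envelope version of the real normalization: integrated against the cosine foreshortening term over the whole hemisphere, the constant diffuse BRDF rho/pi returns exactly the albedo rho, so an albedo of 1 reflects all incoming light. A small numerical sketch checks this; the function name and step counts are illustrative only.

```python
import math

def diffuse_hemisphere_integral(albedo: float, n: int = 10_000) -> float:
    """Integrate f_r * cos(theta) over the hemisphere for the constant
    diffuse BRDF f_r = albedo / pi.  In spherical coordinates the solid
    angle element is sin(theta) dtheta dphi, and since the integrand does
    not depend on phi, the phi integral just contributes a factor 2*pi."""
    f_r = albedo / math.pi
    d_theta = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * d_theta            # midpoint rule
        total += f_r * math.cos(theta) * math.sin(theta) * d_theta
    return 2 * math.pi * total

print(diffuse_hemisphere_integral(1.0))  # ~1.0: a perfectly white diffuse surface
print(diffuse_hemisphere_integral(0.7))  # ~0.7: 70% of the incoming light reflected
```

Analytically, 2*pi * (rho/pi) * integral of cos(theta)*sin(theta) over [0, pi/2] is 2*rho*(1/2) = rho, matching the numerical result.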
That's also glossy. So, there are many examples. I think the better question would be: what is not glossy in this scene? And the table you are sitting at is also glossy. It is a bit view dependent; it's not a mirror, but it's not completely diffuse either. And it also transfers the caustics, so it has some diffuse component. Okay. The next question is: it looks good, but the mathematician asks, how accurate is this? We have these two images. One of them is generated by means of global illumination, by solving this equation, and the other one is a photograph. Do you know which is which? Raise your hand if so. Okay. One person. Two. Okay, I'm going to spoil it and tell you the solution. Okay. So, look at this part. This is the difference that you can see, because this is an actual box that the guys put together at Cornell University. And in the photograph you can see not only the box but also what is next to the box, whereas in the global illumination image these surroundings are not modeled, just the Cornell box itself. So, there is a clue: yes, this can be distinguished from a photograph. But if you look at the actual scene, it is very beautiful. And if everything is perfectly implemented, then this is so close to physical reality that it is literally indistinguishable. So, this is really amazing, that we can do this. Whatever you see out there in the world, we can model with this equation. There are exceptions, because there are wave effects, such as diffraction and the like. But these are very rare. I mean, there are butterflies that look the way they look because of interference, and effects like that. But 99% of what you see can be modeled with this equation, and the rest can be handled by more sophisticated methods. So, back to this previous question: what is the dimensionality of the rendering equation? Let's try to think it through and we will see.
So, just for now, imagine that I shoot a ray out from the camera and I hit a diffuse object. I need to sample this hemisphere exhaustively. This is not how the final algorithm will work, but technically, this is what I need to do. All possible outgoing directions have the same probability, so I need to shoot many of these outgoing rays. Now, I will hit more diffuse objects after the first bounce, and I have to exhaustively sample all of these as well. And if I take this other ray, I also have to do this. And so on and so on. Up to how many bounces? We have concluded previously that we have to take an infinite number of bounces into consideration. So, this is definitely very difficult, because the incoming light that I am sampling the hemisphere for is given by another rendering equation. So, imagine that in place of this L_i you can insert another one of these equations. But that equation will also contain an integral and its own L_i, and in there, yet another rendering equation. So, it is an infinitely large sequence of integrals. Therefore, this is infinite dimensional. Now, I told you before that this is also singular. This doesn't sound like such a bad thing, but it is, and it is because of the possibility of specular BRDFs. The specular BRDF is some kind of delta distribution, and delta distributions are not really functions. So, in signal processing you may have studied this, and the first thing that they tell you about it is that this is not a function. It can be defined in terms of a limit. So, you can, for instance, imagine a Gaussian curve, and you start pushing this Gaussian curve in from two sides. Therefore, it is going to be a larger and larger, thinner and thinner spike, and you do this until you have an infinitely thin spike. Now, if you check the properties of this, you will get something that has nothing to do with a function. That's a singularity. There is an infinitely quick jump from 0 to 1 in there.
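The squeezed-Gaussian picture of the delta distribution can be checked numerically: as sigma shrinks, the spike at zero grows without bound while the area under the curve stays at 1. A small self-contained sketch (names and step counts are illustrative):

```python
import math

def gaussian(x: float, sigma: float) -> float:
    """Normalized Gaussian density with mean 0 and standard deviation sigma."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def area(sigma: float, half_width: float = 10.0, n: int = 100_000) -> float:
    """Midpoint-rule integral of the Gaussian over [-half_width, half_width]."""
    dx = 2 * half_width / n
    return sum(gaussian(-half_width + (i + 0.5) * dx, sigma) for i in range(n)) * dx

for sigma in (1.0, 0.1, 0.01):
    # The peak blows up as 1/sigma while the total area is pinned at 1;
    # in the limit sigma -> 0 this "function" is the delta distribution.
    print(f"sigma={sigma}: peak={gaussian(0.0, sigma):.1f}, area={area(sigma):.4f}")
```

The printed areas all stay near 1 while the peak values grow roughly tenfold each time sigma shrinks tenfold, which is exactly the limit process the lecture describes.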
And we need to handle this somehow, because our integration machinery can only take ordinary functions into consideration. So, let's just solve this trivially by handling specular interreflections explicitly. What does it mean? It means that if you have a specular interaction, you're not going to play with probabilities. You are just going to grab, like in a ray tracer, the perfect reflection direction as the outgoing direction. No probabilities at all. A beauty break! We have a scene here which is rendered by means of ray tracing, and there's literally one ray of light being reflected here many times. So: awesome laser experiments with LuxRender. We will try things like this out a bit later during the course. And another example. It's amazing what we can do with these algorithms.
[{"start": 0.0, "end": 6.0, "text": " And of course, if we have some super long live paths that are combinations of these,"}, {"start": 6.0, "end": 10.0, "text": " then obviously the ray-face-circum, or recursive ray-face-circum cannot take this into account."}, {"start": 10.0, "end": 18.0, "text": " Why is that? That's the big question. Let's go back to the illumination equation and imagine that I can think of a diffuse surface."}, {"start": 18.0, "end": 28.0, "text": " What do I do? I try to emphasize this earlier, but I will emphasize again that I take the perfect reflection by it."}, {"start": 28.0, "end": 34.0, "text": " It doesn't matter if it's diffuse or specular, I take the perfect reflection direction."}, {"start": 34.0, "end": 39.0, "text": " But if I do this, I have no idea about the surroundings of the object."}, {"start": 39.0, "end": 44.0, "text": " I have no idea what is, for instance, above this diffuse plane."}, {"start": 44.0, "end": 50.0, "text": " If there is some red object, I don't shoot a ray there in order to get some indirect illumination."}, {"start": 50.0, "end": 55.0, "text": " So I will have no idea about the surroundings of this object."}, {"start": 55.0, "end": 61.0, "text": " Now, if I switch to global illumination, however, there is this integration,"}, {"start": 61.0, "end": 67.0, "text": " and the part of the integration is the incoming light, the incoming radiance."}, {"start": 67.0, "end": 75.0, "text": " And how I can integrate this over the hemisphere is basically sending samples out in every direction in this hemisphere."}, {"start": 75.0, "end": 79.0, "text": " Now, if I do this, then I will know about the surroundings of the object."}, {"start": 79.0, "end": 86.0, "text": " If there is a red wall or a red object in nearby or the desert nearby,"}, {"start": 86.0, "end": 92.0, "text": " then I will have samples of the incoming light, and therefore it will appear in the color of the object."}, {"start": 92.0, "end": 103.0, 
"text": " This is fundamental. This is the very important way to understand why ray tracers are missing these effects."}, {"start": 103.0, "end": 109.0, "text": " Now, let's talk about the real deal, the real, physically based PRDF models."}, {"start": 109.0, "end": 114.0, "text": " How does it diffuse PRDF look like? It looks like this."}, {"start": 114.0, "end": 121.0, "text": " So FR is the PRDF. Omega, omega prime, our incoming and outgoing directions,"}, {"start": 121.0, "end": 128.0, "text": " X is a point on this object. These are probabilities."}, {"start": 128.0, "end": 133.0, "text": " Now, this is weird because I am used to formulate. So, if I talk about,"}, {"start": 133.0, "end": 138.0, "text": " if you are sharing, I have seen L. at that step of formula with variables."}, {"start": 138.0, "end": 141.0, "text": " And this is a freaking number. What do I do with this number?"}, {"start": 141.0, "end": 149.0, "text": " It's 1 over pi, but does this even make sense? Can someone help me out with this?"}, {"start": 149.0, "end": 155.0, "text": " This bug was no longer with. 
So, this 1 over pi means that if this is a scaler,"}, {"start": 155.0, "end": 163.0, "text": " if this probability distribution is a scaler, remember that this is the distribution of the possible outgoing directions."}, {"start": 163.0, "end": 170.0, "text": " So, imagine this scenario up here where you have an incoming direction."}, {"start": 170.0, "end": 178.0, "text": " And if I have a completely diffuse material, it means that it will diffuse the incoming light in every direction."}, {"start": 178.0, "end": 183.0, "text": " So, all possible outgoing directions on the hemisphere have the very same probability."}, {"start": 183.0, "end": 188.0, "text": " And if they have the very same probability, then this should be a number."}, {"start": 188.0, "end": 193.0, "text": " Then the whole PRDF should be a number because whatever directions I specify here,"}, {"start": 193.0, "end": 197.0, "text": " I will get the same probability."}, {"start": 197.0, "end": 202.0, "text": " And I can scale this 1 over pi with a row, which is the albedo of the material"}, {"start": 202.0, "end": 208.0, "text": " because not all materials reflect all light. In fact, most,"}, {"start": 208.0, "end": 215.0, "text": " or if not all of the materials we know, absorb some amount of light."}, {"start": 215.0, "end": 221.0, "text": " So, this is again a number. This can be wavelength dependent because it depends how much you absorb on the red channel,"}, {"start": 221.0, "end": 226.0, "text": " how much absorb on the blue channel. But this can be potentially 0."}, {"start": 226.0, "end": 229.0, "text": " And then you have a black body, something that absorbs everything."}, {"start": 229.0, "end": 236.0, "text": " So, you can call it, you can change the color of the object if I'm not using the right terms,"}, {"start": 236.0, "end": 243.0, "text": " but I'm not doing the intuitive. 
So, the albedo is going to give the color of the object."}, {"start": 243.0, "end": 246.0, "text": " And this we can specify on 0."}, {"start": 246.0, "end": 250.0, "text": " Okay, the next question is, is this a probability distribution function?"}, {"start": 250.0, "end": 254.0, "text": " Of course it is. Why? Because it's because it integrates to 1."}, {"start": 254.0, "end": 257.0, "text": " There are some other rules that we're going to disregard."}, {"start": 257.0, "end": 260.0, "text": " We respect the probability distribution functions."}, {"start": 260.0, "end": 263.0, "text": " How much does it integrate to this integration 1?"}, {"start": 263.0, "end": 270.0, "text": " Why? What does the engineer guy say? Well, 1 over pi integrated from 0 to pi, what does it mean?"}, {"start": 270.0, "end": 278.0, "text": " I have a rectangle that has the height of 1 over pi, and it has the width of pi."}, {"start": 278.0, "end": 282.0, "text": " What is the area of the rectangle? Let's multiply these two sides."}, {"start": 282.0, "end": 287.0, "text": " So, it's a times b. A is pi, b is 1 over pi, just multiply it to, and you get 1."}, {"start": 287.0, "end": 294.0, "text": " So, this is indeed a probability distribution function. Good to go."}, {"start": 294.0, "end": 301.0, "text": " What about specular bRDS? These are what describe mirrors."}, {"start": 301.0, "end": 311.0, "text": " How can I write such a bRDS? It's a bit trickier because it is fundamentally different than just diffuse materials."}, {"start": 311.0, "end": 317.0, "text": " Why? They don't diffuse incoming light in all possible directions."}, {"start": 317.0, "end": 322.0, "text": " What is possible is only one outgoing direction. 
I see only one thing in the mirror."}, {"start": 322.0, "end": 327.0, "text": " Not the mixture of everything, like on the walls."}, {"start": 327.0, "end": 332.0, "text": " So, this means that one outgoing direction is going to have a probability of 1,"}, {"start": 332.0, "end": 336.0, "text": " and every single other choices have the zero probability."}, {"start": 336.0, "end": 341.0, "text": " And this is indeed a probabilistic model that can be described by a delta distribution."}, {"start": 341.0, "end": 347.0, "text": " Delta distribution means that one guy has a probability of 1 and everyone else has zero."}, {"start": 347.0, "end": 354.0, "text": " So, it's like elections in a tink tadership."}, {"start": 354.0, "end": 359.0, "text": " Is this a probability distribution function? It is, but I couldn't last a while."}, {"start": 359.0, "end": 369.0, "text": " I'm going to talk a bit more about this. But let's say for now that it is because this is one for one incoming direction and zero everybody else."}, {"start": 369.0, "end": 374.0, "text": " So, we have the one that we're looking for."}, {"start": 374.0, "end": 381.0, "text": " And there are also glossy bRDS. We haven't been really talking about this in the first lecture of mine."}, {"start": 381.0, "end": 385.0, "text": " There was some bRDS which was called spread on one of these images,"}, {"start": 385.0, "end": 392.0, "text": " but I asked you to forget this term immediately. Glossy is the mixture of the two."}, {"start": 392.0, "end": 397.0, "text": " So, it is not like a mirror, but it's not like a completely diffuse material."}, {"start": 397.0, "end": 403.0, "text": " So, there is some view dependence. In this is material, they are completely viewing the pendant."}, {"start": 403.0, "end": 409.0, "text": " Mirrors are completely view dependent. 
So, it's like a mixture of the two."}, {"start": 409.0, "end": 415.0, "text": " It is possible that there are some glossy materials in this scene."}, {"start": 415.0, "end": 421.0, "text": " Can you find them? Raise your hand if you see at least one."}, {"start": 421.0, "end": 427.0, "text": " Many of you. Okay. Yes? How about the cupboard?"}, {"start": 427.0, "end": 437.0, "text": " The cupboard. Excellent. Yes. Anything else? Just show that we're in good."}, {"start": 437.0, "end": 443.0, "text": " Yes. Is it round? Is it round? Is it round in good fit?"}, {"start": 443.0, "end": 447.0, "text": " Do you mean this? No. The floor? No, the cooking field class."}, {"start": 447.0, "end": 453.0, "text": " Slow. Top to slow. Oh, yeah. Exactly. That's also glossy."}, {"start": 453.0, "end": 459.0, "text": " So, there is many examples. I think the question would be what is not glossy in this scene?"}, {"start": 459.0, "end": 466.0, "text": " The better it is would be the better question. And the people you are sitting at is also glossy."}, {"start": 466.0, "end": 470.0, "text": " It is a bit view dependent, but it's modern mirror, but it's not completely diffuse."}, {"start": 470.0, "end": 478.0, "text": " And it also transfers the caustics. So, it has some diffusibility."}, {"start": 478.0, "end": 488.0, "text": " Okay. Next question is, it looks good, but the mathematician guy asks how accurate is this?"}, {"start": 488.0, "end": 494.0, "text": " We have these two images. One of these is generated by means of global illumination,"}, {"start": 494.0, "end": 504.0, "text": " solving this equation, and the other one is a photograph."}, {"start": 504.0, "end": 514.0, "text": " Do you know which is which? Raise your hand if so."}, {"start": 514.0, "end": 520.0, "text": " Okay. One person. Two."}, {"start": 520.0, "end": 526.0, "text": " Okay. I'm going to spoil all the front-end tie-in solution."}, {"start": 526.0, "end": 530.0, "text": " Okay. 
So, look at this part."}, {"start": 530.0, "end": 540.0, "text": " So, this is the difference that you can see, for instance, because this is an actual box that the guys put together at Cornell University."}, {"start": 540.0, "end": 552.0, "text": " And you cannot only see the box in the photograph, but what is next to the box? Whereas, in global illumination, these surroundings are not modern, just the Cornell box itself."}, {"start": 552.0, "end": 558.0, "text": " So, you can have a blue text. Yes, this can be distinguished from a photograph."}, {"start": 558.0, "end": 566.0, "text": " But if you look at the actual scene, it is very beautiful."}, {"start": 566.0, "end": 576.0, "text": " And if everything is perfectly implemented, then this is so close to physical reality that it is literally indistinguishable."}, {"start": 576.0, "end": 584.0, "text": " So, this is really amazing that we can do this. Whatever you see out there in the world, we can mount with this equation."}, {"start": 584.0, "end": 590.0, "text": " There are exceptions, because there are wave effects, such as diffraction and stuff like that."}, {"start": 590.0, "end": 598.0, "text": " But these are very rare. I mean, there are butterflies who look the way they look, because of interference."}, {"start": 598.0, "end": 608.0, "text": " And these effects. But 99% of what you see can be modeled with this equation. And the rest can be handled by more sophisticated methods."}, {"start": 608.0, "end": 614.0, "text": " So, back to this previous question. What is the dimensionality of the rendering equation?"}, {"start": 614.0, "end": 624.0, "text": " Let's try to think it through and we will see. So, just for now, imagine that I shoot a ray out from the camera."}, {"start": 624.0, "end": 632.0, "text": " And I hit the diffuse object. I need to sample this hemisphere exhaustively. 
This is not how I will evaluate the algorithm."}, {"start": 632.0, "end": 638.0, "text": " But technically, this is what I need to do. All possible outcome directions have the same probability."}, {"start": 638.0, "end": 648.0, "text": " So, I need to shoot these outgoing rays many of them. Now, I will hit more diffuse objects after the first bounce."}, {"start": 648.0, "end": 656.0, "text": " And I have to exhaust the sample all of these as well. And if I take this other ray, I also have to do this."}, {"start": 656.0, "end": 667.0, "text": " And so on and so on and so on. Until how many bounces we have concluded previously that we have to take into consideration an infinite number of points."}, {"start": 667.0, "end": 681.0, "text": " So, this is definitely very difficult because the incoming light that I am sampling the hemisphere form is another rendering equation."}, {"start": 681.0, "end": 691.0, "text": " So, imagine that this LI, you can insert another one of these equations. But that equation will also contain this chemical and this LI."}, {"start": 691.0, "end": 703.0, "text": " And there is some random rendering equation. So, it is an infinite large sequence of intervals."}, {"start": 703.0, "end": 711.0, "text": " Therefore, this is infinite dimensional. Now, I told you before that this is also singular. This is not such a bad thing."}, {"start": 711.0, "end": 721.0, "text": " But this is because of the possibility of specular BRDFs. The specular BRDF is some kind of a delta distribution."}, {"start": 721.0, "end": 730.0, "text": " And delta distributions are not really functions. So, in signals processing, you may have studied this function."}, {"start": 730.0, "end": 737.0, "text": " And the first thing that they tell you about this, that this is not a function. This can be defined in terms of a limit."}, {"start": 737.0, "end": 746.0, "text": " So, you can, for instance, imagine like a Gaussian curve. 
And you start pushing this Gaussian curve from two sides."}, {"start": 746.0, "end": 751.0, "text": " Therefore, this is going to be a larger and larger and thinner and thinner spike."}, {"start": 751.0, "end": 755.0, "text": " And you do this until you have an infinite bit of spike."}, {"start": 755.0, "end": 765.0, "text": " Now, if you check it for the properties of a function, you will get something that has nothing to do with the function. That's a singularity."}, {"start": 765.0, "end": 770.0, "text": " There is an infinitely quick jump from 0 to 1 in there."}, {"start": 770.0, "end": 777.0, "text": " And we need to handle this somehow because we can take into consideration functions with an integrated functions."}, {"start": 777.0, "end": 785.0, "text": " So, let's just solve this trivially by handling this specular interreflection explicitly."}, {"start": 785.0, "end": 790.0, "text": " What does it mean? This means that if you have an income interaction, you're not going to play with probabilities."}, {"start": 790.0, "end": 799.0, "text": " You are just going to grab, like in a ray tracer, you are just going to grab the perfect reflection direction as a loud-core interaction."}, {"start": 799.0, "end": 802.0, "text": " No probabilities. No."}, {"start": 802.0, "end": 805.0, "text": " A beauty break."}, {"start": 805.0, "end": 811.0, "text": " We have some scenario which is ray tracing because of different things."}, {"start": 811.0, "end": 819.0, "text": " Because the image you create by means of ray tracing, but there's literally one ray of light being reflected here many times."}, {"start": 819.0, "end": 823.0, "text": " So, awesome laser experiments with Lux render."}, {"start": 823.0, "end": 829.0, "text": " We will try things out like this, a bit later during the course."}, {"start": 829.0, "end": 858.0, "text": " And another example. It's amazing what we can do with these algorithms."}]
Two Minute Papers
https://www.youtube.com/watch?v=vS0g9SVHRFc
TU Wien Rendering #14 - Global Illumination Benefits
Global illumination programs, unlike recursive ray tracers, are able to compute beautiful effects like indirect illumination and caustics. We take a closer look at how this is possible, and why the definition of shadows is fundamentally different in global illumination - this alternative definition allows us to get perfect soft shadows without explicitly computing many shadow rays against light sources. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Okay, so we have these two guys in the ring, and we know already how to solve the illumination equation. The illumination equation: we don't measure radiance, we measure intensity. It's not really a unit in physics. It's just some hacked-together thing that happens to work. In the rendering equation, we measure radiance and we have to do some kind of integration, and the more you think about it, the more daunting even the thought of solving this problem becomes. So the first question is: what can I earn by solving this equation? Because I have to be motivated to do it. So obviously, the result better look really good in order to give me the motivation and the resources to solve it. So this is an image from the first assignment, and this we have computed with recursive ray tracing. So you can see, for instance, hard shadows. You can see that this is a reasonably boring image. I mean, it's great compared to the simplicity of the model that we have, but it's not really the greatest. Well, what is missing? Let's take a look, and look very closely. Let's take a look at the very same scene, but not with recursive ray tracing, but with global illumination algorithms. So not the illumination equation, but the full rendering equation. Take a look at the difference. Look closely. This is full global illumination. Finally, absolutely beautiful. Let's take another look. This is recursive ray tracing, and global illumination. So apparently there are some effects that recursive ray tracing cannot account for. What are these effects? Well, we have talked about indirect illumination, or color bleeding. This is the very same thing. This means that I am hitting two diffuse objects, one after each other. Is this visible enough? Okay, I'm just pulling a bit on these curtains so you guys can see better. Okay, perhaps a bit better, right? Yes. You're back. So these are, in this case, LDDE paths. What does it mean? Everyone knows: you start out from the light source. 
You hit two diffuse objects and you hit the eye. Excellent. Now, indirect illumination is all around us, everywhere. Both in the real world and in the better computer games out there, which have approximations of indirect illumination. And you can see that on the left image, it almost looks like Photoshop. It is completely alien from its surroundings. It is almost as if it didn't take into consideration its surroundings. So you're standing in the middle of the desert, not just somewhere. You would have to have some color bleeding effect that you get from your surroundings. And this is what the problem usually is with many of the Photoshopped images. You just cut the person out from somewhere and you put them in another photograph, and it looks super fake. And yes, mostly because of the illumination conditions, but even if you try to account for that, even if you try to recolor it to have more of the same color scheme as the rest of the photograph, you're still missing the indirect illumination effect. And the human eye is very keen on recognizing that. So you recognize that something is wrong, but you don't know what exactly is missing. And it's usually the indirect illumination. But there's something else. Let's take a look at this scene with recursive ray tracing. So we have refractive materials: for instance, this glass sphere on the left, the mirror sphere in the middle, and the completely diffuse sphere on the right. Let's take a look at how the very same scene looks with global illumination. This is the difference. One more time. Recursive ray tracing, and global illumination. So, like we have talked about before, I can see that the difference is in indirect illumination. So on the upper left, I can see that some of the red color is bleeding onto the other wall, and the very same with the green wall in the background. Also with this diffuse ball. So even a simple diffuse sphere looks much more interesting and much more beautiful with global illumination. 
Don't say anything. But I see something else. I see something else as well. Not only indirect illumination. I see some other effect on this image that I couldn't compute with ray tracing before. Don't say anything. Raise your hand if you know what I'm talking about. Excellent. How was that? Everyone? And what? Don't say anything. Okay. I'm talking about this. And this. So this interesting light effect on the wall and below this glass sphere. So raise your hand again if you know what this is exactly. Don't say anything. Because so many people know, you will have to say it, all of you at the same time, after three. Got it? Okay. So everyone. One. Two. Three. What is this? Just a little. Okay. What are the other guesses? That's technically refraction. Yes. But that's not what we call the effect. Anyone else? Okay. This is what we call caustics. So what kind of light path is this? This is an interesting light path. In this case, this is LSSDE, because we start out from the light source. We hit the glass sphere from the outside. Then we have refraction. We hit it from the inside. And then we hit some diffuse object, which is either this checkerboard down there or the red wall on the left. And then to the eye. And if we have this effect, then we are going to have caustics. It's a beautiful, beautiful phenomenon in nature that we can finally account for. And you can see this in many, many places. Now, let's take a look at another example. This is the famous school corridor example from LuxRender. Okay. We have recursive ray tracing and full global illumination. So you can see lots of indirect illumination: this reddish light on the floor, and perhaps some caustics, or at least a caustic-looking thing, in front of the louvers. Okay. So next question. What is the definition of shadows again? So what we have said before is that shadows are regions that are not visible from the light source. Now, an alternative definition of shadows is the absence of light. 
This is the definition we will use in global illumination. So you could say that there's no such thing as shadows. A shadow is not something by itself; it's just the absence of something else. If there is less light somewhere, then there are going to be shadows. So this is the definition of shadows in global illumination, and in Zen culture. And take a look at this image. We can see some beautiful, beautiful soft shadows. And the thing is that you don't even need to do anything to compute this illumination. So if I have a ray tracer, what do I do? I shoot out shadow rays from these regions and I try to approximate what regions of the light source are visible from this point. In global illumination, you don't need to do anything. You just solve this equation, and out comes physical reality. And shadows are part of physical reality. You don't need to do anything in order to obtain shadows. It's not like a bottom-up approach like ray tracing, where you start from a baseline and you add more and more hacks to account for more and more effects. With global illumination, you will see that we will have a simple algorithm that can give you all of this. And you don't need to account for shadows and caustics and all of these things. Another beautiful example of caustics. These are caustics from a point light source, because, for instance, you can take a look at the shadows. The shadows are hard, so it's likely to be a small or a point light source. And the caustics are very sharp. So they have the same behavior with respect to large light sources as shadows do. And another beauty with caustics. Okay, so let's assess what these recursive ray tracers are capable of doing and what they cannot. Well, obviously they cannot compute indirect illumination. Indirect illumination means two diffuse bounces, or possibly more. This you cannot compute correctly. We will talk about why. And you cannot compute caustics. Well, caustics — I have written a few slides ago that it was LSSDE. 
So two specular bounces, and that they do, because you have to go through the glass sphere. And here I'm writing something completely different. I just say one specular bounce is necessary, the rest are optional. Is this true? Or how can we verify this is true? In order to find out if this is true or not, I don't even need to say a word. I can just do this. You see the caustics? This is one of the caustics inside. Can you two see it? Yes. Excellent. Please take a look. Hopefully no one steals my wedding ring. My fiancée is going to kill me. Okay, you two have seen it. Okay. Okay. Okay. Nice. Beautiful. I was going to say I'm going to put it in the bin. Okay. So apparently rings have caustics. Well, I start off from the light source. I hit one specular object, a mirror-like object, and then a diffuse one, which is the table, and then the eye. I have caustics. So LSDE is enough for caustics. There's no need to prove it in any other way. Just take a look at physical reality and let it be your touchstone, always.
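The light path notation used throughout this lecture (L = light source, D = diffuse bounce, S = specular bounce, E = eye) can be checked mechanically with regular expressions. A small sketch of one reading of the lecture's claims — indirect illumination needs at least two consecutive diffuse bounces (LDDE), and a caustic needs at least one specular bounce before the diffuse one (LS+D, the rest optional). The function names and the exact regex interpretation are my own:

```python
import re

def is_caustic(path):
    # Caustic path per the lecture's examples: light hits one or more
    # specular surfaces (glass sphere: LSSDE, ring: LSDE), then a diffuse
    # surface, then possibly anything else, then the eye.
    return re.fullmatch(r"LS+D[SD]*E", path) is not None

def is_indirect(path):
    # Indirect illumination (color bleeding): at least two consecutive
    # diffuse bounces anywhere along the path, e.g. the LDDE paths above.
    return re.fullmatch(r"L[SD]*DD[SD]*E", path) is not None
```

With these definitions, `is_caustic("LSSDE")` and `is_caustic("LSDE")` both hold, matching the glass-sphere and wedding-ring examples, while the color-bleeding path `LDDE` is indirect illumination but not a caustic.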
[{"start": 0.0, "end": 9.0, "text": " Okay, so we have these two guys in the ring and we know already how to solve the"}, {"start": 9.0, "end": 15.0, "text": " illumination equation. The illumination equation. We don't measure radians, we measure intensity."}, {"start": 15.0, "end": 21.0, "text": " It's not really unit and physics. It's just some heck-doth thing that happens to work."}, {"start": 21.0, "end": 27.0, "text": " In the rendering equation, we measure radians and we have to do some kind of integration"}, {"start": 27.0, "end": 34.0, "text": " and this is, if you, the more you think about it, the more you possibly will sound"}, {"start": 34.0, "end": 40.0, "text": " to even the thought of solving this problem. So the first question is what can I"}, {"start": 40.0, "end": 45.0, "text": " earn by solving this equation? Because I have to be motivated to do it."}, {"start": 45.0, "end": 50.0, "text": " So obviously, the result better be, better look really good in order to give me the"}, {"start": 50.0, "end": 60.0, "text": " motivation and the resources to solve it. So this is an image from the first assignment"}, {"start": 60.0, "end": 64.0, "text": " and this we have computed with recursive ray tracing. So you can see for instance,"}, {"start": 64.0, "end": 70.0, "text": " hard shadows. You can see that this is a reasonably boring image. I mean,"}, {"start": 70.0, "end": 78.0, "text": " it's great compared to the simplicity of the model that we have, but it's not really the greatest."}, {"start": 78.0, "end": 83.0, "text": " Well, what is missing? Let's take a look and look very closely. Let's take a look at"}, {"start": 83.0, "end": 89.0, "text": " the very same scene, but not with recursive ray tracing, but with low-dolumination algorithms."}, {"start": 89.0, "end": 93.0, "text": " So not the elimination equation, but the full rendering equation."}, {"start": 93.0, "end": 101.0, "text": " Take a look at the difference. Look closely. 
This is full global illumination."}, {"start": 101.0, "end": 109.0, "text": " Finally, absolutely beautiful. Let's take another look. This is recursive ray tracing and global illumination."}, {"start": 109.0, "end": 114.0, "text": " So apparently there are some effects that recursive ray tracing cannot account for."}, {"start": 114.0, "end": 120.0, "text": " What are these effects? Well, we have talked about indirect illumination or color reading."}, {"start": 120.0, "end": 126.0, "text": " This is the very same thing. This means that I am hitting two diffuse objects,"}, {"start": 126.0, "end": 132.0, "text": " one after each other. Is this visible enough? Okay, I'm just pulling a bit on these"}, {"start": 132.0, "end": 139.0, "text": " pertains. So you guys can see better. Okay, perhaps a bit better, right?"}, {"start": 139.0, "end": 143.0, "text": " Yes. You're back."}, {"start": 143.0, "end": 149.0, "text": " So these are in this case LEDV paths. What does it mean? Everyone knows you start out"}, {"start": 149.0, "end": 156.0, "text": " from the light source. You hit two diffuse objects and you hit the eye. Excellent."}, {"start": 156.0, "end": 163.0, "text": " Now, indirect illumination is all around us everywhere. Both in the real world and both"}, {"start": 163.0, "end": 169.0, "text": " in the better computer games out there, which have approximations of indirect illumination."}, {"start": 169.0, "end": 174.0, "text": " And you can see that on the left image, it almost looks like Photoshop."}, {"start": 174.0, "end": 185.0, "text": " It is completely alien from its surroundings. It is almost as if it didn't take into"}, {"start": 185.0, "end": 190.0, "text": " consideration its surroundings. So you're standing in the middle of the desert, not just"}, {"start": 190.0, "end": 195.0, "text": " somewhere. You would have to have some color reading effect that you get from your"}, {"start": 195.0, "end": 199.0, "text": " surroundings. 
And this is what usually the problem is with many of the Photoshop"}, {"start": 199.0, "end": 205.0, "text": " images. You just repel the person from somewhere and you put it in another photograph and"}, {"start": 205.0, "end": 211.0, "text": " it looks super fake. And yes, mostly because of the illumination conditions, but even"}, {"start": 211.0, "end": 216.0, "text": " if you try to account for that, even if you try to recolor it to have more of that"}, {"start": 216.0, "end": 220.0, "text": " the same color scheme than the rest of the photograph. You're still missing the"}, {"start": 220.0, "end": 225.0, "text": " indirect illumination effect. And human eye is very keen in recognizing that."}, {"start": 225.0, "end": 230.0, "text": " So you recognize that something is wrong, but you don't know what exactly is missing."}, {"start": 230.0, "end": 235.0, "text": " And it's usually in the direct illumination. But there's something else."}, {"start": 235.0, "end": 240.0, "text": " Let's take a look at this scene with recursive ray tracing. So we have refractive"}, {"start": 240.0, "end": 245.0, "text": " materials. For instance, this last sphere on the left of the mirror sphere in the"}, {"start": 245.0, "end": 252.0, "text": " middle and the completely diffuse sphere on the right. Let's take a look at how the"}, {"start": 252.0, "end": 258.0, "text": " very same scene looks like with global illumination. This is the difference."}, {"start": 258.0, "end": 265.0, "text": " One more time. Recursive ray tracing and global illumination."}, {"start": 265.0, "end": 271.0, "text": " So like we have talked about this before, I can see that difference is in"}, {"start": 271.0, "end": 276.0, "text": " indirect illumination. 
So on the upper left, I can see that some of the red color"}, {"start": 276.0, "end": 281.0, "text": " is bleeding onto the other wall and the very same with the green wall in the background."}, {"start": 281.0, "end": 288.0, "text": " Also with this diffuse ball. So even a simple diffuse sphere looks much"}, {"start": 288.0, "end": 293.0, "text": " more interesting and much more beautiful with global illumination."}, {"start": 293.0, "end": 299.0, "text": " Don't say anything. But I say something else. I see something else as well."}, {"start": 299.0, "end": 304.0, "text": " Not only indirect illumination. I see some other effect on this image that I"}, {"start": 304.0, "end": 308.0, "text": " couldn't compute with ray tracing before. Don't say anything."}, {"start": 308.0, "end": 313.0, "text": " Raise your hand if you know what I'm talking about. Excellent."}, {"start": 313.0, "end": 317.0, "text": " How was that? Everyone. And what? Don't say anything."}, {"start": 317.0, "end": 328.0, "text": " Okay. I'm talking about this. And this. So this interesting light"}, {"start": 328.0, "end": 334.0, "text": " effect on the wall and below this glass sphere. So raise your hand again if you"}, {"start": 334.0, "end": 339.0, "text": " know what this is exactly. Don't say anything. Because so many people know"}, {"start": 339.0, "end": 346.0, "text": " you will have to say all of you at the same time after three. Got it?"}, {"start": 346.0, "end": 352.0, "text": " Okay. So everyone. One. Two. Three. What is this?"}, {"start": 352.0, "end": 359.0, "text": " Just a little."}, {"start": 359.0, "end": 364.0, "text": " Okay. What are the other guesses? That's technically a fraction."}, {"start": 364.0, "end": 369.0, "text": " Yes. But that's not how we call the effect. Anyone else?"}, {"start": 369.0, "end": 375.0, "text": " Okay. This is what we call caustics. So what kind of light path is this?"}, {"start": 375.0, "end": 379.0, "text": " This is an interesting light path. 
In this case, this is L."}, {"start": 379.0, "end": 384.0, "text": " S S D E Y because we start out from the light source. We hit the glass sphere"}, {"start": 384.0, "end": 389.0, "text": " from the outside. Then we have refraction. We hit it from the inside."}, {"start": 389.0, "end": 395.0, "text": " And then we hit some diffuse object that is either this checker board down there or"}, {"start": 395.0, "end": 400.0, "text": " the red wall on the left. And then to the eye. And if we have this effect,"}, {"start": 400.0, "end": 405.0, "text": " then we are going to have caustics. It's a beautiful, beautiful phenomenon in"}, {"start": 405.0, "end": 410.0, "text": " nature that we can find the account for. And it's."}, {"start": 410.0, "end": 415.0, "text": " And then you can you can see this many, many places."}, {"start": 415.0, "end": 423.0, "text": " Now, let's take a look at another example. This is the famous school corridor example"}, {"start": 423.0, "end": 428.0, "text": " from Luxrender. Okay. We have recursive ray tracing and"}, {"start": 428.0, "end": 431.0, "text": " blue glid illumination. So you can see lots of indirect illumination."}, {"start": 431.0, "end": 439.0, "text": " This reddish light on the floor and perhaps some caustics or at least"}, {"start": 439.0, "end": 444.0, "text": " caustic looking thing in front of the lovers."}, {"start": 444.0, "end": 448.0, "text": " Okay. So next question. What is the definition of shadows again?"}, {"start": 448.0, "end": 454.0, "text": " So what we have said before that shadows are regions that are not visible from"}, {"start": 454.0, "end": 459.0, "text": " the light source. Now, an alternative definition of shadows is the"}, {"start": 459.0, "end": 464.0, "text": " absence of light. This is what definition we will use in group illumination."}, {"start": 464.0, "end": 469.0, "text": " So there is you could say that there's no such thing as shadows."}, {"start": 469.0, "end": 473.0, "text": " There's no. 
That's that's not something that's just the absence of"}, {"start": 473.0, "end": 478.0, "text": " something else. If there is less light somewhere, then there's going to be shadows."}, {"start": 478.0, "end": 483.0, "text": " So this is the definition of shadows in local illumination and in"}, {"start": 483.0, "end": 488.0, "text": " Zen culture. And take a look at this image. We can see some beautiful,"}, {"start": 488.0, "end": 493.0, "text": " beautiful soft shadows. And the thing is that you don't even need to"}, {"start": 493.0, "end": 498.0, "text": " do anything to compute these illumination. So if I have a ray tracer,"}, {"start": 498.0, "end": 502.0, "text": " what do I do? I shoot out shadow rays from these regions and I try to"}, {"start": 502.0, "end": 507.0, "text": " approximate what regions of the light source are visible from this point."}, {"start": 507.0, "end": 510.0, "text": " In global illumination, you don't need to do anything. You just solve this"}, {"start": 510.0, "end": 515.0, "text": " equation and outcomes, physical reality. And shadows are parts of"}, {"start": 515.0, "end": 519.0, "text": " physical reality. You don't need to do anything in order to obtain shadows."}, {"start": 519.0, "end": 524.0, "text": " It's not like a bottom-up approach like ray tracing. So you start from a"}, {"start": 524.0, "end": 528.0, "text": " baseline and you add more and more hacks to account for more and more"}, {"start": 528.0, "end": 532.0, "text": " effects. And for global illumination, you will see that we will have a simple"}, {"start": 532.0, "end": 537.0, "text": " algorithm. That can give you all of this. And you don't need to account for"}, {"start": 537.0, "end": 543.0, "text": " shadows and costics and all of these things. Another beautiful example of"}, {"start": 543.0, "end": 550.0, "text": " costics. 
This is costics from the point light source because, for instance,"}, {"start": 550.0, "end": 554.0, "text": " you can take a look at the shadows. The shadows are hard. So it's likely to be a small or"}, {"start": 554.0, "end": 558.0, "text": " a point light source. And the costics are very sharp. So they have the same"}, {"start": 558.0, "end": 569.0, "text": " behavior to large light sources as shadows. And another beauty with costics."}, {"start": 569.0, "end": 575.0, "text": " Okay, so let's assess what these recursive ray facers are capable of doing and"}, {"start": 575.0, "end": 580.0, "text": " what they cannot. Well, obviously they cannot compute indirect illumination."}, {"start": 580.0, "end": 587.0, "text": " Indirect illumination means two diffuse bounces or possibly more. This you cannot"}, {"start": 587.0, "end": 594.0, "text": " compute correctly. We will talk about why. And you cannot compute costics. Well,"}, {"start": 594.0, "end": 603.0, "text": " costics I have written in a few scenes ago that it was LSSD. So two"}, {"start": 603.0, "end": 608.0, "text": " specular bounces and that they do because you have to go through the blastbox."}, {"start": 608.0, "end": 613.0, "text": " And here I'm writing something completely different. I just say one"}, {"start": 613.0, "end": 620.0, "text": " specular bounce is necessary, the rest are optional. Is this true? Or how can we verify"}, {"start": 620.0, "end": 630.0, "text": " this is true? In order to find out if this is true or not, I don't even need to"}, {"start": 630.0, "end": 640.0, "text": " say a word. I can just do this. You see the costics?"}, {"start": 640.0, "end": 650.0, "text": " This is one of the costics inside. Can you two see it? Yes."}, {"start": 650.0, "end": 657.0, "text": " Excellent. Please take a look. No one still exists my writing name. My"}, {"start": 657.0, "end": 663.0, "text": " fiance is going to kill me. 
Okay, you two have seen it."}, {"start": 663.0, "end": 670.0, "text": " Okay. Okay. Okay."}, {"start": 670.0, "end": 676.0, "text": " Nice."}, {"start": 676.0, "end": 680.0, "text": " Beautiful."}, {"start": 680.0, "end": 685.0, "text": " I was going to say I'm going to put it in the bin."}, {"start": 685.0, "end": 693.0, "text": " Okay. So apparently rings have costics. Well, I start off from the light source."}, {"start": 693.0, "end": 699.0, "text": " I think one specular object, one mirror light object and then a diffuse which is"}, {"start": 699.0, "end": 706.0, "text": " the table and then the I have costics. So LSSD is enough for costics."}, {"start": 706.0, "end": 711.0, "text": " There's no need to prove it in any other way. Just take a look at physical"}, {"start": 711.0, "end": 718.0, "text": " reality and let it be your touch always."}]
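The shadow-ray approach contrasted with global illumination in the transcript above — shooting many rays toward an area light and approximating what fraction of it is visible — can be sketched as a small Monte Carlo visibility estimator. All names here are illustrative, not from the lecture; `occluder` is a caller-supplied predicate saying whether the segment from the shaded point to a light sample is blocked:

```python
import random

def soft_shadow_fraction(point, light_corner, light_u, light_v, occluder, n=256):
    """Estimate the visible fraction of a rectangular area light from `point`
    by shooting `n` shadow rays to random points on the light, the way a
    recursive ray tracer approximates soft shadows (global illumination
    gets the same effect for free by just solving the rendering equation)."""
    visible = 0
    for _ in range(n):
        u, v = random.random(), random.random()
        # Random sample on the light rectangle: corner + u*edge1 + v*edge2.
        sample = tuple(c + u * du + v * dv
                       for c, du, dv in zip(light_corner, light_u, light_v))
        if not occluder(point, sample):
            visible += 1
    return visible / n
```

An unoccluded point reports a fraction of 1.0 (fully lit); a point whose view of half the light is blocked converges toward 0.5, which is exactly the penumbra behavior.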
Two Minute Papers
https://www.youtube.com/watch?v=AUKLBdyvFxw
TU Wien Rendering #13 - Easter, BRDF++, Depth of Field
After some ramblings on the differences of the Easter break in Hungary and Austria, we continue our journey and discuss how the f-stop works and how the well-known depth of field effect of cameras can be reproduced in our program. Then, finally we get to know the "real deal" BRDF models that we will use in our global illumination renderer. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
How was the Easter break? It was nice, but too short? Come on. We discussed it already. But what would be an appropriate length for it? A week? Longer than this one? Two weeks? More weeks? Maybe the entire curriculum. Yes. And just extend it every time, by two more weeks or something. Yeah. Well, I don't know about Austria, but in Hungary people usually go to their friends. You know, many people are coming over and we are hanging out with other people. And you always have to drink their stuff. So you go there, and we have this drink that's called pálinka. It's something like schnapps, but way stronger. And I told this to some Austrian people and they were like: oh my God, stronger than schnapps? How can that be? Very easily, it's at least a hundred. So that's how it works. And you go to the very first place and you have to drink from their home-brewed, awful pálinka. Usually it's very awful. And you even have to say something about it. So you drink it, that is it. But you have to say something good about this, because they are looking at you: what will be the reaction? So you say: wow, that's really strong. And most people seem to be satisfied with that. So this is usually what I say. But then you are at like the fifth station of the day, and some people just don't take no for an answer, unfortunately. So this is how it goes. Is it any better in Austria? It's more family parties. Okay. Okay, yeah, the family part is actually the nice part. So we can decide if we drink our own stuff or not. But I mean, my fiancée's grandfather attempted to make some brew, some alcohol at home. And well, he did something. Okay, so I mean, something was created in the process. But after tasting it, even the postman didn't want to drink it. I don't know about postmen in Austria, but in Hungary they are really hardy people. So they drink whatever they find. 
Because obviously you don't give the good stuff to the postman. You give them the leftovers. It's like: this one no one drank — oh, it's really good for the postman. And he's happy with that. Imagine what that postman drank. Postmen are kind of weird here. Yeah, well, they seem to like alcohol, according to my experiences. And even the postman didn't want to drink that anymore. And next time we have seen him around the house, he just came in front of the main door. And we waved to him: hey, come, we have some for you. No, no, no, I'm just going to put them in here. Immediately. Okay, so regarding the assignments, you guys and girls have done really well. So I'm very happy to see that. People realized that there is some exponentiality with respect to the depth of the recursion in the simulation. So the deeper you go, the more exponential things become. But this is after like — I mean, it's exponential all along, but you don't notice this because it starts off slow. But after like 10 to 15 bounces, you can see a very telling characteristic of this exponential distribution. And many of you have recognized correctly that this is because reflection and refraction are sampled all the time. So whenever I have a bounce, I am going to compute an intersection. And then there are going to be two rays, perhaps, that continue their way, because one is going to be the reflection direction, and one is the refraction. And this quickly gets out of hand, because for every ray you have two more — that's the definition of something that is exponential. So, well done. Let's proceed a bit, and I'm going to talk just a bit about some more advanced BRDF models that are mostly used with ray tracing. So you remember this Lambertian BRDF that you see on the right. And this is the scalar product between L and N, so the light vector and the normal. And obviously you can scale this with kd, which is some kind of diffuse albedo. 
Now if you put it next to a real image of a diffuse material — obviously it is a question, you know, what we call a diffuse material, and where this BRDF comes from exactly. But let's disregard this, and let's accept that we have this difference between the two. And if you take a good look, then it becomes apparent that at grazing angles, the simulated diffuse material seems to be completely dark. And if you take a look at the formula up there, then this is self-explanatory, because the normal and the light direction can get perpendicular, and then you will see this darkness. So there are some advanced BRDF models that try to be a bit more realistic in this regard. One such example is the Oren–Nayar model, which is much closer to what you would measure in real life. But let's note that all of these simplified BRDF models are all hacks. This is not what physical reality is. People write up the actual equations that relate to physical reality and try to simplify them in a way that a simple ray tracer can capture. We are going to talk about global effects and what a real diffuse material looks like in a few minutes. So this Oren–Nayar model seems much better. And what's even more, it can take into consideration these microscopic imperfections in different materials. And you get a roughness parameter that can model these imperfections. What about specular models? Well, the Phong model, V dot R, that we have talked about, is not the only way to do it. There is also the Blinn–Phong model, which is a more advanced model, and uses this H, this half vector between L and V. And it produces different results. I think this image is maybe not the best, because yes, the highlights are different. One of the main advantages of this material model is that the shape of the specular reflections can get a bit more elliptic, depending on the viewing direction and the surroundings. So here you have the very same circular thing. 
So it's not the best example, but you can see that it's different; it looks more realistic. And we still have to remember that while these are really good models, they are still hacks. There's also the Cook–Torrance model, which is basically Blinn–Phong extended to also model microscopic roughness. And here, maybe with the projector it's not so visible, but you can see that the specular reflection is a bit more smeared out. It's not a completely round, perfect highlight; there are these small imperfections that are characteristic of such materials. This is what this model can capture. And there are some other advanced BRDF models, some of which are easier to understand and implement than it is to pronounce the names of their authors. This is one of those examples. This is some kind of multilayer model, where you have a diffuse substrate and on top of it a specular coating. There are also BRDFs for car paint, where you can have these sparkly effects. And there are many more BRDF models that capture given effects. Okay, what if one would like to play with these? The Disney guys have implemented a program called the BRDF Explorer. You can load many local BRDF models, change the light source positions, and look at the actual BRDFs and their impulse responses. Give it a try. So, we have always been talking about cameras, trying to model real-world cameras. If you have a handheld camera, you will see a setting that's called the f-stop. The f-stop is related to the size of the aperture, and the aperture is the opening of the camera where the light goes in. You can set this to different values, and you will notice that if you set the f-stop to a high value, then the aperture of the camera is going to become smaller. And if it's smaller, this means that less light is let in, and more of the image that you get is going to be in focus. And vice versa.
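The f-stop relationship just described can be written down directly. As a small sketch (the function names are mine, not from the slides): the f-number is defined as focal length over aperture diameter, so a higher f-stop means a smaller opening, and the light gathered falls off with the square of the f-number.

```cpp
// f-number N = focal length f / aperture diameter D, so D = f / N.
// Example: a 50 mm lens at f/2 has a 25 mm aperture.
double apertureDiameter(double focalLengthMm, double fStop) {
    return focalLengthMm / fStop;
}

// The light let in is proportional to the aperture area, i.e. 1 / N^2:
// stopping down from f/2 to f/4 cuts the light to a quarter.
double relativeLight(double fStop) {
    return 1.0 / (fStop * fStop);
}
```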
So if you have a low f-stop setting, then you will have a bigger aperture, more light is let in, and more regions will be out of focus. And this is what gives you the depth of field effect. Whatever images we have created with our ray tracer so far don't have this depth of field effect. But if you want to model a handheld camera, then somehow you have to model this effect as well, because this is how an image is created in the real world. So this is a nice chart, made by Photograph First, showing how exactly these f-stops lead to an aperture size, what the typical settings are, and all these interesting things. And an actual example: let's take a look at the bottom right here. You can see that the whole image is in focus. And as you adjust the f-stop accordingly, you can see on the top left that the background is heavily blurred. So that is a more pronounced depth of field effect. It would be wonderful to have a ray tracer that can take this effect into account. And this is another, maybe a bit more beautiful and a bit more visible example. On the left side you can see a very pronounced depth of field effect, and on the right, close to everything is in focus. Yes? Do you also use the word bokeh in computer graphics? Yes. And there is also a bunch of papers on how to simulate this effect; people even try to compute it in real time. So if you have a computer game and you would like to see this bokeh effect, how do you do this? You have to take into consideration the depth. If you know where the objects are, exactly how far away, then you can do a bunch of tricks to get an approximation of something like that in real time. And if you do what I'm going to show you in a second, then you will have the very same effect in your ray tracer. On the left, a completely in-focus image; on the right, you can see again the depth of field effect, especially in the background. The further away you go, the blurrier it gets.
So how do we do this? Very simple. Let's skip the text; I just put it here so that people who read this at home know about it. Normally, we shoot a ray through the midpoint of the pixel in our ray tracer, and this ray passes through the focal point and then hits an object. What we can do is also take samples from nearby: not only the midpoint of this pixel, but points nearby, and shoot all of these rays through the same focal point. We compute the very same kind of samples, only these samples will be from nearby, and then we average them. This is what is going to give you the depth of field effect. So this is already some kind of integration. Before, if you ran the ray tracer, you would get a completely converged image without any noise, without any problems; that image you could consider done. This is a speciality of ray tracers, and it will not be so with global illumination. But if you have such an effect, then you may have to wait while more and more of these samples are computed, and the more samples, the smoother the image you get. We will talk about this effect extensively; it's going to be very important. And just an important question: what kind of material model can this be? Obviously, this is some quick, perhaps OpenGL preview, but it's very apparent what I see in here. What kind of shading is this? Is it specular? No, there are definitely no specular highlights here. What else? Yes, exactly: this is the Lambertian model. You can also see the effect that it goes completely black at these grazing angles.
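The averaging procedure described above can be sketched like this. Everything here is illustrative: `trace` stands in for the renderer's radiance query, and the uniform square jitter is a simplification (a real lens sample would typically be drawn from a disk).

```cpp
#include <array>
#include <functional>
#include <random>

using Vec3 = std::array<double, 3>;

// Depth of field by averaging: instead of one ray through the pixel
// midpoint, shoot many rays from jittered origins through the same
// focal point and average the returned radiance samples. Objects at
// the focal distance stay sharp; everything else gets blurred.
double depthOfFieldSample(const Vec3& pixelCenter, const Vec3& focalPoint,
                          double lensRadius, int sampleCount,
                          const std::function<double(Vec3, Vec3)>& trace) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> jitter(-lensRadius, lensRadius);
    double sum = 0.0;
    for (int i = 0; i < sampleCount; ++i) {
        // Jitter the ray origin; all rays still pass through focalPoint.
        Vec3 origin = {pixelCenter[0] + jitter(rng),
                       pixelCenter[1] + jitter(rng),
                       pixelCenter[2]};
        sum += trace(origin, focalPoint);
    }
    return sum / sampleCount;  // Monte Carlo average of the samples
}
```

With more samples the average converges and the noise fades away, which is exactly the waiting the lecture mentions.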
[{"start": 0.0, "end": 4.0, "text": " How was the Easter break?"}, {"start": 4.0, "end": 7.0, "text": " With nice, too short."}, {"start": 7.0, "end": 8.0, "text": " Come on."}, {"start": 8.0, "end": 10.0, "text": " We discussed it already."}, {"start": 10.0, "end": 13.0, "text": " But what would be an appropriate length?"}, {"start": 13.0, "end": 14.0, "text": " For it."}, {"start": 14.0, "end": 16.0, "text": " It's a big day for this one."}, {"start": 16.0, "end": 17.0, "text": " Bigger two weeks."}, {"start": 17.0, "end": 18.0, "text": " Bigger weeks."}, {"start": 18.0, "end": 21.0, "text": " Maybe the entire curriculum."}, {"start": 21.0, "end": 22.0, "text": " Yes."}, {"start": 22.0, "end": 24.0, "text": " And just extended every time that."}, {"start": 24.0, "end": 27.0, "text": " So every time by just two more weeks or something."}, {"start": 27.0, "end": 28.0, "text": " That's not new."}, {"start": 28.0, "end": 29.0, "text": " Yeah."}, {"start": 29.0, "end": 38.0, "text": " Well, I don't know about Austria, but in Hungary people usually go to their friends."}, {"start": 38.0, "end": 42.0, "text": " You know, many people are coming over and we are hanging out with other people."}, {"start": 42.0, "end": 45.0, "text": " And you always have to drink their stuff."}, {"start": 45.0, "end": 51.0, "text": " So you go there and we have this drink that's called the Piling Cup."}, {"start": 51.0, "end": 55.0, "text": " It's something like a snub, but way stronger."}, {"start": 55.0, "end": 59.0, "text": " And I told this to some Austrian people and they were like,"}, {"start": 59.0, "end": 61.0, "text": " Oh my God, stronger than the slubs."}, {"start": 61.0, "end": 63.0, "text": " How can that be?"}, {"start": 63.0, "end": 66.0, "text": " Very easily, at least the hundreds."}, {"start": 66.0, "end": 68.0, "text": " So that's how it works."}, {"start": 68.0, "end": 73.0, "text": " And you go to the very first place and you have to drink from there home room,"}, 
{"start": 73.0, "end": 75.0, "text": " old fool, Piling Cup."}, {"start": 75.0, "end": 77.0, "text": " Usually it's very old fool."}, {"start": 77.0, "end": 80.0, "text": " And you even have to say something to the about it."}, {"start": 80.0, "end": 82.0, "text": " So you drink it, that is it."}, {"start": 82.0, "end": 86.0, "text": " But you have to say something good about this because they are looking at you."}, {"start": 86.0, "end": 88.0, "text": " What will be the reaction?"}, {"start": 88.0, "end": 94.0, "text": " So you say that it puts, that's really strong."}, {"start": 94.0, "end": 97.0, "text": " And most people seem to be satisfied with that."}, {"start": 97.0, "end": 99.0, "text": " So this is usually what I say."}, {"start": 99.0, "end": 103.0, "text": " But then when you are at like the fifth station for 10 days,"}, {"start": 103.0, "end": 108.0, "text": " and some people just don't take no for an answer, unfortunately."}, {"start": 108.0, "end": 114.0, "text": " So this is how it goes. 
Is it any better in Austria?"}, {"start": 114.0, "end": 117.0, "text": " It's more family politics."}, {"start": 117.0, "end": 118.0, "text": " Okay."}, {"start": 118.0, "end": 122.0, "text": " Okay, yeah, the family part is actually the nice part."}, {"start": 122.0, "end": 126.0, "text": " So we can decide if we drink our own stuff or not."}, {"start": 126.0, "end": 131.0, "text": " But I mean, my fiance's grandfather attempted to make some brew,"}, {"start": 131.0, "end": 133.0, "text": " some harlequat home."}, {"start": 133.0, "end": 135.0, "text": " And well, he did something."}, {"start": 135.0, "end": 139.0, "text": " Okay, so I mean, something was created in the process."}, {"start": 139.0, "end": 146.0, "text": " But after tasting it, even the postman didn't want to drink it."}, {"start": 146.0, "end": 154.0, "text": " I don't know about postman in Austria, but in Hungary they are really hardly people."}, {"start": 154.0, "end": 157.0, "text": " So they drink whatever they find."}, {"start": 157.0, "end": 161.0, "text": " Because obviously you don't give the good stuff for the postman."}, {"start": 161.0, "end": 166.0, "text": " You give them the leftovers. 
It's like no one drank, oh, it's really good for the postman."}, {"start": 166.0, "end": 168.0, "text": " And he's happy with that."}, {"start": 168.0, "end": 171.0, "text": " Imagine that postman drank."}, {"start": 171.0, "end": 174.0, "text": " Why are you doing postman?"}, {"start": 174.0, "end": 177.0, "text": " It's kind of weird here."}, {"start": 177.0, "end": 180.0, "text": " Yeah, well, they are..."}, {"start": 180.0, "end": 185.0, "text": " They seem to like alcohol according to my experiences."}, {"start": 185.0, "end": 190.0, "text": " And even the postman didn't want to drink that anymore."}, {"start": 190.0, "end": 193.0, "text": " And next time we have seen him around the house."}, {"start": 193.0, "end": 198.0, "text": " And he just came in front of the main door."}, {"start": 198.0, "end": 202.0, "text": " And we wave to him, hey, come, we have some for you."}, {"start": 202.0, "end": 205.0, "text": " No, no, no, I'm just going to put them in here."}, {"start": 205.0, "end": 208.0, "text": " Immediately."}, {"start": 208.0, "end": 213.0, "text": " Okay, so regarding the assignments, you guys and girls have done really well."}, {"start": 213.0, "end": 215.0, "text": " So I'm very happy to see that."}, {"start": 215.0, "end": 219.0, "text": " People realize that there is some exponentiality with respect to the depth of the city."}, {"start": 219.0, "end": 221.0, "text": " And that's of the simulation."}, {"start": 221.0, "end": 226.0, "text": " So the more people go, then the more exponential things become."}, {"start": 226.0, "end": 232.0, "text": " But this is after like, I mean, it's exponential all along, but you don't know this because it starts off slow."}, {"start": 232.0, "end": 242.0, "text": " But after like 10 to 15 bounces, you can see a very telling characteristic of this exponential distribution."}, {"start": 242.0, "end": 248.0, "text": " And many of you have recognized correctly that this is because a reflection and reflection are 
sampled."}, {"start": 248.0, "end": 249.0, "text": " All the time."}, {"start": 249.0, "end": 255.0, "text": " So whenever I have a bounce, I am going to compute an intersection."}, {"start": 255.0, "end": 262.0, "text": " And then there's going to be two rays, perhaps, that continue their way because one is going to be the reflection direction."}, {"start": 262.0, "end": 264.0, "text": " And one is with reflection."}, {"start": 264.0, "end": 272.0, "text": " And this quickly gets out of hand because wherever ray you have two more, that's the definition of something that is exponential."}, {"start": 272.0, "end": 274.0, "text": " So well done."}, {"start": 274.0, "end": 285.0, "text": " Let's proceed a bit and I'm going to talk just a bit about some advanced, a bit more advanced BRDF models that are mostly used with ray-tracing."}, {"start": 285.0, "end": 289.0, "text": " So you remember this combustion BRDF that you see on the right."}, {"start": 289.0, "end": 292.0, "text": " And this is the scalar product between L and then."}, {"start": 292.0, "end": 294.0, "text": " So light factor and the normal."}, {"start": 294.0, "end": 300.0, "text": " And obviously you can scale this with KD, which is some kind of diffuser video."}, {"start": 300.0, "end": 308.0, "text": " Now if you put it next to a really image of the diffuser material, obviously it is a question, you know, what do we call the diffuser material?"}, {"start": 308.0, "end": 310.0, "text": " Or where was this for the BRDF?"}, {"start": 310.0, "end": 311.0, "text": " How exactly?"}, {"start": 311.0, "end": 313.0, "text": " But let's disregard this."}, {"start": 313.0, "end": 317.0, "text": " And let's accept that we have this difference between the two."}, {"start": 317.0, "end": 329.0, "text": " And if you take a good look, then it becomes apparent that in grazing angles, the simulated diffuser material seems to be completed on."}, {"start": 329.0, "end": 341.0, "text": " And if you take a look at the 
formula up there, then this is a self-explanatory because the normal and the normal and the light direction can get perpendicular."}, {"start": 341.0, "end": 345.0, "text": " And then you will see this darkness."}, {"start": 345.0, "end": 352.0, "text": " So there are some advanced BRDF models that try to be a bit more realistic in this regard."}, {"start": 352.0, "end": 359.0, "text": " Such example is the ORN-AIR model, which is much closer to what you would measure in real life."}, {"start": 359.0, "end": 366.0, "text": " But let's note that all of these simplified BRDF models, these are all tags."}, {"start": 366.0, "end": 369.0, "text": " This is not what physical reality is."}, {"start": 369.0, "end": 376.0, "text": " People write up the actual equations that relate to physical reality and try to simplify in a way that a simple ray-facer can capture."}, {"start": 376.0, "end": 384.0, "text": " We are going to talk about global effects and what a real diffused material looks like in a few minutes."}, {"start": 384.0, "end": 388.0, "text": " So this ORN-AIR model seems much better."}, {"start": 388.0, "end": 395.0, "text": " And what's even more, it can take into consideration these microscopic imperfections in different materials."}, {"start": 395.0, "end": 404.0, "text": " And you can get a roughness parameter that can model these imperfections."}, {"start": 404.0, "end": 409.0, "text": " What about specular models?"}, {"start": 409.0, "end": 416.0, "text": " Well, the form model, V.R, that we have talked about, is not the only way to do it."}, {"start": 416.0, "end": 424.0, "text": " There is also the form-plane model, which is a more advanced model, and uses this H, this half vector between L and V."}, {"start": 424.0, "end": 426.0, "text": " And it produces different results."}, {"start": 426.0, "end": 431.0, "text": " I think this image is maybe not the best because yes, the highlights are different."}, {"start": 431.0, "end": 440.0, "text": " One of the 
main advantages of this material model is that the shape of the specular reflections can get a bit more elliptic,"}, {"start": 440.0, "end": 444.0, "text": " depending on the viewing direction and the surroundings."}, {"start": 444.0, "end": 448.0, "text": " So here you have the very same circular thing."}, {"start": 448.0, "end": 451.0, "text": " So it's not the best example, but you can see that it's different."}, {"start": 451.0, "end": 453.0, "text": " It looks more realistic."}, {"start": 453.0, "end": 458.0, "text": " And we still have to think about the fact that these are still really good models."}, {"start": 458.0, "end": 463.0, "text": " But these are still the X."}, {"start": 463.0, "end": 466.0, "text": " There's also the Cooctorian model that's basically foam-plane."}, {"start": 466.0, "end": 470.0, "text": " That can model also microscopic roughness."}, {"start": 470.0, "end": 479.0, "text": " And here, maybe with the projector, maybe it's not so visible, but you can see that the specular reflection here is a bit more easier."}, {"start": 479.0, "end": 490.0, "text": " So it's not a completely round sphere, it's not a perfect sphere, there are these small imperfections that are characteristic to viewing those materials."}, {"start": 490.0, "end": 493.0, "text": " So this is what this model can capture."}, {"start": 493.0, "end": 504.0, "text": " And there are some other advanced BRDF models, some of which are more easy to understand that implement that it is to pronounce the name of the authors of the BRDF models."}, {"start": 504.0, "end": 507.0, "text": " This is one of those examples."}, {"start": 507.0, "end": 518.0, "text": " And this is some kind of multilayer model where you have a diffused substrate and you have like a coating, a specular coating."}, {"start": 518.0, "end": 526.0, "text": " So there are also BRDFs for car paint, where you can have these sparkly effects."}, {"start": 526.0, "end": 533.0, "text": " And there are many BRDF 
models that capture given effects."}, {"start": 533.0, "end": 541.0, "text": " Okay, what if one would like to play with these? The Disney guys have implemented this program called the BRDF Explorer."}, {"start": 541.0, "end": 552.0, "text": " And you can load many local BRDF models and change the light source positions, look at the actual BRDFs and the impulse responses."}, {"start": 552.0, "end": 558.0, "text": " Give it a try."}, {"start": 558.0, "end": 563.0, "text": " So we have always been talking about cameras. So we are trying to model real world cameras."}, {"start": 563.0, "end": 568.0, "text": " If you have an handheld camera, you will see a setting that's called the F-stop."}, {"start": 568.0, "end": 572.0, "text": " And the F-stop is related to the size of the aperture."}, {"start": 572.0, "end": 577.0, "text": " The aperture is the opening of the camera where the light goes in."}, {"start": 577.0, "end": 580.0, "text": " And you can set this to different values."}, {"start": 580.0, "end": 589.0, "text": " And you will notice that if you set this F-stop to a high value, then the aperture of the camera is going to become smaller."}, {"start": 589.0, "end": 592.0, "text": " And if it's smaller, then this means that less light is left in."}, {"start": 592.0, "end": 598.0, "text": " And more of the image that you get is going to be in focus."}, {"start": 598.0, "end": 603.0, "text": " And vice versa. So if you have a low F-stop setting, then you will have a bigger aperture."}, {"start": 603.0, "end": 607.0, "text": " More light is left in. And more regions will be out of focus."}, {"start": 607.0, "end": 614.0, "text": " And what gives you this depth of field effect. 
Because whatever images we have seen and created with ray tracer yet,"}, {"start": 614.0, "end": 617.0, "text": " don't have this depth of field effect."}, {"start": 617.0, "end": 621.0, "text": " But if you use a handheld camera, then somehow you have to model this effect as well."}, {"start": 621.0, "end": 625.0, "text": " Because this is how an image will be created in a real world."}, {"start": 625.0, "end": 630.0, "text": " So this is a nice chart made by Photograph First"}, {"start": 630.0, "end": 641.0, "text": " to see how exactly these F-stops will lead to an aperture size, what are the typical settings and all these interesting things."}, {"start": 641.0, "end": 643.0, "text": " And an actual example."}, {"start": 643.0, "end": 649.0, "text": " So let's take a look at the bottom right here."}, {"start": 649.0, "end": 653.0, "text": " You can see that the whole image is in focus."}, {"start": 653.0, "end": 658.0, "text": " And as you adjust the F-stop accordingly, you can see the top left."}, {"start": 658.0, "end": 663.0, "text": " And see here immediately that the background is heavily blurred."}, {"start": 663.0, "end": 667.0, "text": " So this is a more pronounced depth of field effect."}, {"start": 667.0, "end": 671.0, "text": " It would be wonderful to have a ray tracer that can take this effect into account."}, {"start": 671.0, "end": 678.0, "text": " And this is another maybe a bit more beautiful and a bit more visible example."}, {"start": 678.0, "end": 683.0, "text": " So the left side you can see a very pronounced depth of field effect on the right."}, {"start": 683.0, "end": 685.0, "text": " And it's close to everything is in focus."}, {"start": 685.0, "end": 690.0, "text": " Yes. 
Do you also use the word OK for computer graphics?"}, {"start": 690.0, "end": 691.0, "text": " Yes."}, {"start": 691.0, "end": 697.0, "text": " And there is also a bunch of papers on how to simulate this effect."}, {"start": 697.0, "end": 700.0, "text": " So people even try to compute this in real time."}, {"start": 700.0, "end": 704.0, "text": " So you have like a computer game, you would like to see this OK effect."}, {"start": 704.0, "end": 705.0, "text": " How do you do this?"}, {"start": 705.0, "end": 708.0, "text": " And you have to take into consideration the depth."}, {"start": 708.0, "end": 714.0, "text": " If you know what objects are wearing exactly how far away, then you can do a bunch of tricks."}, {"start": 714.0, "end": 721.0, "text": " To get something like that in the approximation real time."}, {"start": 721.0, "end": 724.0, "text": " And if you do this something that I'm going to show you in a second,"}, {"start": 724.0, "end": 727.0, "text": " then you will have the very same effect in your ray tracer from the left."}, {"start": 727.0, "end": 730.0, "text": " Completely only focus image on the right."}, {"start": 730.0, "end": 735.0, "text": " You can see again the depth of field effect, especially in the background."}, {"start": 735.0, "end": 737.0, "text": " The further away you go, the more you see."}, {"start": 737.0, "end": 739.0, "text": " So how do we do this?"}, {"start": 739.0, "end": 740.0, "text": " Very simple."}, {"start": 740.0, "end": 747.0, "text": " Let's skip the text and I just put this here because people who read this at home would know about this."}, {"start": 747.0, "end": 755.0, "text": " So mostly most of what we do is we shoot a ray through the midpoint of the pixel in our ray tracer."}, {"start": 755.0, "end": 759.0, "text": " And this is going to touch this focal point and then hit an object."}, {"start": 759.0, "end": 763.0, "text": " What we can do is that we would also take samples from nearby."}, {"start": 763.0, 
"end": 767.0, "text": " So not only this pixel and not only the midpoint, but from nearby."}, {"start": 767.0, "end": 772.0, "text": " And shoot all of these where is through the same focal point."}, {"start": 772.0, "end": 777.0, "text": " And compute the very same samples, but these samples will be nearby."}, {"start": 777.0, "end": 780.0, "text": " What we do with these samples is we average them."}, {"start": 780.0, "end": 785.0, "text": " And this is what is going to give you this depth of field effect."}, {"start": 785.0, "end": 788.0, "text": " So this is already some kind of integration."}, {"start": 788.0, "end": 791.0, "text": " This means that, which if you run a ray tracer,"}, {"start": 791.0, "end": 798.0, "text": " you are going to get a completely converged image without any noise, without any problems."}, {"start": 798.0, "end": 801.0, "text": " That image you can consider done."}, {"start": 801.0, "end": 803.0, "text": " But this will not be solved with completely illumination."}, {"start": 803.0, "end": 806.0, "text": " This is a speciality of ray tracers."}, {"start": 806.0, "end": 815.0, "text": " But if you have such an effect, then you may have to wait until more and more of these samples are computed and the more smooth image you move."}, {"start": 815.0, "end": 818.0, "text": " But we will talk about this effect extensively."}, {"start": 818.0, "end": 820.0, "text": " It's going to be very important."}, {"start": 820.0, "end": 824.0, "text": " And just an important question, what kind of material model can this be?"}, {"start": 824.0, "end": 828.0, "text": " Obviously, this is some quick perhaps open gm preview,"}, {"start": 828.0, "end": 833.0, "text": " but it's very apparent what I see in here."}, {"start": 833.0, "end": 840.0, "text": " What kind of shading is this?"}, {"start": 840.0, "end": 842.0, "text": " Is it spectacular?"}, {"start": 842.0, "end": 843.0, "text": " No."}, {"start": 843.0, "end": 845.0, "text": " These are 
definitely not here."}, {"start": 845.0, "end": 847.0, "text": " What else?"}, {"start": 847.0, "end": 850.0, "text": " Yes, exactly."}, {"start": 850.0, "end": 852.0, "text": " So this is the Lumberian model."}, {"start": 852.0, "end": 880.0, "text": " You can also see this effect that it goes completely black in this facing angles."}]
Two Minute Papers
https://www.youtube.com/watch?v=WFOjJR3nWyQ
TU Wien Rendering #12 - Assignment 1
The assignment file is available under the assignments section, around the last slide in the linked ppt: https://www.cg.tuwien.ac.at/courses/Rendering/VU.SS2019.html After the toy 0th assignment that unfortunately wasn't recorded, here is the first handout. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → https://www.cg.tuwien.ac.at/courses/Rendering/VU.SS2019.html Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
So let's talk about the assignment. We are going to play with Paul Heckbert's business card ray tracer. This is going to be an enlightening experience, because it says it's a business card ray tracer, which sounds really good: the whole code can fit on a business card of yours. Well, you will see that this does not mean that it is well suited for educational purposes; it's not very easy to understand what is happening in there. In the package, I have also included a version that's a bit more eye-friendly. And when we get to talking about global illumination, we're going to use a simple global illumination program called smallpaint, which I wrote myself in a way that would maximize both understandability and conciseness. So here is the first part, which is not expected to be so convenient, and then the next global illumination program will be much more convenient. To compile this, use whatever tool you wish; I'm not going to confine you to any given tool. This is a one, maybe two file ray tracer. And in the zip file, as I will show you, I have put a compiler for Windows which is called MinGW. This is a port of different Unix compilers to Windows, and they work quite well. So you can use this if you are a Windows user and you don't like Visual Studio, don't have Visual Studio, or would like to try GCC on your Windows. If you would like to use something else, that's fine. And if you're a Linux guy, you're a power user anyway, so you can deal with that yourself; but if you need help, just write to me. So what is the practical part? Well, get this file — there are going to be instructions in the readme — and let's make images with different maximum depth values. This means: how many bounces are we computing? Do this from one to five bounces and see the difference in the output images. The question is: what did you experience visually, and why?
There's going to be an observations text file, and this is where you need to write these things — but I'm going to discuss this on the next slide. And what I would also like you to do is to crank this depth variable up to a really large number and see what happens, because maybe interesting things will happen. The question is: what will be the dependence of the runtime, the execution time of the algorithm, with respect to this depth variable? Does it abruptly change at a given point somewhere, or not? And, as usual, there is a set of questions for pros. If you feel like a pro, you should definitely answer these. Sometimes these are really difficult questions, sometimes not so much. When I first held this course, I thought that maybe 10 to 20% of the people would want to try them, because they are really interesting exercises. And to my surprise, almost 70 or 80% of the people in the first two years of this course did all the pro exercises. Some of them even came up with exercises of their own, because they thought: yeah, this is so much fun — I changed the code this way, and look what I got, what happened there. And if you find out something amazing, then show it to me, so I can also marvel at it. So, the set of questions for pros: these give plus points for the exam, and if you don't do them, you can still get the maximum amount of points for the assignment. Take a look and try to do them, because it's a really interesting journey. The first is: what would be the algorithmic complexity with respect to this depth variable? This means that I can not only measure the execution time of the algorithm with respect to this depth, but I can also write up the complexity of the algorithm in big-O notation. Big-O notation is something that tells you the complexity of algorithms with respect to their variables. Let's go back to algorithms and data structures: Dijkstra's algorithm, if I remember correctly, is of quadratic complexity.
So — is it? I think it is. So it's really favorable. It means that if I have a larger city where I need to find the best route between two points, then the runtime of the algorithm is not going to grow so much: if it's n squared, then if I double the size of the city, the algorithm is going to run four times as long. So I would like to know the complexity of this algorithm that you have, with respect to the depth, in this big-O notation. The second pro question is: what could we do to make this more favorable? Whatever weird examples and ideas are welcome. And if you make some change, then tell me the new complexity of the algorithm. Back in the regular set of questions: play with the FOV variable. It's very easy to find out what it does. The question is, again, what did you experience and why? And just a note that there is a, I think, more readable version of the same C++ code in the zip file. The format of the table that I would like to see is that we take the different depth values, the maximum number of bounces, and I would like to know the execution time in seconds. This is a text file, but after filling in such a text file, I would like you to plot it with whatever tool you have — I don't mind. If you like gnuplot, use that; if you like Wolfram Alpha or whatever else, I don't mind. And please put a PNG file of the plot also in your solutions. And one more thing: a set of light paths is waiting for you. Please draw a camera on this image, where it is exactly, and please denote what kind of light paths I have here. Also tell me a few words about whether I see these light paths or not. For instance, I definitely see a light path that is LE, because I can see the light source: the ray that connects the eye to the light source is definitely accounted for, because I can see the light source. So what about the other light paths? Save it as a PNG or JPEG file.
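Coming back to the pro question about the complexity in the depth variable, here is a back-of-the-envelope sketch — my own hedged illustration, not the official solution. If every bounce spawns both a reflection and a refraction ray, a single camera ray grows into a binary tree of rays:

```cpp
#include <cstdint>

// Upper bound on the number of rays traced for one pixel when every
// bounce spawns two children (reflection + refraction): the full
// binary tree has 1 + 2 + 4 + ... + 2^depth = 2^(depth+1) - 1 nodes,
// so the runtime grows as O(2^depth).
std::uint64_t maxRayCount(int depth) {
    return (std::uint64_t{1} << (depth + 1)) - 1;
}
```

This is why the runtime table in the assignment should show the characteristic slow start followed by an explosion around 10 to 15 bounces.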
And this is just the names of the different files that I would like to see in your submission. And this is how the submission itself should be named. And about the deadline, I don't know yet. Apparently, Easter is coming. And when I first held this course, I told the other people that, well, next week there's going to be another lecture. And they said, well, not really, because there's the Easter break. And I come from Hungary, where the Easter break is one Monday. So it means that on Monday you get drunk and on Tuesday you go back to work with the hangover. And then they told me that it's not only Monday. So the next Wednesday is going to be skipped because of the Easter break. And I said, well, maybe they are fooling me, but maybe it's true, I don't know. Okay, then the Wednesday after that. And they said, uh-uh, not even that. And I was like, surely I'm being trolled by like 20 people at the same time. And then they told me that the Easter break in Austria is two weeks, at least in the universities. And I was like, that's amazing, because I'm wondering: on that one Monday of the Easter break, everyone is drunk. It's ridiculous. Like the whole city goes crazy. And in Austria, I imagine the same may happen, but for two weeks. It's an amazing country. Thanks for your attention and see you sometime. I will announce when the next lecture happens. Thank you.
[{"start": 0.0, "end": 5.0, "text": " So let's talk about the assignment."}, {"start": 5.0, "end": 9.0, "text": " We are going to play with Hall-Hackford's Business Card Rate Tracer."}, {"start": 9.0, "end": 14.0, "text": " This is going to be an enlightening experience because it says it's a Business Card Rate Tracer."}, {"start": 14.0, "end": 20.0, "text": " This sounds really good because the whole code can fit on the Business Card of yours."}, {"start": 20.0, "end": 26.0, "text": " Well, you will see that this does not mean that this is well suited for educational purposes."}, {"start": 26.0, "end": 30.0, "text": " So it's not very easy to understand what is happening in there."}, {"start": 30.0, "end": 35.0, "text": " In the package, I have also included a version that's a bit more eye-friendly."}, {"start": 35.0, "end": 39.0, "text": " But when we are going to be talking about global illumination,"}, {"start": 39.0, "end": 44.0, "text": " we're going to use a simple global illumination program."}, {"start": 44.0, "end": 55.0, "text": " And it's called Small Paint and I wrote it myself in a way that it would maximize both understandability and conciseness."}, {"start": 55.0, "end": 63.0, "text": " So here is the first part which is not expected to be so convenient."}, {"start": 63.0, "end": 67.0, "text": " And then the next global illumination program will be much more convenient."}, {"start": 67.0, "end": 71.0, "text": " So to compile this, you use whatever tool you wish."}, {"start": 71.0, "end": 76.0, "text": " I'm not going to confine you to use any given tool."}, {"start": 76.0, "end": 80.0, "text": " This is a one or maybe two free file rate tracer."}, {"start": 80.0, "end": 88.0, "text": " And in the zip file, I will show you I have put a compiler for Windows,"}, {"start": 88.0, "end": 90.0, "text": " which is called the mean GW."}, {"start": 90.0, "end": 96.0, "text": " This is a port of different Unix compilers to Windows and they work quite well."}, 
{"start": 96.0, "end": 97.0, "text": " So you can use this."}, {"start": 97.0, "end": 100.0, "text": " If you are a Windows user and you don't like Visual Studio,"}, {"start": 100.0, "end": 105.0, "text": " you don't have Visual Studio or you would like to try a GCC on your Windows that you can use this."}, {"start": 105.0, "end": 107.0, "text": " If you would like to use something else, you're fine."}, {"start": 107.0, "end": 112.0, "text": " And if you're a Linux guy, you're a power user anyway, so you can deal with that yourself."}, {"start": 112.0, "end": 115.0, "text": " But if you need help, just write to me."}, {"start": 115.0, "end": 117.0, "text": " So what is the practical part?"}, {"start": 117.0, "end": 123.0, "text": " Well, get this file and there's going to be instructions on the readme."}, {"start": 123.0, "end": 127.0, "text": " And let's make images with different maximum depth values."}, {"start": 127.0, "end": 131.0, "text": " This means how many bounces are we computing?"}, {"start": 131.0, "end": 138.0, "text": " And do this from one to five ounces and see the difference in the output images."}, {"start": 138.0, "end": 141.0, "text": " The question is, what did you experience visually and why?"}, {"start": 141.0, "end": 146.0, "text": " There's going to be an observations text file and this is where you need to write these stuff."}, {"start": 146.0, "end": 149.0, "text": " But I'm going to discuss this on the next slide."}, {"start": 149.0, "end": 156.0, "text": " And what I would like you to do is to cram out the depth that are you to a really large number and see what happens."}, {"start": 156.0, "end": 161.0, "text": " Because maybe interesting things will happen."}, {"start": 161.0, "end": 167.0, "text": " And the question is, what will be the dependence of the runtime, the execution time of the algorithm,"}, {"start": 167.0, "end": 170.0, "text": " with respect to this depth variable?"}, {"start": 170.0, "end": 180.0, "text": " And does 
it abruptly change at a given point, or somewhere, or not?"}, {"start": 180.0, "end": 183.0, "text": " And usually there is a set of questions for pros."}, {"start": 183.0, "end": 187.0, "text": " So if you feel like a pro, you should definitely answer these."}, {"start": 187.0, "end": 189.0, "text": " Sometimes these are really difficult questions."}, {"start": 189.0, "end": 191.0, "text": " Sometimes not so much."}, {"start": 191.0, "end": 199.0, "text": " And when I first had this course, I thought that maybe 10 to 20% of the people did want to try,"}, {"start": 199.0, "end": 202.0, "text": " because they are really interesting exercises."}, {"start": 202.0, "end": 212.0, "text": " And to my surprise, almost 70 or 80% of the people in the first two years of this course,"}, {"start": 212.0, "end": 216.0, "text": " did all the pro exercises."}, {"start": 216.0, "end": 221.0, "text": " And some of them even came up with exercises on their own, because they thought that,"}, {"start": 221.0, "end": 223.0, "text": " yeah, this is so much fun."}, {"start": 223.0, "end": 226.0, "text": " I changed the code this way with what I got, what happened there."}, {"start": 226.0, "end": 230.0, "text": " And if you find out something amazing, then show it to me."}, {"start": 230.0, "end": 232.0, "text": " So I can also marvel at that."}, {"start": 232.0, "end": 234.0, "text": " So set of questions for pros."}, {"start": 234.0, "end": 236.0, "text": " This is plus points for the exam."}, {"start": 236.0, "end": 241.0, "text": " If you don't do them, you still can get the maximum amount of points for the assignment."}, {"start": 241.0, "end": 247.0, "text": " Let's take a look and try to do them, because it's a really interesting journey."}, {"start": 247.0, "end": 252.0, "text": " The first is, what would be the algorithmic complexity of this step variable?"}, {"start": 252.0, "end": 259.0, "text": " So this means that I cannot only vary the execution part of the algorithm with 
respect to this step,"}, {"start": 259.0, "end": 265.0, "text": " but I can also write up the complexity of the algorithm with the bigger notation."}, {"start": 265.0, "end": 271.0, "text": " The bigger notation, this is something that tells you the complexity of algorithms with respect to variables."}, {"start": 271.0, "end": 277.0, "text": " So if I change, let's go back to algorithms and data structures."}, {"start": 277.0, "end": 283.0, "text": " First is, dice-wise algorithm, if I remember correctly, it's of quadratic complexity."}, {"start": 283.0, "end": 285.0, "text": " So if I have, is it?"}, {"start": 285.0, "end": 286.0, "text": " I think it is."}, {"start": 286.0, "end": 288.0, "text": " So it's really favorable."}, {"start": 288.0, "end": 296.0, "text": " So it means that if I have a big larger city where I need to find the best route between two points,"}, {"start": 296.0, "end": 301.0, "text": " then the complexity of the algorithm is not going to be raised so much."}, {"start": 301.0, "end": 308.0, "text": " So this means that if it's n squared, means that if I have double the size of the city,"}, {"start": 308.0, "end": 311.0, "text": " then the algorithm is going to run four times as long."}, {"start": 311.0, "end": 316.0, "text": " So I would like to know the complexity of this algorithm that you have."}, {"start": 316.0, "end": 319.0, "text": " What do you respect to this big old notation?"}, {"start": 319.0, "end": 325.0, "text": " Second, four question is, what could we do to make this more favorable,"}, {"start": 325.0, "end": 329.0, "text": " whatever weird examples and ideas go?"}, {"start": 329.0, "end": 335.0, "text": " And if you did some change, then tell me the new complexity of the algorithm."}, {"start": 335.0, "end": 339.0, "text": " And a regular set of questions play with the AOV variable."}, {"start": 339.0, "end": 341.0, "text": " It's very easy to find out what it does already."}, {"start": 341.0, "end": 347.0, "text": " The 
question is, what did you experience and why? And just a note that there is a, I think,"}, {"start": 347.0, "end": 354.0, "text": " more readable version of the same C++ code in the Z5."}, {"start": 354.0, "end": 362.0, "text": " The format of the table that I would like to see is that we take the different numbers for depth values,"}, {"start": 362.0, "end": 367.0, "text": " maximum amount of bounces, and I would like to know the execution time in seconds."}, {"start": 367.0, "end": 373.0, "text": " And this is a text file. But after filling such a text file,"}, {"start": 373.0, "end": 376.0, "text": " I would like you to plot this with whatever tool you have."}, {"start": 376.0, "end": 380.0, "text": " I don't mind. If you like new plot use that."}, {"start": 380.0, "end": 384.0, "text": " If you like more from alpha or not, what ever. I don't mind."}, {"start": 384.0, "end": 392.0, "text": " And please put a PNG file of the plot also in your solutions."}, {"start": 392.0, "end": 398.0, "text": " And one more set of light paths."}, {"start": 398.0, "end": 405.0, "text": " This is waiting for you. So please draw a camera on this image, where it is exactly."}, {"start": 405.0, "end": 410.0, "text": " And please denote what kind of light paths do I have here?"}, {"start": 410.0, "end": 415.0, "text": " And please tell me a few words about whether I see these light paths or not."}, {"start": 415.0, "end": 419.0, "text": " So for instance, I definitely see a light path that is LV,"}, {"start": 419.0, "end": 426.0, "text": " because I can see the light source. So the ray that connects the eye to the light source is definitely accounted for,"}, {"start": 426.0, "end": 432.0, "text": " because I can see the light source. 
So what about the other light paths?"}, {"start": 432.0, "end": 436.0, "text": " And save it as a PNG or JPEG file."}, {"start": 436.0, "end": 443.0, "text": " And this is just the names of the different files that I would like to see in your submission."}, {"start": 443.0, "end": 447.0, "text": " And this is how the submission itself should be named."}, {"start": 447.0, "end": 451.0, "text": " And about the deadline, I don't know yet."}, {"start": 451.0, "end": 456.0, "text": " Apparently, Easter is coming."}, {"start": 456.0, "end": 460.0, "text": " And when I first had this course, I told the other people that,"}, {"start": 460.0, "end": 464.0, "text": " well, next week there's going to be another lecture."}, {"start": 464.0, "end": 468.0, "text": " And they said, well, not really, because there's Easter rig."}, {"start": 468.0, "end": 474.0, "text": " And I come from Hungary, where Easter rig is one Monday."}, {"start": 474.0, "end": 480.0, "text": " So it means that on Monday you get drunk and on Tuesday you go back to work with the hangover."}, {"start": 480.0, "end": 484.0, "text": " And then they told me that it's not only Monday."}, {"start": 484.0, "end": 488.0, "text": " So next Wednesday is going to be skipped because it's Easter rig."}, {"start": 488.0, "end": 492.0, "text": " And I said, well, maybe they are following me, but maybe it's true, I don't know."}, {"start": 492.0, "end": 497.0, "text": " Okay, then the Wednesday after that. And they said, uh-uh, not even that."}, {"start": 497.0, "end": 503.0, "text": " And I was like, I'm surely, I'm surely being trolled by like 20 people at the same time."}, {"start": 503.0, "end": 511.0, "text": " And then they told me that Easter rig in Austria is two weeks, at least in the universities."}, {"start": 511.0, "end": 518.0, "text": " And I was like, that's amazing because I'm wondering, on that Monday that you have this Easter rig,"}, {"start": 518.0, "end": 523.0, "text": " everyone is drunk. 
It's ridiculous. Like the whole city goes crazy."}, {"start": 523.0, "end": 531.0, "text": " And in Austria, I imagine that they're saying may happen, but for two weeks."}, {"start": 531.0, "end": 535.0, "text": " It's an amazing country. Thanks for your attention and see you sometime."}, {"start": 535.0, "end": 564.0, "text": " I will announce when the next lecture happens. Thank you."}]
Two Minute Papers
https://www.youtube.com/watch?v=Qgsos_kz6pM
TU Wien Rendering #11 - Recursion and Heckbert's Taxonomy
We now know how to intersect a ray with a scene and how to perform simple shading operations. However, this only means one bounce. In real life, rays of light bounce many more times than one. We handle this problem with recursion by reflecting the ray of light off of the surface and starting the process again as if the new surface were the camera. We also discuss Paul Heckbert's taxonomy to classify different light transport algorithms based on what kind of light paths they can compute. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Now, let's jump back to recursion. So if I would like to compute multiple bounces, I need to handle this somehow. And we have talked about this briefly. If I intersect the first object, I need to reflect the ray off of this object in some way. And after the shading is done, the diffuse, specular, ambient shading, then we can trace the light further. And this tracing step can be both reflection and refraction. You remember Snell's law and the Fresnel equations; we are going to put them to use in a second. But what I need to tell you is something super weird, though you won't feel yet why this is weird. So in a ray tracer, in a recursive ray tracer (not global illumination, not indirect illumination and these goodies that we are going to start with next lecture), if you encounter a mirror, an ideal specular reflector, you will bounce the ray back in the ideal reflection direction. So exactly what you would see in the mirror: if it comes in at 45 degrees, it goes out at 45 degrees. And you do the same with diffuse surfaces as well. So you continue the ray in the ideal reflection direction. And now this sounds reasonably okay. But when you study global illumination and how the whole thing is done, how real men compute the rendering equation and images, you will see that this is a super, super huge simplification. I remember the faces of students when they think back, when they know all about global illumination, and we talk about simple recursive ray tracing. And I ask them, how is this reflecting? And they sigh. Because they will know that in global illumination, it's going to be so natural that a diffuse surface reflects light in every direction with the very same probability. So the perfect reflection direction has the same probability as going back in the same direction from which the light entered the surface. They all have the same probability, all directions. And then suddenly our ray tracer says that even a diffuse object I'm going to treat as a mirror.
So this is going to be super weird. And please remember that I said this, later on, if you take a look at ray tracers after global illumination. Now, how does the recursion work? I hit something, I reflect away. So the ray follows the perfect reflection direction, and I'm going to restart the whole process. I'm going to start tracing a new ray that starts on the object. Remember, this is when you can get the self-intersection at t equals 0. I increment this ray depth value to show how many bounces I have computed so far. And I start again. So I start a new ray. I imagine that this object is now the camera, and this is where the ray is starting from. Is there a question? Okay. So we got everything. How does it look in terms of mathematics? This is the illumination equation without recursion. But now I need to add one more term. Let's quickly recap this. The first term is the ambient term. This is to warm up the completely black shadows. This is basically a bunch of hacks so that it looks good. So we are going to be okay with this for now. Then we scale the amount of incoming light with the material properties, so a diffuse and a specular shading model. These are weighted by kd and ks. These are values that tell you how diffuse or how specular this object is. And not only how specular it is, but what color does it absorb? What color does it reflect? So what is the color of the object? What is the diffuse and specular albedo of the object? I'm using so many terms not because I'm inconsistent, but because people use all of these terms, and therefore you should be familiar with them. And there's some weird stuff that I have now added. And this is the recursion part. So kt is the transmission coefficient. This is the probability of refraction. Because you remember that I can hit this air-glass interface from different directions. And if I hit it from different directions, then the probability of reflection and refraction is different.
So depending on the incoming direction, as you have seen, this laser is stronger in one direction than in the other. And we have to account for this with these transmission coefficients. And It is the intensity that is coming from that direction. The kr and Ir are the other way around. So if there is reflection, not refraction, then I'm going to go in that direction, and I'm going to scale this with the intensity of the incoming light from the reflection direction. A quick example: what if I have a glass that's blue? So some kind of glass that looks almost entirely blue. Then this transmission coefficient is going to describe a color that's blueish. Therefore, all the energy that comes through this ball is going to be colored blue. And the reflection coefficient can be whatever; we are now interested in the transmission. So this is how I can define materials like that. This is the recursion part. And for this, I may need to start multiple rays. So if I hit this object and I say that, hey, this is a transmissive object, this is glass, what do I do? Because there is a probability, usually a positive probability, for both reflection and refraction. Do I start two recursions? Do I start two new rays? What do I do exactly? In the assignment that I'm going to talk about, you will see a piece of code that does something, and then you will see the effect of something. I'm not going to spoil anything. And now just a quick introduction to Heckbert's notation. This is important because if you know this kind of notation, then you will be able to discuss what kind of ray tracing algorithm can render what kind of light paths. So as a status quo, all light paths go between light sources and the eye. If it doesn't hit the lens of the camera, it's not going to be recorded in the image. So every light path is going to be written as L, something, something, E. Or, as this is bidirectional, you can imagine it the other way around.
So you can say E, something, something, L. D denotes one diffuse interreflection along the way, S is one specular interreflection along the way. And the asterisk means any number of diffuse bounces, perhaps even zero. So LD*E means that either I hit the eye from the light immediately, or there is one diffuse bounce, or maybe an arbitrary number of diffuse bounces. This is what the asterisk tells you. And we can also denote a choice. The choice means that there is either one specular or one diffuse bounce. And with this small but powerful notation, we can discuss all the algorithms there are to render photorealistic quality images. For now, some of this will be intuitive, some of it not so intuitive, because we don't know global illumination yet. But first: ray casting means that we hit at most one diffuse object. That's all it can render; no recursion, nothing. I just hit one diffuse object, I do the diffuse shading, and goodbye. Radiosity can compute something like indirect illumination, because multiple diffuse bounces are possible. So remember the example: the light comes into the classroom through the window, hits the white wall, and then hits surfaces that are not lit directly. And therefore, those surfaces are not going to be perfectly black. This is called indirect illumination, and radiosity has got that covered. Recursive ray tracing is what we are doing for now, with this transmission and reflection thing. What we know is that indirect illumination is definitely not possible, because we treat a diffuse object also as a mirror; we just use a different shading function for it. So we don't trace rays all along the hemisphere of the object, even though it collects light from every direction. This is why its appearance doesn't change if I move my head; this is why the sight of it doesn't change. But we cannot account for that. This would be a high-dimensional integration problem that we are going to solve with the rendering equation.
So, at most one diffuse bounce, but you may have as many specular bounces as you need. This is why recursive ray tracers usually show you mirrors and glass balls and reflective things like that: that is what they are capable of rendering, but not so much more. And global illumination, that's the full package: an arbitrary number of diffuse or specular bounces. These can also be glossy, whatever kind of complicated material model you have; the D and S can be anything and in any amount. Well, let's take a look at an example with Heckbert's notation. So here we have light paths, and they start out from the light source. On the right I have something like LDDE. That's exactly what I have been talking about: I start from the light source, I hit a diffuse wall, I hit the diffuse ground, and then I hit the camera afterwards. So that's LDDE. Let's take a look at, for instance, LSSE. So I start from the light source, I hit the glass ball from the outside, this left glass ball, and then I go inside the ball. There's going to be a refraction at least; let's imagine that there's going to be a refraction. And then I hit it on the other side as well and I come out. So this is two specular bounces, LSSE. So we can denote light paths and understand which algorithms can render what exactly. So here, if we imagine that this is an image made with a ray tracer, the question is: what did they do? And this is a rather low quality image. But it seems to me that the shadows are not completely black. Therefore, in their shading model, they definitely used what kind of term? Raise your hand if you know the answer. So normally this would be completely black, because I shoot a shadow ray towards the light source and it is going to be blocked by the table. So the intensity is 0; imagine that all possible shadow rays are blocked. But this is still not completely black, because I'm adding a term to it in order to warm it up and make the image appear more realistic.
So which term would this be? This would be the ambient term.
[{"start": 0.0, "end": 7.8, "text": " Now, let's jump back to recursion. So if I would like to compute multiple bounces, I"}, {"start": 7.8, "end": 14.16, "text": " need to handle this somehow. And we have talked about this briefly. If I intersect the first"}, {"start": 14.16, "end": 21.76, "text": " object, I need to reflect the way off of this object in some way. And after the shading"}, {"start": 21.76, "end": 28.72, "text": " is done, diffuse specular ambient shading, then we can trace the light further. And this"}, {"start": 28.72, "end": 35.56, "text": " tracing step can be both reflection and refraction. You remember, for now, as law and as law, we"}, {"start": 35.56, "end": 42.72, "text": " are going to put them in use in a second. But what I need to tell you is something super"}, {"start": 42.72, "end": 50.56, "text": " weird, but you won't feel why this is weird. So in a ray tracer, in a recursive ray tracer,"}, {"start": 50.56, "end": 55.480000000000004, "text": " not normal illumination, the indirected relation, and these goodies that we are going to start"}, {"start": 55.48, "end": 63.8, "text": " with, next lecture. In a ray tracer, if you encounter a mirror, an ideal specular reflector,"}, {"start": 63.8, "end": 69.16, "text": " you will bounce the ray back in the ideal reflection direction. So exactly what you would"}, {"start": 69.16, "end": 75.72, "text": " see in the mirror. It's 45 degrees, it is 45 degrees back. And you do the same with diffuse"}, {"start": 75.72, "end": 82.6, "text": " surfaces as well. So you continue the ray in ideal reflection direction. And now this"}, {"start": 82.6, "end": 89.08, "text": " sounds reasonably okay. But when you will study global illumination and how the whole thing"}, {"start": 89.08, "end": 95.83999999999999, "text": " is done, how real men compute the rendering equation and images, you will see that this"}, {"start": 95.83999999999999, "end": 103.84, "text": " is a super, super huge simplification. 
I remember the faces of students when they think back,"}, {"start": 103.84, "end": 109.44, "text": " when they know all about global illumination, when we talk about simple recursive ray tracing."}, {"start": 109.44, "end": 119.03999999999999, "text": " And I asked them how is this reflecting? And they are science. Because they will know that"}, {"start": 119.03999999999999, "end": 124.03999999999999, "text": " in global illumination, it's going to be so natural that the diffuse surface means that"}, {"start": 124.03999999999999, "end": 129.4, "text": " it will reflect right in every direction with the very same probability. So the perfect"}, {"start": 129.4, "end": 135.12, "text": " reflection direction has the same probability as coming back the same direction where it"}, {"start": 135.12, "end": 140.16, "text": " enters the surface. They all have the same probability, all directions. And then suddenly"}, {"start": 140.16, "end": 145.84, "text": " our ray tracer says that even a diffuse object I'm going to treat as a mirror. So this is"}, {"start": 145.84, "end": 150.68, "text": " going to be super weird. And please remember that I say this later on if you take a look"}, {"start": 150.68, "end": 157.48000000000002, "text": " at ray tracer's after global illumination. Now how does the recursion work? I hit something,"}, {"start": 157.48000000000002, "end": 163.24, "text": " I reflect away. So the ray tracer always the perfect reflection direction. And I'm going"}, {"start": 163.24, "end": 168.72, "text": " to restart the whole process. I'm going to start to trace a new ray that starts on the"}, {"start": 168.72, "end": 174.92000000000002, "text": " object. Remember, this is when you get the self intersection t equals 0. I increment"}, {"start": 174.92000000000002, "end": 181.52, "text": " this maximum ray depth value to show how many bounces I have computed so far. And I start"}, {"start": 181.52, "end": 187.24, "text": " again. So I start a new ray. 
I imagine that this object is now the camera. And this is"}, {"start": 187.24, "end": 195.68, "text": " where the ray is starting from. Is there a question or okay. So we got everything. How"}, {"start": 195.68, "end": 201.04000000000002, "text": " does it look like in terms of mathematics? This is the illumination equation without recursion."}, {"start": 201.04000000000002, "end": 208.32000000000002, "text": " But now I need to add one more term. Let's quickly recap this. The first term is the ambient"}, {"start": 208.32000000000002, "end": 214.68, "text": " term. This is to warmen up the completely black shadows. This is basically a bunch of"}, {"start": 214.68, "end": 220.88, "text": " hex that it looks good. So we are going to be okay with this for now. Then we scale the"}, {"start": 220.88, "end": 227.08, "text": " amount of incoming light with the material properties. So a diffuse and a specular shading"}, {"start": 227.08, "end": 234.44, "text": " model. These are weighted by KD and KS. These are values that tell you how diffuse or"}, {"start": 234.44, "end": 242.56, "text": " house-packular this object is. And not only house-packular it is, but what color does"}, {"start": 242.56, "end": 249.96, "text": " it absorb? What color does it reflect? So what is the color of the object? What is the"}, {"start": 249.96, "end": 254.52, "text": " diffuse and specular or beetle of the object? I'm using so many terms not because I'm"}, {"start": 254.52, "end": 260.16, "text": " inconsistent, but because people use all of these terms. And therefore you should be familiar"}, {"start": 260.16, "end": 265.52, "text": " with this. And there's some weird stuff that I now added. And this is the recursion part."}, {"start": 265.52, "end": 274.08, "text": " So KT is the Fresmission coefficient. This is the probability of refraction. Because you"}, {"start": 274.08, "end": 279.59999999999997, "text": " remember that I hit this hair glass interface from different directions. 
And if I hit them"}, {"start": 279.59999999999997, "end": 285.59999999999997, "text": " from different directions, then the probability of reflection and refraction is different."}, {"start": 285.59999999999997, "end": 292.47999999999996, "text": " So depending on incoming direction you have seen, this laser is stronger in one direction"}, {"start": 292.48, "end": 297.92, "text": " than the other. And we have to account for this with these transmission coefficients. And"}, {"start": 297.92, "end": 305.8, "text": " the IT is the intensity that is coming from that direction. The kr and IR are the other way"}, {"start": 305.8, "end": 314.52000000000004, "text": " around. So if there is reflection, not refraction, then I'm going to go in that direction. And"}, {"start": 314.52000000000004, "end": 319.04, "text": " I'm going to scale this with the intensity of the incoming lights from the reflection"}, {"start": 319.04, "end": 328.92, "text": " direction. A quick example. Where if I have a glass that's blue? So some kind of glass"}, {"start": 328.92, "end": 335.76, "text": " that looks almost entirely blue. Then this Fresmission coefficient is going to describe the"}, {"start": 335.76, "end": 342.0, "text": " color that's blueish. Therefore all the energy that comes through this ball is going to"}, {"start": 342.0, "end": 349.6, "text": " be colored to blue. And the final reflection coefficient can be whatever. So we are now"}, {"start": 349.6, "end": 355.12, "text": " interested in the transmission. So this is how I can define materials like that. This"}, {"start": 355.12, "end": 362.48, "text": " is the recursion part. And for this, I need to start perhaps multiple rays. So if I"}, {"start": 362.48, "end": 366.88, "text": " hit this object and I say that, hey, but this is a transmissive object, this is a glass."}, {"start": 366.88, "end": 372.36, "text": " What do I do? 
Because there is a probability, positive probability usually for reflection"}, {"start": 372.36, "end": 378.48, "text": " and refraction. Do I start two recursions? Do I start two new rays? What do I do exactly?"}, {"start": 378.48, "end": 383.56, "text": " And in the assignment that I'm going to talk about, you will see a piece of code that"}, {"start": 383.56, "end": 392.2, "text": " does something. And then you will see the effect of something. I'm not going to spoil anything."}, {"start": 392.2, "end": 398.44, "text": " And just a quick introduction to headverse notation. This is important because if you"}, {"start": 398.44, "end": 403.76, "text": " know this kind of notation, then you will be able to discuss what kind of ray tracing"}, {"start": 403.76, "end": 412.59999999999997, "text": " algorithm can render, what kind of light paths. So as a status quo, all light paths go"}, {"start": 412.59999999999997, "end": 418.0, "text": " between light sources and the eye. If it doesn't hit, if it doesn't hit the lens of the camera,"}, {"start": 418.0, "end": 424.2, "text": " it's not going to be recorded in the image. So every light source is every light path is"}, {"start": 424.2, "end": 430.12, "text": " going to be written as L, something, something E. Or as this is by direction, or you can"}, {"start": 430.12, "end": 437.6, "text": " imagine the other way around. So you can say, E something, something L. The notes once"}, {"start": 437.6, "end": 442.88, "text": " one diffuse interreflection during the way, S is one specular interreflection during the"}, {"start": 442.88, "end": 450.71999999999997, "text": " way. And the asterisk means any amount of diffuse bounces, perhaps even zero. So LDE"}, {"start": 450.71999999999997, "end": 458.8, "text": " means that either I hit the eye from the light immediately, or there is one diffuse"}, {"start": 458.8, "end": 464.36, "text": " bounce, or maybe an arbitrary number of diffuse bounces. 
This is what the asterisk tells"}, {"start": 464.36, "end": 471.08, "text": " you. And we can also denote the choice. The choice means that there is only one either"}, {"start": 471.08, "end": 477.24, "text": " specular or one diffuse bounce. And with this small but powerful notation, we can discuss"}, {"start": 477.24, "end": 485.4, "text": " all the algorithms there are to render photo realistic quality images. So for now, some"}, {"start": 485.4, "end": 489.64, "text": " of this will be intuitive. Some of this will be not so intuitive because we don't know"}, {"start": 489.64, "end": 496.4, "text": " global illumination yet. But first, ray casting means that we hit at most one diffuse object."}, {"start": 496.4, "end": 501.91999999999996, "text": " As all it can render no recursion, nothing. I just hit one diffuse object. I do the diffuse"}, {"start": 501.91999999999996, "end": 509.12, "text": " shading and the bi. Radiosity can compute something like indirect illumination because"}, {"start": 509.12, "end": 515.6, "text": " multiple diffuse bounces are possible. So remember the example. The light comes in to the"}, {"start": 515.6, "end": 520.48, "text": " classroom through the window, hits the white wall, and then hits the over. And therefore,"}, {"start": 520.48, "end": 526.16, "text": " the over is not going to be perfectly black. This is called indirect illumination. Radiosity"}, {"start": 526.16, "end": 533.04, "text": " is got that covered. Recursify tracing what we are doing with this for now transmission"}, {"start": 533.04, "end": 540.3199999999999, "text": " and reflection thing. We know that what we can do is indirect illumination definitely"}, {"start": 540.3199999999999, "end": 547.4, "text": " not because we treat a diffuse object also as a mirror. We just use a different shading"}, {"start": 547.4, "end": 553.8399999999999, "text": " function for it. 
So we don't trace rays all along the hemisphere of the object because"}, {"start": 553.84, "end": 558.96, "text": " it collects light from every direction. This is why it doesn't change if I move my head."}, {"start": 558.96, "end": 563.48, "text": " This is why the sight of it doesn't change. But we cannot account for that. This would"}, {"start": 563.48, "end": 568.76, "text": " be a huge dimensional integration problem that we are going to solve in the rendering equation."}, {"start": 568.76, "end": 576.96, "text": " So at most one diffuse bounce, but you may have as many specular bounces as you need."}, {"start": 576.96, "end": 584.0400000000001, "text": " So this is why recursive ray tracers usually show you mirrors and glass balls and reflective"}, {"start": 584.0400000000001, "end": 590.12, "text": " things like that because it is capable of rendering it, but not so much more. And global illumination"}, {"start": 590.12, "end": 595.0400000000001, "text": " that's the full package, an arbitrary number of diffuse or specular bounces. This can"}, {"start": 595.0400000000001, "end": 600.9200000000001, "text": " also be glossy, whatever kind of complicated material model you have here. This DS can"}, {"start": 600.92, "end": 609.9599999999999, "text": " be anything and in any amount. Well, let's take a look at an example with the hardware notation."}, {"start": 609.9599999999999, "end": 616.0799999999999, "text": " So here we have light paths and they start up from the light source. So on the right I"}, {"start": 616.0799999999999, "end": 622.64, "text": " have something like LDDE. That's exactly what I have been talking about. So I start from"}, {"start": 622.64, "end": 627.8, "text": " the light source. I hit a diffuse wall. I hit the diffuse ground and then I hit the camera"}, {"start": 627.8, "end": 636.9599999999999, "text": " afterwards. So that's LDDE. Let's take a look at for instance LSSE. 
So I start from the"}, {"start": 636.9599999999999, "end": 644.5999999999999, "text": " light source. I hit the glass ball from the outside, this left glass ball. And then I go"}, {"start": 644.5999999999999, "end": 649.4399999999999, "text": " inside the ball. There's going to be reflection at least. Let's imagine that there's going"}, {"start": 649.4399999999999, "end": 655.3599999999999, "text": " to be a reflection. And then I hit it on the other side as well and I come out. So this"}, {"start": 655.36, "end": 664.72, "text": " is two specular bounces, LSSE. So we can denote light paths and understand what algorithms"}, {"start": 664.72, "end": 670.64, "text": " can render what exactly. So here if we imagine that this is a ray tracer, this is an image"}, {"start": 670.64, "end": 679.16, "text": " with a ray tracer. The question is what did they do? And this is a rather low quality"}, {"start": 679.16, "end": 684.72, "text": " image. But let's, it seems to me that the shadows are not completely black. Therefore"}, {"start": 684.72, "end": 690.08, "text": " in their shading models, they definitely use the what kind of term? Raise your hand if"}, {"start": 690.08, "end": 703.52, "text": " you know the answer. So normally this would be completely black because I should a shadow"}, {"start": 703.52, "end": 710.1600000000001, "text": " ray towards the light source and it is going to be uploaded by the table. So intensity"}, {"start": 710.16, "end": 718.64, "text": " is 0. Imagine that like all possible shadow rays are blocked. But this is still not"}, {"start": 718.64, "end": 723.8, "text": " completely black because I'm adding a term to it in order to warp up and make the image"}, {"start": 723.8, "end": 749.76, "text": " appear more realistic. So this would be which term I'm doing. This would be the ambient"}]
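The recursive illumination equation discussed in this lecture (ambient, diffuse and specular local terms, plus reflected and refracted contributions weighted by the reflection coefficient kr and the transmission coefficient kt) might be sketched as follows. This is a minimal scalar sketch for illustration, not the course's actual code: the stub trace functions and all constants are assumptions standing in for real recursive ray casts.

```cpp
#include <cassert>
#include <cmath>

// Stubs standing in for recursive ray casts into the reflected and
// refracted directions (illustrative fixed values, not traced intensities).
double traceReflected() { return 0.8; }
double traceRefracted() { return 0.5; }

// Whitted-style illumination: I = Ia + kd*Id + ks*Is + kr*Ir + kt*It.
// kr and kt are the reflection and transmission coefficients from the
// lecture; Ir and It would come from the two recursive rays.
double shade(double Ia, double kd, double Id,
             double ks, double Is,
             double kr, double kt) {
    double local = Ia + kd * Id + ks * Is;   // ambient + diffuse + specular
    return local + kr * traceReflected()     // mirror-direction term
                 + kt * traceRefracted();    // transmission term
}
```

In a real renderer the two stub calls would recurse with a depth cutoff, which is exactly the "do I start two new rays?" question raised above.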
Two Minute Papers
https://www.youtube.com/watch?v=ZhN5-o397QI
TU Wien Rendering #10 - Camera models
To build an adequate light simulation program, we also need to model how exactly light interacts with a camera. In this segment, we learn more about perspective and orthographic cameras, and we quickly implement the former in just a few lines of simple C++ code. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Let's talk about cameras. I'm just going to rush through this because obviously everyone knows about pinhole cameras. The most basic camera model one can imagine is basically a box: you make a super, super small hole in the box and you put a film in it. Some amount of light will flow in through this hole and it is going to be caught by the film, and therefore you are going to see an image on this film. We are not that interested in this model; what we are interested in is, for instance, a perspective camera. A perspective camera means that I have the lens of the camera (this is the plane you see the teapot on, and this is where the image is going to be formed) and I have a point somewhere, and this is where the rays are starting from. So I have this eye and I am going to shoot rays towards the lens of the camera, and I am going to be interested in the continuation of these rays: what objects they hit and what colors they are. There are multiple things that I need to specify when creating a perspective camera. This plane has a width, a height, and some kind of field of view, and the aspect ratio is given by the ratio of the width and the height. What about the field of view? Well, if you like to play first-person shooters, you have probably fiddled with settings like that; the field of view is basically what you see here. And different camera models, or different eye models, have different fields of view. For instance, it is quite different for a horse. If you would like to model how a horse sees something, then the field of view would be much larger, because its eyes are on the sides. So it can almost always see what is behind it. We have a given field of view that we can model here, and this can be arbitrarily changed if you have a good perspective camera implementation. Now let's quickly put together an implementation of that. What I'm interested in is that I'm going to give an X, Y pair. These are pixel positions.
Give me the 0th pixel in terms of X and the 5th pixel with respect to Y, and this is going to give me back a world-space coordinate that is exactly on the lens. So I'm subdividing this lens into pixels. I only care about the pixels, because we compute, pixel after pixel, how much light is going through each of them. And therefore these world-space coordinates are interesting. So if I instantiate such a perspective camera, the height and the width are definitely given, and the field of view with respect to the X axis is also given. The desired pixel positions are going to be XP and YP. What are these variables supported on? XP and YP are on [0, W] and [0, H]; so these really tell us which pixel I am interested in. The field of view can be reasonably arbitrary, but sane choices are on [0, π]. And the field of view with respect to the Y direction can be computed from the aspect ratio and the other field of view. This is the end result. And before we try to understand what's going on, let's try to play with it. I do this because usually, if you read literature, math books, whatever, you never see the journey behind things. You get the answer, and this is usually a huge formula that doesn't seem to make any sense. So let's get a bit more experience on how to play with these formulae. How can we understand them? For instance, let's forget the tangent terms in X and Y, and let's just play with the fraction. I substitute XP = 0 and YP = 0. So what do I have for the X coordinate? Well, it's 2 times 0 minus the width, over the width. Therefore this is minus 1, and I have the same for Y: this is 0 minus H over H, that's minus 1. So for the (0, 0) pixel I have world-space positions of minus 1 and minus 1. Therefore this is going to be the bottom left. So far this seems to make some sense. What if I substitute the other end for the pixels?
Well, if I have W for XP, then I have 2W minus W over W, and therefore this is going to be 1, both for X and for Y. So this is going to be the upper right. And whatever I specify for XP and YP between these two extremal values, this is really going to give me the world-space coordinates of this camera model. We have forgotten about the tangents; well, let's put them back. I don't know what I just did now, but it's working again. Yes. I wonder why this presenter has like 2,500 buttons. But okay, let's now progress. So I multiply these numbers back with these tangents, and then I can see what this basically gives me: more perspective distortion. So the higher the field of view with respect to X is, the more perspective distortion I'm going to get. Well, this is already a good enough description that I can put in code. In fact, I have already coded it up, and this is a very simple function that does exactly what we have been talking about. It's as simple as that. So if you don't count the prototype of the function, this is basically five lines, and this is still readable; it could be even less. So not too shabby: a perspective camera in five lines of code. There are also orthographic cameras. This is a large difference from the perspective camera, because the rays of light are parallel to each other and they are perpendicular to the camera plane. So basically they don't start from one point looking outwards; they are perfectly parallel to each other and perpendicular to this lens. They also don't meet at the eye, and you can see that the perspective distortion is completely omitted here. So you can see here the same image, the same scene with the same settings, with an orthographic and a perspective camera. And you can see that the realism is completely different in the two. There's another example with LuxRender.
In the next image you won't see the environment map in the background, but disregard that, because the implementation of environment maps with orthographic cameras is in a way non-trivial. So: lots of perspective distortion. Well, maybe you don't notice, because this is what you're used to, but if you have an orthographic camera, then this is a perfect, distortion-free geometric shape. And back to the perspective camera: this one gives you a significant perspective distortion.
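The pixel-to-world mapping walked through above, x = tan(fovx) * (2*XP - W) / W and the analogous expression for Y, can be condensed into a few lines, matching the "five lines of code" claim. This is a sketch under assumptions: the aspect-ratio relation tan(fovy) = tan(fovx) * H/W and the struct and function names are mine, not taken from the course slides.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// Maps a pixel (xp, yp), with xp on [0, w] and yp on [0, h], to a
// world-space point on the camera plane, as derived in the lecture:
// the fraction (2*xp - w)/w runs from -1 to 1, and the tangent of the
// field of view scales the amount of perspective distortion.
Vec2 pixelToWorld(double xp, double yp, double w, double h, double fovx) {
    double tx = std::tan(fovx);   // scale from the horizontal field of view
    double ty = tx * (h / w);     // assumed aspect-ratio relation for fovy
    return { tx * (2.0 * xp - w) / w,
             ty * (2.0 * yp - h) / h };
}
```

With the tangent factors equal to 1, pixel (0, 0) maps to (-1, -1), the bottom left, and pixel (W, H) maps to (1, 1), the upper right, exactly as in the substitution exercise above.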
[{"start": 0.0, "end": 7.84, "text": " Let's talk about cameras. I'm just going to rush through this because obviously everyone"}, {"start": 7.84, "end": 13.14, "text": " knows about pinhole cameras. The most basic camera model one can imagine is basically"}, {"start": 13.14, "end": 20.66, "text": " your box and you make super, super small hole on the box and you put a film in it. Basically"}, {"start": 20.66, "end": 27.14, "text": " some amount of light will flow into this and it is going to be caught by the film and"}, {"start": 27.14, "end": 33.02, "text": " therefore you are going to see an image from this film. We are not that interested in"}, {"start": 33.02, "end": 37.58, "text": " this model but what we are interested in is for instance a perspective camera. Perspective"}, {"start": 37.58, "end": 43.46, "text": " camera means that I have the lens of the camera. This is what you see the T-POT on and this"}, {"start": 43.46, "end": 50.1, "text": " is where the image is going to be formed and I have a point somewhere and this is going"}, {"start": 50.1, "end": 57.44, "text": " to be where the rays are starting from. So I have this eye and I am going to shoot rays"}, {"start": 57.44, "end": 64.82000000000001, "text": " towards the lens of the camera and I am going to be interested in the continuation of these"}, {"start": 64.82000000000001, "end": 70.42, "text": " rays, what objects the heat and what color they are. That is multiple things that I"}, {"start": 70.42, "end": 78.38, "text": " need to specify when creating a perspective camera. This plane can have the width, the height,"}, {"start": 78.38, "end": 85.66, "text": " some kind of field of view and the aspect ratio is given by the ratio of the width and the"}, {"start": 85.66, "end": 91.86, "text": " height. What about the field of view? 
Well if you like to play first person shooters you"}, {"start": 91.86, "end": 95.97999999999999, "text": " have probably fiddled with settings like that but the field of view is basically what you"}, {"start": 95.97999999999999, "end": 104.75999999999999, "text": " see here. And different camera models or different eye models have different fields of"}, {"start": 104.76, "end": 110.94, "text": " view. So for instance it is quite different for a horse. If you would like to model how"}, {"start": 110.94, "end": 116.14, "text": " a horse sees something then the field of view would be much larger because the eyes are"}, {"start": 116.14, "end": 123.98, "text": " on the sides. So you can always see what it is behind it. And we have a given field of"}, {"start": 123.98, "end": 128.70000000000002, "text": " view that we can model here and this can be arbitrarily changed if you have a good perspective"}, {"start": 128.7, "end": 135.1, "text": " camera implementation. Now let's quickly put together an implementation of that. What I'm"}, {"start": 135.1, "end": 140.7, "text": " interested in is that I'm going to give an X, Y pair. These are pixel positions. Give"}, {"start": 140.7, "end": 151.42, "text": " me the 0 pixel in terms of X and the 5th pixel of with respect to Y. And this is going to"}, {"start": 151.42, "end": 156.94, "text": " give me back a world space coordinate where this is exactly on the lens. So I'm subdividing"}, {"start": 156.94, "end": 163.62, "text": " this lens into pixels. I only want to care about the pixels because each pixel after each"}, {"start": 163.62, "end": 169.7, "text": " other they are going to be computed. How much light is going through these pixels. And"}, {"start": 169.7, "end": 178.02, "text": " therefore these world space coordinates are interesting. 
So if I insensiate such a perspective"}, {"start": 178.02, "end": 183.46, "text": " camera the height and the width is definitely given and the field of view with respect to"}, {"start": 183.46, "end": 190.46, "text": " the X axis is also given. And the desired pixel positions are going to be X, P and Y, P."}, {"start": 190.46, "end": 197.74, "text": " What are these variables are supported on. So X, P and Y, P are on 0 and W and H. So these"}, {"start": 197.74, "end": 204.86, "text": " are really the pixels which pixel am I interested in. The field of view can be reasonably arbitrary"}, {"start": 204.86, "end": 212.5, "text": " but same choices on 0 pi. And the field of view with respect to the Y direction can be computed"}, {"start": 212.5, "end": 220.78, "text": " from the aspect ratio and the other field of view. This is the end result. And before we"}, {"start": 220.78, "end": 226.78, "text": " try to understand what's going on let's try to play with it. And this I do because usually"}, {"start": 226.78, "end": 232.62, "text": " if you read literature, math, books, whatever you never see the journey behind things. You"}, {"start": 232.62, "end": 238.02, "text": " get the answer. And this is usually a huge formula that doesn't seem to make any sense."}, {"start": 238.02, "end": 243.10000000000002, "text": " So let's get a bit more experience on how to play with these formulae. How can we understand"}, {"start": 243.10000000000002, "end": 249.78, "text": " these. So for instance let's forget in X and Y let's forget these tangent terms and"}, {"start": 249.78, "end": 258.74, "text": " let's just play with the fraction. So I substitute X and Y equals 0, X, P and Y equals 0. So"}, {"start": 258.74, "end": 265.86, "text": " what do I have for the X coordinate? Well it's 2 times 0 minus the width over the width."}, {"start": 265.86, "end": 271.46000000000004, "text": " Therefore this is minus 1 and I have the same for Y. 
So this is 0 minus H over H, that's"}, {"start": 271.46000000000004, "end": 280.02000000000004, "text": " minus 1. So for the 0, 0 pixels I have world's pace positions of minus 1 and minus 1. Therefore"}, {"start": 280.02000000000004, "end": 285.62, "text": " this is going to be the bottom left. So far this seems to make some sense. What if I substitute"}, {"start": 285.62, "end": 295.58000000000004, "text": " the other end for the pixels? Well if I have w for XP then I have 2 w minus w over w."}, {"start": 295.58, "end": 303.58, "text": " And therefore this is going to be 1 both for X and both for Y. So this is going to be the"}, {"start": 303.58, "end": 310.46, "text": " upper right. And whatever I specify for XP and YP between these two X-threading values"}, {"start": 310.46, "end": 315.97999999999996, "text": " then this is really going to give me the world's pace coordinates of this camera model."}, {"start": 317.97999999999996, "end": 322.78, "text": " We have forgotten about the tangents. Well let's put them back. I don't know what I just did now"}, {"start": 322.78, "end": 331.73999999999995, "text": " but it's working again. Yes. I wonder why this presenter has like 2,500 buttons."}, {"start": 332.46, "end": 339.17999999999995, "text": " But okay, let's not progress. So I multiply back these numbers with these tangents and then"}, {"start": 339.82, "end": 345.58, "text": " I can see that basically what it gives me, more perspective distortion. So the higher the"}, {"start": 345.58, "end": 352.7, "text": " field of view with respect to X's the more perspective distortion I'm going to get. Well this is"}, {"start": 352.7, "end": 358.46, "text": " already a good enough description that I can put in code. In fact I have already coded it up"}, {"start": 358.46, "end": 362.86, "text": " and this is a very simple function that does exactly what we have been talking about."}, {"start": 362.86, "end": 369.18, "text": " It's simple as that. 
So if you don't take the prototype of the function this is basically"}, {"start": 369.18, "end": 375.34000000000003, "text": " five lines and this is still readable. So this could be even less. So not too shabby. I mean a"}, {"start": 375.34000000000003, "end": 382.78000000000003, "text": " perspective camera in five lines of code passed. There are also photographic cameras."}, {"start": 383.74, "end": 390.86, "text": " This is a large difference between from the perspective camera because the rays of light"}, {"start": 390.86, "end": 402.86, "text": " are also parallel with each other and they are perpendicular to this camera plane. So basically"}, {"start": 402.86, "end": 408.3, "text": " they don't start from one point looking outwards. They are perfectly parallel with each other"}, {"start": 408.3, "end": 416.62, "text": " and perpendicular to this lens. And they also don't meet at the eye and you can see that"}, {"start": 416.62, "end": 423.1, "text": " the perspective distortion is completely omitted here. So you can see here the same image,"}, {"start": 423.1, "end": 428.78000000000003, "text": " the same scene with the same settings with an photographic and the perspective camera. And you can"}, {"start": 428.78000000000003, "end": 435.98, "text": " see that the realism is completely different in the two. There's another example with Luxrender."}, {"start": 435.98, "end": 441.18, "text": " In the next image you won't see the environment map in the background but disregard that because"}, {"start": 441.18, "end": 446.14, "text": " the implementation of environment maps with orthographic cameras is in a way non-trivial."}, {"start": 446.14, "end": 451.74, "text": " So lots of perspective distortion. 
Well maybe you don't notice because this is what you're used to"}, {"start": 451.74, "end": 457.9, "text": " but if you have an orthographic camera then this is a perfect distortion-free geometric shape."}, {"start": 457.9, "end": 480.7, "text": " And back to the perspective camera. So this fall gives you a significant perspective distortion."}]
Two Minute Papers
https://www.youtube.com/watch?v=fcvhOC5Q1dI
TU Wien Rendering #9 - Hard and Soft Shadows
Ever wondered why shadows look the way they do? Some have really hard shadow boundaries while others are smooth gradients. In this segment, we learn how to compute both by sending shadow rays towards light sources. This is a probabilistic technique, which is surprisingly equivalent to Monte Carlo integration, a powerful technique we will learn about later. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Okay, philosophical question: what is the definition of shadows? Let's say for now, only for now, that shadows are regions that are not visible from light sources. And again, if you're on public transport, or if you're bored at a lecture, such as this one (but hopefully not this one), you can take a look at the shadowed regions and you will immediately recognize that these are the regions that are somehow occluded with respect to the light source. Let's take a look at an example. This small red dot on the top, this is a light source, a point light source. And this black thing is a sphere. And behind it, with respect to the light source, we have an umbra. This is the name of the completely shadowed region. And if we are going to shade these points in the ray tracer and I want shadows, then I need to compute whether the point I am shading is obstructed, or occluded, from the light source or not. Now, this is very simple to do. So imagine that I would like to shade this point below, on the plane. What I would do is send a ray, which I call a shadow ray, towards the light source. And what I'm interested in is whether it is obstructed, meaning that it hits something on the way, that it is blocked by something. So the first question is an incredibly difficult question: is this one obstructed or not? It is. It is obstructed. What about this guy? What do you think? This is obstructed. What about these guys? These guys are good. Okay, cool. Now, for now, shadows are a very simple concept. It means that I also have a visibility term that I multiply the intensity with. And this is binary. Obviously, the ray either hits an object or it doesn't. That's it. So, very simple. This intensity (that is not radiance, but this is the hack that we use in ray tracing; this is still the simpler version of things) I'm going to set to zero.
Whatever shading I have at that point, I don't care: it is in shadow, it's going to be completely black. So this is the simpler version. What about real life? Well, in real life, point light sources don't exist, because a point, by mathematical definition, is of measure zero. It means that it's infinitely small. And something that is infinitely small... well, we called it a light source, so this is something that's infinitely small but has a given amount of energy. Well, if you ask Stephen Hawking, he would say that this is the definition of a black hole. So we would have a black hole, and if this would happen, we would have much bigger problems than computing shadow rays. So that's definitely out of our interest at the moment. So we have an area light source, and we still have the umbra, from which none of the rays make it to the light source. But we also have a different region that we call a penumbra, which is a partially shadowed region. So things are going to get much more interesting. Now I'm going to shoot two shadow rays from the surface towards different regions of the light source. What about these guys? What about the right shadow ray? It's okay. What about the left? It's not okay. Excellent. So this already doesn't seem binary anymore, and this visibility term is going to be continuous. So there may be points which are visible from some part of the light source, but not visible from another part of the light source, and therefore we have some kind of partial occlusion. And the question is: how do we compute this? How can we write a program that can give me a beautiful penumbra, and not just hard shadows? In the literature, if we only have the umbra, this is called hard shadows; with the penumbra, soft shadows. We're going to see examples of that in a second. So, very simple: let's try to approximate the area of the light source that is visible from that point, over the whole area of the light source. Let's see an example of that.
But first I'm interested in how to approximate this, because I'm talking about areas, and this sounds like some kind of integration problem. For now we are not going to evaluate integrals over this. What we can do instead is shoot a large number of shadow rays and try to approximate this area. So the approximation is going to be the following: the number of visible shadow rays over all the shadow rays that I have computed. Well, an example: how is this region going to look? I'm going to shoot 100 shadow rays from these small black dots, and I'm interested in how many of them are going to make it to the light source. What do you think? Well, out of 100 shadow rays, do 100 hit the light source? I'm not sure. Definitely not. Well, about 50? Probably not. Well, it's quite reasonably dark there. So let's say that three of them hit the light source. It's a very simple approximation: I shoot 100 shadow rays, and three of them hit. Therefore, this 3/100 is what I'm going to multiply the intensity that I have computed with. Okay, what about the next region? This is a bit farther away. Out of 100, do 100 of them hit from this region? Definitely not. Half of them? What do you think? Half of them, definitely. Okay, cool. And if we go even further out of the umbra, then I have this white dot, and I'm interested in how many of these could hit the light source. Well, I think there are regions which are definitely obstructed, but it's not much. So let's say that 95% of these shadow rays hit the light source. So I can already compute, in a way, soft shadows; not only shadows, but now soft shadows. You're going to see examples of that. And what we have done here is actually Monte Carlo integration. And you're going to hear a lot about Monte Carlo integration. It's on every list. I don't know, some teenage people look up the top 10 billboard list of the best music clips of Lady Gaga and the others.
What I do myself, I confess, is look up the top 10 mathematical techniques nowadays. And I can tell you that Monte Carlo integration is always on the billboard top list. It's one of the most effective techniques to evaluate integrals, and we're going to study it extensively. It's going to make your life infinitely simpler. Now, a quick bonus question: can such an image be physically correct? Obviously it's a drawing, so it's not correct, but there is something about it that is so unbelievably incorrect that it needs some attention. Who can tell me what it is? Yes? That is true, but unfortunately this has turned into a mind reading exercise, so you have to figure out what I have thought of. Yes, please tell me. "Well, as far as I know physics, the light should bend a little bit inwards around the black object." We are very far away from modeling that, but you're absolutely right. So, a hint: it is something about the shadows. "I would have said something about the shadows in the air." Shadows in the where? "Well, the shadows between the object and the ground. The area between the surface of the object and the ground, where there is empty space." What if I ask about this transition from here to the outside? So imagine these dots that we have: if I put one in the umbra and slowly moved it outwards, would I experience such a jump as the one I see here? No. Why not? Because if I start in the umbra, it may be that I cannot construct any ray that reaches the light source, but as I move outwards, this probability increases continuously. There is not going to be a jump; you would not see this abrupt change in shading.
It is going to be a perfectly smooth gradient, or an almost perfectly smooth gradient, depending on many other physical properties. But this is more or less what I would see. And there's going to be an example of that in a second. So this is what I have said, for those who are reading this at home. And the question is a very simple one: what kind of light source do we have in this image? It's a point light source. Excellent. Why? Because I don't see a penumbra; I see the umbra and nothing else. Excellent. So this should be a point light source. Though technically you could say that if you have a small area light source but use only one shadow ray, so you don't do this integration, you can get something similar. But generally these are point light sources. What about these guys? The left one seems to be a point light source. On the right one, I can see this beautiful continuous change, so that is definitely not a point light. But on the left, if I take a look at this region below the object, then I can also see a hint of a penumbra. So it might be that on the left it's a small area light source that is close to the object, perhaps, and this is why I don't see the penumbra in most places, but in some places I do. And here on the right the effect is really pronounced, so this is definitely an area light source. Well, the next question is that in physical reality, we usually don't see perfectly black shadows. So if I take a look around in this room, I see shadows: I see a region that is lit, then some kind of penumbra, and then the umbra. The umbra is dark, but it is anything but perfectly black. "Because of the bounces off the walls, some reflections, you never have a perfect umbra." Yes, that's true. So there is an effect that we are going to talk about next lecture, and it is called indirect illumination. And this basically means that in the ray tracing program, we are only accounting for the direct effect of a light source.
But in physical reality, it is possible that the light comes in through this window, hits this white wall first, then hits the ground in this shadowed region, and then travels towards the eye. Some of the energy is picked up along the way, so the effect of this white wall is going to make these black or dark shadows lighter. And this we cannot compute yet: this is multiple diffuse bounces after each other, and we cannot take them into account. We would need to solve the full rendering equation for this. So what we have is direct illumination, and this is where the ambient term comes into play. This ambient term that we have been talking about basically just adds something to the intensity that I have. Why? Because it brightens up the image a bit; otherwise I would have perfectly black shadows. For instance, for this classroom, I would have an ambient intensity that is a grayish color, and therefore these regions would not be perfectly black: I would add this slight gray amount to them, and they would be a bit more gray. So this is a really crude approximation of indirect illumination, but it more or less works. At least it is an accepted way to cheat back this lost energy in a ray tracer. Yes? "I have a question about the Monte Carlo technique, where we cast these shadow rays. How do we determine where on the surface of the light we shoot the points? Because it is a surface; how do we pick random points on that surface?" These are the difficult details of ray tracing programs. There are techniques that help you pick a uniformly chosen random direction on a sphere, for instance, or uniformly chosen random points. So I choose a random point on the sphere, and I connect it to the point I am shading. These perfectly uniformly chosen random points are what I need to generate on the surface of the light source, and this is what I would need to sample.
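For completeness, one common way to pick uniformly distributed points on a spherical light source (a sketch; the lecture does not prescribe this particular method) is to sample a 3D Gaussian and normalize, which works because the Gaussian distribution is rotationally symmetric:

```python
import math
import random

def uniform_point_on_sphere(center, radius):
    """Uniformly distributed random point on the surface of a sphere.

    A 3D Gaussian sample has a rotationally symmetric distribution,
    so projecting it onto the unit sphere yields a uniform direction;
    we then scale by the radius and translate to the sphere's center."""
    while True:
        x, y, z = [random.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(x*x + y*y + z*z)
        if norm > 1e-12:  # reject the (vanishingly unlikely) zero vector
            break
    return (center[0] + radius * x / norm,
            center[1] + radius * y / norm,
            center[2] + radius * z / norm)
```

A shadow ray for the soft-shadow estimate would then connect the shaded point to such a sampled point on the light.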
And there are also optimizations for that. What if a light source has non-uniform radiation? Some light sources are really intense in one direction, but not in others. How do I account for that? There are even optimization techniques for that. And now, a short beauty break. Well, we like LuxRender a lot, and it seems that apparently some nerds are living their dreams in our program and creating people like that. There are lots of programs that help you achieve these realistic things, and later on we will talk a bit about how realistic skin can be achieved, such as the one that you can see here. Because skin is not a surface; skin is a volume. Not everyone knows this, but some amount of the light penetrates the surface of the skin and goes beneath it. It gets scattered and absorbed there, maybe even a thousand times, and it may come out somewhere else from your skin. This is why humans in older computer games look really fake and plastic: they don't account for this effect. And the newest computer games can compute this, or something like this, in real time. And this is what makes them so beautiful.
Two Minute Papers
https://www.youtube.com/watch?v=Zi_CVTgqJqI
TU Wien Rendering #8 - Surface Normals
To be able to perform shading on an object, we have to compute surface normals. It's not so tricky; one only has to be able to compute gradients. We take an intuitive example for an elliptic paraboloid. It is also important to know when we have self-intersecting rays - this means that the ray trivially intersects the object it is sitting on, and this case we'd usually like to discard when we search for the first intersected object. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Okay, let's quickly study how to compute surface normals, because we are going to need them. If you remember the diagrams, we always have the surface normal for the diffuse, specular and ambient shading. And just a quick remark: in the previous lecture we talked about diffuse and specular shading, and also ambient, and these are perhaps the most important of the simplified BRDF models. You can see them everywhere. So when you are on public transport, you can think about which object is which case. Some objects are a mixture of these BRDF models: it is possible that I have a diffuse object with a glossy or specular coating on top of it. And you can move your head around, like I told you before, and see how the look of the object changes. So, a lot of beauty to marvel at. And you will also be able to understand, for instance, that people performing stand-up comedy, usually guys saying humorous things, almost always wear makeup. The makeup artist tells them that yes, you need to wear makeup, and the performer says that they don't want to wear makeup. And the makeup artist says: well, I don't care, you have to. Because otherwise you will be sparkly. What this means is that they start sweating, and if they start sweating, the skin is going to get a bit oily, and then it is more specular. It means that if I turn my head around, it will look a bit different: there are going to be specular highlights. And if you use makeup, then these specular highlights disappear, and the whole face is going to be almost perfectly diffuse. Therefore it doesn't distract your audience. So light transport is everywhere. If you ever wear makeup, then think about this: it's specular versus diffuse reflections. Okay, but I digress. So, surface normals. I have an implicit equation f(x, y, z) = 0, and I would like to know the normals of this surface. How do we construct a normal?
Well, the normal is given by the gradient of the function. And just a quick reminder: the gradient of the function is a 3D function of x, y, z, and each coordinate of the gradient gives you the derivative of the function with respect to that coordinate. Let's see an example. Imagine an elliptic paraboloid. You don't have to imagine it; there's going to be an image. So this is x squared over a squared, and so on; I'm not going to read out formulae. And this is how it looks. If I would like to put together something like this, then I have to know that a and b describe the curvature of this shape along the different dimensions, and these are scalar values. Well, let's compute the surface normal of this elliptic paraboloid. This means differentiating the whole equation: the first coordinate is the equation differentiated with respect to x. Well, x squared is going to become 2x, and a squared remains there because it's a scalar multiplier. Does y squared depend on x? It doesn't, therefore it falls off. z doesn't depend on x either. Therefore the first coordinate will be 2x over a squared. What about the second coordinate? That is the function differentiated with respect to y. Does x squared depend on y? No, so this term is going to be 0. What about y squared? It's going to be 2y over b squared. z doesn't depend on y, so it falls off; this is the second coordinate. What about the third coordinate, the function differentiated with respect to z? Does x squared depend on z? Someone let me know. Correct, it doesn't. Does y squared? It doesn't either. Well, what's going to be the derivative of the remaining minus z term? A bit louder. Minus 1. It's going to be minus 1. Okay, so we got this: we can construct the surface normal of an elliptic paraboloid. Excellent.
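The gradient derived above, ∇f = (2x/a², 2y/b², −1) for f(x, y, z) = x²/a² + y²/b² − z, can be sanity-checked against central finite differences. This is a small sketch with made-up values for a and b:

```python
def f(x, y, z, a=2.0, b=3.0):
    """Elliptic paraboloid as an implicit surface: f(x, y, z) = 0."""
    return x*x/(a*a) + y*y/(b*b) - z

def analytic_normal(x, y, z, a=2.0, b=3.0):
    """Gradient of f -- the (unnormalized) surface normal."""
    return (2.0*x/(a*a), 2.0*y/(b*b), -1.0)

def numeric_gradient(x, y, z, h=1e-6):
    """Central finite differences; exact for quadratics up to rounding."""
    return ((f(x+h, y, z) - f(x-h, y, z)) / (2.0*h),
            (f(x, y+h, z) - f(x, y-h, z)) / (2.0*h),
            (f(x, y, z+h) - f(x, y, z-h)) / (2.0*h))
```

In a ray tracer, `analytic_normal` would be evaluated at the intersection point and then normalized before shading.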
So when we do this intersection routine in ray tracing, I have a ray, and I would like to intersect it against every single object that is in the scene. The question is: what is the first object that the ray hits? Which intersection am I interested in? There may be many. If I look somewhere, I may intersect many different objects, but if the objects are not transparent, then I'm only going to see the first intersection, and that's it. And the first should be the closest. This should be easy, because we are using parametric equations: they depend on t, and t is the distance. So what we get as an output is a list of t values. I'm intersecting these objects, and I get a list of t's: 2, 5, 10, minus 2, things like that. The question is, which one do I choose out of this list of t's? Someone help me out. The smallest positive t? The smallest positive t, okay. So the negative ones we are not interested in; like I told you, no politics, this is a politics-free zone. And I'm going to be interested in the smallest positive t. This is more or less true. Negative t's we are not interested in; we have discussed this. And the question is: can we take t equal to 0? And I'm not asking this because I would be an annoying mathematician. I'm only half mathematician, and I'm among the kinder ones. Okay, so I'm not asking this because I would be an annoying mathematician; I'm asking this because this is going to happen if you write a ray tracer. Lots of people say: something is not working, and I have no idea what went wrong. It is possible that t equals 0, and we need to handle this case. So, just a second: raise your hand if you know what t equals 0 means in an intersection. Okay, excellent. I will ask someone I haven't asked yet. Have I asked you before? Yeah, just a minute ago. Then I will go out of order. Who raised their hands? Okay, yes, you.
Okay, so what's happening when t equals 0? Sorry? We shoot the ray from the origin — yes, exactly. If we have some number of bounces and I get t equals 0, this means that I am exactly on the surface of the object that I am bouncing off of. So if I intersect against the sphere and bounce the ray, then in the next intersection routine I am almost guaranteed to see this — up to machine precision. Mathematically, I am guaranteed to get t equals 0, because it's a self-intersection: the ray comes in this direction and bounces back from the table, but its origin is on the top of the table. So there is going to be a trivial intersection at the starting point of the ray, and we are not interested in this. So we are going to take, as a conclusion, the smallest positive, non-zero t. In code, we usually put there an epsilon — a very small number — and we say that if t is at least this number, then I will take the intersection, because it's not a self-intersection anymore.

Okay, after we addressed this, a small beauty break. This is an image rendered with LuxRender, our glorious renderer. We are going to use it later during the course. And some more motivation: you will be able to compute images like that. Isn't this beautiful? In the previous image the background was not rendered, it's just a photo. That's cheating, that's cheating. Well, if you are in the mood to model a scene with an extremely high polygon count — yeah, people do that too. But it gives you a really realistic lighting of the scene. You can use this in a later assignment to create beautiful images.

By the way, there's a course gallery. Make sure to take a look, because in previous years people have created some amazing, amazing scenes. Raise your hand if you have seen this gallery. Okay — from the people who haven't seen it, raise your hand if you're going to take a look at it after the course.
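The selection rule described above — take the smallest t, but reject everything at or below a small epsilon to avoid self-intersections — can be sketched as follows. A minimal sketch; the helper name and epsilon value are illustrative, not from the course material:

```python
# Pick the closest valid intersection out of a list of t values.
# t <= epsilon is rejected: it is either behind the ray origin (negative)
# or a self-intersection with the surface the ray just bounced off (t ~ 0).

EPSILON = 1e-6

def closest_hit(t_values, eps=EPSILON):
    """Return the smallest t > eps, or None if nothing is hit in front of the ray."""
    valid = [t for t in t_values if t > eps]
    return min(valid) if valid else None
```

With the example list from the lecture, 2, 5, 10, minus 2, this picks 2; a t of exactly 0 would be rejected as a self-intersection.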
Okay, excellent. I didn't see your hand — what's up? You have not looked, but you have to, because there are seriously some amazing things in there. I wouldn't say some people should have become artists instead, because that would say something about their knowledge, and that's not the case at all: these students are really smart guys, but they also have some amazing artistic skills. And I'm sure that there are some artists inside some of you as well.
Okay, let's quickly study how to compute surface normals, because we are going to need that. If you remember the diagrams, we always have the surface normal for the diffuse, specular and ambient shading. And just a quick remark: in the previous lecture we talked about diffuse and specular shading, and also ambient, and these are perhaps the most important simplified BRDF models. And you can see them everywhere. So when you are in public transport, you can think about which object is which case. Some objects are a mixture of these BRDF models — it is possible that I have a diffuse object with a glossy or specular coating on top of it. And you can move your head around, like I told you before, and see how the look of the object changes. So a lot of beauty to marvel at.

And you will also be able to understand, for instance, why people performing stand-up comedy — there are usually a lot of guys saying humorous things — almost always wear makeup. The makeup artist tells them: yes, you need to wear makeup. And the performer says: I don't want to wear makeup. And they say: well, I don't care, you have to. Because otherwise you are sparkly. What this means is that they start sweating, and if they start sweating, the skin gets a bit oily, and then it is more specular. So if I turn my head around, it will look a bit different — those are specular highlights. And if you use makeup, then these specular highlights disappear, and the whole face is going to be almost perfectly diffuse. Therefore, it doesn't distract your audience. So light transport is everywhere — if you ever wear makeup, think about this: specular and diffuse reflections.

Okay, but I digress. So, surface normals: I have an implicit equation f(x, y, z) = 0, and I would like to know the normals of this surface. How do we construct a normal?
Two Minute Papers
https://www.youtube.com/watch?v=bQKy3N4TshU
TU Wien Rendering #7 - Ray-Sphere Intersection
We learn how to compute where a ray of light intersects a sphere. The advantages of parametric equations over the classical implicit formulations (in this context) are also discussed. Turns out there is some beauty to be seen and appreciated during the process! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Okay, so I don't just talk. After being immersed in the beauty of Fresnel's equations and Snell's law, we are going to continue by putting together our ray tracing program. We know all about air-glass interactions and things like that, but we don't know yet, for instance, what the representation of a ray of light could be. So let's go with this.

A ray basically starts somewhere and is going somewhere — that's basically it, and this is what I have written here mathematically. So this is a parametric equation; we'll talk about this in a second. o is the origin, this is where we start from. d is a direction vector, this is where the ray is going. And t is the distance it has traveled: if t is a large number, then the ray has traveled a lot, and if t is one, then that's one unit of distance. Now, we are always going to talk about vectors of unit length when we are talking about direction vectors. Most vectors are normalized in global illumination anyway, but I would like to state this, because it makes this t meaningful: if d is of unit length, then t is a scalar — a number — that tells you the distance the ray has traveled.

And this notation is a bit weird for many people, because this r depends on t. If you come from the regular math courses, most of what you encounter is implicit equations. So this could be the equation of a surface: f(x, y) = 0. For instance, this would be the implicit equation of a sphere. And this is an equation, so basically you can say that whatever x and y satisfy this equation is a point of this sphere, and this collection of points gives you the sphere. Parametric equations don't look like that. With these parametric equations, you can see that the x coordinate I can dig out from a function that depends on t, and the y coordinate I can dig out from a different function, perhaps, but it also depends on t.
And I can write this whole thing up in vector form, so I'm not talking about x and y separately, but about vectors. Let's see an example. The equation of a ray is such an example — you have seen it above — but we're going to play a bit more with it. And the first question is: why are we doing this? Why parametric equations instead of implicit functions? Well, you will see soon enough: we will encounter a problem that is going to be easy to solve with parametric equations. So that's the secret.

Now let's try to compute the intersection of a ray and a sphere. I cast a ray, and I would like to know which is the first object that I hit in the scene. If I have a scene of spheres, then this is the kind of calculation that I need to do. The expectations are the following. I have a sphere and a ray, and it is possible that the ray hits the sphere in two different points. Well, if two hit points are possible, then one hit point is also possible — this is essentially a tangent of the sphere, just hitting it at the very side. This is the rarer case, but it still exists. And obviously, it is possible that the ray does not hit the sphere at all.

So we have, again, listed our expectations before doing any kind of calculation, and we will see that this makes things much more beautiful. The solution to this whole problem should be some kind of mathematical object that can give me two solutions, one solution, or maybe no solutions. If I do the intersection routine and I get anything else, then something is incorrect. So this is what I expect to see: the possibility of two, one, or zero solutions.

Well, this is the equation of a sphere: the p's are the points on the surface of this sphere, c is the center of the sphere, and r is obviously the radius. And this is the equation of the ray. We have to mix these two together in some way in order to get an intersection.
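The parametric ray just described can be sketched in code. A minimal sketch, not from the course material; the helper names are illustrative:

```python
import math

# The parametric ray r(t) = o + t*d. If d has unit length, then t is exactly
# the distance traveled along the ray, which is why directions are normalized.

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def ray_point(o, d, t):
    """Point at parameter t along the ray with origin o and unit direction d."""
    return tuple(oc + t * dc for oc, dc in zip(o, d))
```

Because d is normalized, evaluating the ray at t = 3 lands exactly 3 units from the origin.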
What I'm going to do is substitute this r(t) in the place of p. What it will give me is (o + td − c) times (o + td − c), and this equals r squared. So this is a big multiplication between the two parentheses. And if I do this multiplication, I will see that there's going to be a term which gives me td times td — so something like t squared times d squared. There's going to be another term where o minus c is multiplied with td, and this happens twice, once from each side. And the rest is going to be a scalar term, because o minus c multiplied with o minus c is a scalar — I don't see t in there.

And this is already quite interesting, because if we smell this equation — what does this equation smell like? Raise your hand if you have some idea what this smells like. Yes? A polynomial equation of degree 2 — exactly, exactly. But I have to smell it first. So yes, indeed, that's a quadratic equation: I have t squared, t, and the scalar term, equal to zero.

What are the coefficients? Well, simple. The a is going to be d squared. What about the b? The b is going to be 2d times (o minus c), because this is what I multiply t with. And the scalar term is going to be all the rest. So this should be very simple to solve: I have as a solution the possible t1 and t2 that satisfy this equation.

And now the question is: is it possible that this equation has two solutions? Someone help me out — I missed some courses at kindergarten, so I don't know anything about this. Is it possible to get two solutions for this? I can't hear anything. Okay, excellent. This is the interesting part during a lecture: the teacher asks something, and no answers. And this can mean two things. One is that no one knows the answer.
And the second is that everyone knows the answer, and it's so trivial that no one wants to look like an idiot, so no one says anything. And I would imagine that maybe this is the second case. So, is it possible to get two solutions for this? Yes, yes — excellent, okay, cool. Well, it's simple: if this b squared minus 4ac is larger than 0, then the term under the square root is going to be some real number, and therefore t1 is minus b plus this number and t2 is minus b minus this number — two different solutions.

Is one solution possible? I still cannot hear anything. Yes, very cool — when this term equals zero. If this term is zero, then I'm adding zero and I'm subtracting zero, which is the same thing, so t1 equals t2. Very simple. And it is also possible that we have no real solution, if the term under the square root is less than zero.

Excellent. So this is quite beautiful, because we listed our expectations — the solution needs to look like something that can give me two, one, or zero answers — and if I do the math, this is exactly what happens. So this is the beauty of the whole thing.

Now let's imagine that I solved this equation and got the result that t1 is 2 and t2 is minus 2. What if I told you that these t's mean distances? I'm solving the parametric equations in a way that t means a distance. So the first intersection is at two times the unit distance — therefore two, in front of us. And there could be a solution, which is a classical case for a quadratic equation, where I get a second solution that is minus something. What does it mean? Yes — I think we can dismiss that, because it's behind our eyes. So we don't really care about that, do we?
Precisely, precisely. It's also possible that the ray starts in the middle of the sphere, and then this is indeed a perfectly normal thing to happen: there's one intersection in front of us and one intersection behind our backs. And obviously we don't care about the latter too much, so if we find a solution like this, we discard it, indeed. We're studying computer science, not politics — because if we were studying politics, we would be interested in what happens behind our backs. This is computer science, so we can discard all this information.
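The whole derivation above — substitute the ray into the sphere equation, read off the quadratic coefficients, and branch on the discriminant to get two, one, or zero solutions — can be sketched as follows. A minimal sketch assuming 3D tuples; the function names are illustrative, not from the course material:

```python
import math

# Ray-sphere intersection via the quadratic from the lecture:
# substituting r(t) = o + t*d into (p - c).(p - c) = R^2 gives
# a*t^2 + b*t + c = 0 with
#   a = d.d,  b = 2*d.(o - c),  c = (o - c).(o - c) - R^2.
# The discriminant b^2 - 4ac decides between two, one, or zero solutions,
# matching the geometric expectations listed before the calculation.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def intersect_sphere(o, d, center, radius):
    """Return the list of t values where the ray o + t*d meets the sphere."""
    oc = tuple(a - b for a, b in zip(o, center))
    a = dot(d, d)
    b = 2.0 * dot(d, oc)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return []                      # the ray misses the sphere entirely
    if disc == 0.0:
        return [-b / (2.0 * a)]        # tangent ray: one grazing hit
    s = math.sqrt(disc)
    return [(-b - s) / (2.0 * a), (-b + s) / (2.0 * a)]
```

Note that negative t values are still returned here — the caller discards intersections behind the ray origin, exactly as the lecture discusses.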
[{"start": 0.0, "end": 5.0, "text": " Okay, so I don't just talk."}, {"start": 5.0, "end": 9.24, "text": " After being immersed into the beauty of"}, {"start": 9.24, "end": 14.200000000000001, "text": " Fresno's equation and Snasmo,"}, {"start": 14.200000000000001, "end": 20.8, "text": " we are going to continue with kind of putting together a new ray tracing problem."}, {"start": 20.8, "end": 23.16, "text": " We know all about"}, {"start": 23.16, "end": 25.400000000000002, "text": " air-guests interactions and things like that,"}, {"start": 25.400000000000002, "end": 27.3, "text": " but we for instance don't know what"}, {"start": 27.3, "end": 30.3, "text": " the representation of a ray of light could be."}, {"start": 30.3, "end": 32.3, "text": " So let's go with this."}, {"start": 32.3, "end": 37.02, "text": " So a ray is basically starting somewhere and it is going somewhere."}, {"start": 37.02, "end": 40.46, "text": " It's basically it. This is what I have written here mathematically."}, {"start": 40.46, "end": 42.82, "text": " So this is a parametric equation."}, {"start": 42.82, "end": 44.900000000000006, "text": " We'll talk about this in a second."}, {"start": 44.900000000000006, "end": 47.46, "text": " So always the origin, this is where we start from."}, {"start": 47.46, "end": 48.980000000000004, "text": " This is a direction vector."}, {"start": 48.980000000000004, "end": 54.14, "text": " This is where the ray is going and"}, {"start": 54.14, "end": 56.3, "text": " T is the distance that it had gone."}, {"start": 56.3, "end": 57.3, "text": " It's basically it."}, {"start": 57.3, "end": 58.919999999999995, "text": " So if T is a large number,"}, {"start": 58.919999999999995, "end": 64.3, "text": " then the ray had traveled a lot and if T is one, then that's the distance."}, {"start": 64.3, "end": 69.62, "text": " Now we are always going to talk about vectors of human life"}, {"start": 69.62, "end": 72.3, "text": " if we are talking about direction 
vector."}, {"start": 72.3, "end": 77.25999999999999, "text": " And most vectors are normed in global illumination anyway,"}, {"start": 77.25999999999999, "end": 81.94, "text": " but I would like to state this because now this T is meaningful."}, {"start": 81.94, "end": 85.02, "text": " If D is a human life, then T is a scalar."}, {"start": 85.02, "end": 87.89999999999999, "text": " It's a number and it tells you the distance that it's traveling."}, {"start": 87.89999999999999, "end": 92.3, "text": " And this notation is a bit weird for many people because this"}, {"start": 92.3, "end": 94.42, "text": " are depends on T."}, {"start": 94.42, "end": 99.14, "text": " And if you come from the regular math courses,"}, {"start": 99.14, "end": 103.02, "text": " most of what you encounter is implicit equations."}, {"start": 103.02, "end": 105.94, "text": " So this could be an equation of a surface,"}, {"start": 105.94, "end": 108.66, "text": " f of x and y equals 0."}, {"start": 108.66, "end": 110.25999999999999, "text": " For instance, this is an example."}, {"start": 110.26, "end": 115.42, "text": " This would be the implicit equation of a sphere."}, {"start": 115.42, "end": 116.94, "text": " And this is an equation."}, {"start": 116.94, "end": 121.22, "text": " So basically you can say that whatever x and y"}, {"start": 121.22, "end": 126.7, "text": " that satisfies this equation is going to be the point of this sphere."}, {"start": 126.7, "end": 131.18, "text": " And this is going to be this collection of points that gives you a sphere."}, {"start": 131.18, "end": 133.86, "text": " And parametric equations don't look like that."}, {"start": 133.86, "end": 135.66, "text": " So with these parametric equations,"}, {"start": 135.66, "end": 139.42000000000002, "text": " you can see that the x coordinate I can dig out from a function"}, {"start": 139.42, "end": 144.38, "text": " that depends on T, the y coordinate I can dig out from a different,"}, {"start": 144.38, "end": 
147.85999999999999, "text": " perhaps a different function, but it also depends on T."}, {"start": 147.85999999999999, "end": 150.33999999999997, "text": " And I can write off this whole thing as a vector form."}, {"start": 150.33999999999997, "end": 155.82, "text": " So I'm not talking about x, y, but probably vectors."}, {"start": 155.82, "end": 159.77999999999997, "text": " So let's see an example."}, {"start": 159.77999999999997, "end": 163.1, "text": " The equation of a ray is such an example that you have seen above,"}, {"start": 163.1, "end": 165.82, "text": " but we're going to play a bit more with this."}, {"start": 165.82, "end": 168.29999999999998, "text": " And the first question is, why are we doing this?"}, {"start": 168.3, "end": 173.70000000000002, "text": " Why parametric equations instead of implicit functions?"}, {"start": 173.70000000000002, "end": 175.42000000000002, "text": " Well, you will see it's enough."}, {"start": 175.42000000000002, "end": 179.5, "text": " When we encounter a problem, and this is going to be easy to solve with parametric equations."}, {"start": 179.5, "end": 181.86, "text": " So this is a secret."}, {"start": 181.86, "end": 186.46, "text": " And now let's try to compute the intersection of a ray and a sphere."}, {"start": 186.46, "end": 190.22000000000003, "text": " So I cast a ray, and I would like to know which is the first object"}, {"start": 190.22000000000003, "end": 191.94, "text": " that I hit in the scene."}, {"start": 191.94, "end": 197.86, "text": " And if I have a scene of spheres, then this is the kind of calculation that I need to do."}, {"start": 197.86, "end": 199.94000000000003, "text": " So the expectations are the following."}, {"start": 199.94000000000003, "end": 202.54000000000002, "text": " I have a sphere and the ray."}, {"start": 202.54000000000002, "end": 208.9, "text": " And it is possible that the ray hits the sphere in two different points."}, {"start": 208.9, "end": 215.14000000000001, 
"text": " Well, what is impossible, if two hit points are possible, then one hit point is also possible."}, {"start": 215.14000000000001, "end": 217.78000000000003, "text": " This is essentially the tangent of a sphere."}, {"start": 217.78000000000003, "end": 220.62, "text": " It is just hitting at the very side."}, {"start": 220.62, "end": 224.46, "text": " Well, this is the rare side, but this still exists."}, {"start": 224.46, "end": 229.5, "text": " And obviously, it is possible that the ray does not hit the sphere at all."}, {"start": 229.5, "end": 234.34, "text": " So we have, again, listed our expectations before doing any kind of calculation."}, {"start": 234.34, "end": 237.98000000000002, "text": " And we will see that this will make things much more beautiful."}, {"start": 237.98000000000002, "end": 246.06, "text": " So the solution for this whole problem should be some kind of mathematical something that"}, {"start": 246.06, "end": 252.9, "text": " can give me two solutions, one solution, or maybe no solutions."}, {"start": 252.9, "end": 257.98, "text": " If I do the intersection routine and I get whatever else, then this should be incorrect."}, {"start": 257.98, "end": 259.86, "text": " So this is what I expect to see."}, {"start": 259.86, "end": 263.26, "text": " There is possibility of two one or zero solutions."}, {"start": 263.26, "end": 267.14, "text": " Well, this is the equation of a sphere."}, {"start": 267.14, "end": 272.1, "text": " P is, the P's are going to be the points on the surface of this sphere."}, {"start": 272.1, "end": 276.3, "text": " And the C is the center of the sphere and R is obviously the radius."}, {"start": 276.3, "end": 278.62, "text": " This is the equation of the ray."}, {"start": 278.62, "end": 283.1, "text": " We have to mix these two together in some way in order to get an intersection."}, {"start": 283.1, "end": 289.7, "text": " What I'm going to do is I'm going to substitute this R of T in the place of P."}, 
{"start": 289.7, "end": 296.78000000000003, "text": " So what it will give me is O plus TD minus C times O plus TD minus C."}, {"start": 296.78000000000003, "end": 298.22, "text": " It was R squared."}, {"start": 298.22, "end": 302.42, "text": " So this is a big multiplication between the two parentheses."}, {"start": 302.42, "end": 306.74, "text": " And if I do this actual multiplication, then I will see that there's going to be a term"}, {"start": 306.74, "end": 309.42, "text": " which gives me the TD times TD."}, {"start": 309.42, "end": 315.38, "text": " So there's going to be something like T squared, D squared, here."}, {"start": 315.38, "end": 321.22, "text": " And there's going to be another term where the O minus C is multiplied with a TD on the other side."}, {"start": 321.22, "end": 324.06, "text": " And this happens twice, because of both sides."}, {"start": 324.06, "end": 327.90000000000003, "text": " And the rest is going to be a scalar term, because O minus C"}, {"start": 327.90000000000003, "end": 330.22, "text": " I'm going to multiply with O minus C."}, {"start": 330.22, "end": 331.7, "text": " So this is going to be a scalar."}, {"start": 331.7, "end": 333.78000000000003, "text": " I don't see T in there."}, {"start": 333.78, "end": 341.9, "text": " And this is already quite interesting because if we smell this equation, what does this equation"}, {"start": 341.9, "end": 343.9, "text": " smell like?"}, {"start": 343.9, "end": 351.61999999999995, "text": " Raise your hand if you have some idea of what this smells like."}, {"start": 351.61999999999995, "end": 355.09999999999997, "text": " Yes, that's going to be correct."}, {"start": 355.09999999999997, "end": 359.34, "text": " It's a polynomial equation of degree 2."}, {"start": 359.34, "end": 360.34, "text": " Exactly."}, {"start": 360.34, "end": 361.34, "text": " Exactly."}, {"start": 361.34, "end": 362.34, "text": " So this is."}, {"start": 362.34, "end": 363.73999999999995,
"text": " But I have to smell it first."}, {"start": 363.73999999999995, "end": 367.21999999999997, "text": " So yes, indeed."}, {"start": 367.21999999999997, "end": 368.61999999999995, "text": " That's a quadratic equation."}, {"start": 368.61999999999995, "end": 374.14, "text": " So I have T squared, T and the scalar term equal zero."}, {"start": 374.14, "end": 375.34, "text": " What are the coefficients?"}, {"start": 375.34, "end": 376.34, "text": " Well, simple."}, {"start": 376.34, "end": 378.62, "text": " The A is going to be D squared."}, {"start": 381.82, "end": 384.34, "text": " What about the B?"}, {"start": 384.34, "end": 387.29999999999995, "text": " I mean, not the T, the B again."}, {"start": 387.29999999999995, "end": 388.46, "text": " I apologize."}, {"start": 388.46, "end": 392.02, "text": " The B is going to be the 2D O minus C, because this is what I'm going to multiply"}, {"start": 392.02, "end": 395.65999999999997, "text": " T with, and the scalar term is going to be all the rest."}, {"start": 399.41999999999996, "end": 402.02, "text": " So this should be very simple to solve."}, {"start": 402.02, "end": 407.46, "text": " I have as a solution the possible T1 and T2 that satisfy this equation."}, {"start": 407.46, "end": 415.26, "text": " And now the question is, is it possible that this equation has two solutions?"}, {"start": 415.26, "end": 417.06, "text": " Someone help me out."}, {"start": 417.06, "end": 419.85999999999996, "text": " I missed some courses at kindergarten."}, {"start": 419.86, "end": 422.46000000000004, "text": " So I don't know anything about this."}, {"start": 422.46000000000004, "end": 424.98, "text": " Is it possible to get two solutions for this?"}, {"start": 428.98, "end": 430.78000000000003, "text": " I can't hear anything."}, {"start": 430.78000000000003, "end": 432.1, "text": " OK, excellent."}, {"start": 432.1, "end": 433.1, "text": " Excellent."}, {"start": 433.1, "end": 439.82, "text": " This is the interesting part during the lecture because the teacher asked something and no"}, {"start": 439.82, "end": 440.58000000000004, "text": " answers."}, {"start": 440.58000000000004, "end": 442.74, "text": " And this can mean two things."}, {"start": 442.74, "end": 445.7, "text": " One thing is that no one knows the answer."}, {"start": 445.7, "end": 450.26, "text": " And the second is that everyone knows the answer and it's so trivial that no one wants to"}, {"start": 450.26, "end": 451.26, "text": " look like an idiot."}, {"start": 451.26, "end": 452.62, "text": " So no one says anything."}, {"start": 452.62, "end": 455.34, "text": " And I would imagine that maybe this is the second case."}, {"start": 455.34, "end": 459.53999999999996, "text": " So is it possible to get two solutions for this?"}, {"start": 459.53999999999996, "end": 460.53999999999996, "text": " Yes, yes."}, {"start": 460.53999999999996, "end": 461.06, "text": " Excellent."}, {"start": 461.06, "end": 462.94, "text": " OK, cool."}, {"start": 462.94, "end": 465.9, "text": " Well, it's simple."}, {"start": 465.9, "end": 471.02, "text": " If this B squared minus 4 AC is larger than 0, then the second term under the square root"}, {"start": 471.02, "end": 473.82, "text": " is going to be a number, some real number."}, {"start": 473.82, "end": 480.38, "text": " And therefore, this is going to be T1 is minus B plus this number, T2 is minus B minus"}, {"start": 480.38, "end": 481.78, "text": " this number."}, {"start": 481.78, "end": 484.78, "text": " And therefore, this is going to be two different solutions."}, {"start": 484.78, "end": 491.46, "text": " Maybe I should be wrong."}, {"start": 491.46, "end": 492.46, "text": " So much here in the course."}, {"start": 492.46, "end": 495.46, "text": " Can you hear this?"}, {"start": 495.46, "end": 500.9, "text": " Can you take it off?"}, {"start": 500.9, "end": 504.21999999999997, "text": " No, not always."}, {"start": 504.21999999999997, "end": 505.21999999999997, "text": " I could take them off."}, {"start": 505.21999999999997, "end": 508.21999999999997, "text": " Yes, and look like an academic."}, {"start": 508.21999999999997, "end": 509.21999999999997, "text": " OK?"}, {"start": 509.21999999999997, "end": 511.02, "text": " Well, is one solution possible?"}, {"start": 511.02, "end": 514.06, "text": " I still cannot hear anything."}, {"start": 514.06, "end": 515.06, "text": " Yes."}, {"start": 515.06, "end": 516.06, "text": " Very cool."}, {"start": 516.06, "end": 523.74, "text": " Someone tell me. It is when this thing equals zero."}, {"start": 523.74, "end": 527.74, "text": " So if this term is zero, then I'm adding zero."}, {"start": 527.74, "end": 528.74, "text": " I'm subtracting zero."}, {"start": 528.74, "end": 529.74, "text": " This is the same thing."}, {"start": 529.74, "end": 532.78, "text": " So T1 is very simple."}, {"start": 532.78, "end": 539.86, "text": " And it is also possible that we have no real solution if this square root term, I mean,"}, {"start": 539.86, "end": 543.58, "text": " the term under the square root is less than zero."}, {"start": 543.58, "end": 544.34, "text": " Excellent."}, {"start": 544.34, "end": 549.26, "text": " So this is quite beautiful because we listed our expectations."}, {"start": 549.26, "end": 554.26, "text": " And it indeed needs to look like something that can give me two, one or zero solutions."}, {"start": 554.26, "end": 556.74, "text": " And if I do the math, this is exactly what happens."}, {"start": 556.74, "end": 559.98, "text": " So this is the beauty of the whole thing."}, {"start": 559.98, "end": 566.42, "text": " And let's imagine that I solved this equation and I got the result that T1 is 2 and T2 is"}, {"start": 566.42, "end": 569.5, "text": " minus 2."}, {"start": 569.5, "end": 576.1800000000001, "text": " And now, what if I told you that these T's mean distances."}, {"start": 576.1800000000001, "end": 581.1, "text": " So I'm solving the parametric equations in a way that this T means a distance."}, {"start": 581.1, "end": 587.3000000000001, "text": " So it means that the first intersection is two times the unit distance, therefore two"}, {"start": 587.3000000000001, "end": 589.26, "text": " in the front."}, {"start": 589.26, "end": 593.4200000000001, "text": " There could be a solution, which is a classical case for a quadratic equation where I get"}, {"start": 593.4200000000001, "end": 597.02, "text": " the second solution that is minus something."}, {"start": 597.02, "end": 598.02, "text": " What does it mean?"}, {"start": 598.02, "end": 599.02, "text": " Yes."}, {"start": 599.02, "end": 602.4200000000001, "text": " I think we can dismiss that because it's behind our eyes."}, {"start": 602.4200000000001, "end": 605.1800000000001, "text": " So we don't really care about that, do we?"}, {"start": 605.1800000000001, "end": 606.1800000000001, "text": " Precisely."}, {"start": 606.1800000000001, "end": 607.1800000000001, "text": " Precisely."}, {"start": 607.18, "end": 612.18, "text": " So it's possible that the ray starts in the middle of the sphere and then this is indeed"}, {"start": 612.18, "end": 613.9799999999999, "text": " a perfectly normal thing to happen."}, {"start": 613.9799999999999, "end": 620.06, "text": " That there's one intersection in front of us and there's one intersection to our backs."}, {"start": 620.06, "end": 622.9, "text": " And obviously we don't care about it too much."}, {"start": 622.9, "end": 626.42, "text": " And if we find a solution like this, we discard it indeed."}, {"start": 626.42, "end": 631.74, "text": " So we're studying computer science and we're not studying politics because if we would"}, {"start": 631.74, "end": 636.4599999999999, "text": " be studying politics, we would be interested in what happens behind our backs."}, {"start": 636.46, "end": 638.14, "text": " This is computer science."}, {"start": 638.14, "end": 668.1, "text": " So we can discard all this information."}]
Two Minute Papers
https://www.youtube.com/watch?v=LD6xRkCJ6ek
TU Wien Rendering #6 - Snell's Law and Total Internal Reflection
Why does the straw look bent in a glass of water? Why is the world distorted when looking through a lens or a glass marble ball? How can light get trapped in transparent surfaces? We formulate Snell's law, an incredibly simple equation to answer these questions. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinement are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Teschnische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
There is still one thing that we don't know and it's about angles. So we know about probabilities. Whatever you give me, I can tell you what is the probability that light gets reflected or refracted. I know the probabilities. But I don't know about angles. And what we need to know is that rays of light slow down. They travel with the speed of light, but they slow down as they enter a medium. Because there are atoms, particles in there, and it's more difficult to get through. So light slows down. The index of refraction tells you by exactly how much. So the index of refraction of a medium is given by this fraction: the speed of light in vacuum, that we know, over the speed of light in this medium. So this is how we can write it up. Let's look at an example. The index of refraction of glass is 1.5. So we know exactly what the speed of light inside glass is. Well, it's 300 million meters per second in vacuum. And what we know is this equation. So this is the index of refraction. So I can just reorder this thing and conclude that light, if it travels in vacuum, goes 300 million meters per second. But in glass, it loses a third of its velocity and it's only 200 million meters per second. So that's a pretty easy calculation and it's pretty neat. And another absolutely beautiful thing: hopefully you have studied the Maxwell equations and Poynting vectors in physics. Light is ultimately a wave. So here, above and below, you can see some wave behavior. And the rays are essentially perpendicular to the wavefronts of these waves. So light can be imagined as rays if you take into consideration that I would need to compute many of these wavefronts in order to account for the wave behavior. And don't look at the red. Only look at the blue. This is above. This is vacuum, and below, this could be for instance glass. And you can see that the waves slow down in this medium.
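The little calculation above, reordering n = c_vacuum / c_medium, can be written out directly. A tiny sketch (the function name is mine, not from the lecture):

```python
# Index of refraction: n = c_vacuum / c_medium.
C_VACUUM = 3.0e8  # meters per second, the rounded value used in the lecture

def speed_in_medium(n):
    """Reorder n = c_vacuum / c_medium to get c_medium = c_vacuum / n."""
    return C_VACUUM / n

# Glass with n = 1.5: light loses a third of its velocity.
print(speed_in_medium(1.5))  # 200 million meters per second
```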
And what this means, if we go back to the definition of the wavefronts, the red lines, then they are essentially bending, because the wavefronts are going to look like this. So it's very interesting, because if you imagine light as a wave, it only slows down. But if you imagine light as a ray, then it bends. It changes direction. So I think that's absolutely beautiful. And the question is: why is the light refracting inwards? Because what I would imagine is that it continues normally, it continues its way with this theta t equals theta i. Because it will just continue its way. And it doesn't continue its way, but bends, because the medium is denser. And the question is why. And now we have Khan Academy time. Raise your hand if you know Khan Academy. Awesome. Well, well educated people. So this is shamelessly stolen from Khan Academy, because this is the best way to describe how refraction works. So basically you imagine that you have a large car, and the air-glass interface is now road and mud. I mean, the road is the air and the mud is glass, for instance. And imagine as you are approaching this boundary line between the two, then the first wheel of the car, like the lower left on this image, is entering the mud. But on the other side, the wheels are still on the road. So therefore this wheel will slow down in the mud, but this one is still going as fast as it used to be. So what does the car do? If this happens, it will start to turn. And you know exactly where it will turn, because this side is going slower and this side is going faster. So therefore it will turn inwards. So this is, I think, an amazing interpretation of the whole thing. I think it's also easy to explain with the waves, because when the waves are slowed down, then we can see that the circles will get a bigger radius. And if you go perpendicular to the waves, then we will bend down. Exactly. Like in the previous figure. That's another intuition. That's one of the first ones, actually. Because I don't know these things.
No, this is intuition. That's why it's a bit misleading. It's nice. This is strictly intuition. So if you would start to model rays as, like, trucks going into... We're going to encounter problems. You can take the picture where this part of the wave hits the medium first, and by that the entire wave is rotating. It's like a problem. I wouldn't have prayed. No. I tend to give multiple ways to interpret things, because different minds work differently. Some graphical ways work better for some people. Okay. So, Snell's law. And we're almost done for today. Snell's law tells you at what angle refracted rays are going to continue their path. And this is given by this expression: the sines of the angles relate as the velocities, as the reciprocals of the indices of refraction. Okay. So let's do the air-plus-glass example of the previous image. Let's state our expectations before we go. So I'm interested in the relation of theta i versus theta t. So I know these expressions exactly. How much is theta i? How many? 60. It's 60. Okay. Excellent. How much is theta t in degrees? Very far. It's something around 35. Exactly. Okay. So the light is refracted inwards. Therefore, theta t must be less than theta i. So this must be less than this. Let's compute the equation and see if this works. And if it doesn't work, we're going to call out the physicists. So let's just reorder some things and let's put there the indices of refraction. And the incoming light angle that we know. And just some very simple reordering. We are almost there. And if we actually compute the sine of 60 degrees, we get this. Well, we can also carry out the division. But at this point, I'm not interested in the sine of theta t. I'm interested in theta t. So I would invert the equation by applying the inverse sine to both sides. So this theta t should be the arcsine of this. And if I compute everything back to degrees, then I will get this theta t, which is 34.75.
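The 60-degree example above can be reproduced numerically. A short sketch; note that the quoted 34.75 degrees suggests the slides use a glass index of about 1.52, since exactly n = 1.5 would give roughly 35.3 degrees. The function name is mine:

```python
import math

def snell_theta_t(theta_i_deg, n1, n2):
    """Snell's law, n1 * sin(theta_i) = n2 * sin(theta_t), solved for theta_t."""
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    return math.degrees(math.asin(s))  # arcsine, then convert back to degrees

# Air to glass, incoming at 60 degrees (n ~ 1.52 assumed for the slide's glass):
theta_t = snell_theta_t(60.0, 1.0, 1.52)
print(round(theta_t, 2))  # close to the 34.75 quoted in the lecture
```

As expected before doing the math, the refracted angle comes out smaller than the incoming 60 degrees.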
So whoever said 35: that was very close to the actual result. But also not to forget that there are different kinds of glasses. I mean, there are multiple ways of creating and manufacturing glasses. And they have different indices of refraction. More or less the same, but still different. But we can see that this is in a really good agreement with what we see in real life. Well, what did we say? Theta t should be less than theta i. And 35 is definitely less than 60. So again, physics works and physicists are smart people. And just another example. If you think about the car example, or whichever example you like better, you will hopefully immediately see that if you would be going with the yellow arrow, this is going to bend inwards after going back from the water to the air. Now, whoa, hold it right there. What is happening? I don't see any refraction whatsoever. Right? So it seems to me that if I go back at around... how much is this in degrees? That's 50 degrees. Exactly. So something fishy happens at 50 degrees. Well, I don't know what is happening. I'll tell you the name and we're going to compute whether this is possible or not. Well, if it's not possible, then our math sucks. But let's see. So what we call this is total internal reflection. There is a critical angle. And after this critical angle, there's no more refraction. Only reflection happens. There are many examples of that and there are many applications of that. This is one of the more beautiful examples. So let's compute what's going on here. What I know is that I have the indices of refraction. I know this degree that we just dealt with. And something interesting should happen here. And something interesting already happened. So I just plugged in everything that I have seen on this image. And I get this. And this is awfully, horribly, terribly wrong. Someone please help me out. Why is that? It's okay. Yes. The sine can't be bigger than one. Exactly.
So the range of the sine is between minus one and one, at least according to my experience. So it's saying that the sine of an angle is more than one. It's mathematically not possible. So it says that there's no such angle. What would be the angle of refraction if I were using 50 degrees? It says something that mathematically doesn't make any sense. So math actually suggests to you, if you use the right numbers, it suggests to you already that this total internal reflection happens. Let's try to compute the critical angle. And for this, I just reorder things. This is the critical angle that I will be trying to compute. Well, if I have this theta one and it is relatively small, then there is going to be refraction. There is this critical angle, on the second figure, at which I have this 90-degree refraction. So it says that at the critical angle, this thing is going to be 90 degrees. And after that... so this is smaller than this. After this critical angle, it's only going to be reflection. Now let's try to compute this. Theta t is 90 degrees here. So what I put here is what I'm interested in. And this happens when this refraction is at 90 degrees. So I put there this 90 degrees explicitly, and I want to know this theta one. That's going to be the critical angle. Well, if I actually do the computation with the 90 degrees, then I'm going to get one for the sine. So this is n2 over n1. Well, I'm still not interested in the sine of this angle. I'm interested in the angle. So I have to invert both sides of the equation. And this is the definition of the critical angle. And if you type critical angle into Wikipedia, you are going to get the very same formula. But the most interesting thing is that you can actually derive this yourself. And this is not a huge derivation. This is very simple. This is where this 90-degree refraction happens. So what is our expectation for this critical angle?
Let's look at the reality again. What? So this is... I'm just trying to hint without telling you the solution. Let's try it without hints. What could be the critical angle here? Raise your hand if you know the answer. Sorry, for a pedagogical reason, I will ask someone who I haven't asked before. Let's see if it's correct. I have to ask you. Not to be important. How do you get important questions? It has to be something smaller than 50 degrees. Because that was only... Exactly. So the usual answer I get is 50 degrees, because I see total internal reflection there. But total internal reflection means that after some point, there is only going to be reflection. It doesn't mean that this is that point. If I try 60 degrees, I will also see reflection. But this doesn't mean that 60 degrees is the critical angle. It's before that, at some point. What is your answer? Yes, but I was thinking that if it's the critical angle, for example, say 50 is the critical angle, maybe we have also refraction, and then the horizontal one. Exactly. What you have seen on the figure. So at the critical angle, you see this. So this is over the critical angle. So it has to be less than 50 degrees. Less than this 50 degrees. So this is very simple from here. Let's just substitute the indices of refraction. 41.81. Physics works and we are still alive. So that's basically it for today. We have used reality to be our judge. We are not just writing formulas on paper and then being happy about how much we can understand or how much we can memorize of them. We put everything to use, and you will see all of this in C++ code, not so long from now. So that would be the introductory course, and I'll see you next week.
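Both computations from this last part, the sine coming out above one at 50 degrees and the 41.81-degree critical angle, can be checked numerically. A small Python sketch, assuming glass n1 = 1.5 against air n2 = 1, which reproduces the lecture's numbers; the function names are mine:

```python
import math

def critical_angle_deg(n1, n2):
    """sin(theta_c) = n2 / n1; meaningful only going into a less dense medium (n1 > n2)."""
    return math.degrees(math.asin(n2 / n1))

def refracts(theta_i_deg, n1, n2):
    """False when Snell's law would need sin(theta_t) > 1: total internal reflection."""
    return n1 * math.sin(math.radians(theta_i_deg)) / n2 <= 1.0

print(round(critical_angle_deg(1.5, 1.0), 2))  # 41.81, as derived in the lecture
print(refracts(50.0, 1.5, 1.0))                # False: beyond the critical angle
print(refracts(30.0, 1.5, 1.0))                # True: below it, refraction still happens
```

As expected, the critical angle lands below the 50 degrees where we already saw only reflection.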
[{"start": 0.0, "end": 4.8, "text": " There is still one thing that we don't know and it's about angles."}, {"start": 4.8, "end": 6.8, "text": " So we know about probabilities."}, {"start": 6.8, "end": 12.8, "text": " Whatever you give me, I can't tell you what is the probability that life gets reflected or refracted."}, {"start": 12.8, "end": 14.1, "text": " I know the probabilities."}, {"start": 14.1, "end": 16.1, "text": " But I don't know about angles."}, {"start": 16.1, "end": 20.6, "text": " And what we need to know is that light, rays of light slow down."}, {"start": 20.6, "end": 25.1, "text": " They travel with a speed of light but they slow down as they enter a medium."}, {"start": 25.1, "end": 30.200000000000003, "text": " Because there is atoms, particles in there and it's more difficult to get through."}, {"start": 30.200000000000003, "end": 31.700000000000003, "text": " So light slows down."}, {"start": 31.700000000000003, "end": 36.7, "text": " The index of refraction tells you by exactly how much."}, {"start": 36.7, "end": 40.7, "text": " So the index of refraction of a medium is given by this fraction."}, {"start": 40.7, "end": 45.7, "text": " The speed of light in vacuum that we know over the speed of light in this medium."}, {"start": 45.7, "end": 48.7, "text": " So this is how we can write it up."}, {"start": 48.7, "end": 50.6, "text": " Let's look at an example."}, {"start": 50.6, "end": 53.6, "text": " The index of refraction of glass is 1.5."}, {"start": 53.6, "end": 59.1, "text": " So we know exactly what the speed of light inside glass is."}, {"start": 59.1, "end": 64.4, "text": " Well, it's 300 million meters per second in vacuum."}, {"start": 64.4, "end": 66.6, "text": " And what we know is this equation."}, {"start": 66.6, "end": 68.2, "text": " So this is the index of refraction."}, {"start": 68.2, "end": 75.6, "text": " So I can just reorder this thing and conclude that light, if it travels in vacuum,"}, {"start": 75.6, "end": 79.4, 
"text": " 300 million meters per second."}, {"start": 79.4, "end": 86.0, "text": " But in glass, it loses the third of its velocity and it's only 200 million meters per second."}, {"start": 86.0, "end": 89.5, "text": " So that's a pretty easy calculation and it's pretty neat."}, {"start": 89.5, "end": 94.80000000000001, "text": " And another absolutely beautiful thing, hopefully you have studied the Maxwell equations"}, {"start": 94.80000000000001, "end": 97.0, "text": " and pointing vectors in physics."}, {"start": 97.0, "end": 99.0, "text": " Light is ultimately a wave."}, {"start": 99.0, "end": 104.0, "text": " So here above and below you can see some wave behavior."}, {"start": 104.0, "end": 108.80000000000001, "text": " And the ray is essentially the wavefronts of these waves."}, {"start": 108.8, "end": 118.1, "text": " So light can be imagined as rays if you take into consideration that I would need to compute"}, {"start": 118.1, "end": 122.39999999999999, "text": " many of these wavefronts in order to take account for the wave behavior."}, {"start": 122.39999999999999, "end": 124.39999999999999, "text": " And don't look at the red."}, {"start": 124.39999999999999, "end": 126.4, "text": " Only look at the blue."}, {"start": 126.4, "end": 127.6, "text": " This is above."}, {"start": 127.6, "end": 131.6, "text": " This is vacuum and below this could be for instance glass."}, {"start": 131.6, "end": 135.6, "text": " And you can see that the waves flow down in this medium."}, {"start": 135.6, "end": 142.6, "text": " And what this means if we go back to the definition of the wavefronts, the red lines,"}, {"start": 142.6, "end": 146.79999999999998, "text": " then they are essentially bending because the wavefronts are going to look like this."}, {"start": 146.79999999999998, "end": 153.4, "text": " So it's very interesting because if you imagine light as a wave, it only slows down."}, {"start": 153.4, "end": 157.2, "text": " But if you imagine light as a ray, then 
it bends."}, {"start": 157.2, "end": 160.2, "text": " It changes direction."}, {"start": 160.2, "end": 166.39999999999998, "text": " So I think that's absolutely beautiful."}, {"start": 166.39999999999998, "end": 171.6, "text": " And the question is why is the light refracting inwards?"}, {"start": 171.6, "end": 180.6, "text": " Because what I would imagine is that it continues normally, it continues its way with this theta"}, {"start": 180.6, "end": 183.6, "text": " t equals theta i."}, {"start": 183.6, "end": 186.39999999999998, "text": " Because it will just continue its way."}, {"start": 186.39999999999998, "end": 189.79999999999998, "text": " And it doesn't continue its way, but it it's denser."}, {"start": 189.8, "end": 191.8, "text": " And the question is why."}, {"start": 191.8, "end": 195.60000000000002, "text": " And now we have Khan Academy time."}, {"start": 195.60000000000002, "end": 198.4, "text": " Raise your hand if you know Khan Academy."}, {"start": 198.4, "end": 199.4, "text": " Awesome."}, {"start": 199.4, "end": 202.20000000000002, "text": " Well, well educated people."}, {"start": 202.20000000000002, "end": 208.0, "text": " So this is shamelessly stolen from Khan Academy because this is the best way to describe"}, {"start": 208.0, "end": 209.8, "text": " how a refractual works."}, {"start": 209.8, "end": 217.4, "text": " So basically you imagine that you have a large car and the air vacuole interface is now"}, {"start": 217.4, "end": 221.4, "text": " road and mud."}, {"start": 221.4, "end": 225.70000000000002, "text": " I mean, the road is the air and mud is glass, for instance."}, {"start": 225.70000000000002, "end": 231.48000000000002, "text": " And imagine as you are approaching this boundary line between the two, then the first wheel"}, {"start": 231.48000000000002, "end": 236.36, "text": " of the car, like the lower left on this image, is entering the mud."}, {"start": 236.36, "end": 240.52, "text": " But on the other side, the wheels are 
still on the road."}, {"start": 240.52, "end": 245.6, "text": " So therefore this wheel will slow down in the mud, but this is still going as fast as"}, {"start": 245.6, "end": 246.88, "text": " it used to be."}, {"start": 246.88, "end": 248.76, "text": " So what do the car do?"}, {"start": 248.76, "end": 252.72, "text": " If this happens, it will start to turn."}, {"start": 252.72, "end": 256.08, "text": " And you know exactly where it will turn because this is going slow."}, {"start": 256.08, "end": 257.08, "text": " This is going faster."}, {"start": 257.08, "end": 259.64, "text": " So therefore it will turn inwards."}, {"start": 259.64, "end": 262.88, "text": " So this is, I think, an amazing interpretation of the bull thing."}, {"start": 262.88, "end": 268.48, "text": " I think also it's easy to explain with the waves because when the waves are slowed down,"}, {"start": 268.48, "end": 273.48, "text": " then the direction, then we can see that the circles will get bigger radius."}, {"start": 273.48, "end": 278.48, "text": " And if you go perpendicular to the waves, then we will go down."}, {"start": 278.48, "end": 279.48, "text": " Exactly."}, {"start": 279.48, "end": 281.48, "text": " Like in the previous figure."}, {"start": 281.48, "end": 283.48, "text": " That's another intuition."}, {"start": 283.48, "end": 285.48, "text": " That's one of the first pieces actually."}, {"start": 285.48, "end": 288.48, "text": " Because I don't know things."}, {"start": 288.48, "end": 289.48, "text": " No, this is intuition."}, {"start": 289.48, "end": 292.48, "text": " That's why it's a bit misleading."}, {"start": 292.48, "end": 294.48, "text": " It's nice."}, {"start": 294.48, "end": 296.48, "text": " This is strictly intuition."}, {"start": 296.48, "end": 303.48, "text": " So if you would start to model a race of like trucks on going to..."}, {"start": 303.48, "end": 305.48, "text": " We're going to encounter problems."}, {"start": 305.48, "end": 310.48, "text": " You 
can take a picture because this part of the wave hits the median first and by that"}, {"start": 310.48, "end": 312.48, "text": " the entire wave is rotating."}, {"start": 312.48, "end": 313.48, "text": " It's like a problem."}, {"start": 313.48, "end": 315.48, "text": " I wouldn't have prayed."}, {"start": 315.48, "end": 317.48, "text": " No."}, {"start": 317.48, "end": 323.48, "text": " I tend to give multiple ways to interpret things because different minds work differently."}, {"start": 323.48, "end": 328.48, "text": " Some graphical ways are working for different people better."}, {"start": 328.48, "end": 329.48, "text": " Okay."}, {"start": 329.48, "end": 331.48, "text": " So, Stas law."}, {"start": 331.48, "end": 333.48, "text": " And we're almost done for today."}, {"start": 333.48, "end": 338.48, "text": " Stas law tells you in what angle-refracted race are going to continue their path."}, {"start": 338.48, "end": 344.48, "text": " And this is given by this expression signs of the angles against velocities,"}, {"start": 344.48, "end": 349.48, "text": " against the reciprocal of indisputable refraction."}, {"start": 349.48, "end": 355.48, "text": " Okay. So let's do the error plus example of the previous image."}, {"start": 355.48, "end": 358.48, "text": " Let's state our expectations before we go."}, {"start": 358.48, "end": 363.48, "text": " So I'm interested in the relation of theta i versus theta t."}, {"start": 363.48, "end": 366.48, "text": " So I know these expressions exactly."}, {"start": 366.48, "end": 368.48, "text": " How much is theta i?"}, {"start": 368.48, "end": 369.48, "text": " How many?"}, {"start": 369.48, "end": 370.48, "text": " 60."}, {"start": 370.48, "end": 371.48, "text": " It's 60."}, {"start": 371.48, "end": 372.48, "text": " Okay. 
Excellent."}, {"start": 372.48, "end": 375.48, "text": " How much is theta t in degrees?"}, {"start": 375.48, "end": 378.48, "text": " Very far."}, {"start": 378.48, "end": 381.48, "text": " It's something around 35."}, {"start": 381.48, "end": 382.48, "text": " Exactly."}, {"start": 382.48, "end": 384.48, "text": " Okay. So the light is reflected in worse."}, {"start": 384.48, "end": 388.48, "text": " Therefore, the theta t must be less than the theta i."}, {"start": 388.48, "end": 391.48, "text": " So this must be less than this."}, {"start": 391.48, "end": 394.48, "text": " Let's compute the equation and see if this works."}, {"start": 394.48, "end": 398.48, "text": " And if it doesn't work, we're going to call out the physicist."}, {"start": 398.48, "end": 405.48, "text": " So let's just reorder some things and let's put there the indices of refractions."}, {"start": 405.48, "end": 408.48, "text": " And the incoming light angle that we know."}, {"start": 408.48, "end": 411.48, "text": " And just some very simple reordering."}, {"start": 411.48, "end": 415.48, "text": " We are almost there."}, {"start": 415.48, "end": 419.48, "text": " And if we actually compute the size of 60 degrees, we get this."}, {"start": 419.48, "end": 424.48, "text": " Well, this is all we can also carry out the division."}, {"start": 424.48, "end": 428.48, "text": " But at this point, I'm not interested in the size of theta t."}, {"start": 428.48, "end": 430.48, "text": " I'm interested in theta t."}, {"start": 430.48, "end": 434.48, "text": " So I would multiply both sides invert the equation by multiplying with the inverse of the theta t."}, {"start": 434.48, "end": 437.48, "text": " So this theta t should be the arc sign of this."}, {"start": 437.48, "end": 442.48, "text": " And if I compute everything back to degrees, then I will get this theta t,"}, {"start": 442.48, "end": 445.48, "text": " which is 34.75."}, {"start": 445.48, "end": 450.48, "text": " So whoever said that 35 was 
very close to the actual result."}, {"start": 450.48, "end": 454.48, "text": " But also not to forget that there are different kinds of glasses."}, {"start": 454.48, "end": 460.48, "text": " I mean, there's multiple ways of creating and manufacturing glasses."}, {"start": 460.48, "end": 465.48, "text": " And they have different indices of refraction."}, {"start": 465.48, "end": 468.48, "text": " More or less the same, but it's still different."}, {"start": 468.48, "end": 474.48, "text": " But we can see that this is in a really good agreement with what we see in real life."}, {"start": 474.48, "end": 476.48, "text": " Well, what did we say?"}, {"start": 476.48, "end": 478.48, "text": " Theta t should be less than theta i."}, {"start": 478.48, "end": 481.48, "text": " But 35 is definitely less than 60."}, {"start": 481.48, "end": 487.48, "text": " So again, physics works and physicists are smart people."}, {"start": 487.48, "end": 491.48, "text": " And just another example."}, {"start": 491.48, "end": 496.48, "text": " If you think about the car example or whichever example you like better,"}, {"start": 496.48, "end": 504.48, "text": " you will hopefully immediately see that if you would be going with the yellow arrow,"}, {"start": 504.48, "end": 510.48, "text": " this is going to bend inwards after going back from the water to the air."}, {"start": 510.48, "end": 516.48, "text": " Now, whoa, hold it right there."}, {"start": 516.48, "end": 520.48, "text": " What is happening?"}, {"start": 520.48, "end": 525.48, "text": " I don't see any reflection whatsoever."}, {"start": 525.48, "end": 526.48, "text": " Right?"}, {"start": 526.48, "end": 535.48, "text": " So it seems to me that if I go back at around how much is this in degrees,"}, {"start": 535.48, "end": 539.48, "text": " that's 50 degrees."}, {"start": 539.48, "end": 540.48, "text": " Exactly."}, {"start": 540.48, "end": 543.48, "text": " So something fishy happens at 50 degrees."}, {"start": 543.48, "end": 
546.48, "text": " Well, I don't know what is happening."}, {"start": 546.48, "end": 550.48, "text": " I'll tell you the name and we're going to compute whether this is possible or not."}, {"start": 550.48, "end": 554.48, "text": " Well, if it's not possible that our math sucks."}, {"start": 554.48, "end": 556.48, "text": " But let's see."}, {"start": 556.48, "end": 559.48, "text": " So what we call this is total internal reflection."}, {"start": 559.48, "end": 561.48, "text": " There is a critical angle."}, {"start": 561.48, "end": 566.48, "text": " And after this critical angle, there's no more reflection."}, {"start": 566.48, "end": 569.48, "text": " There's no more reflection."}, {"start": 569.48, "end": 571.48, "text": " Only reflection happens."}, {"start": 571.48, "end": 574.48, "text": " So many examples of that and there's many applications of that."}, {"start": 574.48, "end": 577.48, "text": " This is one of the more beautiful examples."}, {"start": 577.48, "end": 581.48, "text": " So let's compute what's going on here."}, {"start": 581.48, "end": 585.48, "text": " What I know is that I have the indices of reflections I know."}, {"start": 585.48, "end": 590.48, "text": " I know this degree that we just dealt with."}, {"start": 590.48, "end": 593.48, "text": " And something interesting should happen here."}, {"start": 593.48, "end": 596.48, "text": " And something interesting already happened."}, {"start": 596.48, "end": 601.48, "text": " So I just plugged in everything what I have seen on this image."}, {"start": 601.48, "end": 602.48, "text": " And I get this."}, {"start": 602.48, "end": 608.48, "text": " And this is awfully horribly, terribly wrong."}, {"start": 608.48, "end": 610.48, "text": " Someone please help me out."}, {"start": 610.48, "end": 614.48, "text": " Why is that?"}, {"start": 614.48, "end": 619.48, "text": " It's okay."}, {"start": 619.48, "end": 620.48, "text": " Yes."}, {"start": 620.48, "end": 621.48, "text": " I can be bigger than 
one."}, {"start": 621.48, "end": 622.48, "text": " Exactly."}, {"start": 622.48, "end": 626.48, "text": " So the support of the sign is between one and minus one,"}, {"start": 626.48, "end": 629.48, "text": " at least according to my experiences."}, {"start": 629.48, "end": 633.48, "text": " So it's saying that the sign of an angle is more than one."}, {"start": 633.48, "end": 635.48, "text": " It's mathematically not possible."}, {"start": 635.48, "end": 639.48, "text": " So it says that there's no such angle."}, {"start": 639.48, "end": 643.48, "text": " What would be the angle of reflection if I would use using 50 degrees?"}, {"start": 643.48, "end": 646.48, "text": " Then it says something that mathematically doesn't make any sense."}, {"start": 646.48, "end": 651.48, "text": " So math actually suggests to you if you use the right numbers."}, {"start": 651.48, "end": 656.48, "text": " It suggests to you already that this totally internal reflection would happen."}, {"start": 656.48, "end": 659.48, "text": " Let's try to compute the critical angle."}, {"start": 659.48, "end": 661.48, "text": " And this, I just reorder things."}, {"start": 661.48, "end": 667.48, "text": " This is hopefully the critical angle that I will be trying to compute."}, {"start": 667.48, "end": 671.48, "text": " Well, if I have this theta one, this is relatively small,"}, {"start": 671.48, "end": 674.48, "text": " I would then there is going to be a reflection."}, {"start": 674.48, "end": 677.48, "text": " There is this critical angle on the second figure,"}, {"start": 677.48, "end": 682.48, "text": " at which I have this 90 degree reflection."}, {"start": 682.48, "end": 684.48, "text": " So it says that at the critical angle,"}, {"start": 684.48, "end": 687.48, "text": " this thing is going to be 90 degrees."}, {"start": 687.48, "end": 691.48, "text": " And after that, so this is smaller than this."}, {"start": 691.48, "end": 696.48, "text": " After this critical angle, it's only going 
to be reflection."}, {"start": 696.48, "end": 699.48, "text": " Now let's try to compute this."}, {"start": 699.48, "end": 702.48, "text": " Note is 90 degrees here."}, {"start": 702.48, "end": 707.48, "text": " So what I put here is this is what I'm interested in."}, {"start": 707.48, "end": 712.48, "text": " And this happens when this reflection is at 90 degrees."}, {"start": 712.48, "end": 715.48, "text": " So I put there this 90 degrees explicitly,"}, {"start": 715.48, "end": 718.48, "text": " and I want to know this theta one."}, {"start": 718.48, "end": 725.48, "text": " That's going to be the critical angle."}, {"start": 725.48, "end": 729.48, "text": " Well, if I actually do the computation with the 90 degrees,"}, {"start": 729.48, "end": 732.48, "text": " then I'm going to get one for the sign."}, {"start": 732.48, "end": 734.48, "text": " So this is n2 over n1."}, {"start": 734.48, "end": 738.48, "text": " Well, I'm still not interested in the sign of this angle."}, {"start": 738.48, "end": 740.48, "text": " I'm interested in the angle."}, {"start": 740.48, "end": 743.48, "text": " So I have to invert the other side, both sides of the equation."}, {"start": 743.48, "end": 747.48, "text": " And this is the definition of the critical angle."}, {"start": 747.48, "end": 750.48, "text": " And if you write it in Wikipedia, critical angle,"}, {"start": 750.48, "end": 753.48, "text": " you are going to get the very same formula."}, {"start": 753.48, "end": 757.48, "text": " But the most interesting thing is that you can actually derive this yourself."}, {"start": 757.48, "end": 759.48, "text": " And this is not a huge derivation."}, {"start": 759.48, "end": 761.48, "text": " This is very simple."}, {"start": 761.48, "end": 764.48, "text": " This is where this 90 degree reflection happens."}, {"start": 764.48, "end": 768.48, "text": " So what is our expectation for this critical angle?"}, {"start": 768.48, "end": 772.48, "text": " Let's look at the reality 
again."}, {"start": 772.48, "end": 776.48, "text": " What?"}, {"start": 776.48, "end": 782.48, "text": " So this is, I'm just trying to hint without saying,"}, {"start": 782.48, "end": 784.48, "text": " telling you the solution."}, {"start": 784.48, "end": 787.48, "text": " Let's try it without hints."}, {"start": 787.48, "end": 789.48, "text": " What could be the critical angle here?"}, {"start": 789.48, "end": 792.48, "text": " Raise your hand if you don't answer."}, {"start": 792.48, "end": 795.48, "text": " Sorry, I will, for a pedagogical reason,"}, {"start": 795.48, "end": 799.48, "text": " I will ask someone who I haven't asked before."}, {"start": 799.48, "end": 801.48, "text": " Let's see if it's correct."}, {"start": 801.48, "end": 803.48, "text": " I have to ask you."}, {"start": 803.48, "end": 805.48, "text": " Not to be important."}, {"start": 805.48, "end": 807.48, "text": " Not to be important."}, {"start": 807.48, "end": 810.48, "text": " How do you get important questions?"}, {"start": 810.48, "end": 814.48, "text": " I have to ask you something smaller than 50 degrees."}, {"start": 814.48, "end": 816.48, "text": " Because that was only..."}, {"start": 816.48, "end": 817.48, "text": " Exactly."}, {"start": 817.48, "end": 820.48, "text": " So the usual answer I get is that 50 degrees."}, {"start": 820.48, "end": 824.48, "text": " Because I see total internal reflection."}, {"start": 824.48, "end": 828.48, "text": " But total internal reflection means that after some point,"}, {"start": 828.48, "end": 831.48, "text": " after that is only going to be reflection."}, {"start": 831.48, "end": 837.48, "text": " So it doesn't mean that if it doesn't mean that this is that point."}, {"start": 837.48, "end": 841.48, "text": " I will be trying 60 degrees, I will also see reflection."}, {"start": 841.48, "end": 845.48, "text": " But this doesn't mean that 60 degrees is the critical angle."}, {"start": 845.48, "end": 848.48, "text": " It's before that, that's 
some point."}, {"start": 848.48, "end": 851.48, "text": " What is your answer to?"}, {"start": 851.48, "end": 854.48, "text": " What is your answer?"}, {"start": 854.48, "end": 863.48, "text": " Yes, but I was thinking that if it's critical angle,"}, {"start": 863.48, "end": 866.48, "text": " for example, say 50 is critical angle."}, {"start": 866.48, "end": 870.48, "text": " Maybe we have also reflection and then horizontal one."}, {"start": 870.48, "end": 872.48, "text": " Exactly."}, {"start": 872.48, "end": 875.48, "text": " What you have seen on the figure."}, {"start": 875.48, "end": 878.48, "text": " So at the critical angle, you see this."}, {"start": 878.48, "end": 881.48, "text": " So this is over the critical angle."}, {"start": 881.48, "end": 884.48, "text": " So it has to be less than 50 degrees."}, {"start": 884.48, "end": 887.48, "text": " Less than 60, 50 degrees."}, {"start": 887.48, "end": 889.48, "text": " So this is very simple from here."}, {"start": 889.48, "end": 892.48, "text": " Let's just substitute the indices of refraction."}, {"start": 892.48, "end": 894.48, "text": " 41.81."}, {"start": 894.48, "end": 897.48, "text": " Physics works and we are still alive."}, {"start": 897.48, "end": 902.48, "text": " So that's basically it for today."}, {"start": 902.48, "end": 905.48, "text": " We have used reality to VR judge."}, {"start": 905.48, "end": 911.48, "text": " We are not just writing formula on paper and then be happy about how much we can understand"}, {"start": 911.48, "end": 914.48, "text": " or how much we can memorize of them."}, {"start": 914.48, "end": 919.48, "text": " We put everything to use and you will see all of this in C++ code."}, {"start": 919.48, "end": 921.48, "text": " Not so long from now."}, {"start": 921.48, "end": 927.48, "text": " So that would be the introductory course and I'll see you next week."}, {"start": 927.48, "end": 954.48, "text": "."}]
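The numbers in the segments above (θt ≈ 34.75° for a 60° ray entering glass, a critical angle of ≈ 41.81°, and the impossible sine when going from glass to air at 50°) can be sanity-checked with a short sketch. Helper names are mine; I assume n ≈ 1.52 for the refraction example and n = 1.5 for the critical angle, consistent with the lecture's remark that different glasses have slightly different indices of refraction:

```python
# Sketch of Snell's law and the critical angle, as discussed in the lecture.
import math

def snell_theta_t(theta_i_deg, n1, n2):
    """Angle of the refracted ray in degrees, or None on total internal reflection."""
    s = n1 / n2 * math.sin(math.radians(theta_i_deg))
    if abs(s) > 1.0:
        return None  # sin(theta_t) > 1: no such angle exists, only reflection
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    """Angle above which only reflection happens (requires n1 > n2)."""
    return math.degrees(math.asin(n2 / n1))

# Air -> glass at 60 degrees: the ray bends inwards, theta_t < theta_i.
print(snell_theta_t(60.0, 1.0, 1.52))  # ~34.7 degrees
# Glass -> air: beyond the critical angle there is no refracted ray.
print(critical_angle(1.5, 1.0))        # ~41.81 degrees
print(snell_theta_t(50.0, 1.5, 1.0))   # None: total internal reflection
```

The `None` case is exactly the "mathematically not possible" sine larger than one from the lecture: the math itself signals total internal reflection.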
Two Minute Papers
https://www.youtube.com/watch?v=iKNSPETJNgo
TU Wien Rendering #5 - The Fresnel Equation and Schlick's Approximation
What portion of light is reflected and refracted by glass-like surfaces? The Fresnel equation shows us why we see such strong reflections in windows from grazing angles, and why it's so simple to look through them. Schlick's approximation of the original equation provides simple and powerful means that can be computed rapidly. However, it has its own limitations. I wonder what they are? In this segment, we try to find out! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinement are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Teschnische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Now, we have some image from physical reality. We have an interface between air and glass. What I see here is that there is reflection and there is refraction. So in case not everyone understands the terms: reflection is "Spiegelung" and refraction is "Brechung", right? With my horribly broken German. Thank God this course is not in German, everyone would cry. But you guys and girls have it easy, because the Hungarian words for these are a real mouthful. So I think reflection and refraction are much better than that. At least much more convenient. Question: which effect is stronger? Raise your hand if you think that reflection is the stronger effect. You are asking this question about this example in the image? Yes. So I can see that the refraction has a more pronounced effect than the reflection here. We will see why when we deal with the Fresnel equation. And what we can do is actually write up the vectors that we have been talking about for a case like this. This points towards the direction of the light source. There is a surface normal, and it looks upwards. This is where the light is reflected. This is where it is transmitted. We don't have a vector for that. And we have the different angles for these. Wonderful. So let's take a look at a simplified version of the Fresnel equation. This simplified version is called Schlick's approximation. Even if we have no idea what this is about, it is not such a complicated thing. What it gives me is the probability of reflection. So R of theta is the probability of reflection. As I have seen in the image, I am interested in the probabilities of reflection and refraction, because I imagine that the probability of refraction is higher in this case. And I would like to compute it in some way in my computer program. Well, let's take a quick look at this. R of theta is the probability of reflection.
And this is important to remember, because during our calculations I will forget this approximately 15 times. So may I ask your name? Lisa. Lisa, OK. Well, if I ask you what R of theta is, you will tell me that this is the probability of reflection. The probability of reflection. Yes, exactly. Because seriously, I will be forgetting it all the time. R0 is the probability of reflection at normal incidence. This means: if light would be coming from straight above, what are the chances that it gets reflected? And R0 can be given by this expression. We will be talking about this. N1 and N2 are the indices of refraction; we will have examples with those too. But let's quickly go through this and see if physics makes any sense. Well, for an air or vacuum medium the index of refraction is, let's say, 1, and N2 is the medium that we go into. So for instance, here, glass. And let's see what happens there. But before we do that: T is the probability of transmission. Obviously, if we forget absorption for now, if the light is not reflected, then it is refracted. It's as simple as that. So if I add up these two probabilities, I get 1. So let's play with it. R at 0 degrees is R0. Why? Because the cosine of 0 degrees is 1, so on the right side I have 1 minus 1. The second term is killed by the zero; therefore I am left with R0. And R at 0 degrees — theta is the angle of the incoming light, so 0 degrees means that it comes from straight above. And what is R0? This is the probability of reflection at normal incidence. So this is basically the very same thing: if light comes in like this, what is the probability of it bouncing off of the glass? What's up with 90 degrees? Well, at 90 degrees the cosine of theta is 0; therefore I keep both of these terms. Very simple. And this is going to be 1. So the probability of reflection at 90 degrees is 1.
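The two endpoint cases just discussed can be checked numerically. A minimal sketch of Schlick's approximation — the function names are mine, and theta is measured in radians:

```python
# Schlick's approximation of the Fresnel equation.
import math

def r0(n1, n2):
    """Probability of reflection at normal incidence."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def schlick(theta, n1, n2):
    """Approximate probability of reflection at incidence angle theta (radians)."""
    return r0(n1, n2) + (1.0 - r0(n1, n2)) * (1.0 - math.cos(theta)) ** 5

# Air -> glass: at 0 degrees the second term vanishes, leaving R0;
# at 90 degrees cos(theta) = 0 and everything sums to 1.
print(schlick(0.0, 1.0, 1.5))          # R0, i.e. ~0.04 for glass
print(schlick(math.pi / 2, 1.0, 1.5))  # ~1.0: grazing rays are all reflected
```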
Because imagine that I'm coming from above: this means that there is a very good chance that I'm going to get through. So imagine a super crowded bus in the morning, and you just cannot fit in there. How would you get in, if you don't care about the health and well-being of the other people? Well, you would just run in there and hopefully they will make some space. I have the best chance if I run towards them head-on. If I would be running in from the side, it would be very likely that they would just push me back. There is a high chance that I would be reflected. So I want refraction: I want to get on the bus. And as I raise this angle from normal incidence, the higher the probability that the ray bounces back. So this, for now, seems to make some sense. But is it still reflection at 90 degrees, and not just the ray grazing along the surface? You have to think in terms of limits. What is the probability at 89 degrees? It's almost surely going to get reflected. And if you raise that angle further, you approach a probability of 1. So it's a bit mind-boggling, but you can see that there is a continuous transition: from here, there is a high probability for refraction, and as I go towards 90 degrees, there is more and more probability for reflection. And at 90 degrees we say that it is reflection, because I am moving along the boundary and not entering the glass. By the way, that's a great question — I was thinking about this too. So let's say that the index of refraction of glass is 1.5. Let's compute this thing quickly: it's 0.5 over 2.5, squared. And I do the very same substitution for the rest of the equation. But before I get this, what do I expect from the plot? That's another important mathematical principle. Do this all the time: before you compute the result, state what you would expect from the result, because this gives you a much higher level of understanding. Well, I'm interested in R of theta. What does R of theta mean? The probability of reflection? Excellent.
Let's note it again. So the probability of reflection at 0 degrees, I would say, is something very low. So I have written this here: R of 0 is less than 0.1. Because it's the probability of... I mean, R0 is the probability of... oh, sorry. So the probability of reflection is low, I would say less than 10%. If I come from straight above, refraction is likely; I'm very likely to get on the bus if I run into the people from the front. What's up with, for instance, 60 degrees? Well, we know exactly what happens at 60 degrees, because how many degrees do we have here on the image? It's 60. And we can see that at 60 degrees, there is a chance for both reflection and refraction. And the refraction chance is... higher. Higher, exactly. So what I would imagine is that at 60 degrees, there is a higher chance of reflection than before, but refraction is still stronger, as you can see on the image. We are going to compute this, and we are going to let nature be our judge whether the calculation is correct or not. So 60 degrees, converted to radians, is more or less 1. And I expect R of 1 to be around 0.2: a 20% chance of reflection, 80% of refraction. That seems to be in line with what I see here, but this is just my expectation. And what we have been talking about, pi over 2 — this means a 90 degree angle — I would expect it to be 1. If I convert it to radians, then this is about 1.57. So R of 1.57, I expect to be 1. Let's put all of these together and do what engineers do all the time: fire up Wolfram Alpha and try to plot this. So, I imagined that R of 0 is less than 0.1 — getting on the bus easily. Well, R at 0 is indeed less than 0.1. So far so good. R at 1 should be less than 0.2 — this is the 60 degrees case. Well, R at 1 is less than 0.2. Checkmark. And R at around 1.57 is indeed around 1. So apparently physicists are smart people, and physics makes sense.
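The three expectations stated above (R below 0.1 at normal incidence, R below 0.2 at roughly 1 radian, R near 1 at grazing angles) can be checked without a plot. A quick sketch, assuming an air–glass interface with n = 1.5 and a helper name of my choosing:

```python
# Checking the lecture's three plot expectations for air -> glass.
import math

def schlick(theta, n1=1.0, n2=1.5):
    """Schlick's approximate probability of reflection, theta in radians."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - math.cos(theta)) ** 5

print(schlick(0.0) < 0.1)    # True: easy to look through glass head-on
print(schlick(1.0) < 0.2)    # True: ~60 degrees, reflection has grown
print(abs(schlick(math.pi / 2) - 1.0) < 1e-9)  # True: grazing, all reflected
```

Note that pi/2 is about 1.5708 radians; plugging in values beyond that is exactly the extrapolation the lecture calls fishy, since the formula then exceeds 1.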
But there is something fishy about this. So this is correct — I mean, what we see here is in line with our expectations — but there is something fishy about this plot. Raise your hand if you know what it is. Okay, I'll give you a hint. This plot is R of theta, which is... the probability of reflection. Okay. What happens if I just extrapolate — what would happen not at 1.5, but at 2? What would I measure? I think it's going upwards, so it would be something more than 1. I don't know about you, but I don't know about probabilities that are larger than 1. So this would give me some fishy results if I would substitute something like that. Let's try to shed some more light on it. What if I have a vacuum–vacuum interaction? So below, I don't have glass anymore, I have vacuum. Well, the index of refraction of vacuum is 1, so let's just substitute 1 here. Then R0 is going to be 0, and I'm left with the second term: 1 minus cosine of theta, to the fifth. Why? Because this is 0, and this is 1 minus 0, so I keep this term. Okay, engineering mode: what do we expect from this plot? I have vacuum — or air, if you will — and vacuum again, and I start a ray of light. What will happen? What is the probability of reflection or refraction? Raise your hand if you know. There should be no reflection. There should be no reflection, exactly. Why? Because there is no change of medium. Exactly. So the definition of vacuum is: there's nothing in there. There's nothing that could reflect this ray of light back. If there is vacuum, we expect rays of light to travel in it indefinitely; there is no way that they could be reflected. So, since R of theta is the probability of reflection, what do I think it should be? If there is only refraction, then this R of theta should be constant zero. Well, let's plot this. It looks like 1 minus cosine theta, to the fifth.
This is already fishy, but let's take a look. This is what I've been talking about: I expected it to be 0. Let's plot it. And it looks like this, which is not constant 0 by any stretch. So the question is, what went terribly wrong here? You see, Schlick's approximation is an approximation. It's good for interfaces of vacuum or air and something, not of something and something — not an arbitrary A and B. It works well, but it's a quick approximation, because this is the original Fresnel equation, and this is much more expensive to compute. The other one was much quicker, but it's a bit limited in use. So let's give it a crack: what would the Fresnel equation say about a vacuum–vacuum interaction? Well, I would substitute n1 equals n2 equals 1 — this is the index of refraction of vacuum. Again, the very same thing, just the n1s and n2s are gone. I'm going to use a trigonometric identity, which says that the square root of 1 minus the sine squared of something is the cosine. So let's substitute these for the cosines. And what I see here is the cosine of theta minus the cosine of theta. So what is this expression exactly? How much is it? Zero. Exactly. And this is what I was expecting. So apparently physicists, again, are smart people.
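The vacuum–vacuum comparison can be reproduced numerically: the full Fresnel equations give zero reflection when n1 = n2 = 1, while Schlick's approximation leaves its (1 − cos θ)^5 term. A sketch, not the lecture's exact code — function names are mine, and I average the two polarizations, which is one common unpolarized form:

```python
# Full Fresnel reflectance vs. Schlick's approximation for n1 = n2 = 1.
import math

def schlick(theta, n1, n2):
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - math.cos(theta)) ** 5

def fresnel(theta, n1, n2):
    """Unpolarized Fresnel reflectance, averaging the s and p polarizations."""
    cos_i = math.cos(theta)
    # sin(theta_t) from Snell's law; cosine via sqrt(1 - sin^2),
    # the trigonometric identity used in the lecture.
    sin_t = n1 / n2 * math.sin(theta)
    cos_t = math.sqrt(1.0 - sin_t ** 2)
    rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (rs + rp)

theta = math.radians(45.0)
print(fresnel(theta, 1.0, 1.0))  # ~0: no reflection in vacuum, as expected
print(schlick(theta, 1.0, 1.0))  # small but nonzero: the approximation's leftover
```

With n1 = n2, the refracted angle equals the incident one, both numerators cancel, and Fresnel correctly reports no reflection — which is exactly where Schlick's shortcut breaks down.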
[{"start": 0.0, "end": 9.0, "text": " Now, we have some image from physical reality. We have an interface that is air and glass."}, {"start": 9.0, "end": 15.0, "text": " What I see here is that there is reflection and there is reflection."}, {"start": 15.0, "end": 23.0, "text": " So in case not everyone understands the term, reflection is least reflection on."}, {"start": 23.0, "end": 30.0, "text": " And reflection is reflection, right? With my horribly broken German."}, {"start": 30.0, "end": 33.0, "text": " Thank God this course is not in German, everyone would cry."}, {"start": 33.0, "end": 38.0, "text": " But you guys and girls have it easy because in Hungarian you would say,"}, {"start": 38.0, "end": 41.0, "text": " think this is a very dish and think the dish."}, {"start": 41.0, "end": 45.0, "text": " So I think reflection and reflection is much better than that."}, {"start": 45.0, "end": 47.0, "text": " At least much more convenient."}, {"start": 47.0, "end": 55.0, "text": " Question, which is stronger? Which effect is stronger?"}, {"start": 55.0, "end": 61.0, "text": " Raise your hand if you think that reflection that this is this effect is stronger."}, {"start": 61.0, "end": 67.0, "text": " You are asking this question about this example in the example."}, {"start": 67.0, "end": 73.0, "text": " Yes. So I can see that the reflection has a more pronounced effect than reflection here."}, {"start": 73.0, "end": 77.0, "text": " Because we will deal with the cilistackrum."}, {"start": 77.0, "end": 85.0, "text": " And what we can do is we can actually write up the vectors that we have been talking about for a case like that."}, {"start": 85.0, "end": 89.0, "text": " This is towards the direction of the light source. There is a surface normal."}, {"start": 89.0, "end": 93.0, "text": " It looks like this so the normal looks upwards. This is where it is reflected."}, {"start": 93.0, "end": 96.0, "text": " This is where it is transmitted. 
We don't have a vector for that."}, {"start": 96.0, "end": 100.0, "text": " And we have the different angles for this."}, {"start": 100.0, "end": 106.0, "text": " Wonderful. So let's take a look at the simplified version of the analysis equation."}, {"start": 106.0, "end": 111.0, "text": " At the simplified version of the analysis equation, this one at least is called Schlich's approximation."}, {"start": 111.0, "end": 116.0, "text": " We have no idea what this is about. This is not so complicated thing."}, {"start": 116.0, "end": 120.0, "text": " What this gives me is the probability of reflection."}, {"start": 120.0, "end": 124.0, "text": " So R of theta is the probability of reflection."}, {"start": 124.0, "end": 131.0, "text": " So as I have seen the image, I am interested in what is the probability of reflection and reflection."}, {"start": 131.0, "end": 135.0, "text": " Because I imagine that the probability of reflection is higher in this case."}, {"start": 135.0, "end": 139.0, "text": " And I would like to compute it in some way in my computer program."}, {"start": 139.0, "end": 145.0, "text": " Well, let's take a quick look at this. R of theta is the probability of reflection."}, {"start": 145.0, "end": 149.0, "text": " And this is important to remember because during our calculations,"}, {"start": 149.0, "end": 156.0, "text": " I will forget this approximately 15 times. So may I ask your name?"}, {"start": 156.0, "end": 157.0, "text": " Lisa."}, {"start": 157.0, "end": 165.0, "text": " Lisa, OK. Well, if I ask you what is R theta, you will tell me that this is the probability of reflection."}, {"start": 165.0, "end": 167.0, "text": " The probability of reflection."}, {"start": 167.0, "end": 172.0, "text": " Yes, and exactly. 
Because seriously, I will be forgetting it all the time."}, {"start": 172.0, "end": 176.0, "text": " R0 is the probability of reflection on normal incidence."}, {"start": 176.0, "end": 180.0, "text": " This means that light is coming from, if it would be coming from above,"}, {"start": 180.0, "end": 185.0, "text": " what are the chances that it gets reflected?"}, {"start": 185.0, "end": 190.0, "text": " And R0 can be given by this expression. We will be talking about this."}, {"start": 190.0, "end": 194.0, "text": " N1 and N2 are basically indices of reflection."}, {"start": 194.0, "end": 197.0, "text": " We will have examples with that too."}, {"start": 197.0, "end": 203.0, "text": " But let's quickly go through this and see if physics makes any sense."}, {"start": 203.0, "end": 210.0, "text": " Well, for an R vacuum medium, we have, let's say, the index of reflection of R is 1."}, {"start": 210.0, "end": 216.0, "text": " And this N1 is the medium that we go into. So for instance, here glass."}, {"start": 216.0, "end": 220.0, "text": " And let's see what happens there."}, {"start": 220.0, "end": 224.0, "text": " But before we do that, T is the probability of transmission."}, {"start": 224.0, "end": 229.0, "text": " Obviously, if the light is not, we forget absorption for now."}, {"start": 229.0, "end": 234.0, "text": " If the light is not reflected, then there is reflection. It's simple as that."}, {"start": 234.0, "end": 238.0, "text": " So if I add up these two probabilities, I give 1."}, {"start": 238.0, "end": 244.0, "text": " So let's play with it. R at 0 degrees is R0. Why?"}, {"start": 244.0, "end": 249.0, "text": " Because cosine of theta, so the cosine of 0 degrees is 1."}, {"start": 249.0, "end": 255.0, "text": " So on the right side, I have 1 minus 1. So the second term is killed by the 0."}, {"start": 255.0, "end": 262.0, "text": " Therefore, I will have R0. 
And R at 0 degrees, where theta is,"}, {"start": 262.0, "end": 268.0, "text": " the angle of the incoming light, 0 degrees. So it means that it comes from upwards."}, {"start": 268.0, "end": 273.0, "text": " What is it? This is the probability of reflection at normal incidence."}, {"start": 273.0, "end": 277.0, "text": " So this is basically the very same thing."}, {"start": 277.0, "end": 284.0, "text": " So if it comes like this, what is the probability of it coming back, bouncing off of the glass?"}, {"start": 284.0, "end": 290.0, "text": " What's up with 90 degrees? Well, at 90 degrees, the cosine of theta is 0."}, {"start": 290.0, "end": 296.0, "text": " Therefore, I will have both of these terms. Very simple."}, {"start": 296.0, "end": 303.0, "text": " And this is going to be 1. So the probability of reflection at 90 degrees is 1."}, {"start": 303.0, "end": 307.0, "text": " Because imagine that I'm coming from above."}, {"start": 307.0, "end": 312.0, "text": " Then this means that there is a very likely chance that I'm going to get through."}, {"start": 312.0, "end": 318.0, "text": " So imagine a super crowded bus in the morning. And you just cannot fit in there."}, {"start": 318.0, "end": 326.0, "text": " How would you like to go in if you don't care about the health and the comfort of the other people?"}, {"start": 326.0, "end": 331.0, "text": " Well, you will just run in there and hopefully they will make some space."}, {"start": 331.0, "end": 337.0, "text": " I have the best probability if I run towards them. If I would be running from the side,"}, {"start": 337.0, "end": 340.0, "text": " it would be very likely that they would just push me back."}, {"start": 340.0, "end": 343.0, "text": " There is a high chance that I would be reflected."}, {"start": 343.0, "end": 346.0, "text": " So I want refraction. 
I want to get on the bus."}, {"start": 346.0, "end": 354.0, "text": " So as I raise this angle from normal incidence, the higher the probability that the ray bounces back."}, {"start": 354.0, "end": 358.0, "text": " So this, for now, seems to make some sense."}, {"start": 358.0, "end": 363.0, "text": " But is it still reflection at 90 degrees, and not just the ray grazing along the surface itself?"}, {"start": 363.0, "end": 366.0, "text": " You have to think in terms of limits."}, {"start": 366.0, "end": 371.0, "text": " So what is the probability at 89 degrees? It's going to get reflected."}, {"start": 371.0, "end": 376.0, "text": " If you just raise that, you are going to approach the probability of 1."}, {"start": 376.0, "end": 386.0, "text": " So this is a bit mind-boggling, but you can see that there is a continuous transition."}, {"start": 386.0, "end": 389.0, "text": " From here, there is a high probability for refraction."}, {"start": 389.0, "end": 394.0, "text": " As I go towards 90 degrees, there is more probability for reflection."}, {"start": 394.0, "end": 403.0, "text": " And we define that at 90 degrees, we say that it is reflection, because I am moving along the boundary and not entering the glass."}, {"start": 403.0, "end": 406.0, "text": " By the way, that's a great question. I was thinking about this too."}, {"start": 406.0, "end": 411.0, "text": " So let's say that the index of refraction of glass is 1.5."}, {"start": 411.0, "end": 416.0, "text": " Let's compute this thing quickly. It's 0.5 over 2.5 squared."}, {"start": 416.0, "end": 422.0, "text": " And I do the very same substitution for the rest of the equation."}, {"start": 422.0, "end": 428.0, "text": " But before I get this, what do I expect from the plot?"}, {"start": 428.0, "end": 433.0, "text": " That's another important mathematical principle. 
Do this all the time."}, {"start": 433.0, "end": 438.0, "text": " Before you compute the result, state what you would expect from the result."}, {"start": 438.0, "end": 442.0, "text": " Because this gives you a much higher level of understanding."}, {"start": 442.0, "end": 448.0, "text": " Well, I'm interested in R of theta. What does R of theta mean?"}, {"start": 448.0, "end": 450.0, "text": " The probability of reflection?"}, {"start": 450.0, "end": 453.0, "text": " Excellent. Please note it again."}, {"start": 453.0, "end": 463.0, "text": " So the probability of reflection at 0 degrees, I would say, is something very low."}, {"start": 463.0, "end": 468.0, "text": " So I have written this here. That R of 0 is less than 0.1."}, {"start": 468.0, "end": 471.0, "text": " Because it's the probability of..."}, {"start": 471.0, "end": 476.0, "text": " I mean, R 0 is the probability of..."}, {"start": 476.0, "end": 481.0, "text": " Oh, sorry. So the probability of reflection is low."}, {"start": 481.0, "end": 483.0, "text": " I would say less than 10%."}, {"start": 483.0, "end": 488.0, "text": " So if I come from upwards, refraction is likely."}, {"start": 488.0, "end": 493.0, "text": " I'm very likely to get on the bus if I run into the people from the front."}, {"start": 493.0, "end": 501.0, "text": " What's up with, for instance, 60 degrees?"}, {"start": 501.0, "end": 507.0, "text": " Well, we know exactly what happens at 60 degrees. Because how many degrees do we have here?"}, {"start": 507.0, "end": 510.0, "text": " What is the incidence now? It's 60."}, {"start": 510.0, "end": 516.0, "text": " And we can see that at 60 degrees, there is a chance for reflection and refraction."}, {"start": 516.0, "end": 520.0, "text": " And the reflection chance is..."}, {"start": 520.0, "end": 522.0, "text": " Higher."}, {"start": 522.0, "end": 530.0, "text": " Higher, exactly. 
So what I would imagine is that at 60 degrees, there is a higher chance of reflection"}, {"start": 530.0, "end": 532.0, "text": " there is..."}, {"start": 532.0, "end": 536.0, "text": " than the previous one, I mean, but refraction is still stronger."}, {"start": 536.0, "end": 540.0, "text": " As you can see on the image, we are going to compute this and we are going to be..."}, {"start": 540.0, "end": 545.0, "text": " We're going to let nature be our judge whether the calculation is correct or not."}, {"start": 545.0, "end": 552.0, "text": " So 60 degrees, if I convert it to radians, is more or less 1."}, {"start": 552.0, "end": 559.0, "text": " And so R of 1 is 0.2. This means 20% chance of reflection, 80% of refraction."}, {"start": 559.0, "end": 565.0, "text": " It seems to be in line with what I see here. But this is just my expectation."}, {"start": 565.0, "end": 571.0, "text": " And what we have been talking about, pi over 2, this means a 90 degree angle."}, {"start": 571.0, "end": 576.0, "text": " I would expect it to be 1. So if I convert it to radians, then this is 1.7."}, {"start": 576.0, "end": 579.0, "text": " So R of 1.7, I expect it to be 1."}, {"start": 579.0, "end": 583.0, "text": " Let's put all of these together and let's do what engineers do all the time."}, {"start": 583.0, "end": 586.0, "text": " We'll fire up Wolfram Alpha and try to plot this."}, {"start": 586.0, "end": 593.0, "text": " So I imagine that R of 0 is less than 0.1. So getting on the bus easily,"}, {"start": 593.0, "end": 599.0, "text": " well, R at 0 is less than 0.1. So far so good."}, {"start": 599.0, "end": 604.0, "text": " R at 1 is less than 0.2. This is the 60 degrees, with both refraction and reflection."}, {"start": 604.0, "end": 610.0, "text": " Well, R at 1 is less than 0.2. So checkmark."}, {"start": 610.0, "end": 620.0, "text": " And R at around 1.7 is indeed around 1. So apparently physicists are smart people."}, {"start": 620.0, "end": 627.0, "text": " And physics makes sense. 
So far, so good. But there is something fishy about this."}, {"start": 627.0, "end": 633.0, "text": " So this is correct. I mean, what we see here is in line with our expectations."}, {"start": 633.0, "end": 640.0, "text": " But there is something fishy about this plot. Raise your hand if you know what it is."}, {"start": 640.0, "end": 647.0, "text": " Okay, I'll give you a hint. This plot is R of theta, which is..."}, {"start": 647.0, "end": 651.0, "text": " This is the probability of reflection."}, {"start": 651.0, "end": 656.0, "text": " Okay. What happens after, I don't know, if I would just extrapolate."}, {"start": 656.0, "end": 661.0, "text": " What would happen if n2 were not 1.5 but 2? What would I measure?"}, {"start": 661.0, "end": 666.0, "text": " I think it would be something larger, at the very least."}, {"start": 666.0, "end": 671.0, "text": " It's going upwards. But it would be at least 3."}, {"start": 671.0, "end": 677.0, "text": " I don't know about you, but I don't know about probabilities that can be larger than 1."}, {"start": 677.0, "end": 683.0, "text": " So this would give me some fishy results if I would substitute something like that."}, {"start": 683.0, "end": 686.0, "text": " So let's try to shed some more light on it."}, {"start": 686.0, "end": 693.0, "text": " What if I have a vacuum-vacuum interaction? So below, I don't have glass anymore."}, {"start": 693.0, "end": 696.0, "text": " I have vacuum."}, {"start": 696.0, "end": 700.0, "text": " Well, the index of refraction of vacuum is 1."}, {"start": 700.0, "end": 704.0, "text": " So let's just substitute 1 here. So this is going to be 0."}, {"start": 704.0, "end": 709.0, "text": " And I'm going to have the second term. 1 minus cosine of theta to the fifth."}, {"start": 709.0, "end": 713.0, "text": " Why? Because this is 0. And this is 1 minus 0."}, {"start": 713.0, "end": 718.0, "text": " So I will keep this term. Okay. 
Engineering mode."}, {"start": 718.0, "end": 721.0, "text": " What do we expect from this plot?"}, {"start": 721.0, "end": 725.0, "text": " I have vacuum or air, if you wish, if you will."}, {"start": 725.0, "end": 728.0, "text": " And vacuum again."}, {"start": 728.0, "end": 735.0, "text": " I start a ray of light. What will happen, with what probability?"}, {"start": 735.0, "end": 742.0, "text": " Reflection or refraction? Raise your hand if you know."}, {"start": 742.0, "end": 744.0, "text": " There should be no reflection."}, {"start": 744.0, "end": 748.0, "text": " There should be no reflection. Exactly. Why?"}, {"start": 748.0, "end": 751.0, "text": " Because there is no change in the medium."}, {"start": 751.0, "end": 757.0, "text": " Exactly. So the definition of vacuum is, again, nothing. There's nothing in there."}, {"start": 757.0, "end": 761.0, "text": " There's nothing that could reflect this light, this ray of light back."}, {"start": 761.0, "end": 766.0, "text": " If there is vacuum, we expect rays of light to travel in vacuum indefinitely."}, {"start": 766.0, "end": 769.0, "text": " There is no way that it could be reflected."}, {"start": 769.0, "end": 774.0, "text": " So since R of theta is the probability of..."}, {"start": 774.0, "end": 777.0, "text": " ...reflection, then this R theta."}, {"start": 777.0, "end": 786.0, "text": " What do I think this should be? If there is only refraction, then this R of theta should be zero."}, {"start": 786.0, "end": 790.0, "text": " Well, let's plot this. This looks like 1 minus cosine theta to the fifth."}, {"start": 790.0, "end": 794.0, "text": " This is already fishy. But let's take a look."}, {"start": 794.0, "end": 799.0, "text": " Blah blah blah. This is what I've been talking about. So I expected it to be 0."}, {"start": 799.0, "end": 808.0, "text": " Let's plot it. 
And this looks like this, which is not constant 0 by any stretch."}, {"start": 808.0, "end": 813.0, "text": " So the question is, you know, what went terribly wrong here?"}, {"start": 813.0, "end": 818.0, "text": " And it's very... See, the Schlick approximation is an approximation."}, {"start": 818.0, "end": 824.0, "text": " It's good for interfaces which are vacuum or air and something."}, {"start": 824.0, "end": 827.0, "text": " And not something, something."}, {"start": 827.0, "end": 836.0, "text": " So A and B. It works well, but it's a quick approximation, because this is the original Fresnel equation."}, {"start": 836.0, "end": 839.0, "text": " And this is much more expensive to compute."}, {"start": 839.0, "end": 844.0, "text": " And this other one was much quicker, but it's a bit limited in use."}, {"start": 844.0, "end": 852.0, "text": " So let's give it a crack. So what would this say about a vacuum-vacuum interaction?"}, {"start": 852.0, "end": 860.0, "text": " Well, I would substitute n1 and n2 equals 1. This is the index of refraction of vacuum."}, {"start": 860.0, "end": 865.0, "text": " Again, the very same thing, just the n1s and n2s are gone."}, {"start": 865.0, "end": 873.0, "text": " I'm going to use a trigonometric identity, which says that the square root of 1 minus the sine squared of something is the cosine."}, {"start": 873.0, "end": 878.0, "text": " So let's substitute these for the cosines."}, {"start": 878.0, "end": 884.0, "text": " So what I see here is the cosine of theta minus cosine theta."}, {"start": 884.0, "end": 888.0, "text": " So what is this expression exactly? How much is it?"}, {"start": 888.0, "end": 891.0, "text": " 0. Exactly. And this is what I was expecting."}, {"start": 891.0, "end": 904.0, "text": " So apparently physicists, again, are smart people."}]
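The Schlick approximation discussed in the lecture, R(theta) = R0 + (1 - R0)(1 - cos theta)^5 with R0 = ((n1 - n2)/(n1 + n2))^2, is easy to sanity-check in code. A minimal Python sketch (not from the course materials; the function name is mine):

```python
import math

def schlick_reflectance(theta, n1, n2):
    """Schlick's approximation of the Fresnel equations: the probability
    that light hitting the n1/n2 boundary at angle theta (radians,
    measured from the surface normal) is reflected rather than refracted."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2  # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - math.cos(theta)) ** 5

# Air (n=1) to glass (n=1.5), the example from the lecture:
print(schlick_reflectance(0.0, 1.0, 1.5))          # R0 = (0.5/2.5)^2 = 0.04
print(schlick_reflectance(1.0, 1.0, 1.5))          # ~0.06 at ~60 degrees: still mostly refraction
print(schlick_reflectance(math.pi / 2, 1.0, 1.5))  # ~1.0: total reflection at grazing angle

# The flaw from the lecture: a vacuum-vacuum "boundary" (n1 = n2 = 1)
# should reflect nothing, but R0 = 0 leaves (1 - cos(theta))^5, which is not 0:
print(schlick_reflectance(1.0, 1.0, 1.0))          # nonzero -- use the full Fresnel equations here
```

For anything other than an air-or-vacuum-to-material interface, the full Fresnel equations (which do return 0 for the vacuum-vacuum case, as derived at the end of the lecture) are the safe choice.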
Two Minute Papers
https://www.youtube.com/watch?v=Gm7szS1hQxs
TU Wien Rendering #4 - Diffuse, Specular and Ambient Shading
As we don't yet know enough to solve the full rendering equations, we invoke simplified BRDF models to capture the most common materials seen in nature: one for diffuse, specular, and ambient shading. They provide quite beautiful and powerful results with a staggeringly simple formulation. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Let's go with a simplified version of the whole thing. We're going to talk about simplified BRDF models. Well, there's going to be the ambient BRDF. What does it look like? Well, first things first. On the left side, I see I. This is intensity. Well, what is this? Well, no one knows, because we have no radiance, nothing very physical here. This is going to be a simplified version of the whole rendering equation. Basically, a bunch of hacks; you know, something that is vastly simplified. It doesn't really have a physical meaning, it doesn't have a physical basis, but it works. It's beautiful and it's a good way to understand what's going on. So the intensity that we measure is going to be the product of an ambient coefficient of an object. This is dependent on the object. This means something like the color of the object. And the I is going to be the intensity, the ambient intensity of a scene, or the light source. And later on, we're going to be talking about why this is interesting. So this is an example. We have a blue object over here and it's the same color everywhere. Why? Because the formula doesn't depend on anything. There's just one coefficient that's multiplied by this intensity of the scene. So that's ambient shading. What else is there? There's the diffuse BRDF. This is what we compute. It's a diffuse coefficient. What is the diffuse color? The diffuse albedo of this thing. And there's going to be a dot product of L and N. This is what we did before. Diffuse objects look like that. Please raise your hand if you have ever done any kind of diffuse Lambertian shading model in graphics. Okay, excellent. Great. And just another thing. This diffuse coefficient is at the very least RGB. Okay, so this is how much light is not absorbed on every different wavelength. Because I cannot describe colors in one number. At the very least RGB, or a continuous spectrum. Just for the background. 
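The diffuse term just described (a per-channel coefficient times the dot product of L and N) can be sketched as follows, a hedged illustration in Python with names of my own choosing:

```python
def lambert_diffuse(k_d, light_rgb, l, n):
    """Diffuse (Lambertian) shading: k_d * I * max(0, L.N), per channel.
    k_d is the RGB diffuse coefficient (how much light is NOT absorbed
    in each wavelength band); l points toward the light, n is the
    surface normal; both are assumed to be normalized 3-vectors."""
    cos_theta = max(0.0, sum(a * b for a, b in zip(l, n)))  # clamp backfacing light
    return tuple(kd * i * cos_theta for kd, i in zip(k_d, light_rgb))

# Light from straight above a horizontal surface: full contribution.
print(lambert_diffuse((0.2, 0.4, 0.8), (1.0, 1.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))
# Light grazing along the surface: L.N is 0, so no contribution.
print(lambert_diffuse((0.2, 0.4, 0.8), (1.0, 1.0, 1.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```

Note that the viewer direction appears nowhere in the argument list, which is exactly the view-independence point made later in the lecture.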
And now it's looking better, because I can more or less see where the light source is for this diffuse shading. There's also a specular BRDF. What I compute is V dot R times the specular coefficient, where V is the vector pointing towards the viewer and R is the reflected light direction. There's going to be examples of that. Okay, so just so that you see the formula here. And there's an m, which is a shininess factor; in the next assignment you will play with this yourself. So for now I will keep it a secret what this exactly does. And whoops, I'm going to jump through this because I would like to ask a question later on and you're going to find out. Yes, I'm looking at, excuse me. So this is how the specular highlights look. And if I add up all of these things, ambient and diffuse and specular, I get some complex-looking model that approximates physical reality. So I just simply add all these terms up. Okay, well, I have something like this here, and I have on purpose removed the light source from this image. But probably everyone can tell where the light source is expected to be. So raise your hand if you know where the light source should be. Okay, cool. Where should it be? Exactly. So it's going to be above the spheres. This is exactly where it is. So these material models are descriptive in a way that I get images that have some physical meaning, that resemble physical reality. Well, let's take a look at an actual example. The question is, what would this region look like? The one that I marked, if this pixel existed in the real world. Would it look the same if I moved my head in reality? And that sounds like a trick question. I have seen the answer. Yes. Well, let's say that this part is purely diffuse. I don't see any specular reflections in there. The diffuse is L dot N. So the light vector direction times the normal. Does it change if I move my head? Well, how to answer this question? You don't only need to see what is in an equation. 
You have to be aware of what is not in there. Does it change if I move my head? Raise your hand if you know the answer. It's very apparent to many of you. Yes. So the answer? Yes. It does not change if I move the head. It does not change, because the specularity might move. Yes. That's very true. So it does not change, because we know that it does not change. The walls look the same if I move around. I mean, I'm not talking about shapes. I'm talking about colors. They don't change. The mirror, however, does change. The mathematical reason for this is that the view direction is not in this equation. I can change the view direction all I want and nothing will change in the diffuse term. That's the idea. So this is like a general mathematical trick or principle that you can use in a number of different things. Don't just look at what variables are in there. Try to think of the variables you would imagine to be there. Okay, why are they missing? That's also information. Not only what's there, but also what is missing, is valuable information. So what about these regions? These are specular highlights. These are described by the specular V dot R. So the viewing direction times the reflected light direction. Let's actually compute what's going on. So I would be interested in the intensity, this fake something, of this point, where this is the light vector. This is where it points to. It is probably reflected somewhere there, because it comes in and it's an ideal reflection. So it's going to be reflected in this direction. And this is where I am, just for example. So I'm interested in V dot R. Well, this is going to be a cosine. There is a small angle between V and R. So if there is a small angle, that's the cosine of a small number, and that's large. That's close to one. And that's going to be a huge scalar product. Therefore, this point is bright, and this is indeed bright. 
And the question, which is very easy to answer in a second, is: does it change if I move around? Obviously, it does change, because V is in the equation, and if I change this around, this is going to be different. For the specular BRDF, this is going to be bright. Just one of my favorite intuitions of this V dot R, because otherwise this is just letters: this means how much am I standing in the way of the light? So, a life lesson. If you can't find the water droplets on the floor after having a shower, move your head around. Because that's specular. If the windshield of a car is too bright and you just can't take it anymore, move your head around. This connects to the physical reality around us. And those are good tips. In case you didn't know that you need to move your head around. Thanks, thank you. Now you know. Okay, so this is the point where we can just for a second stop and marvel at how beautiful things we can create with such simple equations. And the rendering equation is going to be even more beautiful than that, infinitely more beautiful. And there is some additional beauty that you can think about when you look at images like that. Okay, how would I shade this point? Is this diffuse? Is this specular? Why does it look the way it does? So, if you have nothing better to do, you can think about these things on public transport. Let's call this thing the illumination equation. This is the simpler version of the rendering equation. Now, what is in there? Most of this is familiar. There is an ambient shading term. And then there is the diffuse L dot N. There is the specular V dot R. We add all these together. And we multiply this by the amount of incoming light. Because if there are no light sources in the scene, there is no light. Light is not coming from anywhere. Therefore, this is all multiplied by zero. If there is a bright light source, then things get brighter. So, we multiply by this incoming light. 
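Putting the three terms together with the incoming light, as the illumination equation above does (including the sum over light sources mentioned right after), might look like this grayscale sketch in Python. The names are my own, not course code:

```python
def illuminate(k_a, i_a, k_d, k_s, shininess, n, v, lights):
    """Simplified illumination equation:
    I = k_a*I_a + sum over lights of I_l * (k_d*(L.N) + k_s*(V.R)^m).
    n is the surface normal, v points toward the viewer; lights is a
    list of (l, i_l) pairs with l pointing toward the light source.
    All vectors are assumed to be normalized 3-tuples."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    intensity = k_a * i_a  # ambient: independent of lights and viewer
    for l, i_l in lights:
        ln = dot(l, n)
        # ideal reflection of the light direction: R = 2(L.N)N - L
        r = tuple(2.0 * ln * nc - lc for nc, lc in zip(n, l))
        diffuse = max(0.0, ln)                       # view-independent
        specular = max(0.0, dot(v, r)) ** shininess  # moves with the viewer
        intensity += i_l * (k_d * diffuse + k_s * specular)
    return intensity

# Light overhead and the viewer sitting right in the mirror direction:
# both the diffuse and the specular term fire at full strength.
n = v = l = (0.0, 0.0, 1.0)
print(illuminate(0.1, 1.0, 0.5, 0.4, 10, n, v, [(l, 1.0)]))  # 0.1 + 0.5 + 0.4
```

With an empty light list only the ambient term survives, which is exactly the "multiplied by zero" remark above.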
And what is important to know is that this is only the direct effect of light sources. This sounds a bit esoteric at the moment, but a few lectures down the road we are going to talk more about indirect illumination and goodies like that. This is neglected here, and the ambient term is used to make up for it. You will see examples of this in the next lecture. And this is a crude approximation, but it's still beautiful. It's easy to understand. And it serves as a stepping stone to solve the real rendering equation. But this is not done. One thing is that if there are multiple light sources, the scene is expected to be brighter. So, I would compute the whole thing for multiple light sources. So, there is going to be a sum in there. And inside the sum, the index runs over the light sources. Basically, I just didn't want to overcomplicate this. But still, something is missing. This is not done. I arrive at a point. I compute this specular, ambient and diffuse shading. And I am not done. Let's discuss how ray tracing works, and we will find out. So, the first thing is that what you see here is non-trivial, because what you would imagine is that you start shooting rays from the light source. And then, some of the rays would make it to the camera, to your eye. And most of them won't. So, we go with a simple optimization: we turn the whole thing around, and then we start tracing rays from the camera. Because if I start tracing from there, I can guarantee that I only deal with rays that are not wasted, because I am not interested in the light rays that do not make it to the camera. So, if I start from there, I can guarantee that this is not wasted computation. So, how do we do this? There is this camera plane. We will discuss how to construct such a thing. And we construct rays through this camera plane. And what I am interested in is the projection of the 3D world onto this plane. This is what you will see on your monitor. 
So, I shoot rays from this camera, and I intersect the objects that are in the scene. I want to know: where does the light stop? What objects does it hit? And where does it get reflected? So, the second step is intersection with scene objects. I have to realize that it hits this sphere. Then I stop there, I compute the basic shading terms, like the diffuse and the rest. And then I don't stop there, but I am interested in where the light is reflected. I need to continue from here. And this light ray may be reflected or refracted. And I need some kind of recursion in order to account for that. And the recursion works in the following way. I stop at this point where I hit the ball, the sphere. And what I do is that I imagine that this is now the starting point of the ray. And I am shooting the ray outwards and I start this ray tracing algorithm again. So, this is how the recursion works. This was missing from the formula. And this is just the text version of what I have said, for those who are reading this at home. And you will deal with reflections and refractions as well.
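The recursive scheme just described (shoot a ray from the camera, find the closest hit, do the local shading, then restart the algorithm from the hit point along the reflected direction) can be outlined like this. Everything here, `scene.intersect`, `hit.shade`, `hit.reflect`, is a hypothetical interface invented for the sketch, just to show the shape of the recursion:

```python
MAX_DEPTH = 4  # stop the recursion after a few bounces

def trace(origin, direction, scene, depth=0):
    """Recursive ray tracing skeleton: local shading at the closest hit,
    plus whatever comes back along the ideal reflected ray."""
    if depth >= MAX_DEPTH:
        return (0.0, 0.0, 0.0)
    hit = scene.intersect(origin, direction)  # closest intersection, or None
    if hit is None:
        return (0.0, 0.0, 0.0)                # ray leaves the scene
    local = hit.shade()                        # ambient + diffuse + specular terms
    # restart the algorithm with the hit point as the new ray origin
    bounced = trace(hit.point, hit.reflect(direction), scene, depth + 1)
    return tuple(a + b for a, b in zip(local, bounced))
```

A refractive material would spawn a second recursive call along the refracted direction, weighted by the reflection and transmission probabilities from the Fresnel discussion in the other lecture.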
[{"start": 0.0, "end": 4.64, "text": " Let's go with a simplified version of the whole thing. We're going to talk about"}, {"start": 4.64, "end": 11.16, "text": " simplified BRDF models. Well, there's going to be the ambient BRDF. What does it"}, {"start": 11.16, "end": 21.740000000000002, "text": " look like? Well, first things first. On the left side, I see I. This is intensity. Well, what"}, {"start": 21.740000000000002, "end": 26.84, "text": " is this? Well, no one knows because we have no radiance, nothing very"}, {"start": 26.84, "end": 31.16, "text": " physical here. This is going to be a simplified version of the whole rendering equation."}, {"start": 31.16, "end": 36.480000000000004, "text": " Basically, a bunch of hacks; you know, something that is vastly simplified. It"}, {"start": 36.480000000000004, "end": 40.64, "text": " doesn't really have a physical meaning, it doesn't have a physical basis, but it"}, {"start": 40.64, "end": 44.44, "text": " works. It's beautiful and it's a good way to understand what's going on. So the"}, {"start": 44.44, "end": 49.24, "text": " intensity that we measure is going to be the product of an ambient"}, {"start": 49.24, "end": 56.32, "text": " coefficient of an object. This is dependent on the object. This means"}, {"start": 56.32, "end": 63.52, "text": " something like the color of the object. And the I is going to be the"}, {"start": 63.52, "end": 70.04, "text": " intensity, the ambient intensity of a scene, or the light source. And later on, we're"}, {"start": 70.04, "end": 74.56, "text": " going to be talking about why this is interesting. So this is an example. We have a"}, {"start": 74.56, "end": 78.52, "text": " blue object over here and it's the same color everywhere. Why? Because the"}, {"start": 78.52, "end": 82.03999999999999, "text": " formula doesn't depend on anything. 
There's just one coefficient that's"}, {"start": 82.04, "end": 89.16000000000001, "text": " multiplied by this intensity of the scene. So that's ambient shading. What"}, {"start": 89.16000000000001, "end": 95.52000000000001, "text": " else is there? There's the diffuse BRDF. This is what we compute. It's a diffuse"}, {"start": 95.52000000000001, "end": 100.16000000000001, "text": " coefficient. What is the diffuse color? The diffuse albedo of this thing. And"}, {"start": 100.16000000000001, "end": 106.16000000000001, "text": " there's going to be a dot product of L and N. This is what we did before. Diffuse"}, {"start": 106.16000000000001, "end": 111.04, "text": " objects look like that. Please raise your hand if you have ever done any kind of"}, {"start": 111.04, "end": 117.76, "text": " diffuse Lambertian shading model in graphics. Okay, excellent. Great. And just another"}, {"start": 117.76, "end": 125.44000000000001, "text": " thing. This diffuse coefficient is at the very least RGB. Okay, so this is how much"}, {"start": 125.44000000000001, "end": 132.28, "text": " light is not absorbed on every different wavelength. Because I cannot describe"}, {"start": 132.28, "end": 138.72, "text": " colors in one number. At the very least RGB, or a continuous spectrum. Just for the"}, {"start": 138.72, "end": 144.92, "text": " background. And now it's looking better because I can more or less see where the"}, {"start": 144.92, "end": 151.84, "text": " light source is for this diffuse shading. There's also a specular BRDF. What I"}, {"start": 151.84, "end": 158.88, "text": " compute is V dot R times the specular coefficient, and V is the vector pointing towards"}, {"start": 158.88, "end": 163.6, "text": " the viewer and R is the reflected light direction. There's going to be examples of that. Okay, so"}, {"start": 163.6, "end": 167.96, "text": " just so that you see the formula here. 
And there's an m which is a shininess factor;"}, {"start": 167.96, "end": 173.16, "text": " in the next assignment you will play with this yourself. So for now I will keep"}, {"start": 173.16, "end": 178.56, "text": " it a secret what this exactly does. And whoops, I'm going to jump through this"}, {"start": 178.56, "end": 184.52, "text": " because I would like to ask a question later on and you're going to find out."}, {"start": 184.52, "end": 191.24, "text": " Yes, I'm looking at, excuse me. So this is how the specular highlights look. And if I"}, {"start": 191.24, "end": 198.24, "text": " add up all of these things, ambient and diffuse and specular, I get some complex-looking model"}, {"start": 198.24, "end": 208.04000000000002, "text": " that looks like something that approximates physical reality. So I just simply add all"}, {"start": 208.04000000000002, "end": 216.20000000000002, "text": " these terms up. Okay, well I have something like this here and I have on purpose"}, {"start": 216.2, "end": 221.79999999999998, "text": " removed the light source from this image. But probably everyone can tell where the"}, {"start": 221.79999999999998, "end": 227.2, "text": " light source is expected to be. So raise your hand if you know where the light source should be."}, {"start": 227.2, "end": 241.48, "text": " Okay, cool. Where should it be? Exactly. So it's going to be above the spheres. This is exactly where it is. So these"}, {"start": 241.48, "end": 248.79999999999998, "text": " material models are descriptive in a way that I get images that have some physical meaning,"}, {"start": 248.79999999999998, "end": 255.48, "text": " that resemble physical reality. Well let's take a look at an actual example. The question is"}, {"start": 255.48, "end": 262.48, "text": " what would this region look like? The one that I marked, if this pixel existed in the real world."}, {"start": 262.48, "end": 271.44, "text": " Would it look the same if I moved my head in reality? 
And that sounds like a trick"}, {"start": 271.44, "end": 278.44, "text": " question. I have seen the answer. Yes. Well let's say that this part is purely diffuse. I"}, {"start": 278.44, "end": 285.44, "text": " don't see any specular reflections in there. The diffuse is L dot N. So light vector direction times"}, {"start": 285.44, "end": 292.44, "text": " the normal. Does it change if I move my head? Well how to answer this question? You don't only need"}, {"start": 292.44, "end": 302.44, "text": " to see what is in an equation. You have to be aware of what is not in there. Does it change if I move my"}, {"start": 302.44, "end": 316.44, "text": " head? Raise your hand if you know the answer. It's very apparent to many of you. Yes. So the answer? Yes. It does not change if I move the"}, {"start": 316.44, "end": 322.44, "text": " head. It does not change if I move the head. It does not change because the specularity might move."}, {"start": 322.44, "end": 329.44, "text": " Yes. That's very true. So it does not change because we know that it does not change. The walls look the same"}, {"start": 329.44, "end": 336.44, "text": " if I move around. I mean I'm not talking about shapes. I'm talking about colors. They don't change."}, {"start": 336.44, "end": 343.44, "text": " The mirror, however, does change. The mathematical reason for this is that the view direction is not in this"}, {"start": 343.44, "end": 350.44, "text": " equation. I can change the view direction all I want and nothing will change in the diffuse term. That's the idea."}, {"start": 350.44, "end": 362.44, "text": " So this is like a general mathematical trick or principle that you can use in a number of different"}, {"start": 362.44, "end": 369.44, "text": " things. Don't just look at what variables are in there. Try to think of the variables you would imagine"}, {"start": 369.44, "end": 376.44, "text": " to be there. Okay, why are they missing? That's also information. 
That's what they're not only"}, {"start": 376.44, "end": 381.44, "text": " what's there but what is missing is valuable information. So what about these regions? These are"}, {"start": 381.44, "end": 388.44, "text": " specular highlights. These are described by the specular V or V of R. So viewing direction times"}, {"start": 388.44, "end": 396.44, "text": " the reflected light direction. Let's actually compute what's going on. So I would be interested in the intensity."}, {"start": 396.44, "end": 403.44, "text": " This fake something of this point where this is the light vector. This is where it points to."}, {"start": 403.44, "end": 411.44, "text": " It is probably reflected somewhere there because it comes in and it's an ideal reflection. So it's going to be"}, {"start": 411.44, "end": 421.44, "text": " reflected in this direction. And this is where I am just for example. So I'm interested in V dot R. Well, this is going to be a"}, {"start": 421.44, "end": 428.44, "text": " cosine. There is a small angle between V and R. So if there is a small angle that's cosine of a small"}, {"start": 428.44, "end": 437.44, "text": " number, that's large. That's close to one. And that's going to be a huge scalar product. Therefore, this point is"}, {"start": 437.44, "end": 446.44, "text": " bright and this is indeed bright. And the question is, which is very easy to answer in a second, is doesn't change if"}, {"start": 446.44, "end": 453.44, "text": " I move around. Does it change? Obviously, it does change because V isn't the equation and if I change this"}, {"start": 453.44, "end": 461.44, "text": " around, this is going to be different. For the specular BRDF, this is going to be bright. Just one of my favorite"}, {"start": 461.44, "end": 468.44, "text": " intuitions of this V dot R because otherwise this is just letters. This means how much am I standing in the way of the light?"}, {"start": 468.44, "end": 480.44, "text": " So, a life lesson. 
If you can't find the water droplets on the floor after having a shower, move your head around."}, {"start": 480.44, "end": 489.44, "text": " Because that's specular. If the windshield of a car is too bright and you just can't take it anymore, move your head around."}, {"start": 489.44, "end": 498.44, "text": " This connects to the physical reality around us. And that's good tips. In case you didn't know that you need to move your head around."}, {"start": 498.44, "end": 502.44, "text": " Thanks, thank you. Now you know."}, {"start": 502.44, "end": 513.44, "text": " Okay, so this is the point where we can just for a second stop and Marvel have how beautiful things we can create with such simple equations."}, {"start": 513.44, "end": 526.44, "text": " And the rendering equation is going to be even more beautiful than that infinitely more beautiful. And there is some additional beauty that you can think about when you look at images like that."}, {"start": 526.44, "end": 533.44, "text": " Okay, how would I shape this point? Is this diffusive? Is this specular? Why does it look the way it does?"}, {"start": 533.44, "end": 539.44, "text": " So, you can, if you have nothing better to do, you can think about these things when on public transport."}, {"start": 539.44, "end": 546.44, "text": " Let's call this thing the illumination equation. This is the simpler version of the rendering equation."}, {"start": 546.44, "end": 553.44, "text": " Now, what is in there? Most of this is familiar. There is an ambient-sharing term. And then there is the diffuse L.M."}, {"start": 553.44, "end": 560.44, "text": " There is the specular V.R. We add all these together. And we multiply this by the amount of incoming light."}, {"start": 560.44, "end": 563.44, "text": " Because if there is no light sources in the scene, there is no light."}, {"start": 563.44, "end": 570.44, "text": " The whole light is not coming from anywhere. Therefore, this is all multiplied by zero. 
If there is a bright light source, that things get brighter."}, {"start": 570.44, "end": 579.44, "text": " So, we multiply by this incoming light. And what is important to know is that this is only the direct effect of light sources."}, {"start": 579.44, "end": 588.44, "text": " This sounds a bit as a taric at the moment, but later on a few lectures down the road. We are going to be more about indirect illumination and goodies like that."}, {"start": 588.44, "end": 595.44, "text": " And this is neglected, and the ambient term is used to make up for it. You will see the examples of this in the next lecture."}, {"start": 595.44, "end": 606.44, "text": " And this is a crude approximation, but it's still beautiful. It's easy to understand. And it serves as a stepping stone to solve the real rendering equation."}, {"start": 606.44, "end": 620.44, "text": " But this is not done. One thing is that if there are multiple light sources, the scene is expected to be brighter. So, I would compute the whole thing for multiple light sources. So, there is going to be a sum in there."}, {"start": 620.44, "end": 629.44, "text": " And inside the sum, the indexes are the number of light sources. Basically, I just didn't want to overcomplicate this."}, {"start": 629.44, "end": 640.44, "text": " But still, something is still missing. This is not done. I arrived to a point. I compute this specular ambient and diffuse shading. And I am not done."}, {"start": 640.44, "end": 644.44, "text": " Let's discuss how ray tracing works, and we will find out."}, {"start": 644.44, "end": 654.44, "text": " So, the first thing is that what you see here is non-trivial, because what you would imagine is that you start shooting rays from the light source."}, {"start": 654.44, "end": 663.44, "text": " And then, some of the rays would make up to make it to the camera to your eye. 
And most of them won't."}, {"start": 663.44, "end": 673.44, "text": " So, we go with a simple optimization that we turn the whole thing around and then we start tracing rays from the camera."}, {"start": 673.44, "end": 684.44, "text": " Because if I start tracing from there, I can guarantee that I didn't with rays that are not wasted, because I am not interested in the light rays that do not make it to the camera."}, {"start": 684.44, "end": 689.44, "text": " So, if I start from there, I can guarantee that this is not wasted computation."}, {"start": 689.44, "end": 696.44, "text": " So, how do we do this? There is this camera plane. We will discuss how to construct such a thing."}, {"start": 696.44, "end": 706.44, "text": " And we construct rays through this camera plane. And what I am interested in is the projection of the 3D world to this plane."}, {"start": 706.44, "end": 708.44, "text": " This is what you will see on your monitor."}, {"start": 708.44, "end": 714.44, "text": " So, I should raise from this camera, and I intersect this, I guess, objects that are in the scene."}, {"start": 714.44, "end": 721.44, "text": " I want to know where is the light stopping? What objects does it hit? And where does it get reflected?"}, {"start": 721.44, "end": 727.44, "text": " So, the second is intersection with scene objects. I have to realize that it hits this sphere."}, {"start": 727.44, "end": 733.44, "text": " Then I stop there, I compute the basic shading, turns, lighted diffuse and the rest."}, {"start": 733.44, "end": 740.44, "text": " And then I don't stop there, but I am interested in where the light is reflected. I need to continue from here."}, {"start": 740.44, "end": 746.44, "text": " And this light ray may be reflected or refracted."}, {"start": 746.44, "end": 753.44, "text": " And I need some kind of recursion in order to account for that. 
And the recursion works in the following way."}, {"start": 753.44, "end": 762.44, "text": " I stop at this point where I hit the ball, the sphere. And what I do is that I imagine that this is now the starting point of the ray."}, {"start": 762.44, "end": 768.44, "text": " And I am shooting the ray outwards and I start this ray tracing algorithm again."}, {"start": 768.44, "end": 774.44, "text": " So, this is how the recursion works. This was missing from the formula."}, {"start": 774.44, "end": 781.44, "text": " And this is just what the text version of what I have set for those who are meeting this at home."}, {"start": 781.44, "end": 809.44, "text": " And you will be in live refractions for some."}]
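The recursive ray tracing procedure described in the transcript above (shoot a ray, intersect the scene, shade, then restart from the hit point) can be sketched in Python. This is a sketch, not the lecturer's code: `intersect` and `shade` are hypothetical callbacks standing in for a real scene, and the color is a single scalar for simplicity.

```python
def reflect(d, n):
    # Ideal mirror direction: r = d - 2 (d . n) n, with n a unit normal.
    dn = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dn * b for a, b in zip(d, n))

def trace(origin, direction, intersect, shade, depth=0, max_depth=5):
    """One ray of the recursive algorithm from the lecture.

    `intersect(origin, direction)` returns a hit record or None;
    `shade(hit)` returns (local_color, reflectivity, hit_point, normal).
    Both are placeholders for a real scene.
    """
    if depth > max_depth:            # the recursion has to stop somewhere
        return 0.0
    hit = intersect(origin, direction)
    if hit is None:                  # the ray left the scene
        return 0.0
    local, reflectivity, point, normal = shade(hit)
    color = local                    # ambient + diffuse + specular terms
    if reflectivity > 0.0:
        # The hit point becomes the starting point of a new ray,
        # and the whole algorithm starts again from there.
        r = reflect(direction, normal)
        color += reflectivity * trace(point, r, intersect, shade,
                                      depth + 1, max_depth)
    return color
```

The depth cutoff is the practical compromise the lecture hints at: an ideal mirror corridor would recurse forever, so real ray tracers truncate after a fixed number of bounces.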
Two Minute Papers
https://www.youtube.com/watch?v=4gXPVoippTs
TU Wien Rendering #3 - BRDF models, The Rendering Equation
There are many materials in the world that we'd like to model in our program: mirrors, walls, car paint and so on. How do we characterize these different material properties mathematically? We use BRDFs (Bidirectional reflectance distribution functions) as a vehicle to accomplish this, and also discuss how to use them to formulate the holy grail of computer graphics, the rendering equation, the most fundamental equation of light transport. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Now there's another fundamental question, which is: what makes the difference between different materials? And the other question is, how do we model it? Well, different materials reflect incoming light into different directions, and they absorb different amounts of it at different wavelengths. That's the answer. We are going to talk a lot about this, but this is an example. These are different material models. So in the specular case, there is one incoming direction and there is one possible outgoing direction. That's it. This is what always happens. This is, for instance, a mirror, because I see exactly the reflection of myself. There's no other thing that I see in the mirror. But for a diffuse surface, for one incoming direction, there are many possible outgoing directions. And this gives a diffuse surface. We are going to see examples of that. The slide says spread. Please forget this term. Let's call this glossy instead, because this is what it is. This is like the mixture of these two. So these are some basic material models that we are going to see in our renderers later on. Now, to formalize this in a way, let's create a function that's a probability density function with three parameters. So this is a three-dimensional function. One variable is the incoming light direction. The other variable is a point on the surface. And what I'm interested in is how much light is flowing out from this point in different directions. Now, a bit more formalized: this FR is going to be this function. I'm interested in the incoming direction and the point in space. These two are what I have. And I would be interested in the outgoing directions. What is the probability of different outgoing directions? And this is how we will write it formally. Omega is an incoming direction. The x is the point in space that we choose. And omega prime is the outgoing direction. And this we call the BRDF, or bidirectional reflectance distribution function.
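To make this concrete, here is a minimal sketch (not from the lecture) of the simplest possible BRDF, the ideal diffuse one: it returns the same value no matter which outgoing direction you ask about. The albedo value is an arbitrary assumption for illustration.

```python
import math

def lambertian_brdf(omega_in, x, omega_out, albedo=0.75):
    # f_r(omega, x, omega') for an ideal diffuse surface: the surface
    # scatters incoming light equally into every outgoing direction, so
    # the function ignores both directions. The division by pi makes the
    # hemisphere integral of f_r * cos(theta) equal `albedo`, i.e. <= 1.
    return albedo / math.pi
```

A perfect mirror, by contrast, would be a Dirac delta: zero for every outgoing direction except the single ideally reflected one.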
So this is a very complicated name for something that's very simple. BRDF. Now what about materials that don't reflect all incoming light? There are some materials that transmit some of it. So for instance glass, water, gemstones and such. It could look like that. And here above you can see some BRDFs, and below you can see the transmission counterparts, because the light is not reflected, it's transmitted. There are some materials that let light through. So here's an example. Well, everyone has seen windows and things like that. Well, the question is why, like just a physical question, why are these objects transparent? Sorry? Yes, they transmit the light. But what is happening here exactly? So just some physical intricacy: the most fundamental question, you know, what is inside of an atom? And the best answer is nothing, because an atom is 99% empty space. There is the nucleus; imagine the whole atom is the size, for instance, of a football field. Then the nucleus is a small piece of rice in the middle of the football field. That's the nucleus. And the electrons are also very small things, like small grains of rice, which are orbiting the nucleus from very far away, like the sides of the football field. And in between, there's nothing. Absolutely nothing. So the more interesting question would be: why isn't everything transparent? I mean, there is absolutely nothing in there that would divert or absorb the light. Right? Everything, just everything, should go through. Why isn't everything transparent, not only glass, but everything? And the reason is absorption. So these electrons are orbiting the nucleus. And what essentially is happening is that electrons can absorb photons. Photons are, if you imagine light not as waves but as particles, then the photon is the basic particle of light. So electrons, they absorb photons. And if they do, they go from an inner orbit, like a lower energy level. They jump to a higher energy level.
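As an aside, the energy a single photon carries can be computed from E = hc/λ, which is what decides whether the jump between two energy levels is possible. This sketch is not from the lecture, and the example wavelengths below are typical textbook values, not lecture numbers.

```python
PLANCK_H = 6.62607015e-34   # Planck constant in J*s
LIGHT_C = 2.99792458e8      # speed of light in m/s
EV = 1.602176634e-19        # one electron-volt in joules

def photon_energy_ev(wavelength_nm):
    # E = h * c / lambda, converted from joules to electron-volts.
    return PLANCK_H * LIGHT_C / (wavelength_nm * 1e-9) / EV

# A ~300 nm ultraviolet photon carries almost twice the energy of a
# ~550 nm green photon, so it can trigger orbital jumps that visible
# light cannot.
```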
It's basically like you after lunch: you eat something, you get more energetic, you get more jumpy. So it jumps to an outer orbit from the nucleus. It's a bit further away. So it absorbs the light, so the light doesn't go through. So this is why most things are not transparent. But the question is: why is, then, glass transparent? And the answer is that these orbits, these different places around the nucleus, are so far apart that in the visible light spectrum, if the electrons absorb a photon, they don't get enough energy to jump to the next orbit. This is why most of the light is going through these glass-like materials. And the interesting thing is that this is not always the case. This is the case for the visible light spectrum. There is another spectrum, which is absorbed. So if you have a spectrum that is a higher energy spectrum, then it may give enough energy for this electron to jump to a different orbit. And we can easily find out what spectrum it is. Because, for instance, we use glass for a number of different beneficial things. Well, for instance, you cannot get sunburn if you are inside of the house and you have your windows closed. And we are wearing sunglasses in order to protect our eyes from something. So is there someone who can tell me what this spectrum is? That is exactly it, just a bit louder. Ultraviolet. So ultraviolet is a spectrum with a higher amount of energy. And if you absorb it, then this jump is possible. So this is why it is absorbed. So just some physical intricacies. So light may get reflected. If we have a material that most of the time reflects light, then we call it the BRDF. The BR is the interesting part. That is the reflection. And if it transmits, we can describe that with a material model as well: we have the BTDF, which is the bidirectional transmittance distribution function. And as an umbrella term for both of these, there is the whatever-it-is term, the BSDF. So, bidirectional scattering distribution function.
I am not saying this because this is lots of fun. I am saying this because you are going to find these terms in the literature all the time. So BSDF is basically things that reflect and things that transmit. Okay, what are the properties of BRDFs? And after this, we will suddenly put together something beautiful, very rapidly. So there is Helmholtz reciprocity. It means that the direction of the ray of light can be reversed. What it means mathematically is that I can swap the incoming and outgoing directions, and I am going to get the same probabilities. So the probability of going from here to there is the same as the probability of coming from there to here. If I look at things from both sides, I will get the same probabilities. So that is often useful in physics. Positivity, this is self-explanatory. Well, it cannot be less than zero. A probability cannot be less than zero. For every outgoing direction, there is some positive probability or there is zero. That is it. Nothing else is really possible. So formally this is how it looks, and it makes mathematicians awfully happy. And there is energy conservation, perhaps the most important property. An object may reflect or absorb incoming light, but it is impossible that more is coming out than the incoming amount. Well, obviously we have light sources and things like that, but here we are talking strictly about material models. So this means that if I integrate this function for all possible incoming directions, and if I take into consideration the light attenuation that we have talked about, this is why it is so hot at noon and why it is so cold at night, then I am going to get one or less. And this is because if it equals one, then this means that this kind of material reflects all light that comes in. And if it is less than one, then this means that some amount of light is absorbed. Okay, we are almost there at the rendering equation.
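Before moving on, all three properties can be checked numerically for the constant diffuse BRDF. The sketch below is an illustration with an assumed albedo, not course code; it estimates the energy-conservation integral with uniform hemisphere sampling.

```python
import math
import random

def brdf(omega_in, omega_out, albedo=0.8):
    # Ideal diffuse BRDF: constant, so swapping the two directions trivially
    # gives the same value (Helmholtz reciprocity), and it is never negative
    # (positivity).
    return albedo / math.pi

def reflected_fraction(omega_in, samples=200_000, seed=7):
    # Monte Carlo estimate of the hemisphere integral of
    # f_r(omega, omega') * cos(theta') d(omega'), which energy conservation
    # says must be <= 1. Uniform hemisphere sampling has pdf 1 / (2*pi).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        z = rng.random()                  # cos(theta'), uniform in [0, 1)
        phi = 2.0 * math.pi * rng.random()
        s = math.sqrt(max(0.0, 1.0 - z * z))
        omega_out = (s * math.cos(phi), s * math.sin(phi), z)
        total += brdf(omega_in, omega_out) * z * (2.0 * math.pi)
    return total / samples
```

For an albedo of 0.8 the estimate converges to 0.8: the material reflects 80% of the incoming light and absorbs the rest, so the integral stays below one.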
Generally, what we are going to do is that we pick a point x, and this direction is going to point towards the camera or my eye. This basically means the same thing. It is just an abstraction. And what I am going to be doing is I am going to sum up all the possible incoming directions where light can come to this point, and I am interested in how much is reflected towards my direction. And let us not forget that objects can emit light themselves. And we will also compute this reflected amount of light. So just intuition: light exiting the surface towards my eye is the amount that it emits itself, if it is a light source, plus the amount that it reflects from the incoming light that comes from the surroundings. And this is how we can formally write this with this beautiful integral equation. Let us take it apart and see what means what. This is the emitted light. So this is light from point x going towards my eye. How much of it? Well, the amount that is in point x emitted towards my eye; if it is a light source like that one, then I definitely have this amount. And there is the amount of light that is reflected. Let us see what is going on. This is what I just told you. And again, this is the integration. This is the interesting part. So I am integrating over this omega prime, so all possible incoming directions. You have seen the hemisphere on the previous image. Hemisphere means basically one half of the sphere. We are integrating over a hemisphere, not over a full sphere, because if we take into consideration the cosine: if the light comes from above, that cosine of 0 degrees is 1. And as I rotate this light source around this point, then this cosine will get to 90 degrees. So from here to there. And the cosine of 90 degrees is 0. Therefore, there is going to be no throughput if it comes from that direction. And if I have something beyond that, it would be negative. We don't deal with these cases.
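Written out, the integral equation just described (the rendering equation) is usually given in this standard notation, matching the terms on the slide:

```latex
L_o(x, \omega) \;=\; L_e(x, \omega) \;+\;
\int_{\Omega} f_r(x, \omega', \omega)\, L_i(x, \omega')\, \cos\theta' \,\mathrm{d}\omega'
```

Here L_e is the emitted term, the integral runs over the hemisphere Omega of incoming directions omega prime, f_r is the BRDF, L_i is the incoming radiance, and the cosine of theta prime is the attenuation factor discussed above.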
So this is why I am integrating over a hemisphere. So some light is coming to this point from different directions, and what I am interested in is how much of this light is reflected towards my eye. This is multiplied by the incoming radiance. There is the BRDF and light attenuation. That's it. This is still a bit difficult. This is still a bit confusing. So first we are going to train ourselves like bodybuilders on smaller weights. So we are going to create an easier version of this. Because apparently this is terribly difficult to solve. If you take a look, and if you sit down and try to solve it for a difficult scene where you have objects and geometries and different BRDFs, different material models, you will find that this is impossible to solve analytically. And one of the first problems is, yes, this equation is just for one point. So we are looking at one point and then we want to calculate. Yes. And here comes the catch. So I am interested in how much light is going towards my eye from this point. How much is it? Well, it depends. If I turn on other light sources, then this point is going to be brighter. Because the radiance coming out of this point depends on its surroundings. Is the window open? Are the curtains closed or not? So x, this point x, depends on this other point y, for instance. On all other points. Then we can say: let's not compute this x first, let's compute this y point instead, because then I will know x. But this y also depends on x, because how bright the light is on the other side of the room also depends on how bright it is on this side of the room. So there is some recursion in there. And if you don't think out of the box, this is impossible to solve, because you don't know where to start. This integral is hopeless to compute in closed form, because there may be shapes of different objects in there, and this will make integration immensely difficult. The integral is infinite dimensional.
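One way to get a feel for this recursion: if each bounce carries a fraction of the energy onward, say rho per bounce (an assumed number, for illustration only), then truncating after N bounces always leaves something behind, and only the infinite geometric sum is exact.

```python
def truncated_brightness(rho, bounces):
    # Sum of the contributions of bounce 0, 1, ..., `bounces`, where each
    # extra bounce keeps a fraction `rho` of the energy: a geometric series.
    return sum(rho ** k for k in range(bounces + 1))

# The exact answer is the infinite sum 1 / (1 - rho); any finite number of
# bounces underestimates it, which is why no fixed bounce count is "enough".
```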
Later you will see that if I compute 1 bounce, this x that I have been talking about, that's okay. But I need to compute multiple bounces. I need to start tracing rays from the camera and see how much light is entering the lens of the camera. But 1 bounce is not enough. Are 2 bounces enough? So after the x I continue somewhere else. Is this enough? Say something. It's not enough. Okay. But I think maybe 3 is enough. Is 3 enough? It's not enough. Okay. Well, you guys are very picky. Okay. Are 10 bounces enough? Okay. Why not? Because there is still some amount of energy left. If I would continue this light path, I would encounter other objects, and I don't have any knowledge of that. We need to compute an infinite amount of bounces. Even 1000 is not enough. And this rendering equation is going to be one bounce. And if I want to compute the second bounce, there is going to be embedded another integral, another rendering equation. And this goes on infinitely. This is the biggest equation in the whole universe. It's impossible to solve. And it is often singular. I will later show you why. So even if you would want to integrate it, you couldn't. So this is a fair bit difficult. This seems impossible. And apparently at this point we cannot solve it. So this is the end of the course. We have an impossible problem. There is no reason even to try. And goodbye. See you never, because there are never going to be any more lectures. But in order to understand what's going on, first we're going to put together a simple version of this equation that we can understand, and we can work our way up. There is another formulation of the rendering equation, and I am not going to deal with this too much. You can imagine this other version as moving points around. So there is a light source at P3 and there is the sensor at P0. And this is one example light path. And what I'm doing is I'm not stopping at one point and integrating all possible incoming directions.
Because this is what I did with the original formulation. What I do is I compute one light path. I compute how much light is going through. I add that to the sensor. And then I move this P2 around. I move it a bit. I compute the new light path. How much is going through? I move this P2 around again. So imagine this moving everywhere. And imagine also P1 moving everywhere. So all these points are moving everywhere. And I compute the contribution of this light source to the sensor. So this is another kind of integration. I'm not going to go through this. What is interesting is that there is a geometry term in there, and this describes the geometric relation of different points and the light attenuation between them. I'm not going to deal with this too much. I just put it here because if you are interested, then chew your way through it. In the literature they often write it this way.
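For completeness, the geometry term mentioned here is commonly written in the literature in the following standard form (not reproduced from the slide):

```latex
G(p \leftrightarrow p') \;=\; V(p, p')\,
\frac{\lvert \cos\theta \rvert \, \lvert \cos\theta' \rvert}{\lVert p - p' \rVert^{2}}
```

where V(p, p') is the binary visibility between the two points, theta and theta' are the angles between the connecting segment and the surface normals at p and p', and the squared distance in the denominator gives the attenuation between them.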
[{"start": 0.0, "end": 7.44, "text": " Now there's another fundamental question which is what makes the difference between different materials?"}, {"start": 7.44, "end": 12.52, "text": " And the other question is how do we model it?"}, {"start": 12.52, "end": 21.16, "text": " Well different materials reflecting coming right to different directions and they absorb different amounts of it in different wavelengths."}, {"start": 21.16, "end": 28.560000000000002, "text": " That's the answer. We are going to talk a lot about this, but this is an example. These are different material models."}, {"start": 28.56, "end": 35.56, "text": " So the specular case, there is one incoming direction and there is one possible output in direction."}, {"start": 35.56, "end": 43.56, "text": " That's it. This is what always happens. This is for instance a mirror because I see exactly the reflection of myself."}, {"start": 43.56, "end": 53.56, "text": " There's no other thing that I see in the mirror. But for a diffuse surface, for one incoming direction, there is many possible outcomes in many possible directions."}, {"start": 53.56, "end": 60.56, "text": " And this gives a diffuse surface. We are going to see examples of that. It writes spread."}, {"start": 60.56, "end": 67.56, "text": " Please forget this term. Let's call this go or see instead because this is what it is. This is like the mixture of these two."}, {"start": 67.56, "end": 73.56, "text": " So these are some basic material models that we are going to see in our renderers later on."}, {"start": 73.56, "end": 84.56, "text": " Now to formalize this in a way, let's create a function that's a probability density function with three parameters. So this is a three-dimensional function."}, {"start": 84.56, "end": 98.56, "text": " One variable is the incoming light direction. The other variable is a point on the surface. 
And what I'm interested in is how much light is flowing out from this point in different directions."}, {"start": 98.56, "end": 109.56, "text": " Now a bit more formalized. This FR is going to be this function. I'm interested in the incoming direction and the point in space."}, {"start": 109.56, "end": 118.56, "text": " This is true is what I have. And I would be interested in the outgoing directions. What is the probability of different outgoing directions?"}, {"start": 118.56, "end": 129.56, "text": " And this is how we will write formally. Omega is an incoming direction. The x is the point in space that we choose. And omega prime is the outgoing direction."}, {"start": 129.56, "end": 141.56, "text": " And this we call the BRDF or by direction or reflect as distribution of function. So this is a very complicated name for something that's very simple. BRDF."}, {"start": 141.56, "end": 155.56, "text": " Now what about materials that don't reflect all incoming light? There are some materials that transmit some of it. So for instance glass, water, gemstones and such."}, {"start": 155.56, "end": 174.56, "text": " But it could look like that. And here above you can see some BRDFs and below you can see some things because it's not reflected. It's transmitted. There are some materials that let them flew. So here's an example. Well everyone had seen windows and things like that."}, {"start": 174.56, "end": 203.56, "text": " Well, the question is why, like just a physical question, why are these objects transparent? Sorry? Yes, they transmit the light. But what is happening here exactly? So just some physical intricacy that the most fundamental question, you know, what is inside of an ever?"}, {"start": 203.56, "end": 220.56, "text": " And the best answer is nothing because an atom is 99% empty space. There is the nucleus, which is the whole atom is the size, for instance, of a football field. 
If you imagine that."}, {"start": 220.56, "end": 246.56, "text": " Then the nucleus is a small piece of rice in the middle of the football field. That's the nucleus. And the electrons are also very small things like small rises, which are orbiting the nucleus from very far away, like the side sides of the football field. And in between, there's nothing. Absolutely nothing."}, {"start": 246.56, "end": 266.56, "text": " So the more interesting question will be why is not everything transparent? I mean, there is absolutely nothing in there that would divert the or absorb the light. Right? Everything just everything should go through. Why is not everything transparent, not only glass, but everything."}, {"start": 266.56, "end": 280.56, "text": " And the reason is absorption. So these electrons are orbiting the nucleus. And what essentially is happening is that electrons can absorb hormones."}, {"start": 280.56, "end": 300.56, "text": " Phonons are, if you imagine, light as much rays or not waves, but particles, then the phonon is the basic particle of light. So electrons, they absorb hormones. And if they do, they go from an inner orbit, like a lower energy level. They jump to a higher energy level."}, {"start": 300.56, "end": 317.56, "text": " Because it's basically you after lunch, you eat something, you get more energetic, you get more jumpy. So it jumps to an outer orbit from the nucleus. It's a bit further away. So it absorbs the light so the light doesn't go through."}, {"start": 317.56, "end": 346.56, "text": " So this is why most things are not transparent. But the question is why is 10 glass transparent? And the answer is that these orbits, these different places around the nucleus, they are so far apart that in the visible light spectrum, if the electrons absorb a photon, they don't get enough energy to jump to the next orbit."}, {"start": 346.56, "end": 362.56, "text": " This is why most of the light is going through these plastic materials. 
And the interesting thing is that this is not always the case. This is the case for visible light spectrum. There is another spectrum, which is absorbed."}, {"start": 362.56, "end": 378.56, "text": " So if you have a spectrum that gives that is a higher energy spectrum, then it may give enough energy for this electron to jump to a different orbit. And we can easily find out what spectrum it is."}, {"start": 378.56, "end": 395.56, "text": " Because for instance we use glass for a number of different beneficial things. Well, for instance, you cannot get sunburn if you are inside of the house and you have your windows closed. And we are wearing sun glasses in order to protect our eyes from something."}, {"start": 395.56, "end": 414.56, "text": " So is there someone who tells me what this spectrum is? That is exactly just a bit louder. Ultraviolet. So ultraviolet is a spectrum with a higher amount of energy."}, {"start": 414.56, "end": 430.56, "text": " And if you absorb it, then this jump is possible. So this is why it is absorbed. So just some physical intricacies. So lights may get reflected. If we have a material that most of the time reflects light, then we call it the BRDF."}, {"start": 430.56, "end": 450.56, "text": " The BR is the interesting part. That is the reflection. And if it transmits, it is possible with the material model. We have the BTDF, which is the bidirectional transmittance distribution function. And as an umbrella term for both of these, this is basically the whatever term is BSDF."}, {"start": 450.56, "end": 468.56, "text": " So bidirectional scattering distribution function. I am not saying this because this is lots of fun. I am saying this because you are going to find these terms in the literature all the time. So BSDF is basically things that reflect and things that transmit."}, {"start": 468.56, "end": 483.56, "text": " Okay, what are the properties of BRDS? And after this, we will suddenly put together something beautiful, very rapidly. 
So there is Helmholtz reciprocity. It means that the direction of the ray of light can be reversed."}, {"start": 483.56, "end": 490.56, "text": " What it means mathematically is that I can swap the incoming and outgoing directions. And I am going to get the same probabilities."}, {"start": 490.56, "end": 505.56, "text": " So the probability of going from here to there is the same probability as coming from there to here. If I look at things from both sides, I will get the same probabilities."}, {"start": 505.56, "end": 525.56, "text": " So that is often useful in physics. Positivity, this is self-explanatory. Well, it cannot be less than zero. A probability cannot be less than zero. For every outgoing direction, there is some positive probability or there is zero. That is it. Nothing else is really possible."}, {"start": 525.56, "end": 538.56, "text": " So formally this is how it looks, and it makes mathematicians awfully happy. And there is energy conservation, perhaps the most important property."}, {"start": 538.56, "end": 553.56, "text": " An object may reflect or absorb incoming light, but it is impossible that more is coming out than the incoming amount. Well, obviously we have light sources and things like that, but here we are talking about materials that do not emit light themselves."}, {"start": 553.56, "end": 574.56, "text": " So this means that if I integrate this function for all possible incoming directions, then, if I take into consideration the light attenuation that we have talked about, this is why it is so hot at noon and why it is so cold at night, I am going to get one or less."}, {"start": 574.56, "end": 588.56, "text": " And this is because if it equals one, then this means that this kind of material reflects all light that comes in. And if it is less than one, then this means that some amount of light is absorbed."}, {"start": 588.56, "end": 606.56, "text": " Okay, we are almost there at the rendering equation. 
Generally what we are going to do is that we pick a point x, and this direction is going to point towards the camera or my eye. This basically means the same thing. It is just an abstraction."}, {"start": 606.56, "end": 621.56, "text": " And what I am going to be doing is I am going to sum up all the possible incoming directions where light can come to this point, and I am interested in how much is reflected towards my direction."}, {"start": 621.56, "end": 627.56, "text": " And let us not forget that objects can emit light themselves."}, {"start": 627.56, "end": 646.56, "text": " And we will also compute this reflected amount of light. So just intuition: light exiting the surface towards my eye is the amount that it emits itself, if it is a light source, plus the amount that it reflects from the incoming light that comes from the surroundings."}, {"start": 646.56, "end": 658.56, "text": " And this is how we can formally write this with this beautiful integral equation. Let us see, let us tear it apart and see what means what."}, {"start": 658.56, "end": 673.56, "text": " This is the emitted light. So this is light from point x going towards my eye. How much of it? Well, the amount that is emitted in point x towards my eye; if it is a light source like that one, then I definitely have this amount."}, {"start": 673.56, "end": 685.56, "text": " And there is the amount of light that is reflected. Let us see what is going on. This is what I just told you. And this is the integration. This is the interesting part."}, {"start": 685.56, "end": 694.56, "text": " So I am integrating over this omega prime, over all possible incoming directions. You have seen the hemisphere on the previous image."}, {"start": 694.56, "end": 712.56, "text": " A hemisphere basically means one half of the sphere. 
We are integrating over a hemisphere, not over a full sphere, because we take into consideration the cosine: if the light comes from above, the cosine of 0 degrees is 1."}, {"start": 712.56, "end": 729.56, "text": " And as I rotate this light source around this point, then this cosine will get to 90 degrees. So from here to there. And the cosine of 90 degrees is 0. Therefore there is going to be no throughput if it comes from that direction."}, {"start": 729.56, "end": 750.56, "text": " And if I have something beyond that, it would be negative. We don't deal with these cases. So this is why I am integrating over a hemisphere. So some light is coming to this point from different directions. And what I am interested in is how much of this light is reflected towards my eye."}, {"start": 750.56, "end": 763.56, "text": " This is multiplied by the incoming radiance. There is the BRDF and light attenuation. That's it."}, {"start": 763.56, "end": 777.56, "text": " This is still a bit difficult. This is still a bit convoluted. So first we are going to train ourselves like bodybuilders on smaller weights. So we are going to create an easier version of this."}, {"start": 777.56, "end": 797.56, "text": " Because apparently this is terribly difficult to solve. If you take a look and if you sit down and try to solve it for a difficult scene where you have objects and geometries and different BRDFs, different material models, you will find that this is impossible to solve analytically."}, {"start": 797.56, "end": 808.56, "text": " And one of the first problems is: this equation is just for one point. So we are looking at one point and then we want to calculate it."}, {"start": 808.56, "end": 818.56, "text": " Yes. And here comes the catch. So I am interested in how much light is going towards my eye from this point."}, {"start": 818.56, "end": 830.56, "text": " How much is it? Well, it depends. 
If I turn on other light sources, then this point is going to be brighter. Because the radiance coming out of this point depends on its surroundings."}, {"start": 830.56, "end": 841.56, "text": " Is the window open? Are the curtains drawn or not? So this point x depends on this other point y, for instance. All other points."}, {"start": 841.56, "end": 848.56, "text": " Then we can say: let's not compute this x first. Let's compute this point y first instead, because then I will know x."}, {"start": 848.56, "end": 858.56, "text": " But this y also depends on x, because how bright light is on the other side of the room also depends on how bright it is on this side of the room."}, {"start": 858.56, "end": 872.56, "text": " So there is some recursion in there. And if you don't think out of the box, this is impossible to solve because you don't know where to start."}, {"start": 872.56, "end": 883.56, "text": " This integral is hopeless to compute in closed form because there may be shapes of different objects in there, and this will make integration immensely difficult."}, {"start": 883.56, "end": 892.56, "text": " The integral is infinite dimensional. Later you will see that if I compute 1 bounce, this x that I have been talking about, that's okay."}, {"start": 892.56, "end": 904.56, "text": " But I need to compute multiple bounces. I need to start tracing rays from the camera and see how much light is entering the lens of the camera."}, {"start": 904.56, "end": 916.56, "text": " But 1 bounce is not enough. Is 2 bounces enough? So after the x I continue somewhere else. Is this enough? Say something."}, {"start": 916.56, "end": 931.56, "text": " It's not enough. Okay. But I think maybe 3 is enough. Is 3 enough? It's not enough. Okay. Well, you guys are very picky. Okay. Is 10 bounces enough?"}, {"start": 931.56, "end": 942.56, "text": " Okay. Why not? Because there is still some amount of energy left. 
If I would continue this light path, I would encounter other objects, and I don't have any knowledge of that."}, {"start": 942.56, "end": 952.56, "text": " We need to compute an infinite amount of bounces. Even 1000 is not enough. And this rendering equation is going to be one bounce."}, {"start": 952.56, "end": 962.56, "text": " And if I want to compute the second bounce, there is going to be another integration embedded in there, another rendering equation. And this goes on infinitely."}, {"start": 962.56, "end": 970.56, "text": " This is the biggest equation in the whole universe. It's impossible to solve. And it is often singular. I will later show you why."}, {"start": 970.56, "end": 984.56, "text": " So even if you wanted to integrate it, you couldn't. So this is by far too difficult. This seems impossible. And apparently at this point we cannot solve it. So this is the end of the course."}, {"start": 984.56, "end": 995.56, "text": " And we have an impossible problem. There is no reason even to try. And goodbye. See you never, because there are not going to be any more lectures."}, {"start": 995.56, "end": 1005.56, "text": " But in order to understand what's going on, first we're going to put together a simple version of this equation that we can understand, and we can work our way up."}, {"start": 1005.56, "end": 1016.56, "text": " There is another formulation of the rendering equation, and we are not going to deal with this too much. You can imagine this other version as moving points around."}, {"start": 1016.56, "end": 1030.56, "text": " So there is a light source in P3 and there is the sensor at P0. And this is one example light path. And what I'm doing is I'm not stopping at one point and integrating all possible incoming directions."}, {"start": 1030.56, "end": 1043.56, "text": " Because this is what I did with the original formulation. What I do is I compute one light path. I compute how much light is going through. I add that to the sensor. 
And then I move this P2 around."}, {"start": 1043.56, "end": 1053.56, "text": " I move it a bit. I compute the new light path. How much is going through? I move this P2 around again. So imagine this moving everywhere. And imagine also P1 moving everywhere."}, {"start": 1053.56, "end": 1065.56, "text": " So all these points are moving everywhere. And I compute the contribution of this light source to the sensor. So this is another kind of integration. I'm not going to go through this."}, {"start": 1065.56, "end": 1077.56, "text": " What is interesting is that there is a geometry term in there. And this describes the geometric relation of different points and the light attenuation between them."}, {"start": 1077.56, "end": 1098.56, "text": " I'm not going to deal with this too much. I just put it here because if you are interested, then chew your way through it. In literature they often write it this way."}]
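The hemispherical reflection integral described in this lecture can be sketched numerically. Below is a minimal Monte Carlo sketch in Python, assuming a Lambertian BRDF (albedo/π) as a stand-in material and a constant "sky" of incoming radiance; the function names and the toy scene are hypothetical, not the course's code.

```python
import math
import random

def sample_hemisphere(rng):
    """Uniform direction on the upper hemisphere (normal = +z); pdf = 1/(2*pi)."""
    z = rng.random()                       # cos(theta) is uniform on [0, 1]
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def reflected_radiance(albedo, incoming_radiance, n_samples=20000, seed=1):
    """One-bounce Monte Carlo estimate of the reflection integral
       L_r = integral over the hemisphere of f_r * L_i * cos(theta) dw',
    with a Lambertian BRDF f_r = albedo / pi and constant incoming radiance."""
    rng = random.Random(seed)
    f_r = albedo / math.pi
    pdf = 1.0 / (2.0 * math.pi)            # uniform hemisphere sampling
    total = 0.0
    for _ in range(n_samples):
        w_in = sample_hemisphere(rng)
        cos_theta = w_in[2]                # the light-attenuation cosine term
        total += f_r * incoming_radiance * cos_theta / pdf
    return total / n_samples

print(round(reflected_radiance(albedo=0.6, incoming_radiance=1.0), 2))  # ~0.6
```

With a sky of radiance 1 everywhere, the estimate converges to the albedo: the material reflects that fraction of the incoming light and never more, which is exactly the energy-conservation property of a BRDF. A full path tracer would replace the constant incoming radiance with a recursive call, one embedded integral per bounce, which is why the full equation is infinite dimensional.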
Two Minute Papers
https://www.youtube.com/watch?v=fSB4mqnm5lA
TU Wien Rendering #2 - Radiometry Recap, Light Attenuation
Course website: https://www.cg.tuwien.ac.at/courses/Rendering/VU.SS2019.html Before trying to understand the nature of light, we have to have a grip on the basic units our simulation can rely on. What quantity would be adequate for this? Radiant flux? Irradiance? Radiance? And what do these words mean, anyway? We also briefly discuss how light is attenuated, why it's so hot at noon and why it's getting colder in the afternoon. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinement are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
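To keep the three radiometric quantities from this lecture apart (flux, irradiance, radiance), here is a toy calculation with made-up numbers; none of these values come from the lecture, they only illustrate the units.

```python
# Hypothetical numbers, purely to relate the units:
flux = 100.0        # radiant flux in watts (joules per second) through some surface
area = 2.0          # size of that surface in square meters
solid_angle = 0.5   # solid angle of the incoming directions, in steradians

irradiance = flux / area                  # W / m^2: flux normed by unit area
radiance = flux / (area * solid_angle)    # W / (m^2 * sr): also normed by solid angle

print(irradiance)   # 50.0
print(radiance)     # 100.0
```

The progression mirrors the lecture: flux alone is ambiguous (same watts through a tiny or a huge surface), irradiance fixes the area ambiguity, and radiance additionally pins down the direction, which is why it is the quantity light simulations work with.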
Okay, so let's jump into the thick of it. What do we measure in a simulation? A quick recap from the last lecture, this is going to be just a few minutes. So, first, radiant flux. This is the total amount of energy passing through a surface per second. What does it mean? I imagine some shape anywhere in this space, and I count the amount of energy that is passing through this shape every second. What is the unit of it? It's watts, or joules per second. And this is apparently not enough. This is not descriptive enough to create light simulations. And please raise your hand if you know why. Okay, well, let's take a look. So this says that the amount of energy passing through a surface is measured per second. So when we measure a high radiant flux value somewhere, we don't know if we have measured a lot of energy that passes through a small surface, or if we have drawn or imagined a large surface, and there's just a bit of energy passing through. This is the same amount of radiant flux. So this metric is ambiguous. It's not good enough. And this is just an image to imagine what is really happening. So, what is the solution for the time being? Let's compute the flux by unit area. This we call irradiance. And this unit area means that we don't imagine any kind of shape. We imagine something that is one square meter. So we norm by square meters. So I have explicitly said that it's going to be this big, and whatever is going through this, this is what I'm interested in. Okay, well, unfortunately this is still ambiguous. And the reason for this is that we haven't taken into consideration what angle the light comes in. And you will hear about this in a second. So it matters whether you get a lot of energy in a big angle or a small amount of energy in a small angle. This is ambiguous. So let's remedy this by also norming with the angle. So we are talking about unit angle. So beside the square meters, we also divide by steradians. Well, what does it mean? 
So steradians are basically angles in multiple dimensions. Because in the textbook, there is only one angle to take into consideration if you draw a triangle. But if I would like to look at, for instance, you, it matters that I turn my head in the right direction. But if I would be looking here, I wouldn't be seeing you. So I need to take into account another direction. So this is what we need the steradians for. So multiple directions. The next question is, so this was radiance, which is normed by square meters and normed by steradians. Why is this still not good enough? Raise your hand if you know the answer. Well, nothing. It's fine as it is. So there are going to be questions like that. Make sure to think it through, because I think last year someone was almost falling out of the chair. I know. I know. I know. Okay. This is fine. You can build simulations. Okay. So how do we do the actual light simulation? What I'm interested in is how much light exits the surface at a given point. So I pick a point in space, and the direction is going to be the direction of my eye. How much light is coming through from there? The solution is obviously the Maxwell equations. Why? Maxwell equations tell you how electromagnetic waves behave, and light is an electromagnetic wave in a given spectrum. That is around visible light, as you heard in the last lecture, from about 400 nanometers to 730. That's more or less the visible spectrum. Well, apparently some people are overly excited about the Maxwell equations. Myself included. Well, I don't have a tattoo like that. I reserve that spot for the rendering equation at some point. Let's see about that. But unfortunately this doesn't work. Hopefully Thomas has said some things about this. The basic principle is that if it's really nanometers, then we would need to have a simulation on the scale of nanometers. And that's impossible. That's the simple way to put it. 
And the solution is going to be the rendering equation. And if you would like a tattoo of an equation, I would propose definitely having the rendering equation. You will see how beautiful it is. But at this point, we are not ready to digest all of it. So, let's have some theory before that. This is the trivial part. Okay. So, scalar product. The scalar product is a number. So, on the left side, I have two vectors. On the right side, I have a number. And the scalar product of the A and B vectors is the length of A times the length of B times the cosine of the angle between the two vectors. In this course, even if I don't say anything about the length of the vectors, a length of one is assumed. Almost every single vector is going to be normalized. So, if they are normalized, then the length of A and the length of B is one. So, this is strictly going to be about the angle between the two vectors. So, the scalar products are going to be cosines of angles. Okay. Some notation. This is what you are going to see in many of the figures in the literature. What's going on? This point x is the point of interest. This is where we compute some unit. And V is the direction towards the viewer. It's flipped on purpose. I'm going to fix that in a second. So, V is a direction towards the viewer. Okay. So, if I have this projector above me, the V vector would be pointing towards me if the x is there. N is the surface normal. L is the vector pointing towards the light source. Okay. So, if I would be at this point, then this L vector would point towards, for instance, that light source. R is the reflected ray direction. This means that I have a point. I have a light source. Light is coming towards that point. And R is where it's going to be reflected. So, again, an example. There is the projector. This is the point x. This is where the light comes from. And this is the reflected direction. So, this is flipped along the surface normal. 
You will see examples of all of these. And theta i and theta r are the incident and reflected angles. And because we are going to be computing scalar products and things with vectors, it is important that these vectors that we are talking about are starting from the same point. So, generally in the images, you are going to see this x and some vectors that are pointing outwards all the time. Because these vectors I can use for computations. And just another important thing. This is the mathematical definition of R. This is how you compute the actual reflected vector. But I think you have done this before in previous courses. I think it is the ECG, but unfortunately I don't remember the name. But there is some basic ray tracing. Is there? There is some ECG. You will need it for a shadow. But even if you haven't seen it, you will see this information here. And you will see how this works. Let's talk about light attenuation. With some experience. Let's be practical. So, the sun shines onto a point of the surface from above. What portion of the output of one ray will hit the surface? Well, this is something like diffuse shading. So, I am going to compute a dot product between L and N. L is the vector towards the light and N is the surface normal. Well, it seems to me that L and N is the very same thing in this scene. So, this angle is going to be 0 degrees. And the cosine of 0 is 1. So, I am not going to have any kind of light attenuation in this case. So, let's take another example. So, the sun is around here. And this is the light vector, and you can also see the R. Just as an example. This is where it is reflected. So, I am computing this diffuse shading formula again. So, L dot N. Now, there is some angle. Let's say that this is 45 degrees. 
Another extreme case where it is almost at a 90 degree angle. Well, the cosine of 90 degrees is 0. So, this means that there is tons of light attenuation. And this is the reason why it is the hardest point of the day is moon when the sun is exactly above us. And after that, it is just, it is usually, if you do not take into consideration anything else, then it is only going to get colder and colder. And this is why it is so cold at night. So, we can neatly model this light attenuation with a simple dot product, which is the cosine of these vectors.
[{"start": 0.0, "end": 7.0, "text": " Okay, so let's jump into the thick of it. What do we measure in a simulation?"}, {"start": 7.0, "end": 11.0, "text": " A quick recap from the last lecture, this is going to be just a few minutes."}, {"start": 11.0, "end": 18.0, "text": " So, first, radiant flux. This is the total amount of energy passing through a surface per second."}, {"start": 18.0, "end": 23.0, "text": " What does it mean? I imagine some shape anywhere in this space,"}, {"start": 23.0, "end": 29.0, "text": " and I count the amount of energy that is passing through this shape every second."}, {"start": 29.0, "end": 37.0, "text": " What is the unit of it? It's watts or joule per second. This is the state."}, {"start": 37.0, "end": 44.0, "text": " And this is apparently not enough. This is not descriptive enough to create light simulations."}, {"start": 44.0, "end": 51.0, "text": " And please raise your hand if you know why."}, {"start": 51.0, "end": 57.0, "text": " Okay, well, let's take a look. So this says that amount of energy passing through a surface"}, {"start": 57.0, "end": 66.0, "text": " is measured per second. So when we measure a high radiant flux value somewhere,"}, {"start": 66.0, "end": 72.0, "text": " we don't know if we have measured a lot of energy that passes through a small surface,"}, {"start": 72.0, "end": 78.0, "text": " or if we have drawn or imagined a large surface, and there's just a bit of energy passing through."}, {"start": 78.0, "end": 85.0, "text": " This is the same amount of radiant flux. So this metric is ambiguous."}, {"start": 85.0, "end": 94.0, "text": " It's not good enough. And this is just an image to imagine what is really happening."}, {"start": 94.0, "end": 100.0, "text": " So, what is the solution for the time being? Let's compute the flux by unit area."}, {"start": 100.0, "end": 106.0, "text": " This we call iridians. 
And this unit area means that we don't imagine any kind of shape."}, {"start": 106.0, "end": 113.0, "text": " We imagine something that is one square meter. So we normed by square meters."}, {"start": 113.0, "end": 119.0, "text": " So I have explicitly said that it's going to be this big and whatever is going through this,"}, {"start": 119.0, "end": 125.0, "text": " this is what I'm interested in. Okay, well, unfortunately this is still ambiguous."}, {"start": 125.0, "end": 133.0, "text": " And the reason for this is that we haven't taken into consideration what angle the light comes in."}, {"start": 133.0, "end": 141.0, "text": " And you will hear about this in a second. So it matters whether you get a lot of energy in a big angle"}, {"start": 141.0, "end": 149.0, "text": " or a small amount of energy in a small angle. This is ambiguous."}, {"start": 149.0, "end": 154.0, "text": " So let's remedy this by also norming with the angle."}, {"start": 154.0, "end": 159.0, "text": " So we are talking about unit angle. So this meters, this square meters,"}, {"start": 159.0, "end": 164.0, "text": " we also divide by steridians. Well, what does it mean?"}, {"start": 164.0, "end": 173.0, "text": " So steridians is basically angle in multiple dimensions. Because in the textbook,"}, {"start": 173.0, "end": 177.0, "text": " there is only one angle to take into consideration if you draw a triangle."}, {"start": 177.0, "end": 184.0, "text": " But if you would like to look at, for instance, you, it matters that I turn my head to the right direction in this direction."}, {"start": 184.0, "end": 190.0, "text": " But if I would be looking here, I wouldn't be seeing you. So I need to take kind of another direction."}, {"start": 190.0, "end": 199.0, "text": " So this is what we need to know of the piste steridians. So multiple directions."}, {"start": 199.0, "end": 207.0, "text": " Next question is, so this was radians. 
What's normed piste square meters, normed piste steridians?"}, {"start": 207.0, "end": 218.0, "text": " Why is this still not good enough? Raise your hand if you know the answer."}, {"start": 218.0, "end": 225.0, "text": " Well, nothing. It's fine as it is. So it's going to be questions like that."}, {"start": 225.0, "end": 233.0, "text": " Make sure to think it through because I think last year someone was almost folding out of the chair."}, {"start": 233.0, "end": 236.0, "text": " I know. I know. I know."}, {"start": 236.0, "end": 242.0, "text": " Okay. This is fine. You can build simulations."}, {"start": 242.0, "end": 251.0, "text": " Okay. So how do we do the actual light simulation? What I'm interested in is how much light exits the surface at a given point."}, {"start": 251.0, "end": 256.0, "text": " So I pick a point in space and the direction is going to be the direction of pi i."}, {"start": 256.0, "end": 260.0, "text": " How much is light? How much light is coming through from there?"}, {"start": 260.0, "end": 264.0, "text": " Solution is obviously the Maxwell equations. Why?"}, {"start": 264.0, "end": 275.0, "text": " Maxwell equations tell you how electromagnetic waves behave and light is an electromagnetic wave in a given spectra."}, {"start": 275.0, "end": 286.0, "text": " That is around visible light as you heard in the last lecture about from 400 nanometers to 730. That's more or less the visible spectra."}, {"start": 286.0, "end": 296.0, "text": " Well, apparently some people are overly excited about from Maxwell equations. Myself included. Well, I don't have a clue to have to like that."}, {"start": 296.0, "end": 302.0, "text": " I reserve this part for the rendering equation at some point. Let's see about that."}, {"start": 302.0, "end": 309.0, "text": " So, but unfortunately this doesn't work. 
Hopefully Thomas have said some things about this."}, {"start": 309.0, "end": 317.0, "text": " The basic principle is that if it's really nanometers, then we would need to have a simulation on the scale of nanometers."}, {"start": 317.0, "end": 321.0, "text": " And that's impossible. That's the simple way to put it."}, {"start": 321.0, "end": 325.0, "text": " And the solution is going to be the rendering equation."}, {"start": 325.0, "end": 332.0, "text": " And if you would like a tattoo of an equation, I would propose definitely having the rendering equation."}, {"start": 332.0, "end": 341.0, "text": " You will see how beautiful it is. But at this point, we are not ready to digest all of it. So, let's have some theory before that."}, {"start": 341.0, "end": 346.0, "text": " This is the trivial part. Okay. So, scalar product. Scalar product is a number."}, {"start": 346.0, "end": 351.0, "text": " So, on the left side, I have two vectors. On the right side, I have a number."}, {"start": 351.0, "end": 361.0, "text": " And the scalar product is of A and B. Vectors is the length of A times the length of B times the cosine of the angle between the two vectors."}, {"start": 361.0, "end": 369.0, "text": " In this course, if even if I don't say anything about the length of the vectors, a length of one is assumed."}, {"start": 369.0, "end": 376.0, "text": " Almost every single vector is going to be normalized. So, if they are normalized, then A, length, and B, length is one."}, {"start": 376.0, "end": 382.0, "text": " So, this is strictly going to be the angle between the two vectors. So, the cosines are going to be angles."}, {"start": 382.0, "end": 387.0, "text": " I mean, the cosine of the angle. Okay. Sound notation."}, {"start": 387.0, "end": 393.0, "text": " This is what you are going to see in many of the figures in the literature. What's going on?"}, {"start": 393.0, "end": 402.0, "text": " This point of, this is, axis the point of interest. 
This is where we compute some unit."}, {"start": 402.0, "end": 410.0, "text": " And V is the direction towards the viewer. It's flipped on purpose. I'm going to fix that in a second."}, {"start": 410.0, "end": 421.0, "text": " So, V is a direction towards the viewer. Okay. So, if I have this projector above me, the V vector would be pointing towards me if the axis is there."}, {"start": 421.0, "end": 428.0, "text": " And is the surface normal? L is the vector pointing towards the light source."}, {"start": 428.0, "end": 435.0, "text": " Okay. So, if I would be at this point, then this L vector would be towards, for instance, that light source."}, {"start": 435.0, "end": 442.0, "text": " R is the reflected ray direction. This means that I have a point. I have a light source."}, {"start": 442.0, "end": 449.0, "text": " Light is coming towards that point. And R is where it's going to be reflected. So, again, an example."}, {"start": 449.0, "end": 455.0, "text": " There is the projector. This is the point text. This is where the light comes from. And this is the reflected direction."}, {"start": 455.0, "end": 459.0, "text": " So, this is flipped along the surface normal."}, {"start": 459.0, "end": 468.0, "text": " You will see examples of all of these. And theta i and R are incidental from your reflected angles."}, {"start": 468.0, "end": 479.0, "text": " And because we are going to be computing scalar products and things with vectors, it is important that these vectors that we are talking about are starting from the same point."}, {"start": 479.0, "end": 491.0, "text": " So, generally in the images, you are going to see this X and some vectors that are pointing outwards all the time. Because these vectors I can use for computations."}, {"start": 491.0, "end": 501.0, "text": " And just another important thing. This is the mathematical definition of R. 
This is how you compute the actual reflected vector."}, {"start": 501.0, "end": 512.0, "text": " But I think you have done this before in previous courses. I think is it not the ECG, but unfortunately I don't remember the name."}, {"start": 512.0, "end": 514.0, "text": " But there is some basic ray tracing."}, {"start": 514.0, "end": 515.0, "text": " Is there?"}, {"start": 515.0, "end": 519.0, "text": " There is some ECG. You will need it for a shadow."}, {"start": 519.0, "end": 531.0, "text": " But even if you haven't seen it, you will see this info. And you will see how this works. Let's talk about light attenuation."}, {"start": 531.0, "end": 536.0, "text": " With some experience. Let's be practical."}, {"start": 536.0, "end": 543.0, "text": " So, the sun shines onto a point of the surface from above. What portion of the output of one ray will hit the surface?"}, {"start": 543.0, "end": 553.0, "text": " Well, this is something like a diffuse shading. So, I am going to compute a dot program between L and N. L is towards the light vector and is the surface normal."}, {"start": 553.0, "end": 564.0, "text": " Well, it seems to me that L and N is the very same thing in this scene. So, this cosine is going to be 0 degrees."}, {"start": 564.0, "end": 572.0, "text": " So, the cosine of 0 is 1. So, I am not going to have any kind of light attenuation in this case."}, {"start": 572.0, "end": 582.0, "text": " So, let's take another example. So, the sun is around here. And this is the light vector and you can also see the R."}, {"start": 582.0, "end": 586.0, "text": " Just as an example. This is where it is reflected."}, {"start": 586.0, "end": 595.0, "text": " So, I am computing this diffuse shading formula again. So, L dot N. Now, there is some angle. Let's say that this is 45 degrees."}, {"start": 595.0, "end": 609.0, "text": " 45 degrees is the cosine of 45 degrees is 1 over square root of 2. So, the square root of 2 is 1.41. 
So, 1 over 1.41, that is around 0.7."}, {"start": 609.0, "end": 616.0, "text": " So, there is some light attenuation if the sun is located here."}, {"start": 616.0, "end": 627.0, "text": " And what about the extreme case? Another extreme case where it is almost at a 90 degree angle. Well, the cosine of 90 degrees is 0."}, {"start": 627.0, "end": 639.0, "text": " So, this means that there is tons of light attenuation. And this is the reason why the hottest part of the day is noon, when the sun is exactly above us."}, {"start": 639.0, "end": 647.0, "text": " And after that, if you do not take anything else into consideration, it is only going to get colder and colder."}, {"start": 647.0, "end": 652.0, "text": " And this is why it is so cold at night."}, {"start": 652.0, "end": 679.0, "text": " So, we can neatly model this light attenuation with a simple dot product, which is the cosine of the angle between these vectors."}]
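The diffuse shading and reflection formulas discussed in this lecture can be sketched in a few lines of C++. This is a minimal illustration; the `Vec` type and the function names are hypothetical helpers, not taken from the course renderer:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector (hypothetical helper type).
struct Vec { double x, y, z; };

double dot(const Vec &a, const Vec &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Lambertian light attenuation: for unit vectors, dot(L, N) is the cosine of
// the angle between them; clamp at zero so light from below does not contribute.
double diffuse(const Vec &L, const Vec &N) {
    return std::fmax(0.0, dot(L, N));
}

// Reflected direction about the surface normal: R = 2 (N . L) N - L.
// All vectors are assumed to be unit length and to start from the shaded point.
Vec reflect(const Vec &L, const Vec &N) {
    const double d = 2.0 * dot(N, L);
    return { d * N.x - L.x, d * N.y - L.y, d * N.z - L.z };
}
```

With the sun directly overhead (L equal to N) the attenuation is 1; at 45 degrees it is about 0.71; at a grazing 90-degree angle it drops to 0, matching the examples in the lecture.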
Two Minute Papers
https://www.youtube.com/watch?v=pjc1QAI6zS0
TU Wien Rendering #1 - Introduction
Course website: https://www.cg.tuwien.ac.at/courses/Rendering/VU.SS2019.html A quick introduction where we learn how to pronounce my name, then I talk about what to expect from the course and what the assignments will be like. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state of the art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/twominutepapers
We are going to start with the most difficult thing in the entire semester. It's about how to pronounce my name. Okay? Can you please find the lights for this spotlight because it's a bit hard to see? Oh, alright. I forgot this. Okay. This one? Yes. Okay. So, I am from Hungary and this is pronounced as Károly Zsolnai. Károly is essentially the equivalent of Karl in German or Charles in English. Zsolnai, well, often there is no equivalent for that. So, I am sorry. So, it is pronounced as Károly. If you imagine this as like an English word, then you forget the L and you just pronounce it like that. So, it's Károly. Okay? So, I'd like to hear some examples of Károly. Károly. Okay? Károly. Yes? Károly. Excellent. Károly. A bit louder? Károly. It's the answer to the answer. So, it's Károly. So, it's like a Y at the end. It's Károly. One more time. Károly. Lada. Károly. Yes. Excellent. Károly. Károly. Wow. It's amazing. Okay. Now comes the hard part. So, this is pronounced as Zsolnai. Okay? So, it's Hungarian. It's a weird language where Z and S is actually one letter. So, if you take a look at the Hungarian alphabet, there's a letter that is Z. There's a letter that is S. And there's a third letter that is Z and S together. So, it's pretty ridiculous, isn't it? So, this is pronounced as Zsolnai. So, the Zs is the difficult part. Okay. Zsolnai. Wow. Zsolnai. Yes. It's a good word. Zsolnai. Hi. Can you pronounce Zsolnai? Zsolnai. Yeah. It's just a common thing. You don't need to be meaning anything. Okay. Zsolnai. Yes. Zsolnai. Yes. Zsolnai. Yes. Wow. Are you Hungarian? No, not really. I mean, Hungarian. Yes. Parents may be sicker than maybe. Mining, okay. Zsolnai. Yeah. Wow. It's great. Not quite. Zsolnai. Zsolnai. Okay. Is there someone who I have forgotten? Or everyone knows what's up. Okay. So, this is what we're going to be doing. So, the amazing thing is that when you see images on the Internet like that, sometimes it's difficult to find out if this is computer graphics or is it a real photograph. And this is one of these examples. 
This is another example. And this is the work of amazing engineers and amazing artists. And we are going to be talking about how to compute images like that. And if you look at this, well, when you download the slides at home, you will see that on the lens, the dust is modeled. Here you can see just some small splotches, but you can actually see pieces of dust on the lens of the camera. And this is computed with a computer program. And by the end of the semester, you are going to know everything about this. How is this exactly computed? Every single pixel. Just a few things about organization. There's going to be assignments. They take up 40% of your grade. And these assignments will have, most of them will have theoretical parts, pen and paper, how to understand what's going on in nature. And there will also be programming exercises, but they are not really programming exercises. It's mostly using programs, understanding what they are doing, and maybe modifying them here and there. But you are not going to write huge rendering engines and things like that. So don't worry about this. The 60% part is an oral exam. And this is going to take place after the semester with me. So this is some friendly discussion about what you have learned. Or not so friendly discussion if you haven't learned anything. But that's never been the case. So I'm just kidding. And this is going to take place with me. But you can choose. So if you would like to have the exam with Thomas, that's also fine. But I would like to note that I am an engineer. And he's a brilliant physicist. So if you can choose who to deal with, I would choose the engineer. I don't know about you, but. Just a suggestion. And there can be some kind of individual contribution. So if you find some errors on the slides, if you add some figures, like, hey, I don't like this figure, I've drawn something better than that. There's going to be programs that we work with. 
If you find some bugs in there, or if you can extend it in any way, you can get plus points. And this applies to basically any kind of contribution that you have. This is the book that we're going to learn from. So it is trying to cover everything. And at some point, I will say that, yeah, please open the book if you would like to know more about this. But whatever I tell you here, it's going to be enough for the exam. So it's not going to happen that, hey, why haven't you read, I don't know, page 859. Don't you remember that? This is not going to happen. So this is to augment your knowledge. If you would like to know more, this is a great place to do so. And this has a website. This can be bought at different places. There are some sample chapters. And before buying, you can take a look whether you like it enough or not. So it's pretty cool that it has sample chapters. Let's start with what you shouldn't expect from this course. I'll just run through it. There's not going to be rigorous derivations of every imaginable equation that you have. So there are courses where there is this never-ending infinite loop like in programming. An infinite loop of definition, theorem, corollary, definition, theorem, lemma. Raise your hand if you've been to a course like that. I'm not going to tell anyone. So I've had a lot of these courses and I've had enough. So I'm trying to do it differently. There's not going to be endless derivations. There's not going to be an endless stream of formulae without explanation either. There's going to be formulae, but with explanation we're going to play with all of them. And you are going to understand the core meaning of these things. And at the same time, please don't expect to understand everything if you open, for instance, the Luxrender source code. This is like a half a million line of code project. One of the best renderers out there. But there's many. There's many really good renderers. 
You will not understand every single thing that is there, but you will understand how the light transport part works. As thoroughly as possible. And the most important thing, I've put it there in bold because this is what students love: you don't have to memorize any of the formulae. I will never tell you that, give me this formula and you will have to remember it off the top of your head. I don't care. If you're an engineer at a company, you'll see that when you need to solve a problem and don't remember something, what do you do? Google. You look for it. It's not important to remember things. It's important to understand things. So if you look at the formula, you will have to understand what is going on. And that's intuition. This is what I would like you to have as much as possible, but you don't need to memorize any of these. Now, what you should expect from this course is how to simulate light in a simple and elegant way. This is going to be a surprise at first because things are going to look complicated. And by the end, we're going to derive really simple solutions for that that can be implemented in 200 lines of C++. So these 200 lines can compute something that's almost as beautiful as what you have seen here. And I have written this piece of code, and every theorem that we learn about, you are going to see them in code. In fact, there's going to be an entire lecture on code review. Let's go through this renderer. And see, there is Schlick's approximation. There is Snell's law. There is this and that. And everything you learn here, you are going to see in code. It's not just flying out and doing no. You will know why nature looks the way it does in real life. And you will be wondering: there are so many beautiful things, why haven't I seen them the way they are? Why are they looking the way they are? And you will also know about most of the state of the art in global illumination. This means that yes, we will start with algorithms from 1968. 
And we will end with algorithms from this year, like from two weeks ago, or in the next few weeks, because SIGGRAPH is coming, like the SIGGRAPH papers, the biggest conference with the best of the bunch, in the next few weeks. And I'm going to read through it and the materials will be updated to the very latest works. And another thing that is really important is that you will be able to visualize and understand complicated formulae in a really intuitive way. So I would like you to learn something that's not only light transport specific, but you will be able to use this knowledge wherever you go; whatever kind of mathematical problems you have, this knowledge will be useful. And you will see from the very first lecture. And the most important thing is that you will see the world differently. There are lots of beautiful things in nature and you won't be able to stop looking at them. So you will perhaps like taking the train on public transport a bit more than before, because there are so many intricate, delicate things to see that you haven't seen before. You've looked, but you haven't seen them before. Stay up.
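The code review the lecture promises mentions Schlick's approximation and Snell's law appearing in a 200-line C++ renderer. As a minimal sketch in that spirit (the function name and parameters here are illustrative, not from the course code), Schlick's approximation estimates the Fresnel reflectance at a dielectric boundary:

```cpp
#include <cassert>
#include <cmath>

// Schlick's approximation of the Fresnel reflectance:
//   F(theta) = F0 + (1 - F0) * (1 - cos(theta))^5,
// where F0 = ((n1 - n2) / (n1 + n2))^2 is the reflectance at normal incidence,
// and n1, n2 are the indices of refraction on the two sides of the surface.
double schlick(double cosTheta, double n1, double n2) {
    double f0 = (n1 - n2) / (n1 + n2);
    f0 *= f0;
    return f0 + (1.0 - f0) * std::pow(1.0 - cosTheta, 5.0);
}
```

For an air-to-glass boundary (n1 = 1.0, n2 = 1.5) this gives about 4% reflectance at normal incidence, rising toward 100% at grazing angles, which is why glass and water look mirror-like when viewed from a shallow angle.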
[{"start": 0.0, "end": 4.4, "text": " We are going to start with the most difficult thing in the entire semester."}, {"start": 4.4, "end": 7.0, "text": " It's about how to pronounce my name."}, {"start": 7.0, "end": 9.0, "text": " Okay?"}, {"start": 9.0, "end": 13.0, "text": " Can you please find the lights for this spotlight because it's a bit hard to see?"}, {"start": 13.0, "end": 14.0, "text": " Oh, alright."}, {"start": 14.0, "end": 15.0, "text": " I forgot this."}, {"start": 15.0, "end": 16.0, "text": " Okay."}, {"start": 16.0, "end": 17.0, "text": " This one?"}, {"start": 17.0, "end": 18.0, "text": " Yes."}, {"start": 18.0, "end": 19.0, "text": " Okay."}, {"start": 19.0, "end": 28.0, "text": " So, I am from Hungary and this is pronounced as Kaui Jone."}, {"start": 28.0, "end": 34.0, "text": " Kaui is essentially the equivalent of Kaui in Deutsch or Charles in English."}, {"start": 34.0, "end": 37.0, "text": " Jone is often there is no equivalent for that."}, {"start": 37.0, "end": 38.0, "text": " So, I am sorry."}, {"start": 38.0, "end": 40.0, "text": " So, it is pronounced as Kaui."}, {"start": 40.0, "end": 47.0, "text": " If you imagine this as like an English word, then you forget the L and you just pronounce it like that."}, {"start": 47.0, "end": 48.0, "text": " So, it's Kaui."}, {"start": 48.0, "end": 49.0, "text": " Okay?"}, {"start": 49.0, "end": 53.0, "text": " So, I'd like to hear some examples of Kaui."}, {"start": 53.0, "end": 54.0, "text": " Kaui."}, {"start": 54.0, "end": 55.0, "text": " Okay?"}, {"start": 55.0, "end": 56.0, "text": " Kaui."}, {"start": 56.0, "end": 57.0, "text": " Yes?"}, {"start": 57.0, "end": 58.0, "text": " Kaui."}, {"start": 58.0, "end": 59.0, "text": " Excellent."}, {"start": 59.0, "end": 60.0, "text": " Kaui."}, {"start": 60.0, "end": 61.0, "text": " A bit louder?"}, {"start": 61.0, "end": 62.0, "text": " Kaui."}, {"start": 62.0, "end": 64.0, "text": " It's the answer to the answer."}, {"start": 64.0, "end": 65.0, "text": 
" So, it's Kaui."}, {"start": 65.0, "end": 68.0, "text": " So, it's like a Y at the end."}, {"start": 68.0, "end": 69.0, "text": " It's Kaui."}, {"start": 69.0, "end": 70.0, "text": " One more time."}, {"start": 70.0, "end": 71.0, "text": " Kaui."}, {"start": 71.0, "end": 72.0, "text": " Lada."}, {"start": 72.0, "end": 73.0, "text": " Kaui."}, {"start": 73.0, "end": 74.0, "text": " Yes."}, {"start": 74.0, "end": 75.0, "text": " Excellent."}, {"start": 75.0, "end": 76.0, "text": " Kaui."}, {"start": 76.0, "end": 77.0, "text": " Kaui."}, {"start": 77.0, "end": 78.0, "text": " Wow."}, {"start": 78.0, "end": 79.0, "text": " It's amazing."}, {"start": 79.0, "end": 80.0, "text": " Okay."}, {"start": 80.0, "end": 82.0, "text": " Now, how comes the hard part?"}, {"start": 82.0, "end": 83.0, "text": " So, this is pronounced as Jone."}, {"start": 83.0, "end": 87.0, "text": " Okay?"}, {"start": 87.0, "end": 88.0, "text": " So, it's Hungarian."}, {"start": 88.0, "end": 92.0, "text": " It's a weird language where Z and S is actually one letter."}, {"start": 92.0, "end": 95.0, "text": " So, if you take a look at the Hungarian alphabet, there's a letter that is Z."}, {"start": 95.0, "end": 97.0, "text": " There's a letter that is S."}, {"start": 97.0, "end": 101.0, "text": " And there's a third letter that is Z and S together."}, {"start": 101.0, "end": 104.0, "text": " So, it's pretty ridiculous, isn't it?"}, {"start": 104.0, "end": 107.0, "text": " So, this is pronounced as Jone."}, {"start": 107.0, "end": 110.0, "text": " So, the Sch is the difficult part."}, {"start": 110.0, "end": 111.0, "text": " Okay."}, {"start": 111.0, "end": 113.0, "text": " Jone."}, {"start": 113.0, "end": 114.0, "text": " Wow."}, {"start": 114.0, "end": 115.0, "text": " Jone."}, {"start": 115.0, "end": 116.0, "text": " Yes."}, {"start": 116.0, "end": 117.0, "text": " It's a good word."}, {"start": 117.0, "end": 118.0, "text": " Jone."}, {"start": 118.0, "end": 119.0, "text": " Hi."}, {"start": 119.0, 
"end": 120.0, "text": " Can you pronounce Jone?"}, {"start": 120.0, "end": 121.0, "text": " Jone."}, {"start": 121.0, "end": 122.0, "text": " Yeah."}, {"start": 122.0, "end": 123.0, "text": " It's just a common thing."}, {"start": 123.0, "end": 124.0, "text": " You don't need to be meaning anything."}, {"start": 124.0, "end": 125.0, "text": " Okay."}, {"start": 125.0, "end": 126.0, "text": " Jone."}, {"start": 126.0, "end": 127.0, "text": " Yes."}, {"start": 127.0, "end": 128.0, "text": " Jone."}, {"start": 128.0, "end": 129.0, "text": " Yes."}, {"start": 129.0, "end": 130.0, "text": " Jone."}, {"start": 130.0, "end": 131.0, "text": " Yes."}, {"start": 131.0, "end": 132.0, "text": " Wow."}, {"start": 132.0, "end": 133.0, "text": " Are you Hungarian?"}, {"start": 133.0, "end": 134.0, "text": " No, not really."}, {"start": 134.0, "end": 135.0, "text": " I mean, Hungarian."}, {"start": 135.0, "end": 136.0, "text": " Yes."}, {"start": 136.0, "end": 139.0, "text": " Parents may be sicker than maybe."}, {"start": 139.0, "end": 141.0, "text": " Mining, okay."}, {"start": 141.0, "end": 142.0, "text": " Jone."}, {"start": 142.0, "end": 143.0, "text": " Yeah."}, {"start": 143.0, "end": 144.0, "text": " Wow."}, {"start": 144.0, "end": 145.0, "text": " It's great."}, {"start": 145.0, "end": 147.0, "text": " It's not Jone."}, {"start": 147.0, "end": 148.0, "text": " Jone."}, {"start": 148.0, "end": 149.0, "text": " Jone."}, {"start": 149.0, "end": 150.0, "text": " Okay."}, {"start": 150.0, "end": 153.0, "text": " Is there someone who I have forgotten?"}, {"start": 153.0, "end": 155.0, "text": " Or everyone knows what's up."}, {"start": 155.0, "end": 156.0, "text": " Okay."}, {"start": 156.0, "end": 159.0, "text": " So, this is what we're going to be doing."}, {"start": 159.0, "end": 164.0, "text": " So, the amazing thing is that when you see images on the Internet like that,"}, {"start": 164.0, "end": 169.0, "text": " sometimes it's difficult to find out if this is computer 
graphics"}, {"start": 169.0, "end": 171.0, "text": " or is it a real photograph."}, {"start": 171.0, "end": 173.0, "text": " And this is one of these examples."}, {"start": 173.0, "end": 176.0, "text": " This is another example."}, {"start": 176.0, "end": 182.0, "text": " And this is the work of amazing engineers and amazing artists."}, {"start": 182.0, "end": 188.0, "text": " And we are going to be talking about how to compute images like that."}, {"start": 188.0, "end": 192.0, "text": " And if you look at this, well, when you download the slides at home,"}, {"start": 192.0, "end": 196.0, "text": " you will see that on the lens, the dust is modeled."}, {"start": 196.0, "end": 199.0, "text": " Here you can see just some small splotges,"}, {"start": 199.0, "end": 203.0, "text": " but you can actually see pieces of dust on the lens of the camera."}, {"start": 203.0, "end": 206.0, "text": " And this is computed with the computer program."}, {"start": 206.0, "end": 211.0, "text": " And by the end of the semester, you are going to know everything about this."}, {"start": 211.0, "end": 213.0, "text": " How is this exactly computed?"}, {"start": 213.0, "end": 217.0, "text": " Every single pixel."}, {"start": 217.0, "end": 220.0, "text": " Just a few things about organization."}, {"start": 220.0, "end": 223.0, "text": " There's going to be assignments."}, {"start": 223.0, "end": 226.0, "text": " They take up 40% of your grade."}, {"start": 226.0, "end": 232.0, "text": " And these assignments will have, most of them will have theoretical parts, pen and paper,"}, {"start": 232.0, "end": 237.0, "text": " how to understand what's going on in nature."}, {"start": 237.0, "end": 244.0, "text": " And there will be also programming exercises, but they are not really that programming exercises."}, {"start": 244.0, "end": 251.0, "text": " It's mostly using programs, understanding what they are doing, and maybe modifying them here and there."}, {"start": 251.0, "end": 256.0, "text": " 
But you are not going to write huge rendering engines and things like that."}, {"start": 256.0, "end": 260.0, "text": " So don't worry about this."}, {"start": 260.0, "end": 263.0, "text": " The 60% part is a world exam."}, {"start": 263.0, "end": 267.0, "text": " And this is going to take place after the semester with me."}, {"start": 267.0, "end": 271.0, "text": " So this is some friendly discussion about what you have learned."}, {"start": 271.0, "end": 275.0, "text": " Or not so friendly discussion if you haven't learned anything."}, {"start": 275.0, "end": 277.0, "text": " But that's never been case."}, {"start": 277.0, "end": 279.0, "text": " So I'm just kidding."}, {"start": 279.0, "end": 281.0, "text": " And this is going to take place with me."}, {"start": 281.0, "end": 282.0, "text": " But you can choose."}, {"start": 282.0, "end": 288.0, "text": " So if you would like to have the exam with Thomas, that's also fine."}, {"start": 288.0, "end": 292.0, "text": " But I would like to note that I am an engineer."}, {"start": 292.0, "end": 294.0, "text": " And he's a brilliant physicist."}, {"start": 294.0, "end": 299.0, "text": " So if you choose who to try to deal with, I would choose the engineer."}, {"start": 299.0, "end": 302.0, "text": " I don't know about you, but."}, {"start": 302.0, "end": 305.0, "text": " Just a suggestion."}, {"start": 305.0, "end": 309.0, "text": " And there can be some kind of individual contribution."}, {"start": 309.0, "end": 318.0, "text": " So if you find some errors on the slides, if you add some figures that, hey, I don't like this figure, I've drawn something better than that."}, {"start": 318.0, "end": 321.0, "text": " There's going to be programs that we work with."}, {"start": 321.0, "end": 326.0, "text": " If you find some bugs in there, or if you can extend it in any way, you can get plus points."}, {"start": 326.0, "end": 331.0, "text": " And this applies to basically any kind of contribution that you have."}, {"start": 
331.0, "end": 334.0, "text": " This is the book that we're going to learn from."}, {"start": 334.0, "end": 337.0, "text": " So there is a trying to cover everything."}, {"start": 337.0, "end": 342.0, "text": " And at some point, I will say that, yeah, please open the book if you would like to know more about this."}, {"start": 342.0, "end": 347.0, "text": " But whatever I tell you here, it's going to be enough for an exam."}, {"start": 347.0, "end": 354.0, "text": " So it's not going to happen that, hey, why haven't you read, I don't know, page 859."}, {"start": 354.0, "end": 357.0, "text": " Don't you remember that? This is not going to happen."}, {"start": 357.0, "end": 359.0, "text": " So this is to augment your knowledge."}, {"start": 359.0, "end": 362.0, "text": " If you would like to know more, and this is a great place to do."}, {"start": 362.0, "end": 365.0, "text": " And this has a website. This can be bought at different places."}, {"start": 365.0, "end": 367.0, "text": " There is some sample chapters."}, {"start": 367.0, "end": 377.0, "text": " And before buying, you can take a look whether you like it enough or not."}, {"start": 377.0, "end": 380.0, "text": " So it's pretty cool that it has sample chapters."}, {"start": 380.0, "end": 384.0, "text": " Let's start with what you shouldn't expect from this course. I'll just run through it."}, {"start": 384.0, "end": 389.0, "text": " There's not going to be rigorous derivations of every imaginable equation that you have."}, {"start": 389.0, "end": 398.0, "text": " So there are courses where there is this never ending infinite loop like in programming."}, {"start": 398.0, "end": 404.0, "text": " An infinite loop of definition theorem, corollary definition theorem, theorem, lemma."}, {"start": 404.0, "end": 408.0, "text": " Raise your hand if you've been to a course like that."}, {"start": 408.0, "end": 412.0, "text": " I'm not going to tell anyone. 
I'm not going to tell anyone."}, {"start": 412.0, "end": 416.0, "text": " So I've had a lot of these courses and I've had enough."}, {"start": 416.0, "end": 419.0, "text": " So I'm trying to do it differently."}, {"start": 419.0, "end": 422.0, "text": " There's not going to be endless derivations."}, {"start": 422.0, "end": 427.0, "text": " There's not going to be an endless stream of formulae as well without explanation."}, {"start": 427.0, "end": 431.0, "text": " There's going to be formulae, but with explanation we're going to play with all of them."}, {"start": 431.0, "end": 435.0, "text": " And you are going to understand the core meaning of these things."}, {"start": 435.0, "end": 447.0, "text": " And at the same time, please don't expect to understand everything if you open, for instance, the Luxrender source code."}, {"start": 447.0, "end": 452.0, "text": " This is like a half a million line of code project."}, {"start": 452.0, "end": 456.0, "text": " One of the best renders out there. But there's many. There's many really good renders."}, {"start": 456.0, "end": 464.0, "text": " You will not understand every single thing that is there, but you will understand how the light transport part works."}, {"start": 464.0, "end": 467.0, "text": " As thoroughly as possible."}, {"start": 467.0, "end": 473.0, "text": " And the most important thing I've put it there in bold because this is what students love."}, {"start": 473.0, "end": 477.0, "text": " You don't have to memorize any of the formulae."}, {"start": 477.0, "end": 484.0, "text": " I will never tell you that, give me this formula and you will have to remember it off your head."}, {"start": 484.0, "end": 489.0, "text": " I don't care. If you're an engineer at a company, you'll see that you need to solve a problem."}, {"start": 489.0, "end": 496.0, "text": " Remember something, what you do. Google. And you look for it. 
It's not important to remember things."}, {"start": 496.0, "end": 502.0, "text": " It's important to understand things. So if you look at the formula, you will have to understand what is going on."}, {"start": 502.0, "end": 511.0, "text": " And that's intuition. This is what I would like you to have as much as possible, but you don't need to memorize any of these."}, {"start": 511.0, "end": 519.0, "text": " Now, what you should expect from this course is how to simulate light in a simple and elegant way."}, {"start": 519.0, "end": 523.0, "text": " This is going to be a surprise at first because things are going to look complicated."}, {"start": 523.0, "end": 532.0, "text": " And by the end, we're going to derive really simple solutions for that that can be implemented in 200 lines of C++."}, {"start": 532.0, "end": 537.0, "text": " So these 200 lines can compute something that's almost as beautiful as what you have seen here."}, {"start": 537.0, "end": 546.0, "text": " And I have written this piece of code and every theorem that we learn about, you are going to see them in code."}, {"start": 546.0, "end": 551.0, "text": " In fact, there's going to be an entire lecture on code review."}, {"start": 551.0, "end": 557.0, "text": " Let's go through this renderer. And see there is Schlich's approximation."}, {"start": 557.0, "end": 563.0, "text": " There is Snell's law. There is this and that. And everything you learn here, you are going to see in code."}, {"start": 563.0, "end": 572.0, "text": " It's not just flying out and doing no. 
You will know why nature looks like as it does in real life."}, {"start": 572.0, "end": 580.0, "text": " And you will be wondering that there's so many beautiful things and why haven't I seen them the way they are."}, {"start": 580.0, "end": 583.0, "text": " Why are they looking the way they are?"}, {"start": 583.0, "end": 588.0, "text": " And you will know about also most of the state of the art in global illumination."}, {"start": 588.0, "end": 595.0, "text": " This means that yes, we will start with algorithms from 1968."}, {"start": 595.0, "end": 602.0, "text": " And we will end with algorithms from this year, like from two weeks ago, or in the next few weeks,"}, {"start": 602.0, "end": 610.0, "text": " because the C-graph is coming, like the C-graph paper, the biggest conference with the best of the bunch is coming in the next few weeks."}, {"start": 610.0, "end": 617.0, "text": " And I'm going to read through it and the materials will be updated to the very latest works."}, {"start": 617.0, "end": 628.0, "text": " And another thing is that really important is that you will be able to visualize and understand complicated formulae in a really intuitive way."}, {"start": 628.0, "end": 637.0, "text": " So I would like you to learn something that's not only life transport specific, but you will be able to use this knowledge wherever you go,"}, {"start": 637.0, "end": 642.0, "text": " whatever kind of mathematical problems you have, this knowledge will be useful."}, {"start": 642.0, "end": 647.0, "text": " And you will see from the very first lecture."}, {"start": 647.0, "end": 652.0, "text": " And the most important thing is that you will see the world differently."}, {"start": 652.0, "end": 659.0, "text": " There is lots of beautiful things in nature and you won't be able to stop looking at it."}, {"start": 659.0, "end": 665.0, "text": " So you will perhaps like taking the train on public transport a bit more than before,"}, {"start": 665.0, "end": 672.0, "text": 
" because there's so many intricate, delicate things to see that you haven't seen before."}, {"start": 672.0, "end": 701.0, "text": " You've looked, but you haven't seen them before. Stay up."}]
Two Minute Papers
https://www.youtube.com/watch?v=V1eYniJ0Rnk
Google DeepMind's Deep Q-learning playing Atari Breakout!
Google DeepMind created an artificial intelligence program using deep reinforcement learning that plays Atari games and improves itself to a superhuman level. It is capable of playing many Atari games and uses a combination of deep artificial neural networks and reinforcement learning. After presenting their initial results with the algorithm, Google almost immediately acquired the company for several hundred million dollars, hence the name Google DeepMind. Please enjoy the footage and let me know if you have any questions regarding deep learning! ______________________ Recommended for you: 1. How DeepMind's AlphaGo Defeated Lee Sedol - https://www.youtube.com/watch?v=a-ovvd_ZrmA&index=58&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e 2. How DeepMind Conquered Go With Deep Learning (AlphaGo) - https://www.youtube.com/watch?v=IFmj5M5Q5jg&index=42&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e 3. Google DeepMind's Deep Q-Learning & Superhuman Atari Gameplays - https://www.youtube.com/watch?v=Ih8EfvOzBOY&index=14&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e Subscribe if you would like to see more content like this: http://www.youtube.com/subscription_center?add_user=keeroyz - Original DeepMind code: https://sites.google.com/a/deepmind.com/dqn/ - Ilya Kuzovkin's fork with visualization: https://github.com/kuz/DeepMind-Atari-Deep-Q-Learner - This patch fixes the visualization when reloading a pre-trained network. 
The window will appear after the first evaluation batch is done (typically a few minutes): http://cg.tuwien.ac.at/~zsolnai/wp/wp-content/uploads/2015/03/train_agent.patch - This configuration file will run Ilya Kuzovkin's version with less than 1GB of VRAM: http://cg.tuwien.ac.at/~zsolnai/wp/wp-content/uploads/2015/03/run_gpu - The original Nature paper on this deep learning technique is available here: http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html - And some mirrors that are not behind a paywall: http://www.cs.swarthmore.edu/~meeden/cs63/s15/nature15b.pdf http://diyhpl.us/~nmz787/pdf/Human-level_control_through_deep_reinforcement_learning.pdf Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
you you you you
[{"start": 0.0, "end": 2.0, "text": " you"}, {"start": 30.0, "end": 32.0, "text": " you"}, {"start": 60.0, "end": 62.0, "text": " you"}, {"start": 90.0, "end": 92.0, "text": " you"}]
Two Minute Papers
https://www.youtube.com/watch?v=cS4Am7Q8wmM
Procedural Generation of Hand-drawn like Line Art
A technique to mimic and apply artistic drawing style on 3D models using reinforcement learning. Details: http://cg.tuwien.ac.at/~zsolnai/gfx/prodecural_brush_synthesis_paper/ This technique is expected to be used in the feature-length film, Egill, The Last Pagan: http://www.imdb.com/title/tt1492806/?ref_=fn_al_tt_1 Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
[]
Two Minute Papers
https://www.youtube.com/watch?v=xjiFVSSMkfI
Control of Newtonian fluids with minimum force impact using the Navier Stokes equations
This is a program to simulate and control Newtonian fluids on the GPU by solving the Navier-Stokes equations. It's lots of fun and you should definitely try it out! The project was published at the Eurographics 2013 Poster Session. Details: http://cg.tuwien.ac.at/~zsolnai/gfx/real_time_fluid_control_eg/ http://cg.tuwien.ac.at/~zsolnai/control-of-newtonian-fluids-with-minimum-force-impact-using-the-navier-stokes-equations/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Today is a temporary day at Dria, it's the top habit that it wouldn't be going Connected to Dria is notions
[{"start": 0.0, "end": 1.22, "text": " Today is a temporary day at Dria,"}, {"start": 1.22, "end": 10.32, "text": " it's the top habit that it wouldn't be going Connected to Dria"}, {"start": 10.32, "end": 19.580000000000002, "text": " is not"}, {"start": 19.580000000000002, "end": 23.92, "text": "ions"}]
Two Minute Papers
https://www.youtube.com/watch?v=7SFw6sdyzcQ
Real-time Control and Stopping of Fluids by Károly Zsolnai and László Szirmay-Kalos
This is a program to simulate and control Newtonian fluids on the GPU by solving the Navier-Stokes equations. It's lots of fun and you should definitely try it out! The project was published at the Eurographics 2013 Poster Session. Details and code for this project are available here: http://cg.tuwien.ac.at/~zsolnai/gfx/real_time_fluid_control_eg/ http://cg.tuwien.ac.at/~zsolnai/control-of-newtonian-fluids-with-minimum-force-impact-using-the-navier-stokes-equations/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
...'
[{"start": 0.0, "end": 16.0, "text": "...'"}]
Two Minute Papers
https://www.youtube.com/watch?v=27PYlj-qNb0
A parallel genetic algorithm for Roger Alsing’s EvoLisa problem (triangles)
Genetic algorithms can solve a multitude of optimization problems by the digital modeling of natural selection, mutation and recombination. This algorithm attempts to draw a faithful representation of the Mona Lisa using only a few triangles. It is implemented in C++ and OpenGL and takes less than 400 lines of code. It is also a parallel implementation of a genetic algorithm, therefore it uses multiple CPU cores. The entire code is available for download below. Code and details for this project are available here: http://cg.tuwien.ac.at/~zsolnai/gfx/mona_lisa_parallel_genetic_algorithm/ Roger Alsing's original work: http://rogeralsing.com/2008/12/07/genetic-programming-evolution-of-mona-lisa/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
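The core evolutionary loop can be sketched in a few lines (illustrative only: here the genome is a plain vector standing in for the triangle parameters, and the fitness is a direct pixel distance, whereas the real program mutates triangle vertices and colors and scores the rendered image):

```python
import numpy as np

# Minimal mutate-and-select loop in the spirit of EvoLisa: perturb the
# genome, keep the child only if its "drawing" matches the target better.
rng = np.random.default_rng(42)
target = rng.random(64)               # stand-in for the target image pixels
genome = np.zeros(64)
fitness = np.sum((genome - target) ** 2)

for _ in range(5000):
    # mutate roughly 10% of the genes with small Gaussian perturbations
    child = genome + rng.normal(0.0, 0.05, 64) * (rng.random(64) < 0.1)
    f = np.sum((child - target) ** 2)
    if f < fitness:                   # selection: keep only improvements
        genome, fitness = child, f

print(fitness)    # far below the initial error
```

The parallel version in the video evaluates many such candidates per generation across CPU cores instead of one at a time.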
Don'ts everybody, sorry. Thank you.
[{"start": 0.0, "end": 22.6, "text": " Don'ts everybody, sorry."}, {"start": 22.6, "end": 47.88, "text": " Thank you."}]
Two Minute Papers
https://www.youtube.com/watch?v=Zwj94pIAwzg
Volumetric path tracing with equiangular sampling in a 2k binary
Volumetric path tracing enables us to render the interactions of light and solid objects or a participating medium, such as haze, fog and so on. Equiangular sampling speeds up this process substantially. Details and binary: http://cg.tuwien.ac.at/~zsolnai/gfx/volumetric-path-tracing-with-equiangular-sampling-in-2k/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
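The sampling scheme itself is compact enough to sketch (a hedged illustration for a single point light; variable names are mine, not taken from the binary). Equiangular sampling places distances along the ray with a density proportional to one over the squared distance to the light, which is where the 1/r^2 spikes of in-scattered lighting live:

```python
import numpy as np

def equiangular_sample(u1, ray_o, ray_d, light, t0, t1):
    """Sample a distance t in [t0, t1] along a ray (ray_d normalized)
    with pdf proportional to 1 / (squared distance to a point light)."""
    delta = np.dot(light - ray_o, ray_d)            # closest-approach parameter
    D = np.linalg.norm(light - (ray_o + delta * ray_d))
    theta_a = np.arctan((t0 - delta) / D)
    theta_b = np.arctan((t1 - delta) / D)
    theta = theta_a + u1 * (theta_b - theta_a)      # uniform in angle
    t = delta + D * np.tan(theta)
    pdf = D / ((theta_b - theta_a) * (D ** 2 + (t - delta) ** 2))
    return t, pdf

# sanity check: for uniform u, the average of 1/pdf equals the interval
# length t1 - t0, because du = pdf * dt under this warp
u = (np.arange(20000) + 0.5) / 20000.0
t, pdf = equiangular_sample(u, np.zeros(3), np.array([1.0, 0.0, 0.0]),
                            np.array([5.0, 2.0, 0.0]), 0.0, 10.0)
print(np.mean(1.0 / pdf))
```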
you
[{"start": 0.0, "end": 2.0, "text": " you"}]
Two Minute Papers
https://www.youtube.com/watch?v=mU-5CsaPfsE
Separable Subsurface Scattering - Unofficial talk by Károly Zsolnai
Separable Subsurface Scattering - a novel technique for real-time subsurface light transport calculations for computer games by Activision-Blizzard. This technique can render translucent objects such as human skin, marble, milk, plant leaves in real time on commodity hardware. The paper was published in Computer Graphics Forum (CGF) in 2015 and was presented at the Eurographics Symposium on Rendering (EGSR) in 2015. Let us know if you have any questions regarding this work and we'll be happy to assist you! Project webpage: http://cg.tuwien.ac.at/~zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Friendly greetings to everyone, my name is Károly Zsolnai and I promise to you that the pronunciation of my name is going to be the most complicated thing in this unofficial talk. This piece of work is a collaboration between Activision Blizzard, the Universidad de Zaragoza and the Technical University of Vienna. The quest here was to render images with really high quality subsurface scattering in real time on commodity hardware. To render photorealistic images, we populate a scene with objects, add the camera and the light source, and start tracing rays from the camera to determine the incoming radiance. Now even though there is a large volume of research going on on how to do this efficiently, this is still a really time consuming process. What's more, this figure here shows light transport only between surfaces, meaning that we suppose that rays of light propagate only in vacuum. If we extend our program to support participating media, we can render volumetric effects like smoke, haze and many others, and also translucent materials such as skin, plant leaves, marble, wax and so on. However, this extension bumps up the dimensionality of the integral we need to solve, making the process even more time consuming. But the reward for this is immense. Here on the left you can see how our skin would look without subsurface scattering. It is indeed a very important factor in the visual appearance of many translucent materials. It is not a surprise that the motion picture and the gaming industries are yearning for a real time solution for this. There are fortunately simplified models to render subsurface light transport in optically thick materials. What we do here is take an infinite half space of a chosen translucent material and shine an infinitesimally thin pencil beam from above at normal incidence. This beam will penetrate the surface of the material and will start to attenuate as it becomes more and more submerged into the medium.
During this process, these photons undergo many scattering events and eventually exit somewhere away from the origin. Counting up these photons exiting at different distances, we can build a histogram that we call a diffusion profile and will denote as RD. This is an actual simulated diffusion profile, and this is what it looks like if we look at it from above. Another important bit of preliminary knowledge is that we can directly use these diffusion profiles by convolving them with an input irradiance map to add subsurface scattering to it as a post-processing step. This is how the result looks after the convolution is applied. Now this is remarkable, as we don't have to run a fully ray-traced simulation with participating media. However, these signals are stored as images, so normally this means that we compute a 2D convolution between them. Unfortunately this is very costly, but there are techniques to reduce this problem to several much cheaper 1D convolutions. One example is d'Eon's excellent technique. He takes into consideration that in a homogeneous and isotropic medium, the diffusion profiles are radially symmetric, therefore it is possible to take a 1D slice of this profile, as shown below here, and try to fit it with a sum of Gaussians, which are individually also radially symmetric. This means that we can use a cheaper set of 1D convolutions instead of using the 2D profile directly. This is an example input signal and the results with d'Eon's technique with different numbers of Gaussians compared to the true diffusion kernel. It is important to point out that, given the mathematical properties of Gaussians, this technique requires one horizontal and one vertical convolution per Gaussian. These are 1D convolutions. This also means that if we'd like to obtain high quality subsurface scattering, we need at least 4 Gaussians and therefore 8 convolutions. This is not suitable for most real-time applications.
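The separability argument above is easy to verify numerically; a plain-NumPy sketch (toy data, not the paper's code) showing that a 2D blur with a radially symmetric Gaussian decomposes exactly into one horizontal and one vertical 1D convolution:

```python
import numpy as np

rng = np.random.default_rng(0)
irradiance = rng.random((32, 32))     # stand-in for the irradiance map

sigma = 2.0
x = np.arange(-8, 9)
g = np.exp(-x**2 / (2 * sigma**2))
g /= g.sum()                          # normalized 1D Gaussian

def conv_rows(img, k):
    return np.array([np.convolve(row, k, mode="same") for row in img])

# horizontal pass, then vertical pass via the transpose trick
sep = conv_rows(conv_rows(irradiance, g).T, g).T

# brute-force 2D convolution with the full (outer product) kernel
g2d = np.outer(g, g)
padded = np.pad(irradiance, 8)
full = np.empty_like(irradiance)
for i in range(32):
    for j in range(32):
        full[i, j] = np.sum(padded[i:i + 17, j:j + 17] * g2d)

print(np.allclose(sep, full))
```

For an N-by-N image and a K-wide kernel this turns O(K^2) work per pixel into O(2K), which is exactly the saving the talk describes.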
However, it is a really smart idea and hasn't really been improved since 2007. And honestly, when we started this project, we didn't think anyone could realistically come up with something that's better than this. Our quest was nonetheless to obtain high fidelity results with a separable kernel using only 2 convolutions, which is marked with green up there. Visualizing the SVD of the diffusion profile, it is clear that the signal is non-separable. It is not possible to write this 2D function as a mere product of 1D functions. However, the semi-log plot tells us that the higher ranked singular values decay rapidly, meaning that most of the information here is not random. It has a lot of structure, therefore a rank 1 approximation sounds like a good starting point. The plan was to treat the diffusion profile here on the right as a matrix for which we compute the SVD. Here you can see the one singular value that we're taking and the corresponding left and right singular vectors. We then compute one horizontal and one vertical convolution using these singular vectors to reconstruct the diffusion kernel and obtain the output. This is the input and the rank 1 SVD reconstruction. This would be the ground truth, and now we can see that the separable SVD approximation is indeed looking very grim. There is a world of a difference between the two. So wow, this is surprising, especially considering the fact that the Eckart-Young theorem teaches us that the SVD is the best reconstruction in terms of the Frobenius norm, which corresponds here to the RMS error. This is the absolute best reconstruction we can obtain with respect to the RMS error. Very disappointing. Here is d'Eon's algorithm, the one with the 1D slice and fitting with one Gaussian. This means the same amount of convolutions and hence the same execution time as the rank 1 SVD, and the ground truth. This is how the SVD looks on a real world scene compared to using the true kernel.
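The rank-1 SVD construction described above can be sketched as follows (a toy exponential kernel stands in for the diffusion profile; illustrative only):

```python
import numpy as np

# A radially symmetric but, unlike a Gaussian, non-separable kernel.
y, x = np.mgrid[-15:16, -15:16]
r = np.hypot(x, y)
kernel = np.exp(-r / 4.0)
kernel /= kernel.sum()

U, s, Vt = np.linalg.svd(kernel)
print(s[:4] / s[0])       # rapid decay: lots of structure in the signal

# rank-1 approximation: the top singular value times the outer product
# of the corresponding left and right singular vectors
rank1 = s[0] * np.outer(U[:, 0], Vt[0])

# Eckart-Young: this is the best rank-1 fit in the Frobenius (RMS)
# sense, and the error equals the norm of the discarded singular values
err = np.linalg.norm(kernel - rank1)
print(np.isclose(err, np.sqrt(np.sum(s[1:] ** 2))))
```

Applying `U[:, 0]` vertically and `Vt[0]` horizontally as 1D convolutions is then equivalent to convolving with `rank1` directly.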
Looking at it again in our disappointment, we noticed that the SVD yields an overall darker image, therefore the reconstruction is not energy conserving. A new idea came up. Maybe we would try this before putting the project on ice and calling it a day. What if we solved a minimization problem where the reconstructed kernel would still be as close as possible to the diffusion profile, but would also be energy conserving? This should definitely be a viable option. And the results are dreadful. Horrendous, I just don't know what to say. Look at the nose. Somehow, as if the input irradiance signal showed up as a really nasty ringing artifact. And we had the same around the ear. We visualized the actual kernel on a disc of light to see what went wrong here. And yes, we see that it is indeed dreadful, nothing like the simulated diffusion kernel. But we hadn't the slightest idea why this happened. Visualizing the kernel itself in a simple 1D plot and staring at it for a while, it looks like a great separable approximation with respect to the RMS error. Most of the energy of the signal is close to the origin and the optimizer tries to reconstruct these details as closely as possible. Please note that the kernel plots are deceiving. These signals indeed have the same amount of energy, but the tail of the fit is extending really far away, and this makes up for the seemingly lower energy of the signal. This very delicate thing took a week of my life and I kind of want it back. So what if we minimized not only the RMS error by itself, which forces the optimizer to concentrate the energy at the origin of the signal where the energy spike is, but added a guide function which behaves like an envelope that tells the optimizer to possibly reconstruct regions not close to the origin, but focus a bit more on far-range scattering? This is the fit we originally had. And this is a very simple distance weighted guide function I had in mind.
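The effect of such a distance-weighted envelope can be illustrated with a toy fit (my own example; the actual profiles and guide function in the paper differ):

```python
import numpy as np

# Fit a single Gaussian to a 1D exponential falloff, weighting the
# squared error by x**k. Larger k tells the optimizer to care more
# about the tail (far-range scattering) than the spike at the origin.
x = np.linspace(0.0, 10.0, 400)
profile = np.exp(-x)                  # toy 1D diffusion profile slice

def fit_sigma(k):
    sigmas = np.linspace(0.1, 5.0, 500)
    errs = [np.sum(x**k * (profile - np.exp(-x**2 / (2 * s**2)))**2)
            for s in sigmas]
    return sigmas[int(np.argmin(errs))]

s0, s1, s2 = fit_sigma(0), fit_sigma(1), fit_sigma(2)
print(s0, s1, s2)    # the fitted Gaussian widens as k grows
```

With k = 0 the fit hugs the energy spike at the origin; raising k pushes the fitted kernel's energy into the tail, mirroring the k = 0, 1, 2 progression shown in the talk.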
Imagine that we now have a more general model, for which we used k equals 0, a constant envelope, to obtain these horrendous results. I will now tell the optimizer to use k equals 1, which means that we give higher weight to the regions further away from the origin. This is what we obtained. Very intuitive: we have a signal with the same amount of energy, as if we pushed it down from the top to neglect the origin and added this energy to the tail of the signal to focus on the reconstruction of far-range scattering. And now we can even go k equals 2, which is essentially squishing the signal a bit more to emphasize far-range scattering at the price of neglecting sharp close range details. Back to the original fit. That's weighted by distance a bit by plugging k equals 1 into the optimizer. Almost there. Okay, let's go k equals 2, a bit more emphasis on far-range scattering. Now this looks remarkably close to the ground truth. This is the journey behind the guided optimization technique that is separable, requires only two convolutions, and is one of the techniques we propose for applications with strict real-time constraints. We also propose another technique which we have mathematically derived and for which I admit not having an intuitive story. So before I dare showing you the next slide, take a big breath and let's go. Sorry, this is how it looks, and what is remarkable about it is that it follows a completely different paradigm. What we're aiming for here is not to make our kernel close to the original diffusion kernel, but to make the result of the convolution the same. It is almost like minimizing the L2 distance of the resulting convolved images, not the kernels themselves. Solving for images, not kernels. This is impossible to solve for a general case, so in order to accomplish this, one needs to confine the solution to a class of input irradiance signals, input images where it would work well. In our derivation, we plugged in one of these signals as input.
This means that the technique should behave analytically on signals like this. And the most remarkable thing is that this is a mere rank-1 separable approximation that is really analytic for this class of signals. This means that it mimics the effect of the true kernel perfectly. Let's take a look at this in a practical case; of course, not all signals are one of these signals. This is our result with the analytic preintegrated kernel, and the ground truth. Very close to being indistinguishable. Furthermore, it is really simple to implement and it has a closed form solution that does not require any kind of optimization procedure. One more interesting detail for the more curious minds: this technique is analytic for a greater class than only one of these signals, a class that we call additively separable signals. So what about artistic editing? The preintegrated technique is great, but it does not offer any kind of artistic control over the output. The guided approximation requires optimization, but in return, it also offers some degree of artistic freedom over how the desired output should look. We also have this technique, which simply uses two separable Gaussians of different variance values, one each to provide perfect artistic freedom in adjusting the magnitudes of close and far-range scattering. Note that these two Gaussians are not the same as in d'Eon's approach with the two Gaussians, as we do not use the radially symmetric 1D slices directly. A real-world example: this is the input irradiance, heavily exaggerated far-range scattering, heavily exaggerated close range scattering, and a more conservative, really good looking mixture of the two.
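The two-Gaussian mixing can be sketched like this (parameter names and values are illustrative, not from the paper):

```python
import numpy as np

# Blur the irradiance with a narrow and a wide separable Gaussian, then
# mix the two results; the four numbers give the artist direct control
# over close- versus far-range scattering.
def gauss1d(sigma, radius=16):
    t = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()

def sep_blur(img, k):
    rows = np.array([np.convolve(r, k, mode="same") for r in img])
    return np.array([np.convolve(c, k, mode="same") for c in rows.T]).T

def artistic_sss(irradiance, w_near=0.7, w_far=0.3, s_near=1.0, s_far=6.0):
    near = sep_blur(irradiance, gauss1d(s_near))   # close-range term
    far = sep_blur(irradiance, gauss1d(s_far))     # far-range term
    return w_near * near + w_far * far             # artist-controlled mix

rng = np.random.default_rng(1)
img = rng.random((48, 48))
out = artistic_sss(img)
print(out.shape)
```

Note that each Gaussian is blurred separately and the results are mixed afterwards, since a sum of two Gaussians is not itself a separable 2D kernel.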
Wrapping it up: the SVD is great for applications that can afford higher rank reconstructions; the kernel preintegration is a simple technique that is analytic for additively separable signals; guided optimization is a more general version of the preintegration that can be conveniently tuned with one parameter; and the manual approximation gives many degrees of freedom to artists, while it has quite reasonable accuracy that is comparable to four Gaussians with previous techniques. Different techniques with different levels of scientific rigor and different target audiences, ranging from scientists to artists working in the industry. Now even though we used examples with skin to demonstrate our techniques, it is important to point out that they work for a variety of translucent media, such as the plants, marble and milk in this still life. The most important take home message from this project, at least for me, is that it is entirely possible to do academic research together with companies and create results that can make it to multimillion dollar computer games, while also having proven results that are useful for the scientific community. Thank you.
[{"start": 0.0, "end": 5.6000000000000005, "text": " Friendly greetings to everyone, my name is Karo Jornai and I promise to you that the"}, {"start": 5.6000000000000005, "end": 12.0, "text": " pronunciation of my name is going to be the most complicated thing in this unofficial talk."}, {"start": 12.0, "end": 18.0, "text": " This piece of work is a collaboration between Activision Blizzard, the University of Saragusa"}, {"start": 18.0, "end": 20.92, "text": " and the Technical University of Vienna."}, {"start": 20.92, "end": 25.84, "text": " The quest here was to render images with really high quality subsurface scattering in real"}, {"start": 25.84, "end": 29.28, "text": " time on commodity hardware."}, {"start": 29.28, "end": 34.96, "text": " To render photorealistic images, we populated scene with objects."}, {"start": 34.96, "end": 40.24, "text": " Add the camera and the light source and start tracing rays from the camera to determine"}, {"start": 40.24, "end": 42.24, "text": " the incoming gradients."}, {"start": 42.24, "end": 47.8, "text": " Now even though there is a large volume of research going on on how to do this efficiently,"}, {"start": 47.8, "end": 51.2, "text": " this is still a really time consuming process."}, {"start": 51.2, "end": 56.6, "text": " What's more, this figure here shows light transport only between surfaces, meaning that"}, {"start": 56.6, "end": 61.24, "text": " we suppose that rays of light propagate only in vacuum."}, {"start": 61.24, "end": 66.84, "text": " If we extend our program to support participating media, we can render volumetric effects like"}, {"start": 66.84, "end": 76.08, "text": " smoke, haze and many others and also translucent materials such as skin, plant leaves, marble,"}, {"start": 76.08, "end": 78.2, "text": " wax and so on."}, {"start": 78.2, "end": 84.2, "text": " However, this extension bumps up the dimensionality of the integral we need to solve, making"}, {"start": 84.2, "end": 88.24000000000001, "text": 
" the process even more time consuming."}, {"start": 88.24000000000001, "end": 91.32000000000001, "text": " But the reward for this is immense."}, {"start": 91.32000000000001, "end": 97.08, "text": " Here on the left you can see how our skin would look like without subsurface scattering."}, {"start": 97.08, "end": 103.04, "text": " It is indeed a very important factor in the visual appearance of many translucent materials."}, {"start": 103.04, "end": 107.68, "text": " It is not a surprise that the motion picture and the gaming industries are yearning for"}, {"start": 107.68, "end": 110.88, "text": " a real time solution for this."}, {"start": 110.88, "end": 115.92, "text": " There are fortunately simplified models to render subsurface light transport in optical"}, {"start": 115.92, "end": 117.67999999999999, "text": " lithic materials."}, {"start": 117.67999999999999, "end": 124.0, "text": " What we do here is take an infinite hog space of a chosen translucent material, light"}, {"start": 124.0, "end": 129.4, "text": " an infinite azimoli thin pencil beam from above in normal incidence."}, {"start": 129.4, "end": 134.16, "text": " This beam will penetrate the surface of the material and will start to attenuate as it"}, {"start": 134.16, "end": 137.76, "text": " becomes more and more submerged into the medium."}, {"start": 137.76, "end": 143.0, "text": " During this process, these photons undergo many scattering events and eventually exit"}, {"start": 143.0, "end": 145.44, "text": " somewhere away from the origin."}, {"start": 145.44, "end": 151.39999999999998, "text": " Counting up these photons exiting at different distances, we can build a histogram that we"}, {"start": 151.39999999999998, "end": 157.64, "text": " call diffusion profile and we'll denote as RD."}, {"start": 157.64, "end": 165.07999999999998, "text": " This is an actual simulated diffusion profile, what it looks like if we look from above."}, {"start": 165.08, "end": 169.96, "text": " Another 
important bit of preliminary knowledge is that we can directly use these diffusion"}, {"start": 169.96, "end": 175.32000000000002, "text": " profiles by convolving them with an input irradiance map to add subsurface scattering to"}, {"start": 175.32000000000002, "end": 179.04000000000002, "text": " it as a post-processing step."}, {"start": 179.04000000000002, "end": 182.72000000000003, "text": " This is how the result looks like after the convolution is applied."}, {"start": 182.72000000000003, "end": 188.68, "text": " Now this is remarkable as we don't have to run a fully ray-traced simulation with"}, {"start": 188.68, "end": 190.56, "text": " participating media."}, {"start": 190.56, "end": 197.56, "text": " However, these signals are stored as images, so normally this means that we compute a 2D"}, {"start": 197.56, "end": 199.36, "text": " convolution between them."}, {"start": 199.36, "end": 204.84, "text": " Unfortunately this is very costly, but there are techniques to reduce this problem to several"}, {"start": 204.84, "end": 208.8, "text": " much cheaper 1D convolutions."}, {"start": 208.8, "end": 211.52, "text": " One example is Dion's excellent technique."}, {"start": 211.52, "end": 217.04, "text": " He takes into consideration that in a homogenous and isotropic medium, the diffusion profiles"}, {"start": 217.04, "end": 222.67999999999998, "text": " are radially symmetric, therefore it is possible to take a 1D slice of this profile as shown"}, {"start": 222.67999999999998, "end": 228.35999999999999, "text": " below here and trying to fit it with a sum of Gaussians which are individually also radially"}, {"start": 228.35999999999999, "end": 229.84, "text": " symmetric."}, {"start": 229.84, "end": 235.16, "text": " This means that we can use a cheaper set of 1D convolutions instead of using the 2D profile"}, {"start": 235.16, "end": 237.35999999999999, "text": " directly."}, {"start": 237.35999999999999, "end": 242.12, "text": " This is an example input 
signal and the results with Dion's technique with different number"}, {"start": 242.12, "end": 245.79999999999998, "text": " of Gaussians compared to the true diffusion kernel."}, {"start": 245.8, "end": 250.64000000000001, "text": " It is important to point out that even the mathematical properties of Gaussians, this"}, {"start": 250.64000000000001, "end": 256.0, "text": " technique requires one horizontal and one vertical convolution per Gaussian."}, {"start": 256.0, "end": 258.68, "text": " These are 1D convolutions."}, {"start": 258.68, "end": 263.56, "text": " This also means that if we'd like to obtain high quality subsurface scattering, we need"}, {"start": 263.56, "end": 267.40000000000003, "text": " at least 4 Gaussians and therefore 8 convolutions."}, {"start": 267.40000000000003, "end": 270.8, "text": " This is not suitable for most real-time applications."}, {"start": 270.8, "end": 277.92, "text": " However, it is a really smart idea and hasn't been really improved since 2007."}, {"start": 277.92, "end": 282.40000000000003, "text": " And honestly, when we started this project, we didn't think anyone could realistically"}, {"start": 282.40000000000003, "end": 285.48, "text": " come up with something that's better than this."}, {"start": 285.48, "end": 291.04, "text": " Our quest was nonetheless to obtain high fidelity results with a separable kernel using"}, {"start": 291.04, "end": 296.16, "text": " only 2 convolutions which is marked with green up there."}, {"start": 296.16, "end": 302.0, "text": " Visualizing the SVD of the diffusion profile, it is clear that the signal is non-separable."}, {"start": 302.0, "end": 307.08000000000004, "text": " It is not possible to write this 2D function as a mere product of 1D functions."}, {"start": 307.08000000000004, "end": 313.48, "text": " However, the sami-lock plot tells us that the higher ranked singular values decay rapidly,"}, {"start": 313.48, "end": 316.96000000000004, "text": " meaning that most of the 
information here is not random."}, {"start": 316.96000000000004, "end": 321.64000000000004, "text": " It has a lot of structure, therefore a rank 1 approximation sounds like a good starting"}, {"start": 321.64000000000004, "end": 323.16, "text": " point."}, {"start": 323.16, "end": 327.88000000000005, "text": " The plan was to treat the diffusion profile here on the right as a matrix for which we"}, {"start": 327.88000000000005, "end": 329.84000000000003, "text": " compute the SVD."}, {"start": 329.84000000000003, "end": 334.16, "text": " Here you can see the one singular value that we're taking and the corresponding left and"}, {"start": 334.16, "end": 338.84000000000003, "text": " right singular vectors that we hear denote by 8."}, {"start": 338.84000000000003, "end": 343.68, "text": " We then compute one horizontal and one vertical convolution using these singular vectors to"}, {"start": 343.68, "end": 348.72, "text": " reconstruct the diffusion kernel and obtain the output."}, {"start": 348.72, "end": 354.76000000000005, "text": " This is the input and the rank 1 SVD reconstruction."}, {"start": 354.76000000000005, "end": 360.20000000000005, "text": " This would be the ground truth and now we can see that the separable SVD approximation is"}, {"start": 360.20000000000005, "end": 362.24, "text": " indeed looking very grim."}, {"start": 362.24, "end": 365.28000000000003, "text": " There is a world of a difference between the two."}, {"start": 365.28000000000003, "end": 372.0, "text": " So wow, this is surprising, especially considering the fact that the Eckhart Young theorem teaches"}, {"start": 372.0, "end": 378.48, "text": " us that the SVD is the best reconstruction in terms of the Frobenius norm which corresponds"}, {"start": 378.48, "end": 380.52000000000004, "text": " here to the RMS error."}, {"start": 380.52000000000004, "end": 386.84000000000003, "text": " This is the absolute best reconstruction we can obtain with respect to the RMS error."}, {"start": 
386.84000000000003, "end": 389.96000000000004, "text": " Very disappointing."}, {"start": 389.96000000000004, "end": 395.6, "text": " Here is the own algorithm, the one with the one D slice and fitting with one Gaussian."}, {"start": 395.6, "end": 400.72, "text": " This means the same amount of convolutions and hence the same execution time as the rank"}, {"start": 400.72, "end": 406.24, "text": " 1 SVD and the ground truth."}, {"start": 406.24, "end": 414.24, "text": " This is how the SVD looks like on the real world scene compared to using the true kernel."}, {"start": 414.24, "end": 419.76, "text": " Looking at it again in our disappointment, we notice that the SVD yields an overall darker"}, {"start": 419.76, "end": 424.72, "text": " image therefore the reconstruction is not energy conserving."}, {"start": 424.72, "end": 427.0, "text": " A new idea came up."}, {"start": 427.0, "end": 431.84000000000003, "text": " Maybe we would try this before putting the project on ice and calling it a day."}, {"start": 431.84000000000003, "end": 435.92, "text": " What if we would solve a minimization problem where the reconstructed kernel would still"}, {"start": 435.92, "end": 442.24, "text": " be as close as possible to the diffusion profile but would also be energy conservant."}, {"start": 442.24, "end": 445.32, "text": " This should definitely be a viable option."}, {"start": 445.32, "end": 448.52000000000004, "text": " And the results are dreadful."}, {"start": 448.52000000000004, "end": 452.24, "text": " Herendus, I just don't know what to say."}, {"start": 452.24, "end": 453.56, "text": " Look at the notes."}, {"start": 453.56, "end": 459.48, "text": " Somehow as if the inputty radiance signal showed up as a really nasty ringing artifact."}, {"start": 459.48, "end": 462.76, "text": " And we had the same around the ear."}, {"start": 462.76, "end": 468.0, "text": " We visualized the actual kernel on a disc of light to see what went wrong here."}, {"start": 468.0, "end": 
475.0, "text": " And yes we see that it is indeed dreadful, nothing like the simulated diffusion kernel."}, {"start": 475.0, "end": 479.2, "text": " But we hadn't the slightest idea why this happened."}, {"start": 479.2, "end": 484.71999999999997, "text": " Visualizing the kernel itself in a simple 1D plot and staring at it for a while, it looks"}, {"start": 484.71999999999997, "end": 490.03999999999996, "text": " like a great separable approximation with respect to the RMS error."}, {"start": 490.04, "end": 495.6, "text": " Most of the energy of the signal is close to the origin and the optimizer tries to reconstruct"}, {"start": 495.6, "end": 498.96000000000004, "text": " these details as closely as possible."}, {"start": 498.96000000000004, "end": 501.8, "text": " Please note that the kernel plots are deceiving."}, {"start": 501.8, "end": 506.8, "text": " These signals indeed have the same amount of energy, but the tail of the fit is extending"}, {"start": 506.8, "end": 512.32, "text": " really far away and this makes up for the seemingly less energy of the signal."}, {"start": 512.32, "end": 518.44, "text": " This very delicate thing took a week of my life and I kind of wanted back."}, {"start": 518.44, "end": 524.12, "text": " So what if we would minimize not only the RMS error by itself, which forces the optimizer"}, {"start": 524.12, "end": 529.6, "text": " to concentrate the energy to the origin of the signal where the energy spike is."}, {"start": 529.6, "end": 534.48, "text": " But we would add a guide function which behaves like an envelope that tells the optimizer"}, {"start": 534.48, "end": 541.12, "text": " to possibly reconstruct regions not close to the origin but focus a bit more on far end"}, {"start": 541.12, "end": 543.1600000000001, "text": " scattering."}, {"start": 543.1600000000001, "end": 546.24, "text": " This is the fit we originally had."}, {"start": 546.24, "end": 551.48, "text": " And this is a very simple distance weighted guide 
function I had in mind."}, {"start": 551.48, "end": 558.48, "text": " Imagine that we now have a more general model for which we used k equals 0, a constant envelope"}, {"start": 558.48, "end": 561.36, "text": " to obtain these horrendous results."}, {"start": 561.36, "end": 566.64, "text": " I will now tell the optimizer to use k equals 1, which means that we would give higher"}, {"start": 566.64, "end": 571.4, "text": " weight to the regions further away from the origin."}, {"start": 571.4, "end": 573.44, "text": " This is what we obtained."}, {"start": 573.44, "end": 578.6400000000001, "text": " Very intuitive, we have a signal with the same amount of energy as if we pushed it from"}, {"start": 578.6400000000001, "end": 583.6800000000001, "text": " the top to neglect the origin and add this energy to the tail of the signal to focus on"}, {"start": 583.6800000000001, "end": 587.08, "text": " the reconstruction of far end scattering."}, {"start": 587.08, "end": 592.5600000000001, "text": " And now we can even go k equals 2, which is essentially squishing the signal a bit more"}, {"start": 592.5600000000001, "end": 599.6800000000001, "text": " to emphasize far end scattering at the price of neglecting sharp close range details."}, {"start": 599.6800000000001, "end": 601.9200000000001, "text": " Back to the original fit."}, {"start": 601.92, "end": 608.12, "text": " That's weighted by distance a bit by plugging k equals 1 in the optimizer."}, {"start": 608.12, "end": 609.12, "text": " Almost there."}, {"start": 609.12, "end": 615.28, "text": " Okay, let's go k equals 2, a bit more emphasis on far end scattering."}, {"start": 615.28, "end": 619.76, "text": " Now this looks remarkably close to the ground truth."}, {"start": 619.76, "end": 624.9599999999999, "text": " This is the journey behind the guided optimization technique that is separable, requires only two"}, {"start": 624.9599999999999, "end": 630.64, "text": " convolutions and is one of the techniques we 
propose for applications with strict real-time"}, {"start": 630.64, "end": 632.3199999999999, "text": " constraints."}, {"start": 632.3199999999999, "end": 638.56, "text": " We also propose another technique, which we have mathematically derived, and for which I admit"}, {"start": 638.56, "end": 640.88, "text": " to not having an intuitive story."}, {"start": 640.88, "end": 649.08, "text": " So before I dare show you the next slide, take a deep breath and let's go."}, {"start": 649.08, "end": 655.2, "text": " Sorry, this is how it looks, and what is remarkable about it is that it follows"}, {"start": 655.2, "end": 657.8, "text": " a completely different paradigm."}, {"start": 657.8, "end": 662.3599999999999, "text": " What we're aiming for here is not to make our kernel close to the original diffusion"}, {"start": 662.3599999999999, "end": 668.3599999999999, "text": " kernel, but to make the result of the convolution the same."}, {"start": 668.3599999999999, "end": 673.28, "text": " It is almost like minimizing the L2 distance of the resulting convolved images, not the"}, {"start": 673.28, "end": 675.4, "text": " kernels themselves."}, {"start": 675.4, "end": 678.8, "text": " Fitting for images, not kernels."}, {"start": 678.8, "end": 684.0, "text": " This is impossible to solve in the general case, so in order to accomplish this, one needs"}, {"start": 684.0, "end": 691.16, "text": " to confine the solution to a class of input irradiance signals, input images where it would work well."}, {"start": 691.16, "end": 695.12, "text": " In our derivation, we plugged in one of these signals as input."}, {"start": 695.12, "end": 700.24, "text": " This means that the technique should behave analytically on signals like this."}, {"start": 700.24, "end": 705.96, "text": " And the most remarkable thing is that this is a mere rank-1 separable approximation that"}, {"start": 705.96, "end": 709.48, "text": " is truly analytic for this class of signals."}, {"start": 
709.48, "end": 714.8000000000001, "text": " This means that it mimics the effect of the true kernel perfectly."}, {"start": 714.8000000000001, "end": 718.9200000000001, "text": " Let's take a look at this in a practical case; of course, not all signals are of this"}, {"start": 718.9200000000001, "end": 721.32, "text": " class."}, {"start": 721.32, "end": 728.76, "text": " This is our result with the analytic preintegrated kernel and the ground truth."}, {"start": 728.76, "end": 731.88, "text": " Very close to being indistinguishable."}, {"start": 731.88, "end": 737.08, "text": " Furthermore, it is really simple to implement and it has a closed-form solution that does"}, {"start": 737.08, "end": 741.5200000000001, "text": " not require any kind of optimization procedure."}, {"start": 741.5200000000001, "end": 746.44, "text": " One more interesting detail for the more curious minds: this technique is analytic for a greater"}, {"start": 746.44, "end": 753.32, "text": " class than just these signals, a class that we call additively separable signals."}, {"start": 753.32, "end": 755.5200000000001, "text": " So what about artistic editing?"}, {"start": 755.5200000000001, "end": 760.08, "text": " The preintegrated technique is great, but it does not offer any kind of artistic control"}, {"start": 760.08, "end": 761.76, "text": " over the output."}, {"start": 761.76, "end": 767.64, "text": " The guided approximation requires optimization, but in return, it also offers some degree"}, {"start": 767.64, "end": 772.4, "text": " of artistic freedom in how the desired output should look."}, {"start": 772.4, "end": 777.12, "text": " We also have this technique, which simply uses two separable Gaussians of different"}, {"start": 777.12, "end": 782.4, "text": " variance values, one weight each, to provide perfect artistic freedom in adjusting the magnitudes"}, {"start": 782.4, "end": 785.72, "text": " of close-range and far-range scattering."}, {"start": 785.72, "end": 791.16, 
"text": " Note that these two Gaussians are not the same as in d'Eon's approach with the two Gaussians,"}, {"start": 791.16, "end": 794.7199999999999, "text": " as we do not use radially symmetric ones to fit the signal directly."}, {"start": 794.7199999999999, "end": 804.24, "text": " A real-world example: this is the input irradiance, heavily exaggerated far-range scattering, heavily"}, {"start": 804.24, "end": 810.6, "text": " exaggerated close-range scattering, and a more conservative, really good-looking mixture"}, {"start": 810.6, "end": 812.64, "text": " of the two."}, {"start": 812.64, "end": 819.3199999999999, "text": " Wrapping it up: the SVD is great for applications that can afford higher-rank reconstructions;"}, {"start": 819.32, "end": 824.32, "text": " the kernel preintegration is a simple technique that is analytic for additively separable"}, {"start": 824.32, "end": 831.7600000000001, "text": " signals; guided optimization is a more general version of the preintegration that can be conveniently"}, {"start": 831.7600000000001, "end": 839.08, "text": " tuned with one parameter; and the manual approximation gives many degrees of freedom to artists, while"}, {"start": 839.08, "end": 844.36, "text": " it has quite reasonable accuracy that is comparable to four Gaussians with previous"}, {"start": 844.36, "end": 846.84, "text": " techniques."}, {"start": 846.84, "end": 852.0400000000001, "text": " Different techniques with different levels of scientific rigor and different target audiences,"}, {"start": 852.0400000000001, "end": 857.84, "text": " ranging from scientists to artists working in the industry."}, {"start": 857.84, "end": 862.9200000000001, "text": " Now even though we used examples with skin to demonstrate our techniques, it is important"}, {"start": 862.9200000000001, "end": 871.48, "text": " to point out that they work for a variety of translucent media such as plants, marble,"}, {"start": 871.48, "end": 877.2, "text": " objects in a still life, and milk."}, 
{"start": 877.2, "end": 882.16, "text": " The most important take-home message from this project, at least for me, is that it's"}, {"start": 882.16, "end": 887.64, "text": " entirely possible to do academic research together with companies and create results that"}, {"start": 887.64, "end": 892.8000000000001, "text": " can make it into multimillion-dollar computer games, while also producing proven results that"}, {"start": 892.8000000000001, "end": 896.0, "text": " are useful for the scientific community."}, {"start": 896.0, "end": 910.84, "text": " Thank you."}]
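
The guided optimization described in the talk weights the RMS error with a distance envelope whose exponent k (0, 1, 2) shifts the fit from the origin spike toward the tail. The talk does not give the envelope's exact formula, so the |x|**k form and all function names below are illustrative assumptions, a minimal sketch of the idea rather than the paper's objective:

```python
import numpy as np

def guided_rms(target, fit, x, k):
    """RMS error under a |x|**k distance envelope (an assumed guide form).

    k = 0 recovers the plain, uniformly weighted RMS error; larger k tells
    the optimizer to care more about the tail, i.e. far-range scattering.
    """
    w = np.abs(x) ** k          # distance-weighted guide (envelope)
    w = w / w.sum()             # normalize so k = 0 matches plain RMS
    return np.sqrt(np.sum(w * (target - fit) ** 2))

# A 1D diffusion-like kernel with an energy spike at the origin.
x = np.linspace(-5.0, 5.0, 101)
target = np.exp(-np.abs(x))

bad_tail = np.where(np.abs(x) <= 3.0, target, 0.0)            # misses far-range
bad_origin = np.where(np.abs(x) < 0.5, 0.5 * target, target)  # misses the spike

# Plain RMS (k = 0) punishes the origin error most; k = 2 flips the
# ranking and punishes the missing tail instead.
assert guided_rms(target, bad_origin, x, 0) > guided_rms(target, bad_tail, x, 0)
assert guided_rms(target, bad_tail, x, 2) > guided_rms(target, bad_origin, x, 2)
```

This mirrors the narration: with k = 0 the optimizer reconstructs the origin spike at the expense of the tail, while k = 1 or 2 trades sharp close-range detail for far-range accuracy.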
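
The artist-controlled technique from the talk mixes two separable Gaussians of different variance, one for close-range and one for far-range scattering. The sketch below shows why this costs only 1D convolutions; the function names, the 3-sigma support, and the single mixing weight are assumptions for illustration, not the paper's exact parameterization:

```python
import numpy as np

def gaussian_taps(sigma):
    """Normalized 1D Gaussian filter taps with a 3-sigma support."""
    radius = int(3.0 * sigma) + 1
    t = np.arange(-radius, radius + 1, dtype=np.float64)
    g = np.exp(-0.5 * (t / sigma) ** 2)
    return g / g.sum()

def separable_blur(image, taps):
    """2D Gaussian blur as two 1D convolutions (rows, then columns)."""
    rows = np.apply_along_axis(np.convolve, 1, image, taps, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, taps, mode="same")

def two_gaussian_sss(irradiance, sigma_close, sigma_far, w_far):
    """Blend a narrow (close-range) and a wide (far-range) separable blur."""
    close = separable_blur(irradiance, gaussian_taps(sigma_close))
    far = separable_blur(irradiance, gaussian_taps(sigma_far))
    return (1.0 - w_far) * close + w_far * far
```

Raising `w_far` exaggerates far-range scattering, lowering it emphasizes close-range detail, matching the exaggerated and conservative mixtures shown in the real-world example. For an impulse of irradiance away from the image border, the blend conserves energy because both tap vectors are normalized.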