Column          Type            Range / Values
CHANNEL_NAME    stringclasses   1 value
URL             stringlengths   43 – 43
TITLE           stringlengths   19 – 90
DESCRIPTION     stringlengths   475 – 4.65k
TRANSCRIPTION   stringlengths   0 – 20.1k
SEGMENTS        stringlengths   2 – 30.8k
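For readers who want to work with these records, here is a minimal sketch of consuming one row. It assumes each row arrives as a Python dict keyed by the columns above and that SEGMENTS is a JSON-encoded list of {start, end, text} entries, as the samples below suggest; the helper name and the example row are purely illustrative.

```python
import json

# A minimal sketch of how one row of this dataset could be consumed.
# Assumes `row` is a dict keyed by the columns listed above; the SEGMENTS
# field is a JSON-encoded list of {"start", "end", "text"} entries.
def segments_between(row, t0, t1):
    """Return the transcript text spoken between t0 and t1 seconds."""
    segments = json.loads(row["SEGMENTS"])
    picked = [s["text"].strip() for s in segments
              if s["start"] < t1 and s["end"] > t0]
    return " ".join(picked)

row = {
    "CHANNEL_NAME": "Two Minute Papers",
    "URL": "https://www.youtube.com/watch?v=a3sgFQjEfp4",
    "SEGMENTS": '[{"start": 0.0, "end": 4.84, "text": " Dear Fellow Scholars..."}]',
}
print(segments_between(row, 0.0, 5.0))
```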
Two Minute Papers
https://www.youtube.com/watch?v=a3sgFQjEfp4
Photorealistic Images from Drawings | Two Minute Papers #80
The Two Minute Papers subreddit is available here: https://www.reddit.com/r/twominutepapers/ By using convolutional neural networks (a powerful deep learning technique), it is now possible to build an application that takes a rough sketch as an input and fetches photorealistic images from a database. ___________________________________ The paper "The Sketchy Database: Learning to Retrieve Badly Drawn Bunnies" and the online demo are available here: http://sketchy.eye.gatech.edu/ The paper "Signature verification using a Siamese time delay neural network" is available here: https://scholar.google.hu/scholar?cluster=4400768003729787411&hl=en&as_sdt=0,5 The paper "Learning Fine-grained Image Similarity with Deep Ranking" is available here: https://arxiv.org/abs/1404.4661 Our deep learning-related videos are available here: https://www.youtube.com/playlist?list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2 WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was drawn by Felícia Fehér. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When we were children, every single one of us dreamed about having a magic pencil that would make our adorable little drawings come true. With the power of machine learning, the authors of this paper just made our dreams come true. Here's the workflow. We provide a crude drawing of something, and the algorithm fetches a photograph from a database that depicts something similar to it. It's not synthesizing new images from scratch from a written description like one of the previous works; it fetches an already existing image from a database. The learning happens by showing a deep convolutional neural network pairs of photographs and sketches. If you're not familiar with these networks, we have some links for you in the video description box. It is also important to note that this piece of work does not showcase a new learning technique. It uses existing techniques on a newly created database that the authors kindly provided free of charge to encourage future research in this area. What we need to teach these networks is the relation between a photograph and a sketch. For instance, in an earlier work by the name of Siamese networks, the photo and the sketch would be fed to two convolutional neural networks with the additional information of whether this pair is considered similar or dissimilar. This idea of Siamese networks was initially applied to signature verification more than 20 years ago. Later, triplet networks were used to provide the relation of multiple pairs, like "this sketch is closer to this photo than to this other one." There is one more technique referred to in the paper that they used, which is quite a delightful read. Make sure to have a look. We need lots and lots of these pairs so the learning algorithm can learn what it means for a sketch to be similar to a photo and, as a result, fetch meaningful images for us. So if we train these networks on this new database, this magic pencil dream of ours can come true. What's even better, anyone can try it online. This is going to be a very rigorous and scholarly scientific experiment. I don't know what this should be, but I hope the algorithm does. Well, that kind of makes sense. Thanks, algorithm. For those fellow scholars out there who are endowed with better drawing skills than I am, well, basically all of you: if you have tried it and got some amazing or maybe not so amazing results, please post them in the comment section. Or, as we now have our very own subreddit, make sure to drop by and post some of your results there so we can marvel at them or have a good laugh at possible failure cases. I am looking forward to meeting you fellow scholars at the subreddit. The others are also available. Thanks for watching and for your generous support, and I'll see you next time.
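The triplet relation described in this transcript ("this sketch is closer to this photo than to this other one") can be illustrated with a few lines of code. This is a minimal sketch, not the paper's actual architecture: random vectors stand in for the CNN embeddings, and the margin value and dimensionality are arbitrary illustrative choices.

```python
import numpy as np

# A minimal sketch of the triplet ranking idea: a sketch embedding should
# be closer to its matching photo than to a non-matching one. The
# embeddings here are random stand-ins for CNN outputs.
def triplet_loss(sketch, photo_pos, photo_neg, margin=1.0):
    d_pos = np.sum((sketch - photo_pos) ** 2)  # distance to the matching photo
    d_neg = np.sum((sketch - photo_neg) ** 2)  # distance to a mismatched photo
    return max(0.0, d_pos - d_neg + margin)    # zero once the gap exceeds the margin

rng = np.random.default_rng(0)
s, p, n = rng.normal(size=(3, 128))  # pretend 128-d CNN embeddings
print(triplet_loss(s, p, n))
```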
[{"start": 0.0, "end": 4.84, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 4.84, "end": 10.120000000000001, "text": " When we were children, every single one of us dreamed about having a magic pencil that"}, {"start": 10.120000000000001, "end": 13.36, "text": " would make our adorable little drawings come true."}, {"start": 13.36, "end": 18.2, "text": " With the power of machine learning, the authors of this paper just made our dreams come true."}, {"start": 18.2, "end": 19.2, "text": " Here's the workflow."}, {"start": 19.2, "end": 24.32, "text": " We provide a crude drawing of something and the algorithm fetches a photograph from a"}, {"start": 24.32, "end": 28.0, "text": " database that depicts something similar to it."}, {"start": 28.0, "end": 32.96, "text": " It's not synthesizing new images from scratch from a written description like one of the"}, {"start": 32.96, "end": 34.16, "text": " previous works."}, {"start": 34.16, "end": 37.88, "text": " It fetches an already existing image from a database."}, {"start": 37.88, "end": 43.64, "text": " The learning happens by showing a deep convolution on your own network pairs of photographs and"}, {"start": 43.64, "end": 44.64, "text": " sketches."}, {"start": 44.64, "end": 47.8, "text": " If you're not familiar with these networks, we have some links for you in the video"}, {"start": 47.8, "end": 49.04, "text": " description box."}, {"start": 49.04, "end": 53.92, "text": " It is also important to note that this piece of work does not showcase a new learning"}, {"start": 53.92, "end": 54.92, "text": " technique."}, {"start": 54.92, "end": 60.36, "text": " It is using existing techniques on a newly created database that the authors kindly provided"}, {"start": 60.36, "end": 64.32000000000001, "text": " free of charge to encourage future research in this area."}, {"start": 64.32000000000001, "end": 69.44, "text": " What we need to teach these networks is the relation of a photograph and a sketch."}, {"start": 69.44, "end": 75.48, "text": " For instance, in an earlier work by the name Siamese Networks, the photo and the sketch"}, {"start": 75.48, "end": 80.24000000000001, "text": " would be fed to two convolutional neural networks with the additional information whether"}, {"start": 80.24000000000001, "end": 83.68, "text": " this pair is considered similar or dissimilar."}, {"start": 83.68, "end": 88.92, "text": " This idea of Siamese Networks was initially applied to signature verification more than"}, {"start": 88.92, "end": 90.44000000000001, "text": " 20 years ago."}, {"start": 90.44000000000001, "end": 96.44000000000001, "text": " Later, triplet networks were used to provide the relation of multiple pairs, like this"}, {"start": 96.44000000000001, "end": 99.80000000000001, "text": " sketch is closer to this photo than this other one."}, {"start": 99.80000000000001, "end": 103.84, "text": " There is one more technique referred to in the paper that they used, which is a quite"}, {"start": 103.84, "end": 105.04, "text": " delightful read."}, {"start": 105.04, "end": 106.36000000000001, "text": " Make sure to have a look."}, {"start": 106.36000000000001, "end": 111.56, "text": " We need lots and lots of these pairs, so the learning algorithm can learn what it means"}, {"start": 111.56, "end": 118.0, "text": " that a sketch is similar to a photo and as a result, fetch meaningful images for us."}, {"start": 118.0, "end": 123.24000000000001, "text": " So if we train these networks on this new 
database, this magic pencil dream of ours can"}, {"start": 123.24000000000001, "end": 124.4, "text": " come true."}, {"start": 124.4, "end": 129.12, "text": " What's even better, anyone can try it online."}, {"start": 129.12, "end": 134.32, "text": " This is going to be a very rigorous and scholarly scientific experiment."}, {"start": 134.32, "end": 137.96, "text": " I don't know what this should be, but I hope the algorithm does."}, {"start": 137.96, "end": 140.84, "text": " Well, that kind of makes sense."}, {"start": 140.84, "end": 142.04, "text": " That's algorithm."}, {"start": 142.04, "end": 146.6, "text": " For those fellow scholars out there who are in doubt with better drawing skills than I"}, {"start": 146.6, "end": 148.96, "text": " am, well, basically all of you."}, {"start": 148.96, "end": 154.12, "text": " If you have tried it and got some amazing or maybe not so amazing results, please post"}, {"start": 154.12, "end": 155.8, "text": " them in the comment section."}, {"start": 155.8, "end": 160.92000000000002, "text": " Or as we now have our very own subreddit, make sure to drop by and post some of your"}, {"start": 160.92000000000002, "end": 165.92000000000002, "text": " results there so we can marvel at them or have a good laugh at possible failure cases."}, {"start": 165.92000000000002, "end": 169.84, "text": " I am looking forward to meeting you fellow scholars at the subreddit."}, {"start": 169.84, "end": 171.84, "text": " The others are also available."}, {"start": 171.84, "end": 199.84, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=flOevlA9RyQ
Visually Indicated Sounds | Two Minute Papers #79
The Scholarly Store is available here: https://shop.spreadshirt.net/TwoMinutePapers Using the power of deep learning, it is now possible to create a technique that looks at a silent video and synthesizes appropriate sound effects for it. The usage is, at the moment, limited to hitting these objects with a drumstick. Note: The authors seem to lean on a database of sounds, i.e., the synthesis does not happen from scratch; however, they are not merely fetching the database entry for a given sound but performing example-based synthesis (Section 5.2 in the paper below). Both the video and the paper use the words "synthesized sound" and "predicted sound", and it may be a bit unclear what degree of synthesis qualifies as a "synthesized sound". I think this is definitely worthy of further scrutiny. _____________________________________ The paper "Visually Indicated Sounds" is available here: https://arxiv.org/abs/1512.08512 Recommended for you: What Do Virtual Objects Sound Like? - https://www.youtube.com/watch?v=ZaFqvM1IsP8&index=37&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e Synthesizing Sound From Collisions - https://www.youtube.com/watch?v=rskdLEl05KI&index=51&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e Reconstructing Sound From Vibrations - https://www.youtube.com/watch?v=2i1hrywDwPo&index=83&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e Our deep learning-related videos are available here (if you are looking for convolutional neural networks, recurrent neural networks): https://www.youtube.com/playlist?list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2 WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by slgckgc - https://flic.kr/p/9x93qE Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This name is not getting any easier, is it? It used to be Károly Zsolnai, which was hard enough, and now this. Anyway, let's get started. This technique simulates how different objects in a video sound when struck. We have showcased some marvelous previous techniques that were mostly limited to wooden and plastic materials. Needless to say, there are links to these episodes in the video description box. A convolutional neural network takes care of understanding what is seen in the video. This technique is known to be particularly suited to processing image and video content. And it works by looking at the silent video directly and trying to understand what is going on, just like a human would. We train these networks with input and output pairs. The input is a video of us beating the hell out of some object with a drumstick, the tool of choice in research. And the output is the sound this object emits. However, the output sound is something that changes in time. It is a sequence, therefore it cannot be handled by a simple classical neural network. It is learned by a recurrent neural network that can take care of learning such sequences. If you haven't heard these terms before, no worries. We have previous episodes on all of them in the video description box. Make sure to check them out. This piece of work is a nice showcase of combining two quite powerful techniques. The convolutional neural network tries to understand what happens in the input video, and the recurrent neural network seals the deal by learning and guessing the correct sound that objects shown in the video would emit when struck. The synthesized outputs were compared to real-world results both mathematically and by asking humans to try to tell from the two samples which one is the real deal. These people were fooled by the algorithm around 40% of the time, which I find to be a really amazing result considering two things. First, the baseline is not 50% but 0%, because people don't pick choices at random. We cannot reasonably expect a synthesized sound to fool humans at any time. Like nice little neural networks, we've been trained to recognize these sounds all our lives, after all. And second, this is one of the first papers from a machine learning angle on sound synthesis. Before reading the paper, I expected at most 10 or 20%, if that. The tidal wave of machine learning runs through a number of different scientific fields. Will deep learning techniques establish supremacy in these areas? Hard to say yet, but what we know for sure is that great strides are made literally every week. There are so many works out there, sometimes I don't even know where to start. Good times indeed. Here we go, some delightful news for you fellow scholars. The scholarly Two Minute Papers store is now open. There are two different kinds of men's t-shirts available and a nice, sleek design version that we made for the fellow scholar ladies out there. We also have the scholarly mug to get your day started in the most scientific way possible. We have tested the quality of these products and we're really happy with what we got. If you ordered anything, please provide us with feedback on how you like the quality of the delivery and the products themselves. If you can send us an image of yourself wearing or using any of these, we'd love to have a look. Just leave them in the comment section or tweet at us. If you don't like what you get within 30 days, you can exchange it or get your product cost refunded. Thanks for watching and for your generous support, and I'll see you next time.
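The CNN-plus-RNN pairing this transcript describes can be sketched compactly. The following is a toy illustration, not the authors' model: random per-frame vectors stand in for the convolutional features, and a single hand-rolled recurrent cell maps the frame sequence to a sequence of sound-feature vectors. All sizes and weights are made up.

```python
import numpy as np

# A minimal sketch of the CNN + RNN pairing: per-frame feature vectors play
# the role of the convolutional network's output, and a tiny recurrent cell
# turns the frame sequence into a sequence of sound-feature vectors.
rng = np.random.default_rng(1)
frame_feats = rng.normal(size=(30, 64))   # 30 video frames, 64-d "CNN" features
W_in  = rng.normal(size=(64, 32)) * 0.1   # input-to-hidden weights
W_rec = rng.normal(size=(32, 32)) * 0.1   # hidden-to-hidden (recurrent) weights
W_out = rng.normal(size=(32, 16)) * 0.1   # hidden-to-sound-feature weights

h = np.zeros(32)
sound_feats = []
for x in frame_feats:                     # unroll the RNN over time
    h = np.tanh(x @ W_in + h @ W_rec)     # hidden state carries history
    sound_feats.append(h @ W_out)         # predicted sound features per frame
print(np.array(sound_feats).shape)        # (30, 16)
```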
[{"start": 0.0, "end": 5.36, "text": " Dear Fellow Scholars, this is two-minute papers with Karojejolnai-Fehir."}, {"start": 5.36, "end": 8.32, "text": " This name is not getting any easier, is it?"}, {"start": 8.32, "end": 13.0, "text": " It used to be Karojejolnai, which was hard enough and now this."}, {"start": 13.0, "end": 15.72, "text": " Anyway, let's get started."}, {"start": 15.72, "end": 20.76, "text": " This technique simulates how different objects in a video sound when struck."}, {"start": 20.76, "end": 26.48, "text": " We have showcased some marvelous previous techniques that were mostly limited to wooden and plastic"}, {"start": 26.48, "end": 27.48, "text": " materials."}, {"start": 27.48, "end": 31.560000000000002, "text": " Needless to say, there are links to these episodes in the video description box."}, {"start": 31.560000000000002, "end": 37.4, "text": " A convolutional neural network takes care of understanding what is seen in the video."}, {"start": 37.4, "end": 43.16, "text": " This technique is known to be particularly suited to processing image and video content."}, {"start": 43.16, "end": 48.44, "text": " And it works by looking at the silent video directly and trying to understand what is going"}, {"start": 48.44, "end": 50.6, "text": " on just like a human wood."}, {"start": 50.6, "end": 54.28, "text": " We train these networks with input and output pairs."}, {"start": 54.28, "end": 67.08, "text": " The input is a video of us beating the hell out of some object with a drumstick."}, {"start": 67.08, "end": 69.0, "text": " The choice of research."}, {"start": 69.0, "end": 72.36, "text": " And the output is the sound this object emits."}, {"start": 72.36, "end": 76.56, "text": " However, the output sound is something that changes in time."}, {"start": 76.56, "end": 82.2, "text": " It is a sequence, therefore it cannot be handled by a simple classical neural network."}, {"start": 82.2, "end": 88.12, "text": " It is learned by a recurrent neural network that can take care of learning such sequences."}, {"start": 88.12, "end": 91.04, "text": " If you haven't heard these terms before, no worries."}, {"start": 91.04, "end": 94.68, "text": " We have previous episodes on all of them in the video description box."}, {"start": 94.68, "end": 96.24000000000001, "text": " Make sure to check them out."}, {"start": 96.24000000000001, "end": 101.72, "text": " This piece of work is a nice showcase of combining two quite powerful techniques."}, {"start": 101.72, "end": 107.88, "text": " The convolutional neural network tries to understand what happens in the input video and the recurrent"}, {"start": 107.88, "end": 113.24, "text": " neural network seals the deal by learning and guessing the correct sound that objects"}, {"start": 113.24, "end": 116.0, "text": " shown in the video would emit when struck."}, {"start": 116.0, "end": 122.56, "text": " The synthesized outputs were compared to real-world results both mathematically and by asking"}, {"start": 122.56, "end": 127.52, "text": " humans to try to tell from the two samples which one the real deal is."}, {"start": 127.52, "end": 133.88, "text": " These people were fooled by the algorithm around 40% of the time, which I find to be a really"}, {"start": 133.88, "end": 136.88, "text": " amazing result considering two things."}, {"start": 136.88, "end": 144.48, "text": " First, the baseline is not 50%, but 0%, because people don't pick choices at random."}, {"start": 144.48, "end": 149.79999999999998, "text": " We cannot reasonably 
expect a synthesized sound to fool humans at any time."}, {"start": 149.79999999999998, "end": 154.2, "text": " Like nice little neural networks, we've been trained to recognize these sounds all"}, {"start": 154.2, "end": 156.2, "text": " are lives after all."}, {"start": 156.2, "end": 162.2, "text": " And second, this is one of the first papers from a machine learning angle on sound synthesis."}, {"start": 162.2, "end": 167.51999999999998, "text": " Before reading the paper, I expected at most 10 or 20% if that."}, {"start": 167.51999999999998, "end": 172.95999999999998, "text": " The title wave of machine learning runs through a number of different scientific fields."}, {"start": 172.95999999999998, "end": 177.35999999999999, "text": " Will deep learning techniques establish supremacy in these areas?"}, {"start": 177.35999999999999, "end": 182.95999999999998, "text": " Hard to say yet, but what we know for sure is that great strides are made literally every"}, {"start": 182.95999999999998, "end": 183.95999999999998, "text": " week."}, {"start": 183.95999999999998, "end": 188.6, "text": " There are so many works out there, sometimes I don't even know where to start."}, {"start": 188.6, "end": 190.51999999999998, "text": " Good times indeed."}, {"start": 190.52, "end": 194.4, "text": " Here we go, some delightful news for you fellow scholars."}, {"start": 194.4, "end": 197.92000000000002, "text": " The scholarly two minute paper store is now open."}, {"start": 197.92000000000002, "end": 203.72, "text": " There are two different kinds of man's t-shirts available and a nice sleek design version"}, {"start": 203.72, "end": 207.0, "text": " that we made for the fellow scholar ladies out there."}, {"start": 207.0, "end": 213.20000000000002, "text": " We also have the scholarly mug to get your day started in the most scientific way possible."}, {"start": 213.20000000000002, "end": 218.04000000000002, "text": " We have tested the quality of these products and we're really happy with what we got."}, {"start": 218.04, "end": 223.35999999999999, "text": " If you ordered anything, please provide us feedback on how you like the quality of the delivery"}, {"start": 223.35999999999999, "end": 225.07999999999998, "text": " and the products themselves."}, {"start": 225.07999999999998, "end": 229.84, "text": " If you can send us an image of yourself wearing or using any of these, we'd love to have"}, {"start": 229.84, "end": 230.84, "text": " a look."}, {"start": 230.84, "end": 233.6, "text": " Just leave them in the comment section or tweet at us."}, {"start": 233.6, "end": 238.28, "text": " If you don't like what you get within 30 days, you can exchange it or get your product"}, {"start": 238.28, "end": 239.56, "text": " cost refunded."}, {"start": 239.56, "end": 251.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=FMHGS8jWtzM
Time Varying Textures | Two Minute Papers #78
This work simulates how textures evolve and wear over time by taking only one image as an input sample. _______________________________ The paper "Time-varying Weathering in Texture Space" is available here: http://www.math.tau.ac.il/~dcor/articles/2016/TW.pdf http://www.math.tau.ac.il/~dcor/pubs.html WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by David Flores (we have applied a blur effect to it) - https://flic.kr/p/9eaVRJ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This research group is known for their extraordinary ideas, and this piece of work is of course no exception. This paper is about time-varying textures. Have a look at these photographs that were taken at different times. And the million dollar question is: can we simulate how this texture would look if we were to go forward in time? A texture weathering simulation, if you will. The immediate answer is: of course not. However, in this piece of work, a single input image is taken and, without any user interaction, the algorithm attempts to understand how this texture might have looked in the past. Now let's start out by addressing the elephant in the room. This problem can obviously not be solved in the general case for any image. However, if we restrict our assumptions to textures that contain a repetitive pattern, then it is much more feasible to identify the weathering patterns. To achieve this, an age map is built where the red regions show the parts that are assumed to be weathered. You can see on the image how these weathering patterns break up the regularity. Leaning on the assumption that if we go back in time, the regions marked with red will recede, and if we go forward in time, they will grow, we can write a really cool weathering simulator that creates results that look like wizardry. Broken glass, cracks, age rings on a wooden surface, you name it. But we can also use this technique to transfer weathering patterns from one image onto another. Textures with multiple layers are also supported, which means that it can handle images that are given as a sum of regular and irregular patterns. The blue background is regular and quite symmetric, but the no parking text is lacking these regularities. And the amazing thing is that the technique still works on such cases. The results are also demonstrated by putting these weathered textures on 3D models so we can see them in all their glory in our own application. Thanks for watching and for your generous support, and I'll see you next time.
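The age-map idea in this transcript lends itself to a tiny illustration. In the sketch below, each texel carries a weathering "age", and sliding a time threshold makes the weathered regions recede into the past or grow into the future. The random age map is a stand-in, since the paper estimates it from the input photograph.

```python
import numpy as np

# A minimal sketch of the age-map idea: texels whose weathering "age" lies
# below the current time threshold count as weathered, so raising the
# threshold grows the weathered (red) regions and lowering it shrinks them.
rng = np.random.default_rng(2)
age_map = rng.random((8, 8))          # per-texel weathering age in [0, 1]

def weathered_mask(age_map, t):
    """Texels whose age is below t count as weathered at time t."""
    return age_map < t

print(weathered_mask(age_map, 0.2).sum())  # few weathered texels: the past
print(weathered_mask(age_map, 0.8).sum())  # many weathered texels: the future
```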
[{"start": 0.0, "end": 5.08, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 5.08, "end": 10.76, "text": " This research group is known for their extraordinary ideas and this piece of work is of course"}, {"start": 10.76, "end": 11.92, "text": " no exception."}, {"start": 11.92, "end": 15.200000000000001, "text": " This paper is about time varying textures."}, {"start": 15.200000000000001, "end": 18.88, "text": " Have a look at these photographs that were taken at a different time."}, {"start": 18.88, "end": 23.64, "text": " And the million dollar question is, can we simulate how this texture would look if we"}, {"start": 23.64, "end": 25.96, "text": " were to go forward in time?"}, {"start": 25.96, "end": 29.0, "text": " A texture weathering simulation, if you will."}, {"start": 29.0, "end": 31.88, "text": " The immediate answer is that, of course not."}, {"start": 31.88, "end": 38.16, "text": " However, in this piece of work, a single input image is taken and without any user interaction,"}, {"start": 38.16, "end": 44.08, "text": " the algorithm attempts to understand how this texture might have looked in the past."}, {"start": 44.08, "end": 47.68, "text": " Now let's start out by addressing the elephant in the room."}, {"start": 47.68, "end": 52.879999999999995, "text": " This problem can obviously not be solved in the general case for any image."}, {"start": 52.879999999999995, "end": 58.6, "text": " However, if we restrict our assumptions to textures that contain a repetitive pattern,"}, {"start": 58.6, "end": 62.72, "text": " then it is much more feasible to identify the weathering patterns."}, {"start": 62.72, "end": 68.56, "text": " To achieve this, an age map is built where the red regions show the parts that are assumed"}, {"start": 68.56, "end": 69.72, "text": " to be weathered."}, {"start": 69.72, "end": 74.56, "text": " You can see on the image how these weathering patterns break up the regularity."}, {"start": 74.56, "end": 79.44, "text": " Leaning on the assumption that if we go back in time, the regions marked with red were"}, {"start": 79.44, "end": 83.52000000000001, "text": " received and if we go forward in time, they will grow."}, {"start": 83.52000000000001, "end": 88.28, "text": " We can write a really cool weathering simulator that creates results that look like"}, {"start": 88.28, "end": 89.28, "text": " wizardry."}, {"start": 89.28, "end": 95.48, "text": " Broken glass, cracks, age rings on a wooden surface, you name it."}, {"start": 95.48, "end": 105.28, "text": " But we can also use this technique to transfer weathering patterns from one image onto another."}, {"start": 105.28, "end": 110.8, "text": " Textures with multiple layers are also supported, which means that it can handle images that"}, {"start": 110.8, "end": 115.24000000000001, "text": " are given as a sum of a regular and a regular patterns."}, {"start": 115.24, "end": 121.19999999999999, "text": " The blue background is regular and quite symmetric, but the no parking text is lacking these"}, {"start": 121.19999999999999, "end": 122.6, "text": " regularities."}, {"start": 122.6, "end": 128.35999999999999, "text": " And the amazing thing is that the technique still works on such cases."}, {"start": 128.35999999999999, "end": 133.84, "text": " The results are also demonstrated by putting these weathered textures on 3D models so we"}, {"start": 133.84, "end": 143.24, "text": " can see them all in their glory in our own application."}, {"start": 
143.24, "end": 147.20000000000002, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=6rNcAVr-U4s
Fermat Spirals for Layered 3D Printing | Two Minute Papers #77
The paper "Connected Fermat Spirals for Layered Fabrication" is available here: http://irc.cs.sdu.edu.cn/html/2016/2016_0519/222.html The ThatsMaths article on sunflowers + paper "Fibonacci patterns: common or rare?" is available here: https://thatsmaths.com/2014/06/05/sunflowers-and-fibonacci-models-of-efficiency/ http://www.sciencedirect.com/science/article/pii/S2210983813001314 Another nice application of Hilbert curves for spatial indexing (thanks for the link TheJonManley!): http://blog.notdot.net/2009/11/Damn-Cool-Algorithms-Spatial-indexing-with-Quadtrees-and-Hilbert-Curves WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by José Carlos Cortizo Pérez - https://flic.kr/p/5bXvB9 Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. What are Hilbert curves? Hilbert curves are repeating lines that are used to fill a square. Such curves, so far, have enjoyed applications like drawing zigzag patterns to prevent biting our own tail in a snake game. Or, jokes aside, it is also useful in, for instance, choosing the right pixels to start tracing rays of light in light simulations. Or, to create good strategies in assigning numbers to different computers in a network. These numbers, by the way, we call IP addresses. These are just a few examples, and they show quite well how a seemingly innocuous mathematical structure can see applications in the most mind-bending ways imaginable. So here's one more. Actually, two more. A Fermat spiral is essentially a long line arranged as a collection of low-curvature spirals. These are generated by a remarkably simple mathematical expression, and we can also observe such shapes in Mother Nature, for instance, in a sunflower. And the most natural question emerges in the head of every seasoned fellow scholar. Why is that? Why would nature be following mathematics, or anything to do with what Fermat wrote on a piece of paper once? It has only been relatively recently shown that as the seeds are growing in the sunflower, they exert forces on each other, and therefore they cannot be arranged in an arbitrary way. We can write up the mathematical equations to look for a way to maximize the concentration of growth hormones within the plant to make it as resilient as possible. In the meantime, this force exertion constraint has to be taken into consideration. If we solve this equation with blood, sweat, and tears, we may experience some moments of great peril, but it will all be washed away by the beautiful sight of this arrangement. This is exactly what we see in nature, and it happens to be almost exactly the same as a mind-bendingly simple Fermat spiral pattern. Words fail me to describe how amazing it is that Mother Nature is essentially able to find these solutions by herself. Really cool, isn't it? If our mind wasn't blown enough yet, Fermat spirals can also be used to approximate a number of different shapes with the added constraint that we start from a given point, take an enormously long journey of low-curvature shapes, and get back to almost exactly where we started. This, again, sounds like an innocuous little game, evoking ill-concealed laughter in the audience as it is presented by excited but underpaid mathematicians. However, as always, this is not the case at all. Researchers have found that if we get a 3D printing machine and create a layered material exactly like this, the surface will have a higher degree of fairness, be quicker to print, and will generally be of higher quality than other possible shapes. If we think about it, if we wish to print a prescribed object like this cat, there is a stupendously large number of ways to fill this space with curves that eventually form a cat. And if we do it with Fermat spirals, it will yield the highest quality print one can do at this point in time. In the paper, this is demonstrated for a number of shapes of varying complexities, and this is what research is all about: finding interesting connections between different fields that are not only beautiful, but also enrich our everyday lives with useful inventions. In the meantime, we have reached our first milestone on Patreon, and I am really grateful to you fellow scholars who are really passionate about supporting the show. We are growing at an extremely rapid pace, and I am really excited to make even more episodes about these amazing research works. Thanks for watching and for your generous support, and I'll see you next time. Bye!
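The sunflower arrangement discussed in this transcript follows the Fermat spiral r = c * sqrt(theta), sampled at the golden angle, which is easy to reproduce. A minimal sketch; the constant c and the number of seeds are arbitrary choices.

```python
import numpy as np

# A minimal sketch of a Fermat spiral, r = c * sqrt(theta), sampled at the
# golden angle: the classic model of sunflower seed placement.
c = 1.0
golden_angle = np.pi * (3.0 - np.sqrt(5.0))   # ~137.5 degrees in radians
n = np.arange(500)                            # seed indices
theta = n * golden_angle
r = c * np.sqrt(theta)
x, y = r * np.cos(theta), r * np.sin(theta)   # seed positions
print(x[:3], y[:3])
```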
[{"start": 0.0, "end": 4.94, "text": " Dear Fellow Scholars, this is two-minute papers with Karojolnai-Fehir."}, {"start": 4.94, "end": 6.84, "text": " What are Hilbert curves?"}, {"start": 6.84, "end": 10.96, "text": " Hilbert curves are repeating lines that are used to fill a square."}, {"start": 10.96, "end": 16.56, "text": " Such curves, so far, have enjoyed applications like drawing zigzag patterns to prevent"}, {"start": 16.56, "end": 19.28, "text": " biting in our tail in a snake game."}, {"start": 19.28, "end": 25.560000000000002, "text": " Or, Jogs aside, it is also useful in, for instance, choosing the right pixels to start tracing"}, {"start": 25.560000000000002, "end": 28.240000000000002, "text": " rays of light in light simulations."}, {"start": 28.24, "end": 33.8, "text": " Or, to create good strategies in assigning numbers to different computers in a network."}, {"start": 33.8, "end": 37.48, "text": " These numbers, by the way, we call IP addresses."}, {"start": 37.48, "end": 43.0, "text": " These are just a few examples, and they show quite well how a seemingly innocuous mathematical"}, {"start": 43.0, "end": 48.64, "text": " structure can see applications in the most mind-bending ways imaginable."}, {"start": 48.64, "end": 50.12, "text": " So here's one more."}, {"start": 50.12, "end": 52.12, "text": " Actually, two more."}, {"start": 52.12, "end": 58.44, "text": " Ed's spiral is essentially a long line as a collection of low curvature spirals."}, {"start": 58.44, "end": 63.599999999999994, "text": " These are generated by a remarkably simple mathematical expression, and we can also observe"}, {"start": 63.599999999999994, "end": 68.52, "text": " such shapes in matter nature, for instance, in a sunflower."}, {"start": 68.52, "end": 73.8, "text": " And the most natural question emerges in the head of every seasoned fellow scholar."}, {"start": 73.8, "end": 75.24, "text": " Why is that?"}, {"start": 75.24, "end": 80.6, "text": " Why would nature be following mathematics or anything to do with what Firmat wrote on"}, {"start": 80.6, "end": 82.47999999999999, "text": " a piece of paper once?"}, {"start": 82.47999999999999, "end": 88.16, "text": " It has only been relatively recently shown that as the seeds are growing in the sunflower,"}, {"start": 88.16, "end": 93.03999999999999, "text": " they exert forces on each other, therefore they cannot be arranged in an arbitrary way."}, {"start": 93.03999999999999, "end": 98.63999999999999, "text": " We can write up the mathematical equations to look for a way to maximize the concentration"}, {"start": 98.63999999999999, "end": 103.91999999999999, "text": " of growth hormones within the plant to make it as resilient as possible."}, {"start": 103.91999999999999, "end": 108.75999999999999, "text": " In the meantime, this forced exertion constraint has to be taken into consideration."}, {"start": 108.76, "end": 114.84, "text": " If we solve this equation with blood-sweden tears, we may experience some moments of great"}, {"start": 114.84, "end": 120.84, "text": " peril, but it will be all washed away by the beautiful sight of this arrangement."}, {"start": 120.84, "end": 126.64, "text": " This is exactly what we see in nature, and which happens to be almost exactly the same as"}, {"start": 126.64, "end": 130.84, "text": " a mind-bendingly simple Firmat spiral pattern."}, {"start": 130.84, "end": 136.64000000000001, "text": " Words fail me to describe how amazing it is that Mother Nature is essentially able to"}, {"start": 136.64, "end": 
139.95999999999998, "text": " find these solutions by herself."}, {"start": 139.95999999999998, "end": 141.64, "text": " Really cool, isn't it?"}, {"start": 141.64, "end": 147.92, "text": " If our mind wasn't blown enough yet, Firmat spirals can also be used to approximate a number"}, {"start": 147.92, "end": 154.04, "text": " of different shapes with the added constraint that we start from a given point, take an enormously"}, {"start": 154.04, "end": 160.2, "text": " long journey of low curvature shapes and get back to almost exactly where we started."}, {"start": 160.2, "end": 165.6, "text": " This, again, sounds like an innocuous little game if Oking ill-concealed laughter in the"}, {"start": 165.6, "end": 170.92, "text": " audience as it is presented by as excited as underpaid mathematicians."}, {"start": 170.92, "end": 174.76, "text": " However, as always, this is not the case at all."}, {"start": 174.76, "end": 180.35999999999999, "text": " Researchers have found that if we get a 3D printing machine and create a layered material"}, {"start": 180.35999999999999, "end": 187.72, "text": " exactly like this, the surface will have a higher degree of fairness, be quicker to print,"}, {"start": 187.72, "end": 191.92, "text": " and will be generally of higher quality than other possible shapes."}, {"start": 191.92, "end": 196.88, "text": " If we think about it, if we wish to print a prescribed object like this cat, there"}, {"start": 196.88, "end": 202.0, "text": " is a stupendously large number of ways to fill this space with curves that eventually"}, {"start": 202.0, "end": 203.44, "text": " form a cat."}, {"start": 203.44, "end": 208.95999999999998, "text": " And if we do it with Firmat spirals, it will yield the highest quality print one can do"}, {"start": 208.95999999999998, "end": 210.64, "text": " at this point in time."}, {"start": 210.64, "end": 215.64, "text": " In the paper, this is demonstrated for a number of shapes of varying complexities, and this"}, {"start": 215.64, "end": 218.51999999999998, "text": " is what research is all about."}, {"start": 218.52, "end": 223.8, "text": " Having interesting connections between different fields that are not only beautiful, but also"}, {"start": 223.8, "end": 228.0, "text": " enrich our everyday lives with useful inventions."}, {"start": 228.0, "end": 233.36, "text": " In the meantime, we have reached our first milestone on Patreon, and I am really grateful to"}, {"start": 233.36, "end": 237.12, "text": " you fellow scholars who are really passionate about supporting the show."}, {"start": 237.12, "end": 242.04000000000002, "text": " We are growing at an extremely rapid pace, and I am really excited to make even more"}, {"start": 242.04000000000002, "end": 245.04000000000002, "text": " episodes about these amazing research works."}, {"start": 245.04000000000002, "end": 248.48000000000002, "text": " Thanks for watching and for your generous support, and I'll see you next time."}, {"start": 248.48, "end": 249.04, "text": " Bye!"}]
Two Minute Papers
https://www.youtube.com/watch?v=iTRnr6p7iYo
Procedural Yarn Models for Cloth Rendering | Two Minute Papers #76
The paper "Fitting Procedural Yarn Models for Realistic Cloth Rendering" is available here: https://shuangz.com/publications.htm http://www.cs.cornell.edu/~kb/publications/SIG16ProceduralYarn.pdf Video credits (in order): Bandyte - https://www.youtube.com/watch?v=e4BhdrFDHkQ TheJamsh - https://www.youtube.com/watch?v=oSYjg9W4Nrk Gamasutra - http://www.gamasutra.com/blogs/AAdonaac/20150903/252889/Procedural_Dungeon_Generation_Algorithm.php WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by Ny - https://flic.kr/p/gqShF5 Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we're going to talk about a procedural algorithm. But first of all, what does procedural mean? Procedural graphics is an exciting subfield of computer graphics where, instead of storing a lot of stuff, information is generated on the fly. For instance, in photorealistic rendering, we're trying to simulate how digital objects would look in real life. We usually seek to include some scratches on our digital models, and perhaps add some bumps or dirt on the surface of the model. To obtain this, we can just ask the computer to not only generate them on the fly, but we can also edit them as we desire. We can also generate cloudy skies and many other things where only some statistical properties have to be satisfied, like how many clouds we wish to see and how puffy they should be, which would otherwise be too laborious to draw by hand. We're scholars, after all; we don't have time for that. There are also computer games where the levels we can play through are not predetermined but also generated on the fly according to some given logical constraints. This can mean that the labyrinth should be solvable, or the level shouldn't contain too many enemies that would be impossible to defeat. The main selling point is that such a computer game potentially has an infinite number of levels. In this paper, a technique is proposed to automatically generate procedural yarn geometry. A yarn is a piece of thread from which we can sew garments. The authors extensively studied parameters in physical pieces of yarn, such as twisting and hairiness, and tried to match them with a procedural technique. So, for instance, if in a sudden trepidation we wish to obtain a realistic looking piece of cotton, rayon, or silk in our light simulation programs, we can easily get a unique sample of a chosen material which will be very close to the real deal in terms of these intuitive parameters, like hairiness. And we can not only get as long or as many of these as we desire, but we can also edit them according to our artistic vision. The solutions are validated against photographs and even CT scans. I always emphasize that I really like these papers where the solutions have some connection to the real world around us. This one is super fun indeed. The paper is a majestic combination of beautifully written mathematics and amazing looking results. Make sure to have a look. And you know, we always hear these news stories where other YouTubers have problems with what is going on in their comment section. Well, not here with our fellow scholars. Have a look at the comment section of our previous episodes: just absolutely beautiful. I don't even know what to say. It feels like a secret hideout of respectful and scholarly conversations. It's really amazing that we are building a community of fellow scholars, humble people who wish nothing else than to learn more. Thanks for watching and for your generous support, and I'll see you next time.
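The procedural idea in this transcript, generating yarn geometry from a few intuitive parameters such as twist and hairiness, can be hinted at with a toy generator. This sketch is not the paper's fitted model: it simply twists ply helices around a center line and jitters their radius to stand in for hairiness, with all parameter values made up.

```python
import numpy as np

# A minimal sketch of procedural yarn geometry: each ply is a helix twisted
# around the yarn's center line, and random radial jitter stands in for
# hairiness. Twist rate, radius, and noise scale are illustrative knobs.
rng = np.random.default_rng(3)

def ply_centerline(n_points=200, ply_radius=0.1, twist=25.0,
                   phase=0.0, hairiness=0.01):
    t = np.linspace(0.0, 1.0, n_points)                  # parameter along the yarn
    r = ply_radius + rng.normal(0, hairiness, n_points)  # jittered radius
    x = r * np.cos(twist * t + phase)
    y = r * np.sin(twist * t + phase)
    z = t                                                # yarn runs along the z axis
    return np.stack([x, y, z], axis=1)

# A three-ply yarn: three helices offset by 120 degrees.
plies = [ply_centerline(phase=k * 2 * np.pi / 3) for k in range(3)]
print(plies[0].shape)  # (200, 3) points per ply
```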
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zonai Fahir."}, {"start": 4.6000000000000005, "end": 8.2, "text": " Today, we're going to talk about a procedural algorithm."}, {"start": 8.2, "end": 10.8, "text": " But first of all, what does procedural mean?"}, {"start": 10.8, "end": 14.700000000000001, "text": " Procedural graphics is an exciting subfield of computer graphics"}, {"start": 14.700000000000001, "end": 20.3, "text": " where instead of storing a lot of stuff, information is generated on the fly."}, {"start": 20.3, "end": 24.0, "text": " For instance, in photorealistic rendering, we're trying to simulate"}, {"start": 24.0, "end": 27.0, "text": " how digital objects would look like in real life."}, {"start": 27.0, "end": 31.1, "text": " We usually seek to involve some scratches on our digital models"}, {"start": 31.1, "end": 35.8, "text": " and perhaps add some pieces of bump or dirt on the surface of the model."}, {"start": 35.8, "end": 41.1, "text": " To obtain this, we can just ask the computer to not only generate them on the fly,"}, {"start": 41.1, "end": 43.8, "text": " but we can also edit them as we desire."}, {"start": 43.8, "end": 47.3, "text": " We can also generate cloudy skies and many other things"}, {"start": 47.3, "end": 50.900000000000006, "text": " where only some statistical properties have to be satisfied"}, {"start": 50.900000000000006, "end": 55.0, "text": " like how many clouds we wish to see and how puffy they should be,"}, {"start": 55.0, "end": 58.4, "text": " which would otherwise be too laborious to draw by hand."}, {"start": 58.4, "end": 61.0, "text": " We're scholars, after all, we don't have time for that."}, {"start": 61.0, "end": 64.5, "text": " There are also computer games where the levels we can play through"}, {"start": 64.5, "end": 68.2, "text": " are not predetermined, but also generated on the fly"}, {"start": 68.2, "end": 70.9, "text": " according to some given logical constraints."}, {"start": 70.9, "end": 73.9, "text": " This can mean that the labyrinth should be solvable"}, {"start": 73.9, "end": 76.6, "text": " or the level shouldn't contain too many enemies"}, {"start": 76.6, "end": 78.9, "text": " that would be impossible to defeat."}, {"start": 78.9, "end": 81.8, "text": " The main selling point is that such a computer game"}, {"start": 81.8, "end": 85.1, "text": " has potentially an infinite amount of levels."}, {"start": 85.1, "end": 88.6, "text": " In this paper, a technique is proposed to automatically generate"}, {"start": 88.6, "end": 90.6, "text": " procedural yarn geometry."}, {"start": 90.6, "end": 94.6, "text": " A yarn is a piece of thread from which we can sew garments."}, {"start": 94.6, "end": 97.1, "text": " The authors extensively studied parameters"}, {"start": 97.1, "end": 101.6, "text": " in physical pieces of yarns, such as twisting and herriness"}, {"start": 101.6, "end": 104.6, "text": " and tried to match them with a procedural technique."}, {"start": 104.6, "end": 107.6, "text": " So, for instance, if in a sudden trapdation"}, {"start": 107.6, "end": 111.1, "text": " we wish to obtain a realistic looking piece of cotton,"}, {"start": 111.1, "end": 114.6, "text": " rayon or silk in our light simulation programs,"}, {"start": 114.6, "end": 118.19999999999999, "text": " we can easily get a unique sample of a chosen material"}, {"start": 118.19999999999999, "end": 120.5, "text": " which will be very close to the real deal"}, {"start": 120.5, "end": 
124.39999999999999, "text": " in terms of these intuitive parameters like herriness."}, {"start": 124.39999999999999, "end": 128.9, "text": " And we can not only get as long or as many of these as we desire,"}, {"start": 128.9, "end": 132.4, "text": " but we can also edit them according to our artistic vision."}, {"start": 132.4, "end": 135.4, "text": " The solutions are validated against photographs"}, {"start": 135.4, "end": 137.29999999999998, "text": " and even CT scans."}, {"start": 137.29999999999998, "end": 140.4, "text": " I always emphasize that I really like these papers"}, {"start": 140.4, "end": 142.6, "text": " where the solutions have some connection"}, {"start": 142.6, "end": 144.3, "text": " to the real world around us."}, {"start": 144.3, "end": 146.3, "text": " This one is super fun indeed."}, {"start": 146.3, "end": 151.20000000000002, "text": " The paper is a majestic combination of beautifully written mathematics"}, {"start": 151.20000000000002, "end": 153.20000000000002, "text": " and amazing looking results."}, {"start": 153.20000000000002, "end": 154.70000000000002, "text": " Make sure to have a look."}, {"start": 154.70000000000002, "end": 156.8, "text": " And you know, we always hear these news"}, {"start": 156.8, "end": 159.1, "text": " where other YouTubers have problems"}, {"start": 159.1, "end": 161.5, "text": " with what is going on in their comment section"}, {"start": 161.5, "end": 164.20000000000002, "text": " while not here with our fellow scholars."}, {"start": 164.20000000000002, "end": 167.4, "text": " Have a look at the comment section of our previous episodes"}, {"start": 167.4, "end": 170.3, "text": " just absolutely beautiful."}, {"start": 170.3, "end": 171.8, "text": " I don't even know what to say."}, {"start": 171.8, "end": 175.10000000000002, "text": " It feels like a secret hideout of respectful"}, {"start": 175.10000000000002, "end": 177.0, "text": " and scholarly conversations."}, {"start": 177.0, "end": 180.0, "text": " It's really amazing that we are building a community"}, {"start": 180.0, "end": 182.70000000000002, "text": " of fellow scholars, humble people"}, {"start": 182.70000000000002, "end": 185.5, "text": " who wish nothing else than to learn more."}, {"start": 185.5, "end": 188.0, "text": " Thanks for watching and for your generous support"}, {"start": 188.0, "end": 201.2, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ZBWTD2aNb_o
What Can We Learn From Deep Learning Programs? | Two Minute Papers #75
The paper "Model Compression" is available here: https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf There is also a talk on it here: http://research.microsoft.com/apps/video/default.aspx?id=103668&r=1 Discussions on this issue: 1. https://www.linkedin.com/pulse/computer-vision-research-my-deep-depression-nikos-paragios 2. https://www.reddit.com/r/MachineLearning/comments/4lq701/yann_lecuns_letter_to_cvpr_chair_after_bad/ Recommended for you: Neural Programmer Interpreters - https://www.youtube.com/watch?v=B70tT4WMyJk WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by John Lord - https://flic.kr/p/nVUaB Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I have recently been witnessing a few heated conversations regarding the submission of deep learning papers to computer vision conferences. The forums are up in arms about the fact that even though some of these papers showcase remarkable results, they were rejected on the basis of, from what I have heard, not adding too much to the tree of knowledge. They argue that we don't understand what is going on in these neural networks and cannot really learn anything new from them. I'll try to understand and rephrase their argument differently. We know exactly how to train a neural network. It's just that, as an output of this process, we get a model of something that resembles a brain as a collection of neurons and circumstances under which these neurons are activated. This is stored in a file that can take up several gigabytes, and the best solutions are often not intuitively understandable for us. For instance, in this video, we are training a neural network to classify these points correctly, but what exactly can we learn if we look into these neurons? Now imagine that in practice, we don't have a handful of these boxes but millions of them, more complex than the ones you see here. Let's start with a simple example that hopefully helps us get a better grip on this argument. Now, I'll be damned if this video won't be more than a couple minutes, so this is going to be one of those slightly extended Two Minute Papers episodes. I hope you don't mind. The grammatical rules of my native language, a lot of them, are contained in enormous tomes that everyone has to go through during their school years. Rules are important. They give the scaffolding for constructing sentences that are grammatically correct. Can we explain or even enumerate these rules? Well, unless you are a linguist, the answer is no. Almost no one really remembers more than a few rules, but every native speaker knows how their language should be spoken. And it is because we have heard a lot of sentences that are correct and learned by heart what makes a proper sentence and what is gibberish. This is exactly what neural networks do. They are trained in a very similar way. In fact, they are so effective at it that if we try to forcefully insert some of our knowledge in there, the solutions are going to get worse. It is therefore an appropriate time to ask questions like what merits a paper, and what do we define as scientific progress? What if we have extremely accurate algorithms where we don't know what is going on under the hood, or simpler, more intuitive algorithms that may be subpar in accuracy? If we have a top-tier scientific conference where only a very limited number of papers get accepted, which ones shall we accept? I hope that this question will spark a productive discussion, and hopefully scientific research venues will be more vigilant about this question in the future. Okay, so the question is crystal clear: knowledge or efficiency. How about possible solutions? Can we extract scientific insights out of these neural networks? Model compression is a way to essentially compress the information in this brainish thing, this collection of neurons we described earlier. To demonstrate why this is such a cool idea, let's quickly jump to this program by DeepMind that plays Atari games at an amazingly high level. In Breakout, the solution program that you see here is essentially an enormous table that describes what the program should do when it sees different inputs. It is so enormous that it has many millions of records in there. A manual of many thousand pages, if you will. It is easy to execute for a computer but completely impossible for us to understand why and how it works. However, if we intuitively think about the game itself, we could actually write a super simple program in one line of code that would almost be as good as this. All we need to do is try to follow the ball with the paddle. One line of code and pretty decent results. Not optimal, but quite decent. From such a program, we can actually learn something about the game. Essentially, what we could do with these enormous tables is compress them into much, much smaller ones, ones that are so tiny that we can actually build an intuition from them. This way, the output of a machine learning technique wouldn't only be an extremely efficient program. But finally, the output of the procedure would be knowledge. Insight. If you think about it, such an algorithm would essentially do research by itself. At first, it would randomly try experimenting, and after a large number of observations are collected, these observations would be explained by a small number of rules. That is exactly the definition of research. And perhaps this is one of the more interesting future frontiers of machine learning research. And by the way, earlier we have talked about a fantastic paper on neural programmer-interpreters that also aims to output complete algorithms that can be directly used and understood. The link is available in the description box. Thanks for watching and for your generous support, and I'll see you next time.
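The "one line of code" Breakout policy mentioned in this transcript is simple enough to write out, and it illustrates the knowledge-versus-efficiency trade-off the episode discusses. A minimal sketch; the coordinates are hypothetical inputs rather than any particular emulator's API.

```python
# A minimal sketch of the one-line Breakout policy described above: steer
# the paddle toward the ball. Unlike a table with millions of records, this
# rule is small enough to read and learn something from.
def paddle_action(ball_x, paddle_x):
    return "right" if ball_x > paddle_x else ("left" if ball_x < paddle_x else "stay")

print(paddle_action(ball_x=42.0, paddle_x=30.0))  # -> "right"
```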
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is two-minute papers with Karoizhona Ifehe."}, {"start": 4.88, "end": 10.56, "text": " I have recently been witnessing a few heated conversations regarding the submission of deep-learning"}, {"start": 10.56, "end": 13.120000000000001, "text": " papers to computer vision conferences."}, {"start": 13.120000000000001, "end": 18.96, "text": " The forums are up in arms about the fact that despite some of these papers showcase remarkable"}, {"start": 18.96, "end": 24.2, "text": " results, they were rejected on the basis of, from what I have heard, not adding too much"}, {"start": 24.2, "end": 25.84, "text": " to the tree of knowledge."}, {"start": 25.84, "end": 30.44, "text": " They argue that we don't understand what is going on in these neural networks and cannot"}, {"start": 30.44, "end": 32.64, "text": " really learn anything new from them."}, {"start": 32.64, "end": 36.76, "text": " I'll try to understand and rephrase their argument differently."}, {"start": 36.76, "end": 39.64, "text": " We know exactly how to train a neural network."}, {"start": 39.64, "end": 45.16, "text": " It's just that, as an output of this process, we get a model of something that resembles"}, {"start": 45.16, "end": 51.480000000000004, "text": " a brain as a collection of neurons and circumstances under which these neurons are activated."}, {"start": 51.48, "end": 56.76, "text": " These stories in a file that can take up to several gigabytes and the best solutions are"}, {"start": 56.76, "end": 60.0, "text": " often not intuitively understandable for us."}, {"start": 60.0, "end": 64.47999999999999, "text": " For instance, in this video, we are training a neural network to classify these points"}, {"start": 64.47999999999999, "end": 68.92, "text": " correctly, but what exactly can we learn if we look into these neurons?"}, {"start": 68.92, "end": 73.52, "text": " Now imagine that in practice, we don't have a handful of these boxes, but millions of"}, {"start": 73.52, "end": 76.52, "text": " them, and more complex than the ones you see here."}, {"start": 76.52, "end": 82.19999999999999, "text": " Let's start with a simple example that hopefully helps getting a better grip of this argument."}, {"start": 82.19999999999999, "end": 86.39999999999999, "text": " Now I'll be damned if this video won't be more than a couple minutes, so this is going"}, {"start": 86.39999999999999, "end": 90.24, "text": " to be one of those slightly extended two-minute paper's episodes."}, {"start": 90.24, "end": 91.67999999999999, "text": " I hope you don't mind."}, {"start": 91.67999999999999, "end": 97.47999999999999, "text": " The grammatical rules of my native language, a lot of them, are contained in enormous"}, {"start": 97.47999999999999, "end": 101.67999999999999, "text": " tomes that everyone has to go through during their school years."}, {"start": 101.67999999999999, "end": 103.28, "text": " Rules are important."}, {"start": 103.28, "end": 108.04, "text": " They give the scaffoldings for constructing sentences that are grammatically correct."}, {"start": 108.04, "end": 111.24, "text": " Can we explain or even enumerate these rules?"}, {"start": 111.24, "end": 114.36, "text": " Well unless you are a linguist, the answer is no."}, {"start": 114.36, "end": 119.48, "text": " Almost no one really remembers more than a few rules, but every native speaker knows"}, {"start": 119.48, "end": 121.8, "text": " how their language should be spoken."}, {"start": 121.8, "end": 
126.48, "text": " And it is because we have heard a lot of sentences that are correct and learned it by"}, {"start": 126.48, "end": 130.64, "text": " heart what makes a proper sentence and what is gibberish."}, {"start": 130.64, "end": 133.2, "text": " This is exactly what neural networks do."}, {"start": 133.2, "end": 135.76, "text": " They are trained in a very similar way."}, {"start": 135.76, "end": 141.39999999999998, "text": " In fact they are so effective at it that if we try to forcefully insert some of our knowledge"}, {"start": 141.39999999999998, "end": 144.51999999999998, "text": " in there, the solutions are going to get worse."}, {"start": 144.51999999999998, "end": 149.67999999999998, "text": " It is therefore an appropriate time to ask questions like what merits a paper and what"}, {"start": 149.67999999999998, "end": 152.64, "text": " do we define a scientific progress."}, {"start": 152.64, "end": 157.23999999999998, "text": " What if we have extremely accurate algorithms where we don't know what is going on under"}, {"start": 157.23999999999998, "end": 162.95999999999998, "text": " the hood or simpler, more intuitive algorithms that may be subpar in accuracy?"}, {"start": 162.96, "end": 168.56, "text": " If we have a top tier scientific conference where only a very limited number of papers get"}, {"start": 168.56, "end": 170.96, "text": " accepted, which ones shall we accept?"}, {"start": 170.96, "end": 176.20000000000002, "text": " I hope that this question will spark a productive discussion and hopefully scientific research"}, {"start": 176.20000000000002, "end": 179.84, "text": " venues will be more vigilant about this question in the future."}, {"start": 179.84, "end": 184.64000000000001, "text": " Okay, so the question is crystal clear, knowledge or efficiency."}, {"start": 184.64000000000001, "end": 186.8, "text": " How about possible solutions?"}, {"start": 186.8, "end": 191.20000000000002, "text": " Can we extract scientific insights out of these neural networks?"}, {"start": 191.2, "end": 195.51999999999998, "text": " Global compression is a way to essentially compress the information in this brainish"}, {"start": 195.51999999999998, "end": 199.04, "text": " thing, this collection of neurons we described earlier."}, {"start": 199.04, "end": 204.04, "text": " To demonstrate why this is such a cool idea, let's quickly jump to this program by deep"}, {"start": 204.04, "end": 208.95999999999998, "text": " mind that plays Atari games at an amazingly high level."}, {"start": 208.95999999999998, "end": 214.16, "text": " In breakout, the solution program that you see here is essentially an enormous table that"}, {"start": 214.16, "end": 218.67999999999998, "text": " describes what the program should do when it sees different inputs."}, {"start": 218.68, "end": 222.92000000000002, "text": " It is so enormous that it has many millions of records in there."}, {"start": 222.92000000000002, "end": 226.04000000000002, "text": " A manual of many thousand pages, if you will."}, {"start": 226.04000000000002, "end": 231.8, "text": " It is easy to execute for a computer that completely impossible for us to understand why"}, {"start": 231.8, "end": 233.24, "text": " and how it works."}, {"start": 233.24, "end": 238.76000000000002, "text": " However, if we intuitively think about the game itself, we could actually write a super"}, {"start": 238.76000000000002, "end": 244.12, "text": " simple program in one line of code that would almost be as good as this."}, {"start": 244.12, "end": 
248.4, "text": " All we need to do is try to follow the ball with the pedal."}, {"start": 248.4, "end": 251.6, "text": " One line of code and pretty decent results."}, {"start": 251.6, "end": 254.0, "text": " Not optimal, but quite decent."}, {"start": 254.0, "end": 258.08, "text": " From such a program, we can actually learn something about the game."}, {"start": 258.08, "end": 263.12, "text": " Essentially, what we could do with these enormous tables is compressing them into much, much"}, {"start": 263.12, "end": 264.64, "text": " smaller ones."}, {"start": 264.64, "end": 269.24, "text": " Once there are so tiny that we can actually build an intuition from them."}, {"start": 269.24, "end": 273.92, "text": " This way, the output of a machine learning technique wouldn't only be an extremely"}, {"start": 273.92, "end": 275.6, "text": " efficient program."}, {"start": 275.6, "end": 279.68, "text": " But finally, the output of the procedure would be knowledge."}, {"start": 279.68, "end": 280.68, "text": " Insight."}, {"start": 280.68, "end": 286.24, "text": " If you think about it, such an algorithm would essentially do research by itself."}, {"start": 286.24, "end": 291.52000000000004, "text": " At first, it would randomly try experimenting and after a large amount of observations"}, {"start": 291.52000000000004, "end": 297.44, "text": " are collected, these observations would be explained by a small number of rules."}, {"start": 297.44, "end": 300.84000000000003, "text": " That is exactly the definition of research."}, {"start": 300.84, "end": 306.28, "text": " And perhaps this is one of the more interesting future frontiers of machine learning research."}, {"start": 306.28, "end": 312.32, "text": " And by the way, earlier we have talked about a fantastic paper on neural programmer interpreters"}, {"start": 312.32, "end": 318.2, "text": " that also aim to output complete algorithms that can be directly used and understood."}, {"start": 318.2, "end": 320.28, "text": " The link is available in the description box."}, {"start": 320.28, "end": 333.23999999999995, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=hnT-P3aALVE
Hallucinating Images With Deep Learning | Two Minute Papers #74
During our journeys in deep learning, we have seen techniques that can summarize photographs in entire sentences that actually make sense. This time, we are going to turn this process around and ask a deep learning system to "hallucinate", i.e., generate images according to sentences that we add as an input. The results are nothing short of insane! _____________________________ The paper "Generative Adversarial Text to Image Synthesis" is available here: http://arxiv.org/abs/1605.05396 Recommended for you: Recurrent Neural Network Writes Sentences About Images - https://www.youtube.com/watch?v=e-WB4lfg30M Deep Learning related Two Minute Papers episodes - https://www.youtube.com/playlist?list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2 WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by C. P. Ewing - https://flic.kr/p/GDm4Jd Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In an earlier episode, we showcased a technique for summarizing images not in a word, but in an entire sentence that actually makes sense. If you were spellbound by those results, you'll be out of your mind when you hear this one. Let's turn it around: give a neural network a sentence as an input, and ask it to generate images according to it. Not fetching already existing images from somewhere, but generating new images according to these sentences. Is this for real? This is an idea that is completely out of this world. A few years ago, if someone had proposed such an idea and hoped that any useful result could come out of it, that person would have immediately been transported to an asylum. The important keyword here is zero-shot recognition. Before we get to the zero part, let's talk about one-shot learning. One-shot learning means a class of techniques that can learn something from one, or at most a handful of examples. Deep neural networks typically need to see hundreds of thousands of mugs before they can learn the concept of a mug. However, if I show one mug to any of you Fellow Scholars, you will, of course, immediately get the concept of a mug. At this point, it is amazing what these deep neural networks can do, but with the current progress in this area, I'm convinced that in a few years, feeding millions of examples to a deep neural network to learn such a simple concept will be considered a crime. On to zero-shot recognition. The zero-shot part is pretty simple. It means zero training samples. But this sounds preposterous. What it actually means is that we can train our neural network to recognize birds, what tiny things are, what the concept of blue is, and what a crown is. But then we ask it to show us an image of a tiny bird with a blue crown. Essentially, the neural network learns to combine these concepts together and generate new images leaning on these learned concepts. And I think this paper is a wonderful testament as to why Two Minute Papers is such a strident advocate of deep learning and why more people should know about these extraordinary works. About the paper: it is really well written, and there are quite a few treats in there for scientists. Game theory and minimax optimization, among other things. Cupcakes for my brain. We will definitely talk about these topics in later Two Minute Papers episodes. Stay tuned. But for now, you shouldn't only read the paper, you should devour it. And before we go, let's address the elephant in the room. The output images are tiny because this technique is very expensive to compute. Prediction: two papers down the line, it will be done in a matter of seconds. Two more papers down the line, it will do animations in full HD. Until then, I'll sit here stunned by the results and just frown and wonder. Thanks for watching and for your generous support, and I'll see you next time.
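For a rough idea of the conditioning trick, here is a toy, untrained sketch: a noise vector is concatenated with a sentence embedding and pushed through a generator. Every size, weight, and embedding below is made up for illustration; the actual paper uses a deep convolutional GAN trained adversarially on real image-caption pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
noise_dim, text_dim, img_side = 16, 8, 8

# Illustrative random generator weights (a real GAN would learn these).
W = rng.standard_normal((noise_dim + text_dim, img_side * img_side))

def generate(text_embedding):
    z = rng.standard_normal(noise_dim)          # random noise: variety
    code = np.concatenate([z, text_embedding])  # condition on the sentence
    pixels = np.tanh(code @ W)                  # one-layer "generator" pass
    return pixels.reshape(img_side, img_side)

# Hypothetical embedding standing in for "a tiny bird with a blue crown".
image = generate(rng.standard_normal(text_dim))
```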
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zonai-Fehir."}, {"start": 5.0, "end": 13.5, "text": " In an earlier episode, we showcased a technique for summarizing images not in a word, but an entire sentence that actually makes sense."}, {"start": 13.5, "end": 19.0, "text": " If you were spellbound by those results, you'll be out of your mind when you hear this one."}, {"start": 19.0, "end": 28.0, "text": " Let's turn it around and ask a neural network to have a sentence as an input, and ask it to generate images according to it."}, {"start": 28.0, "end": 35.0, "text": " Not fetching already existing images from somewhere, generating new images according to these sentences."}, {"start": 35.0, "end": 40.0, "text": " Is this for real? This is an idea that is completely out of this world."}, {"start": 40.0, "end": 50.0, "text": " A few years ago, if someone proposed such an idea and hoped that any useful result can come out of this, that person would have immediately been transported to an asylum."}, {"start": 50.0, "end": 58.0, "text": " The important keyword here is zero-shot recognition. Before we go to the zero part, let's talk about one-shot learning."}, {"start": 58.0, "end": 65.0, "text": " One-shot learning means a class of techniques that can learn something from one or at most a handful of examples."}, {"start": 65.0, "end": 73.0, "text": " Deep neural networks typically require to see hundreds of thousands of mugs before they can learn the concept of a mug."}, {"start": 73.0, "end": 80.0, "text": " However, if I show one mug to any of you fellow scholars, you will, of course, immediately get the concept of a mug."}, {"start": 80.0, "end": 88.0, "text": " At this point, it is amazing what these deep neural networks can do, but with the current progress in this area, I'm convinced that in a few years,"}, {"start": 88.0, "end": 96.0, "text": " feeding millions of examples to a deep neural network to learn such a simple concept will be considered a crime."}, {"start": 96.0, "end": 103.0, "text": " On to zero-shot recognition. The zero-shot is pretty simple. It means zero training samples."}, {"start": 103.0, "end": 111.0, "text": " But this sounds preposterous. What it actually means is that we can train our neural network to recognize birds, tiny things."}, {"start": 111.0, "end": 120.0, "text": " What the concept of blue is, what a crown is. But then we ask it to show us an image of a tiny bird with a blue crown."}, {"start": 120.0, "end": 128.0, "text": " Essentially, the neural network learns to combine these concepts together and generate new images leaning on these learned concepts."}, {"start": 128.0, "end": 139.0, "text": " And I think this paper is a wonderful testament as to why two-minute papers is such a strident advocate of deep learning and why more people should know about these extraordinary works."}, {"start": 139.0, "end": 145.0, "text": " About the paper, it is really well written. There are quite a few treats in there for scientists."}, {"start": 145.0, "end": 150.0, "text": " Game theory and minimax optimization among other things. Cupcakes for my brain."}, {"start": 150.0, "end": 155.0, "text": " We will definitely talk about these topics in later two-minute paper episodes. 
Stay tuned."}, {"start": 155.0, "end": 160.0, "text": " But for now, you shouldn't only read the paper, you should devour it."}, {"start": 160.0, "end": 163.0, "text": " And before we go, let's address the elephant in the room."}, {"start": 163.0, "end": 168.0, "text": " The output images are tiny because this technique is very expensive to compute."}, {"start": 168.0, "end": 178.0, "text": " Prediction, two papers down the line, it will be done in a matter of seconds. Two even more papers down the line, it will do animations in full HD."}, {"start": 178.0, "end": 183.0, "text": " Until then, I'll sit here stunned by the results and just frown and wonder."}, {"start": 183.0, "end": 198.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=JKYQOAZRZu4
Rocking Out With Convolutions | Two Minute Papers #73
Many university students have a hard time understanding convolutions. Today, we're going to talk about many cool and useful applications of convolutions, with a bit of intuition on how the computation is done. Among other cool applications, it turns out we can add very convincing reverberation effects to our guitars by computing convolutions. __________________________ Immersive Math: http://immersivemath.com/ila/index.html Separable Subsurface Scattering: https://www.youtube.com/watch?v=72_iAlYwl0c Convolutions and Gaussian blur image source - Wikipedia WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by Dustin Gaffke - https://flic.kr/p/nKy4EK Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér, and today we are here to answer one question. What is a convolution? I have heard many university students crying in despair over their perilous journeys of understanding what convolutions are and why they are useful. Let me give a helping hand there. A convolution is a mathematical technique to mix together two signals. A lot of really useful tasks can be accomplished through this operation. For instance, convolutions can be used to add reverberation to a recorded instrument. So I play my guitar here in my room, and it can sound as if it were recorded in a large concert hall. Now, dear Fellow Scholars, put on a pair of headphones and let me rock out on my guitar to show it to you. First, you'll hear the dry guitar signal. And this is the same signal with the added reverberation. It sounds much more convincing, right? In simple words, a convolution is a bit like saying guitar plus concert hall equals a guitar sound that was played in a concert hall. The only difference is that we don't say guitar plus concert hall. We say guitar convolved with the concert hall. If we want to be a bit more accurate, we would say that the guitar is convolved with the impulse response of the hall, which records how this place reacts to a person who starts to play the guitar in there. People use the living hell out of convolution reverberation plugins in the music industry. Convolutions can also be used to blur or sharpen an image. We have also seen many examples of convolutional neural networks that provide efficient means to, for instance, get machines to recognize traffic signs. We can also use them to add sophisticated light transport effects, such as subsurface scattering, to images. This way we can conjure up digital characters with stunningly high quality skin and other translucent materials in our animations and computer games. We have had a previous episode on this, and it is available in the video description box, make sure to have a look. As we said before, computing a convolution is not at all like addition. Not even close. For instance, the convolution of two boxes is a triangle. Wow. What? What kind of witchcraft is this? It doesn't sound intuitive at all. The computation of the convolution means that we start to push this box over the other one, and at every point in time we take a look at the intersection between the two signals. As you can see, at first they don't touch at all. Then, as they start to overlap, we have highlighted the intersected area with green, and as they get closer to each other, this area increases. When they are completely overlapped, we get the maximum intersection area, which then starts to dwindle as they separate. It is a miracle of mathematics that by computing things like this, we can rock out in a virtual church or a stadium, which sounds very close to the real deal. And before we go, a quick shout out to Immersive Math, a really intuitive resource for learning linear algebra. If you're into math, you simply have to check this one out. It's really cool. Thanks for watching and for your generous support, and I'll see you next time.
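Both claims are easy to verify in a few lines of Python. This is a minimal sketch only: the random arrays below are stand-ins for a real recording and a real measured impulse response, which are of course not included here.

```python
import numpy as np

# The convolution of two boxes is a triangle:
box = np.ones(5)
print(np.convolve(box, box))
# -> [1. 2. 3. 4. 5. 4. 3. 2. 1.]  (ramps up, peaks at full overlap, ramps down)

# Convolution reverb works the same way: convolve the dry signal with the
# measured impulse response of the hall. These arrays are random stand-ins.
dry_guitar = np.random.randn(44100)     # one second of "audio" at 44.1 kHz
hall_response = np.random.randn(22050)  # half a second of "impulse response"
wet_guitar = np.convolve(dry_guitar, hall_response)
```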
[{"start": 0.0, "end": 5.92, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r, and today we are"}, {"start": 5.92, "end": 8.44, "text": " here to answer one question."}, {"start": 8.44, "end": 10.16, "text": " What is a convolution?"}, {"start": 10.16, "end": 16.0, "text": " I have heard many university students crying in despair over their perilous journeys of"}, {"start": 16.0, "end": 20.080000000000002, "text": " understanding what convolutions are and why they are useful."}, {"start": 20.080000000000002, "end": 21.8, "text": " Let me give a helping hand there."}, {"start": 21.8, "end": 26.84, "text": " A convolution is a mathematical technique to mix together two signals."}, {"start": 26.84, "end": 31.28, "text": " A lot of really useful tasks can be accomplished through this operation."}, {"start": 31.28, "end": 37.24, "text": " For instance, convolutions can be used to add reverberation to a recorded instrument."}, {"start": 37.24, "end": 42.84, "text": " So I play my guitar here in my room and it can sound like as if it were recorded in a large"}, {"start": 42.84, "end": 44.24, "text": " concert hall."}, {"start": 44.24, "end": 50.2, "text": " Now, dear Fellow Scholars, put on a pair of headphones and let me rock out on my guitar to show"}, {"start": 50.2, "end": 51.2, "text": " it to you."}, {"start": 51.2, "end": 60.52, "text": " First, you'll hear the dry guitar signal."}, {"start": 60.52, "end": 72.52000000000001, "text": " And this is the same signal with the added reverberation."}, {"start": 72.52000000000001, "end": 75.44, "text": " It sounds much more convincing, right?"}, {"start": 75.44, "end": 81.8, "text": " In simple words, a convolution is a bit like saying guitar plus concert hall equals a"}, {"start": 81.8, "end": 85.6, "text": " guitar sound that was played in a concert hall."}, {"start": 85.6, "end": 90.16, "text": " The only difference is that we don't say guitar plus concert hall."}, {"start": 90.16, "end": 93.96, "text": " We say guitar convolved with the concert hall."}, {"start": 93.96, "end": 98.72, "text": " If we want to be a bit more accurate, we could say that the impulse response of the hall,"}, {"start": 98.72, "end": 103.88, "text": " which records how this place reacts to a person who starts to play the guitar in there."}, {"start": 103.88, "end": 109.83999999999999, "text": " People use the living hell out of convolution reverberation plugins in the music industry."}, {"start": 109.83999999999999, "end": 113.56, "text": " Convolutions can also be used to blur or sharpen an image."}, {"start": 113.56, "end": 118.28, "text": " We also had many examples of convolution on your own networks that provide efficient"}, {"start": 118.28, "end": 123.44, "text": " means to, for instance, get machines to recognize traffic signs."}, {"start": 123.44, "end": 128.51999999999998, "text": " We can also use them to add sophisticated light transport effects, such as subsurface"}, {"start": 128.51999999999998, "end": 130.44, "text": " scattering to images."}, {"start": 130.44, "end": 136.12, "text": " This way we can conjure up digital characters with stunningly high quality skin and other"}, {"start": 136.12, "end": 140.07999999999998, "text": " translucent materials in our animations and computer games."}, {"start": 140.07999999999998, "end": 145.0, "text": " We have had a previous episode on this, and it is available in the video description box,"}, {"start": 145.0, "end": 146.35999999999999, "text": " make sure to have a 
look."}, {"start": 146.35999999999999, "end": 151.68, "text": " As we said before, computing a convolution is not at all like addition."}, {"start": 151.68, "end": 153.07999999999998, "text": " Not even close."}, {"start": 153.07999999999998, "end": 158.16, "text": " For instance, the convolution of two boxes is a triangle."}, {"start": 158.16, "end": 159.16, "text": " Wow."}, {"start": 159.16, "end": 160.16, "text": " What?"}, {"start": 160.16, "end": 162.24, "text": " What kind of witchcraft is this?"}, {"start": 162.24, "end": 164.48, "text": " It doesn't sound intuitive at all."}, {"start": 164.48, "end": 169.51999999999998, "text": " The computation of the convolution means that we start to push this box over the other"}, {"start": 169.51999999999998, "end": 175.64, "text": " one, and at every point in time we take a look at the intersection between the two signals."}, {"start": 175.64, "end": 179.4, "text": " As you can see, at first they don't touch at all."}, {"start": 179.4, "end": 184.88, "text": " Then, as they start to overlap, we have highlighted the intersected area with green, and as"}, {"start": 184.88, "end": 188.72, "text": " they get closer to each other, this area increases."}, {"start": 188.72, "end": 193.2, "text": " And they are completely overlapped, we get the maximum intersection area, which then"}, {"start": 193.2, "end": 195.92, "text": " starts to dwindle as they separate."}, {"start": 195.92, "end": 201.6, "text": " It is a miracle of mathematics that by computing things like this, we can rock out in a virtual"}, {"start": 201.6, "end": 206.07999999999998, "text": " church or a stadium, which sounds very close to the real deal."}, {"start": 206.07999999999998, "end": 211.88, "text": " And before we go, a quick shout out to immersive math, a really intuitive resource for learning"}, {"start": 211.88, "end": 212.88, "text": " linear algebra."}, {"start": 212.88, "end": 216.52, "text": " If you're into math, you simply have to check this one out."}, {"start": 216.52, "end": 217.88, "text": " It's really cool."}, {"start": 217.88, "end": 221.76, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=1PNhuHa7lS0
Reinforcement Learning with OpenAI's Gym | Two Minute Papers #72
OpenAI's Gym is available here: https://gym.openai.com/ OpenAI - Non-profit AI company by Elon Musk and Sam Altman https://www.youtube.com/watch?v=AbcRlDBnwjM Google DeepMind's paper "Unifying Count-Based Exploration and Intrinsic Motivation" and video on reinforcement learning and curiosity: https://arxiv.org/pdf/1606.01868v1.pdf https://www.youtube.com/watch?v=0yI2wJ6F8r0 Link to the mentioned research project at Experiment: 1. https://experiment.com/projects/opening-your-mind-s-eye-collaborating-with-a-computer-to-reveal-visual-imagination?s=discover 2. https://experiment.com/projects/yvgjmnuxsnavvjuhxzwf WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image is licensed under CC0 and is available here: https://pixabay.com/en/dumbbell-training-fitness-room-940375/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is a technique in the field of machine learning to learn how to navigate a labyrinth, play a video game, or to teach a digital creature to walk. Usually, we are interested in a series of actions that are in some sense optimal in a given environment. Despite the fact that many enormous tomes exist to discuss the mathematical details, the intuition behind the algorithm itself is incredibly simple. Choose an action, and if you get rewarded for it, keep doing it. If the rewards are not coming, try something else. The reward can be, for instance, our score in a computer game, or how far our digital creature could walk. It is usually quite difficult to learn things where the reward comes long after our action, because we don't know when exactly the point was when we did something well. This is one of the reasons why Google DeepMind will try to conquer strategy games in the future, because this is a genre where good plays usually include long-term planning, which reinforcement learning techniques don't really excel at. By the way, this just in: they have just published an excellent paper on including curiosity in this equation in a way that helps long-term planning remarkably. As more techniques pop up in this direction, it is getting abundantly clear that we need a framework where they can undergo stringent testing. This means that the amount of collected rewards and scores should be computed the same way and in the same physical framework. OpenAI is a non-profit company boasting an impressive roster of top-tier researchers who embarked on the quest to develop open and ethical artificial intelligence techniques. We've had a previous episode on this when the company was freshly founded, and as you might have guessed, the link is available in the description box. They have recently published their first major project that goes by the name Gym. Gym is a unified framework that puts reinforcement learning techniques on an equal footing. Anyone can submit their solutions, which are run on the same problems, and as a nice bit of gamification, leaderboards are established to see which technique emerges victorious. These environments range from a variety of computer games to different balancing tasks. Some simpler reference solutions are also provided for many of them as a starting point. This place is like Disney World for someone who is excited about the field of reinforcement learning. With more and more techniques, this subfield gets more saturated. It gets more and more difficult to be the first at something. That's a great challenge for researchers. From a consumer point of view, this means that better techniques will pop up day by day. And, as I like to say quite often, we have really exciting times ahead of us. A quick shout out to Experiment, a startup to help research projects come to fruition by crowdfunding them. Current experiments include really cool projects like how we could implement better anti-doping policies for professional sports, or how to show on a computer screen how our visual imagination works. Thanks for watching and for your generous support, and I'll see you next time.
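The core interaction loop in Gym is only a few lines. The sketch below follows the classic API from the time of this episode, where reset() returns an observation and step() returns (observation, reward, done, info); later Gym releases changed these signatures. The random policy is the "try something" half of the intuition above.

```python
import gym

env = gym.make("CartPole-v0")   # a simple balancing task
observation = env.reset()

total_reward = 0.0
for _ in range(200):
    action = env.action_space.sample()                 # try a random action
    observation, reward, done, info = env.step(action)
    total_reward += reward                             # keep score of rewards
    if done:                                           # pole fell: start over
        observation = env.reset()

env.close()
print("collected reward:", total_reward)
```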
[{"start": 0.0, "end": 4.5, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejolai Fehir."}, {"start": 4.5, "end": 11.0, "text": " Reinforcement learning is a technique in the field of machine learning to learn how to navigate an elaborate,"}, {"start": 11.0, "end": 15.5, "text": " play a video game, or to teach a digital creature to walk."}, {"start": 15.5, "end": 22.0, "text": " Usually, we are interested in a series of actions that are in some sense optimal in a given environment."}, {"start": 22.0, "end": 28.0, "text": " Despite the fact that many enormous tombs exist to discuss the mathematical details,"}, {"start": 28.0, "end": 32.5, "text": " the intuition behind the algorithm itself is incredibly simple."}, {"start": 32.5, "end": 36.5, "text": " Choose an action, and if you get rewarded for it, keep doing it."}, {"start": 36.5, "end": 39.5, "text": " If the rewards are not coming, try something else."}, {"start": 39.5, "end": 43.5, "text": " The reward can be, for instance, our score in a computer game,"}, {"start": 43.5, "end": 46.5, "text": " or how far our digital creature could walk."}, {"start": 46.5, "end": 52.0, "text": " It is usually quite difficult to learn things where the reward comes long after our action,"}, {"start": 52.0, "end": 56.0, "text": " because we don't know when exactly the point was when we did something well."}, {"start": 56.0, "end": 61.5, "text": " This is one of the reasons why Google DeepMind will try to conquer strategy games in the future,"}, {"start": 61.5, "end": 66.5, "text": " because this is a genre where good plays usually include long-term planning"}, {"start": 66.5, "end": 70.0, "text": " that reinforcement learning techniques don't really excel at."}, {"start": 70.0, "end": 76.5, "text": " By the way, this just in, they have just published an excellent paper on including curiosity in this equation"}, {"start": 76.5, "end": 79.5, "text": " in a way that helps long-term planning remarkably."}, {"start": 79.5, "end": 82.0, "text": " As more techniques pop up in this direction,"}, {"start": 82.0, "end": 88.0, "text": " it is getting abundantly clear that we need a framework where they can undergo stringent testing."}, {"start": 88.0, "end": 96.0, "text": " This means that the amount of collected rewards and scores should be computed the same way and in the same physical framework."}, {"start": 96.0, "end": 102.0, "text": " OpenAI is a non-profit company boasting an impressive roster of top-tier researchers"}, {"start": 102.0, "end": 108.0, "text": " who embarked on the quest to develop open and ethical artificial intelligence techniques."}, {"start": 108.0, "end": 112.0, "text": " We've had a previous episode on this when the company was freshly founded,"}, {"start": 112.0, "end": 116.5, "text": " and as you might have guessed, the link is available in the description box."}, {"start": 116.5, "end": 122.0, "text": " They have recently published their first major project that goes by the name Jim."}, {"start": 122.0, "end": 128.0, "text": " Jim is a unified framework that puts reinforcement learning techniques on an equal footing."}, {"start": 128.0, "end": 132.5, "text": " Anyone can submit their solutions which are run on the same problems,"}, {"start": 132.5, "end": 139.0, "text": " and as a nice bit of gamification, leaderboards are established to see which technique emerges victorious."}, {"start": 139.0, "end": 144.5, "text": " These environments range from a variety of computer games to different balancing tasks."}, 
{"start": 144.5, "end": 149.5, "text": " Some simpler reference solutions are also provided for many of them as a starting point."}, {"start": 149.5, "end": 155.0, "text": " This place is like Disney World for someone who is excited about the field of reinforcement learning."}, {"start": 155.0, "end": 159.0, "text": " With more and more techniques, this subfield gets more saturated."}, {"start": 159.0, "end": 165.0, "text": " It gets more and more difficult to be the first at something. That's a great challenge for researchers."}, {"start": 165.0, "end": 170.5, "text": " From a consumer point of view, this means that better techniques will pop up day by day."}, {"start": 170.5, "end": 176.0, "text": " And, as I like to say quite often, we have really exciting times ahead of us."}, {"start": 176.0, "end": 183.5, "text": " A quick shout out to Experiment, a startup to help research projects come to fruition by crowdsourcing them."}, {"start": 183.5, "end": 192.5, "text": " Current experiments include really cool projects like how we could implement better anti-doping policies for professional sports,"}, {"start": 192.5, "end": 198.0, "text": " or how to show on a computer screen how our visual imagination works."}, {"start": 198.0, "end": 214.5, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=MfaTOXxA8dM
Image Colorization With Deep Learning and Classification | Two Minute Papers #71
The paper "Let there be Color!: Joint End-to-end Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification" and its implementation are available here: http://hi.cs.waseda.ac.jp/~iizuka/projects/colorization/en/ https://github.com/satoshiiizuka/siggraph2016_colorization The video classification paper by Karpathy et al.: http://cs.stanford.edu/people/karpathy/deepvideo/ Recommended for you: Artistic Style Transfer For Videos - https://www.youtube.com/watch?v=Uxax5EKg0zA Deep Learning related Two Minute Papers videos - https://www.youtube.com/playlist?list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2 WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is about adding color to black and white images. There were some previous works that tackled this problem, and many of them worked quite well, but there were cases when the results simply didn't make too much sense. For instance, the algorithm often couldn't guess what color the fur of a dog should be. If we gave the same task to a human, we could usually expect better results, because the human knows what breed the dog is and what colors are appropriate for that breed. In short, we know what is actually seen on the image, but the algorithm doesn't. It just trains on pairs of black and white and color images, and learns how colorization is usually done without any concept of what is seen on the image. So here's the idea. Let's try to get the neural network not only to colorize the image, but also to classify what is seen on the image before doing that. If we see a dog in an image, it is not likely to be pink, is it? If we know that we have to deal with a golf course, we immediately know to reach out for those green crayons. This is a novel fusion-based technique. This means that we have a separate neural network for classifying the images and one for colorizing them. The fusion part is when we unify the information in these neural networks so we can create an output that aggregates all this information. And the results are just spectacular. The additional information on what these images are about really makes a huge impact on the quality of the results. Please note that this is by far not the first work on fusion. I've also linked an earlier paper for recognizing objects in videos, but I think this is a really creative application of the same train of thought that is really worthy of our attention. To delight the fellow tinkerers out there, the source code of the project is also available. The supplementary video reveals that temporal coherence is still a problem. This means that every image is colorized separately, with no communication. It is a bit like giving the images to colorize one by one to different people with no overarching artistic direction. The result we get this way is a flickery animation. This problem has been solved for artistic style transfer, which we have discussed in an earlier episode, the link is in the description box. There was one future episode planned about plastic deformations. I have read the paper several times and it is excellent, but I felt that the quality of my presentation was not up there to put it in front of you Fellow Scholars. It may happen in the future, but I had to shelve this one for now. Please accept my apologies for that. In the next episode, we'll continue with OpenAI's great new invention for reinforcement learning. Thanks for watching and for your generous support, and I'll see you next time.
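Mechanically, such a fusion step can be as simple as broadcasting the global "what is this image about" vector to every pixel and concatenating it with the local features. This is a rough numpy sketch with made-up shapes; in the actual network the fused features are further processed by learned layers before the colors come out.

```python
import numpy as np

h, w = 28, 28                     # spatial size of the mid-level feature map
local_dim, global_dim = 256, 256  # illustrative feature sizes

# Local features: one vector per pixel, from the colorization network.
local_features = np.random.randn(h, w, local_dim)
# Global features: one vector per image, from the classification network.
global_features = np.random.randn(global_dim)

# Fusion: copy the global vector to every spatial position, concatenate.
tiled = np.broadcast_to(global_features, (h, w, global_dim))
fused = np.concatenate([local_features, tiled], axis=-1)  # shape (28, 28, 512)
```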
[{"start": 0.0, "end": 4.46, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ijona Ifehir."}, {"start": 4.46, "end": 8.26, "text": " This work is about adding color to black and white images."}, {"start": 8.26, "end": 11.16, "text": " There were some previous works that tackled this problem,"}, {"start": 11.16, "end": 17.04, "text": " and many of them worked quite well, but there were cases when the results simply didn't make too much sense."}, {"start": 17.04, "end": 22.48, "text": " For instance, the algorithm often didn't guess what color the fur of a dog should be."}, {"start": 22.48, "end": 27.560000000000002, "text": " If we would give the same task to a human, we could usually expect better results"}, {"start": 27.56, "end": 33.0, "text": " because the human knows what breed the dog is and what colors are appropriate for that breed."}, {"start": 33.0, "end": 38.0, "text": " In short, we know what is actually seen on the image, but the algorithm doesn't."}, {"start": 38.0, "end": 41.8, "text": " It just trains on black and white and color the image pairs"}, {"start": 41.8, "end": 47.14, "text": " and learns how it is usually done without any concept of what is seen on the image."}, {"start": 47.14, "end": 48.4, "text": " So here's the idea."}, {"start": 48.4, "end": 52.5, "text": " Let's try to get to neural network not only to colorize the image,"}, {"start": 52.5, "end": 55.92, "text": " but classify what is seen on the image before doing that."}, {"start": 55.92, "end": 60.0, "text": " If we see a dog in an image, it is not likely to be pink, is it?"}, {"start": 60.0, "end": 62.480000000000004, "text": " If we know that we have to deal with a golf course,"}, {"start": 62.480000000000004, "end": 66.0, "text": " we immediately know to reach out for those green crayons."}, {"start": 66.0, "end": 68.64, "text": " This is a novel fusion-based technique."}, {"start": 68.64, "end": 73.28, "text": " This means that we have a separate neural network for classifying the images"}, {"start": 73.28, "end": 75.36, "text": " and one for colorizing them."}, {"start": 75.36, "end": 79.44, "text": " The fusion part is when we unify the information in these neural networks"}, {"start": 79.44, "end": 83.6, "text": " so we can create an output that aggregates all this information."}, {"start": 83.6, "end": 86.16, "text": " And the results are just spectacular."}, {"start": 86.16, "end": 89.52, "text": " The additional information on what these images are about"}, {"start": 89.52, "end": 92.75999999999999, "text": " really make a huge impact on the quality of the results."}, {"start": 92.75999999999999, "end": 96.32, "text": " Please note that this is by far not the first work on fusion."}, {"start": 96.32, "end": 100.52, "text": " I've also linked an earlier paper for recognizing objects in videos,"}, {"start": 100.52, "end": 104.63999999999999, "text": " but I think this is a really creative application of the same train of thought"}, {"start": 104.63999999999999, "end": 106.91999999999999, "text": " that is really worthy of our attention."}, {"start": 106.91999999999999, "end": 109.28, "text": " To delight the fellow tinkerers out there,"}, {"start": 109.28, "end": 112.03999999999999, "text": " the source code of the project is also available."}, {"start": 112.04, "end": 116.36000000000001, "text": " The supplementary video reveals that temporal coherence is still a problem."}, {"start": 116.36000000000001, "end": 121.48, "text": " This means that every image is colorized 
separately with no communication."}, {"start": 121.48, "end": 125.08000000000001, "text": " It is a bit like giving the images to colorize one by one"}, {"start": 125.08000000000001, "end": 128.84, "text": " to different people with no overarching artistic direction."}, {"start": 128.84, "end": 132.0, "text": " The result will get this way is a flickery animation."}, {"start": 132.0, "end": 135.24, "text": " This problem has been solved for artistic style transfer,"}, {"start": 135.24, "end": 137.4, "text": " which we have discussed in an earlier episode,"}, {"start": 137.4, "end": 139.4, "text": " the link is in the description box."}, {"start": 139.4, "end": 143.20000000000002, "text": " There was one future episode planned about plastic deformations."}, {"start": 143.20000000000002, "end": 146.6, "text": " I have read the paper several times and it is excellent,"}, {"start": 146.6, "end": 150.4, "text": " but I felt that the quality of my presentation was not up there"}, {"start": 150.4, "end": 153.0, "text": " to put it in front of you fellow scholars."}, {"start": 153.0, "end": 156.32, "text": " It may happen in the future, but I had to shelf this one for now."}, {"start": 156.32, "end": 158.56, "text": " Please accept my apologies for that."}, {"start": 158.56, "end": 162.76, "text": " In the next episode, we'll continue with OpenAI's great new invention"}, {"start": 162.76, "end": 164.20000000000002, "text": " for reinforcement learning."}, {"start": 164.20000000000002, "end": 166.76, "text": " Thanks for watching and for your generous support,"}, {"start": 166.76, "end": 169.76, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=heY2gfXSHBo
Schrödinger's Smoke | Two Minute Papers #70
Today we will talk about Eulerian and Lagrangian smoke and fluid simulations, and how this technique incorporates a variant of Schrödinger's equation to make an excellent fluid simulator out of it. :) _________________ The paper "Schrödinger's Smoke" and its implementation is available here: http://multires.caltech.edu/pubs/SchrodingersSmoke.pdf http://multires.caltech.edu/pubs/SchrodingersSmokeCode.zip The publisher's version is expected to show up here soon: http://www.multires.caltech.edu/pubs/pubs.htm The Short Science website is available here: http://www.shortscience.org/ WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. There are two main branches of efficient smoke and fluid simulator programs: Eulerian and Lagrangian techniques. Before we dive into what these terms mean, I'd like to note that we have closed captions available for this series that you can turn on by clicking the CC button at the bottom of the player. With that out of the way, the Eulerian technique means that we have a fixed grid, and the measurement happens in the grid points only. We have no idea what happens between these grid points. It may sound counterintuitive at first because it has no notion of particles at all. With the Lagrangian technique, we have particles that move around in space, and we measure important quantities like velocity and pressure with these particles. In short: Eulerian, grids; Lagrangian, particles. Normally the problem with Eulerian simulations is that we don't know what exactly happens between the grid points, causing information to disappear in these regions. To alleviate this, they are usually combined with Lagrangian techniques, because if we can also track all these particles individually, we cannot lose any of them. The drawback is of course that we need to simulate millions of particles, which will take at least a few minutes for every frame we wish to compute. By formulating his famous equation, the Austrian physicist Erwin Schrödinger won the Nobel Prize in 1933. In case you're wondering, yes, this is the guy who forgot to feed his cat. There are two important things you should know about the Schrödinger equation. One is that it is used to describe how subatomic particles behave in time, and two, it has absolutely nothing to do with large scale fluid simulations whatsoever. The point of this work is to reformulate Schrödinger's equation in a way that it tracks the density and the velocity of the fluid in time. This way it can be integrated in a purely grid-based Eulerian fluid simulator. And we don't need to track all these individual particles one by one, but we can still keep these fine, small scale details in a way that rivals Lagrangian simulations, but without the huge additional costs. So the idea is absolutely bonkers, just the thought of doing this sounds so outlandish to me. And it works. Obstacles are also supported by this technique. Many questions still remain, such as how to mix different fluid interfaces together, and how to model the forces between them. I do not have the prescience to see the limits of the approach, but I am quite convinced that this direction holds a lot of promise for the future. I cannot wait to play with the code and see some follow-up works on this. As always, everything is linked in the video description box. The paper is not only absolutely beautifully written, but it is also a really fun paper to read. And as I read it, I really loved how a jolt of epiphany ran through me. It is a fantastic feeling when a light bulb lights up in my mind as I suddenly get to understand something. I think it is the scientist's equivalent of obtaining enlightenment. May it happen to you Fellow Scholars often during your journeys. And I get to spend quite a bit of time every day reading fine works like this. It's a good life. I'd like to give a quick shout out to this really cool website called Short Science, which is a collection of crowdsourced short summaries for scientific papers. Really cool stuff. Make sure to have a look. Thanks for watching and for your generous support and I'll see you next time.
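To make the grids-versus-particles distinction concrete, here is a toy 1-D sketch of both viewpoints. The velocity field, time step, and sizes are made up for illustration, and a real simulator of course does far more than a single advection step.

```python
import numpy as np

n, dt = 64, 0.1
grid_x = np.linspace(0.0, 1.0, n)
velocity = np.sin(2.0 * np.pi * grid_x)   # velocity sampled at grid points

# Lagrangian view: particles carry the quantities and move through space.
particles = np.random.rand(1000)
particles += np.interp(particles, grid_x, velocity) * dt

# Eulerian view (semi-Lagrangian advection): the grid stays fixed; each
# grid point looks backwards along the flow to see what arrives there.
density = np.exp(-((grid_x - 0.5) ** 2) / 0.01)   # a blob of smoke
backtraced = grid_x - velocity * dt
density = np.interp(backtraced, grid_x, density)
```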
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 5.0, "end": 9.66, "text": " There are two main branches of efficient smoke and fluid simulator programs,"}, {"start": 9.66, "end": 12.24, "text": " Ilarian and Lagrangian techniques."}, {"start": 12.24, "end": 16.580000000000002, "text": " Before we dive into what these terms mean, I'd like to note that we have close captions"}, {"start": 16.580000000000002, "end": 21.42, "text": " available for this series that you can turn on by clicking the CC button at the bottom"}, {"start": 21.42, "end": 22.42, "text": " of the player."}, {"start": 22.42, "end": 27.98, "text": " With that out of the way, the Ilarian technique means that we have a fixed grid and the measurement"}, {"start": 27.98, "end": 30.5, "text": " happens in the grid points only."}, {"start": 30.5, "end": 33.7, "text": " We have no idea what happens between these grid points."}, {"start": 33.7, "end": 38.66, "text": " It may sound counterintuitive at first because it has no notion of particles at all."}, {"start": 38.66, "end": 43.94, "text": " With the Lagrangian technique, we have particles that move around in space and we measure important"}, {"start": 43.94, "end": 47.94, "text": " quantities like velocity and pressure with these particles."}, {"start": 47.94, "end": 52.42, "text": " In short, Ilarian, grids, Lagrangian, particles."}, {"start": 52.42, "end": 57.2, "text": " Normally the problem with Ilarian simulations is that we don't know what exactly happens"}, {"start": 57.2, "end": 62.14, "text": " between the grid points, causing information to disappear in these regions."}, {"start": 62.14, "end": 67.06, "text": " To alleviate this, they are usually combined with Lagrangian techniques because if we can"}, {"start": 67.06, "end": 71.74000000000001, "text": " also track all these particles individually, we cannot lose any of them."}, {"start": 71.74000000000001, "end": 76.22, "text": " The drawback is of course that we need to simulate millions of particles which will take"}, {"start": 76.22, "end": 79.66, "text": " at least a few minutes for every frame we wish to compute."}, {"start": 79.66, "end": 85.18, "text": " By formulating his famous equation, the Austrian physicist, Erwin Schrodinger, won the Nobel"}, {"start": 85.18, "end": 87.82000000000001, "text": " Prize in 1933."}, {"start": 87.82000000000001, "end": 91.74000000000001, "text": " In case you're wondering, yes, this is the guy who forgot to fit his cat."}, {"start": 91.74000000000001, "end": 95.66000000000001, "text": " There's two important things you should know about the Schrodinger equation."}, {"start": 95.66000000000001, "end": 101.74000000000001, "text": " One is that it is used to describe how subatomic particles behave in time and two, it has"}, {"start": 101.74000000000001, "end": 106.54, "text": " absolutely nothing to do with large scale fluid simulations whatsoever."}, {"start": 106.54, "end": 111.30000000000001, "text": " The point of this work is to reformulate Schrodinger's equation in a way that it tracks"}, {"start": 111.30000000000001, "end": 114.98, "text": " the density and the velocity of the fluid in time."}, {"start": 114.98, "end": 120.46000000000001, "text": " This way it can be integrated in a purely grid-based Eulerian fluid simulator."}, {"start": 120.46000000000001, "end": 125.62, "text": " And we don't need to track all these individual particles one by one, but we can still keep"}, {"start": 125.62, "end": 
131.58, "text": " these fine, small scale details in a way that rivals Lagrangian simulations but without"}, {"start": 131.58, "end": 133.74, "text": " the huge additional costs."}, {"start": 133.74, "end": 139.38, "text": " So the idea is absolutely bonkers, just the thought of doing this sounds so outlandish"}, {"start": 139.38, "end": 140.14000000000001, "text": " to me."}, {"start": 140.14000000000001, "end": 142.02, "text": " And it works."}, {"start": 142.02, "end": 144.7, "text": " Obstacles are also supported by this technique."}, {"start": 144.7, "end": 149.85999999999999, "text": " Many questions still remain, such as how to mix different fluid interfaces together, how"}, {"start": 149.85999999999999, "end": 152.33999999999997, "text": " to model the forces between them."}, {"start": 152.33999999999997, "end": 157.14, "text": " I do not have the pressure to see the limits of the approach, but I am quite convinced that"}, {"start": 157.14, "end": 160.22, "text": " this direction holds a lot of promise for the future."}, {"start": 160.22, "end": 164.85999999999999, "text": " I cannot wait to play with the code and see some follow-up works on this."}, {"start": 164.85999999999999, "end": 168.17999999999998, "text": " As always, everything is linked in the video description box."}, {"start": 168.17999999999998, "end": 173.78, "text": " The paper is not only absolutely beautifully written, but it is also a really fun paper"}, {"start": 173.78, "end": 174.78, "text": " to read."}, {"start": 174.78, "end": 179.5, "text": " And as I read it, I really loved how a jolt of epiphany ran through me."}, {"start": 179.5, "end": 185.18, "text": " It is a fantastic feeling when a light bulb lights up in my mind as I suddenly get to understand"}, {"start": 185.18, "end": 186.18, "text": " something."}, {"start": 186.18, "end": 190.18, "text": " I think it is the scientist's equivalent of obtaining enlightenment."}, {"start": 190.18, "end": 193.74, "text": " May it happen to you fellow scholars often during your journeys."}, {"start": 193.74, "end": 198.54, "text": " And I get to spend quite a bit of time every day reading fine works like this."}, {"start": 198.54, "end": 199.74, "text": " It's a good life."}, {"start": 199.74, "end": 205.02, "text": " I'd like to give a quick shout out to this really cool website called Short Science, which"}, {"start": 205.02, "end": 210.10000000000002, "text": " is a collection of crowdsourced short summaries for scientific papers."}, {"start": 210.10000000000002, "end": 211.10000000000002, "text": " Really cool stuff."}, {"start": 211.10000000000002, "end": 212.10000000000002, "text": " Make sure to have a look."}, {"start": 212.1, "end": 238.94, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=7ymM4cG1zfQ
Storytime & Reading Comments | Two Minute Papers
We have reached one million views on the channel, and are nearing ten thousand Fellow Scholars as well! In this video, we celebrate this milestone with some storytime and I'll read some of your kind comments from my mailbox. Thanks so much everyone! My apologies, the second half of the closed captions are missing for this episode. ________________________ Video credits: Simulating Viscosity and Melting Fluids - https://www.youtube.com/watch?v=KgIrnR2O8KQ Capturing Waves of Light With Femto-photography - https://www.youtube.com/watch?v=TRNUTN01SEg How Do Genetic Algorithms Work? - https://www.youtube.com/watch?v=ziMHaGQJuSI Narrow Band Liquid Simulations - https://www.youtube.com/watch?v=nfPBT71xYVQ WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image background was created by ChristianSchd CC BY-SA 3.0 - https://en.wikipedia.org/wiki/Lecture_hall#/media/File:Hanover_Institute_Inorganic_Chemisty_Lecture_Hall.jpg Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. We have reached a lot of milestones lately. For instance, we now have over 1 million views on the channel. In the next few days, we are also hoping to hit 10,000 subscribers. To put it in perspective, on August the 20th, 2015, we had a 250 subscriber special episode where my mind was pretty much blown. That was about 9 months ago. Whoa! For the record, here's an image that supposedly contains 10,000 people. Just imagine that they all have doctoral hats and you immediately have a mental image of 10,000 of our fellow scholars. It's insanity, and I must say that I am completely blown away by your loyal support. Thanks so much to everyone for the many kind messages, comments and emails, of which I'll read quite a few in a second. It just boggles the mind to see that so many people are interested in learning more about awesome new research inventions, and I hope that it is as addictive to you as it was for me when I first got to see some of these results. A huge thank you also to our generous supporters on Patreon. I find it really amazing that we have quite a few fellow scholars out there who love the series so much that they are willing to financially help our cause. Just think about it. Especially given the fact that it's not easy to make ends meet these days. I know this all too well: being a full time researcher, doing Two Minute Papers and having a baby is extremely taxing, but I love it. I really do. And I know there are people who have it way worse. And here we have these fellow scholars who believe in this cause and are willing to help out. Thanks so much to each one of you. I'm honored to have loyal supporters like you. We are more than halfway towards our first milestone on Patreon, which means that our hardware and software costs can be completely covered with your help. As you know, we have really cool perks for our patrons. One of these perks is that, for instance, our professors can decide the order of the next episodes, and I was really surprised to see that this episode won by a record number of points. Looks like you fellow scholars are really yearning for some story time, so let's bring it on. The focus of Two Minute Papers has always been what a work is about and why it is important. It has always been more about intuition than specific details. That is the most important reason why Two Minute Papers exists, but this sounds a bit cryptic, so let me explain myself. Most materials on any scientific topic on YouTube are about details. When this big deep learning craze started and it was quite fresh, I wanted to find out what deep learning was. I haven't found a single video that was not at least 30 to 90 minutes long, and all of them talked about partial derivatives and the chain rule, but never explained what we are exactly doing and why we are doing it. It took me four days to get a good grasp, a good high level intuition of what is going on. I would have loved to have a resource that explains this four days' worth of research work in just two minutes in a way that anyone can understand. I haven't found anything, and this was just one example of many. Gandhi said, be the change that you wish to see in the world, and this is how Two Minute Papers was born. It has never been about the details, it has always been about intuition. I think this particular comment is amazing and sums it up really well. 
These videos are amazing, just the right amount of information to make you want to learn more, but enough to get a decent overview of the subject. This is exactly what Two Minute Papers is about. Get people excited about research, leave some room for exploration, and make sure the resources to do so are given in the video description box. I wanted to showcase this one comment, but I started looking through some more and I just can't stop. I hope you'll be as delighted to hear them as I am. Thank you for continuing to make these videos. Right now I'm a student, just starting to study to become a data scientist, and your videos inspire me and make me excited about the future. Keep up the good work. Quinn, I hope that these works inspire you to be just as great as the incredible researchers behind them. If you look at the end of this comment, it says: what's so great about your content is that you strike the perfect balance of duration and information. I've seen 30 minute videos that taught me less than your short thingies. Thanks so much for the comment and… Hmm, nice name, by the way. This has slowly but steadily become my favorite YouTube channel. Hmm, thank you. I'm very happy to have you around. Better explained than my professor at the university in one and a half hours. Thanks. Please don't tell this to your professor. Thank you. I'm subscribed to over 100 channels, but I look forward to your videos the most by far. Thank you for the great content. Wow. Your videos really inspire me and keep me curious about science and technology. I'm super interested in machine learning, neural networks, and artificial general intelligence. These breakthroughs in the field you share with us from week to week make me feel that a new era is coming, and I want to be a part of it. Thank you for inspiring young minds. I couldn't agree more. There are indeed so many works to be excited for. I really have a hard time containing my excitement. I don't know how you pump out so many great videos so quickly, truly impressive. In fact, I'd like to do a whole lot more, and hopefully this will be possible by doing it full time in the future. That would be really amazing. Here's a message from someone who just blazed through the entire series in one go. I hope one day we'll have so many episodes that it will be a serious life hazard to do so. Your channel is very inspiring. Now I can't think of anything else than to work on science or engineering. I hope these works will inspire you to greatness, and please make sure to write to me about all the amazing stuff you created so I can have a look. I think this one was shown to me in a forum topic about light transport. It seems to be written by a former student of one of my courses in Vienna, incognito. Hey there, and thanks for the kind words. Sometimes even the authors of the paper show up themselves, and fortunately so far not to tell me how much I've butchered their work. Some of my heroes at id Software are also watching Two Minute Papers, and the new Doom game was also shipped on time. Peter Arvai is the CEO of Prezi and is also one of my all-time heroes. In my humble opinion he is about as good of a CEO as any company could hope for. There are so many more kind messages that I have in my mailbox. Thanks so much everyone, really appreciated. I really feel like we are just starting out and we are taking the first few steps of a wonderful journey. Let's keep celebrating science together and remember, research is not only for experts, it is for everyone. 
Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 5.1000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejone Efeir."}, {"start": 5.1000000000000005, "end": 7.88, "text": " We have reached a lot of milestones lately."}, {"start": 7.88, "end": 11.94, "text": " For instance, we now have over 1 million views on the channel."}, {"start": 11.94, "end": 16.16, "text": " In the next few days, we are also hoping to hit 10,000 subscribers."}, {"start": 16.16, "end": 23.44, "text": " To put it in perspective, in 2015, August the 20th, we have had a 250 subscriber special"}, {"start": 23.44, "end": 26.48, "text": " episode where my mind was pretty much blown."}, {"start": 26.48, "end": 28.96, "text": " That was about 9 months ago."}, {"start": 28.96, "end": 29.96, "text": " Whoa!"}, {"start": 29.96, "end": 34.96, "text": " For the record, here's an image that supposedly contains 10,000 people."}, {"start": 34.96, "end": 39.34, "text": " Just imagine that they all have doctoral hats and you immediately have a mental image"}, {"start": 39.34, "end": 41.56, "text": " of 10,000 of our fellow scholars."}, {"start": 41.56, "end": 47.28, "text": " It's insanity, and I must say that I am completely blown away by your loyal support."}, {"start": 47.28, "end": 52.32, "text": " Thanks so much for everyone for the many kind messages, comments and emails of which I"}, {"start": 52.32, "end": 54.36, "text": " read quite a few in a second."}, {"start": 54.36, "end": 59.18, "text": " It just boggles the mind to see that so many people are interested in learning more about"}, {"start": 59.18, "end": 64.32, "text": " awesome new research inventions, and I hope that it is as addictive to you as it was for"}, {"start": 64.32, "end": 67.28, "text": " me when I first got to see some of these results."}, {"start": 67.28, "end": 71.0, "text": " A huge thank you also to our generous supporters on Patreon."}, {"start": 71.0, "end": 75.68, "text": " I find it really amazing that we have quite a few fellow scholars out there who love the"}, {"start": 75.68, "end": 80.2, "text": " series so much that they are willing to financially help our cause."}, {"start": 80.2, "end": 81.6, "text": " Just think about it."}, {"start": 81.6, "end": 85.6, "text": " Especially given the fact that it's not easy to make ends meet these days."}, {"start": 85.6, "end": 91.11999999999999, "text": " I know this all too well being a full time researcher, doing too many papers and having a baby"}, {"start": 91.11999999999999, "end": 94.16, "text": " is extremely taxing, but I love it."}, {"start": 94.16, "end": 95.36, "text": " I really do."}, {"start": 95.36, "end": 98.03999999999999, "text": " And I know there are people who have it way worse."}, {"start": 98.03999999999999, "end": 101.75999999999999, "text": " And here we have these fellow scholars who believe in this cause and are willing to"}, {"start": 101.75999999999999, "end": 102.75999999999999, "text": " help out."}, {"start": 102.75999999999999, "end": 104.91999999999999, "text": " Thanks so much for each one of you."}, {"start": 104.91999999999999, "end": 107.91999999999999, "text": " I'm honored to have loyal supporters like you."}, {"start": 107.92, "end": 112.6, "text": " We are more than halfway there towards our first milestone on Patreon, which means that"}, {"start": 112.6, "end": 117.12, "text": " our hardware and software costs can be completely covered with your help."}, {"start": 117.12, "end": 120.4, "text": " As you know, we have really cool perks for our patrons."}, 
{"start": 120.4, "end": 125.08, "text": " One of these perks is that for instance, our professors can decide the order of the next"}, {"start": 125.08, "end": 130.16, "text": " episodes, and I was really surprised to see that this episode won by a record number"}, {"start": 130.16, "end": 131.16, "text": " of points."}, {"start": 131.16, "end": 135.64, "text": " Looks like you fellow scholars are really yearning for some story time, so let's bring"}, {"start": 135.64, "end": 136.64, "text": " it on."}, {"start": 136.64, "end": 142.92, "text": " The focus of two minute papers has always been what a work is about and why it is important."}, {"start": 142.92, "end": 147.07999999999998, "text": " It has always been more about intuition than specific details."}, {"start": 147.07999999999998, "end": 151.64, "text": " That is the most important reason why two minute papers exist, but this sounds a bit"}, {"start": 151.64, "end": 154.39999999999998, "text": " cryptic, so let me explain myself."}, {"start": 154.39999999999998, "end": 159.23999999999998, "text": " Most materials on any scientific topic on YouTube are about details."}, {"start": 159.23999999999998, "end": 163.95999999999998, "text": " When this big deep learning range started and it was quite fresh, I wanted to find out"}, {"start": 163.95999999999998, "end": 165.51999999999998, "text": " what deep learning was."}, {"start": 165.52, "end": 171.48000000000002, "text": " I haven't found a single video that was not at least 30 to 90 minutes long, and all of"}, {"start": 171.48000000000002, "end": 176.20000000000002, "text": " them talked about partial derivatives and the chain rule, but never explained what we"}, {"start": 176.20000000000002, "end": 179.24, "text": " are exactly doing and why we are doing it."}, {"start": 179.24, "end": 183.96, "text": " It took me four days to get a good grasp, a good high level intuition of what is going"}, {"start": 183.96, "end": 184.96, "text": " on."}, {"start": 184.96, "end": 190.44, "text": " I would have loved to have a resource that explains this four days worth of research work in"}, {"start": 190.44, "end": 194.64000000000001, "text": " just two minutes in a way that anyone can understand."}, {"start": 194.64, "end": 199.16, "text": " I haven't found anything and this was just one example of many."}, {"start": 199.16, "end": 203.95999999999998, "text": " Gandhi said, be the change that you wish to see in the world, and this is how two minute"}, {"start": 203.95999999999998, "end": 205.44, "text": " papers was born."}, {"start": 205.44, "end": 209.88, "text": " It has never been about the details, it has always been about intuition."}, {"start": 209.88, "end": 214.6, "text": " I think this particular comment is amazing and sums it up really well."}, {"start": 214.6, "end": 219.39999999999998, "text": " These videos are amazing, just the right amount of information to make you want to learn"}, {"start": 219.39999999999998, "end": 223.6, "text": " more, but enough to get a decent overview of the subject."}, {"start": 223.6, "end": 226.56, "text": " This is exactly what two minute papers is about."}, {"start": 226.56, "end": 231.4, "text": " Get people excited about research, leave some room for exploration, and make sure the"}, {"start": 231.4, "end": 235.16, "text": " resources to do so are given in the video description box."}, {"start": 235.16, "end": 240.07999999999998, "text": " I wanted to showcase this one comment, but I started looking through some more and I just"}, {"start": 
240.07999999999998, "end": 241.07999999999998, "text": " can't stop."}, {"start": 241.07999999999998, "end": 244.64, "text": " I hope you'll be as delighted to hear them as I am."}, {"start": 244.64, "end": 247.07999999999998, "text": " Thank you for continuing to make these videos."}, {"start": 247.07999999999998, "end": 252.28, "text": " Right now I'm a student, just starting to study to become a data scientist and your videos"}, {"start": 252.28, "end": 255.48, "text": " inspire me and make me excited about the future."}, {"start": 255.48, "end": 257.0, "text": " Keep up the good work."}, {"start": 257.0, "end": 262.48, "text": " Quinn, I hope that these works inspire you to be just as great as the incredible researchers"}, {"start": 262.48, "end": 264.36, "text": " behind these works."}, {"start": 264.36, "end": 268.56, "text": " If you look at the end of this comment, it says, what's so great about your content"}, {"start": 268.56, "end": 272.64, "text": " is that you strike the perfect balance of duration and information."}, {"start": 272.64, "end": 277.4, "text": " I've seen 30 minute videos that taught me less than your short thingies."}, {"start": 277.4, "end": 279.2, "text": " Thanks so much for the comment and\u2026"}, {"start": 279.2, "end": 281.64, "text": " Hmm, nice name, by the way."}, {"start": 281.64, "end": 286.24, "text": " This has slowly but steadily become my favorite YouTube channel."}, {"start": 286.24, "end": 288.03999999999996, "text": " Hmm, thank you."}, {"start": 288.03999999999996, "end": 290.36, "text": " I'm very happy to have you around."}, {"start": 290.36, "end": 295.36, "text": " Better explained than my professor at the university in one and a half hours."}, {"start": 295.36, "end": 296.36, "text": " Thanks."}, {"start": 296.36, "end": 299.59999999999997, "text": " Please don't tell this to your professor."}, {"start": 299.59999999999997, "end": 300.84, "text": " Thank you."}, {"start": 300.84, "end": 306.08, "text": " I'm subscribed to over 100 channels, but I look forward to your videos the most by"}, {"start": 306.08, "end": 307.08, "text": " far."}, {"start": 307.08, "end": 309.2, "text": " Thank you for the great content."}, {"start": 309.2, "end": 311.03999999999996, "text": " Wow."}, {"start": 311.04, "end": 315.56, "text": " Your videos really inspire me and keep me curious about science and technology."}, {"start": 315.56, "end": 320.76000000000005, "text": " I'm super interested in machine learning, neural networks, and artificial general intelligence."}, {"start": 320.76000000000005, "end": 324.56, "text": " These breakthroughs in the field you share with us from week to week makes me feel that"}, {"start": 324.56, "end": 328.12, "text": " a new era is coming, and I want to be a part of it."}, {"start": 328.12, "end": 330.32000000000005, "text": " Thank you for inspiring young minds."}, {"start": 330.32000000000005, "end": 331.64000000000004, "text": " I couldn't agree more."}, {"start": 331.64000000000004, "end": 334.8, "text": " There are indeed so many works to be excited for."}, {"start": 334.8, "end": 338.36, "text": " I really have a hard time containing my excitement."}, {"start": 338.36, "end": 343.64, "text": " I don't know how you pump out so many great videos so quickly, truly impressive."}, {"start": 343.64, "end": 348.12, "text": " In fact, I'd like to do a whole lot more and hopefully this will be possible by doing"}, {"start": 348.12, "end": 349.88, "text": " it full time in the future."}, {"start": 349.88, "end": 
352.28000000000003, "text": " That would be really amazing."}, {"start": 352.28000000000003, "end": 357.8, "text": " Here's a message from someone who just blazed through the entire series in one go."}, {"start": 357.8, "end": 362.64, "text": " I hope one day will have so many episodes that it will be a serious life hazard to do"}, {"start": 362.64, "end": 363.64, "text": " so."}, {"start": 363.64, "end": 365.56, "text": " Your channel is very inspiring."}, {"start": 365.56, "end": 370.2, "text": " Now I can't think of anything else than to work on science or engineering."}, {"start": 370.2, "end": 375.28000000000003, "text": " I hope these works will inspire you to greatness and please make sure to write to me about"}, {"start": 375.28000000000003, "end": 379.24, "text": " all the amazing stuff you created so I can have a look."}, {"start": 379.24, "end": 383.84000000000003, "text": " I think this one was shown to me in a forum topic about light transport."}, {"start": 383.84000000000003, "end": 389.24, "text": " It seems to be written by a former student of one of my courses in Vienna in Incognito."}, {"start": 389.24, "end": 392.16, "text": " Hey there, and thanks for the kind words."}, {"start": 392.16, "end": 397.6, "text": " Sometimes even the authors of the paper show up themselves and fortunately so far not"}, {"start": 397.6, "end": 400.48, "text": " to tell me how much I've butchered their work."}, {"start": 400.48, "end": 406.48, "text": " Some of my heroes at ID Software are also watching too many papers and the new Doom game"}, {"start": 406.48, "end": 408.84000000000003, "text": " was also shipped on time."}, {"start": 408.84000000000003, "end": 414.32000000000005, "text": " Peter Arvay is the CEO of Prezi and is also one of my all-time heroes."}, {"start": 414.32000000000005, "end": 420.0, "text": " In my humble opinion he is about as good of a CEO as any company could hope for."}, {"start": 420.0, "end": 423.52, "text": " There are so many more kind messages that I have in my mailbox."}, {"start": 423.52, "end": 426.6, "text": " Thanks so much everyone, really appreciated."}, {"start": 426.6, "end": 431.68, "text": " I really feel like we are just starting out and we are taking the first few steps of"}, {"start": 431.68, "end": 433.28, "text": " a wonderful journey."}, {"start": 433.28, "end": 439.36, "text": " Let's keep celebrating science together and remember, research is not only for experts,"}, {"start": 439.36, "end": 441.24, "text": " it is for everyone."}, {"start": 441.24, "end": 452.68, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=-rf_MDh-FiE
Surface-Only Liquids | Two Minute Papers #69
The paper "Surface-Only Liquids" is available here: http://www.cs.columbia.edu/cg/surfaceliquids/ WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by J. Frog - https://flic.kr/p/9Ruz12 Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Most of the techniques we've seen in previous fluid papers run the simulation inside the entire volume of the fluid. These traditional techniques scale poorly with the size of our simulation. But wait, as we haven't talked about scaling before, what does this scaling thing really mean? Favorable scaling means that if we have a bigger simulation, we don't have to wait much longer for it. Scaling is fairly normal if we have a simulation twice as big and we need to wait about twice as much. Poor scaling can give us extraordinarily bad deals, such as waiting 10 or more times as much for a simulation that is only twice as big. Fortunately, a new class of algorithms is slowly emerging that tries to focus more resources on computing what happens near the surface of the liquid and tries to get away with as little as possible inside the volume. This piece of work shows that most of the time we can get away with not doing computations inside the volume of the fluid, but only on the surface. This surface-only technique scales extremely well compared to traditional techniques that simulate the entire volume. If a piece of fluid were an apple, we'd only have to eat the peel and not the whole apple. It's a lot less chewing, right? As a result, the chewing, or the computation if you will, typically takes seconds per image instead of minutes. A previous technique on narrow band fluid simulations computed the important physical properties near the surface, but in this case, we compute not near the surface, but only on the surface. The difference sounds subtle, but it requires a completely different mathematical background. To make such a technique work, we have to make simplifications to the problem. For instance, one of the simplifications is to make the fluids incompressible. This means that the density of the fluid is not allowed to change. The resulting technique supports simulating a variety of cases such as dripping water, droplet and crown splashes, fluid chains, and sheet flapping. I was spellbound by the mathematics written in the paper, which is both crystal clear and beautiful in its flamboyance. This one is such a spectacular paper. It is so good I had it on my tablet and couldn't wait to get on the train so I could finally read it. The main limitation of the technique is that it is not that useful if we have a large surface to volume ratio, simply because the peel is still a large amount compared to the volume of our apple. We need it the other way around for this technique to be useful, which is true in many cases. Thanks for watching and for your generous support and I'll see you next time.
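The apple-peel analogy maps directly onto the scaling argument: for a cubic blob of fluid resolved with n cells per axis, a full-volume solver touches on the order of n^3 elements, while a surface-only solver touches on the order of n^2. A back-of-the-envelope sketch of that gap (my own illustration, not from the paper; it assumes cost is simply proportional to element count):

    # Doubling the resolution multiplies volumetric work by ~8x,
    # but surface-only work by only ~4x.
    for n in (64, 128, 256, 512):
        volume_cells = n ** 3        # what a full-volume solver must process
        surface_cells = 6 * n ** 2   # boundary cells of the same cube
        print(f"n={n:4d}  volume={volume_cells:>13,}"
              f"  surface={surface_cells:>10,}"
              f"  ratio={volume_cells / surface_cells:7.1f}x")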
[{"start": 0.0, "end": 5.2, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Efehir."}, {"start": 5.2, "end": 10.16, "text": " Most of the techniques we've seen in previous fluid papers run the simulation inside the"}, {"start": 10.16, "end": 12.44, "text": " entire volume of the fluids."}, {"start": 12.44, "end": 16.68, "text": " These traditional techniques scale poorly with the size of our simulation."}, {"start": 16.68, "end": 21.64, "text": " But wait, as we haven't talked about scaling before, what does this scaling thing really"}, {"start": 21.64, "end": 22.64, "text": " mean?"}, {"start": 22.64, "end": 27.240000000000002, "text": " Favorable scaling means that if we have a bigger simulation, we don't have to wait longer"}, {"start": 27.240000000000002, "end": 28.240000000000002, "text": " for it."}, {"start": 28.24, "end": 33.68, "text": " Scaling is fairly normal if we have a simulation twice as big and we need to wait about twice"}, {"start": 33.68, "end": 34.76, "text": " as much."}, {"start": 34.76, "end": 40.76, "text": " Poor scaling can give us extraordinarily bad deals, such as waiting 10 or more times as much"}, {"start": 40.76, "end": 43.56, "text": " for a simulation that is only twice as big."}, {"start": 43.56, "end": 49.04, "text": " Fortunately, a new class of algorithms is slowly emerging that try to focus more resources"}, {"start": 49.04, "end": 54.28, "text": " on computing what happens near the surface of the liquid and try to get away with as little"}, {"start": 54.28, "end": 57.16, "text": " less possible inside of the volume."}, {"start": 57.16, "end": 61.559999999999995, "text": " This piece of work shows that most of the time we can get away with not doing computations"}, {"start": 61.559999999999995, "end": 65.72, "text": " inside the volume of the fluid, but only on the surface."}, {"start": 65.72, "end": 70.39999999999999, "text": " This surface only techniques scales extremely well compared to traditional techniques that"}, {"start": 70.39999999999999, "end": 72.6, "text": " simulate the entire volume."}, {"start": 72.6, "end": 77.75999999999999, "text": " If a piece of fluid were an apple, we'd only have to eat the peel and not the whole apple."}, {"start": 77.75999999999999, "end": 79.96, "text": " It's a lot less chewing, right?"}, {"start": 79.96, "end": 85.32, "text": " As a result, the chewing or the computation, if you will, typically takes seconds per image"}, {"start": 85.32, "end": 86.75999999999999, "text": " instead of minutes."}, {"start": 86.76, "end": 91.4, "text": " A previous technique on narrow band fluid simulations computed the important physical"}, {"start": 91.4, "end": 97.68, "text": " properties near the surface, but in this case, we compute not near the surface, but only"}, {"start": 97.68, "end": 99.2, "text": " on the surface."}, {"start": 99.2, "end": 104.08000000000001, "text": " The difference sounds subtle, but it makes a completely different mathematical background."}, {"start": 104.08000000000001, "end": 108.4, "text": " To make such a technique work, we have to make simplifications to the problem."}, {"start": 108.4, "end": 113.24000000000001, "text": " For instance, one of the simplifications is to make the fluids incompressible."}, {"start": 113.24, "end": 117.0, "text": " This means that the density of the fluid is not allowed to change."}, {"start": 117.0, "end": 122.32, "text": " The resulting technique supports simulating a variety of cases such as dripping water,"}, {"start": 122.32, 
"end": 127.0, "text": " droplet and crown splashes, fluid chains, and sheet flapping."}, {"start": 127.0, "end": 132.35999999999999, "text": " I was spellbound by the mathematics written in the paper that is both crystal clear and"}, {"start": 132.35999999999999, "end": 134.84, "text": " beautiful in its flamboyancy."}, {"start": 134.84, "end": 137.92, "text": " This one is such a spectacular paper."}, {"start": 137.92, "end": 143.16, "text": " It is so good I had it on my tablet and couldn't wait to get on the train."}, {"start": 143.16, "end": 144.92, "text": " So I could finally read it."}, {"start": 144.92, "end": 149.68, "text": " The main limitation of the technique is that it is not that useful if we have a large surface"}, {"start": 149.68, "end": 155.16, "text": " to volume ratio, simply because the peel is still a large amount compared to the volume"}, {"start": 155.16, "end": 156.6, "text": " of our apple."}, {"start": 156.6, "end": 161.72, "text": " We needed the other way around for this technique to be useful, which is true in many cases."}, {"start": 161.72, "end": 173.2, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Uxax5EKg0zA
Artistic Style Transfer For Videos | Two Minute Papers #68
Artificial neural networks were inspired by the human brain and simulate how neurons behave when they are shown a sensory input (e.g., images, sounds, etc). They are known to be excellent tools for image recognition, and many other problems beyond that - they also excel at weather predictions, breast cancer cell mitosis detection, brain image segmentation and toxicity prediction among many others. Deep learning means that we use an artificial neural network with multiple layers, making it even more powerful for more difficult tasks. This time they have been shown to be apt at reproducing the artistic style of many famous painters, such as Vincent Van Gogh and Pablo Picasso among many others. All the user needs to do is provide an input photograph and a target image from which the artistic style will be learned. And now, onto the next frontier: transferring artistic style to videos! _________ The paper "Artistic style transfer for videos" is available here: http://arxiv.org/abs/1604.08610 The implementation of this technique is also available: https://github.com/manuelruder/artistic-videos Recommended for you: Deep Neural Network Learns Van Gogh's Art - https://www.youtube.com/watch?v=-R9bJGNHltQ Deep Learning Program Learns to Paint - https://www.youtube.com/watch?v=UGAzi1QBVEg From Doodles To Paintings With Deep Learning - https://www.youtube.com/watch?v=jMZqxfTls-0 Sintel Movie copyright: Blender Foundation https://durian.blender.org/sharing/ WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was taken from the corresponding paper. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two minute papers with Károly Zsolnai-Fehér. We have previously talked about a technique that used a deep neural network to transfer the artistic style of a painting to an arbitrary image, for instance, to a photograph. As always, if you're not familiar with some of these terms, we have discussed them in previous episodes and links are available in the description box, make sure to check them out. Style transfer is possible on still images, but as there's currently no technique to apply this to videos, it is hopefully abundantly clear that a lot of potential still lies dormant inside. But can we apply this artistic style transfer to videos? Would it work if we simply tried? For an experienced researcher, it is flagrantly obvious that it's an understatement to say that it wouldn't work. It would fail in a spectacular manner, as you can see here. But with this technique, it apparently works quite well. To be frank, the results look gorgeous. So how does it work? Now, don't be afraid, you'll be presented with a concise but deliberately obscure statement. This technique preserves temporal coherence when applying the artistic style by incorporating the optical flow of the input video. Now, the only question is what temporal coherence and optical flow mean. Temporal coherence is a term that was used by physicists to describe, for instance, how the behavior of a wave of light changes or stays the same if we observe it at different times. In computer graphics, it is also an important term because oftentimes we have techniques that we can apply to one image, but not necessarily to a video, because the behavior of the technique changes drastically from frame to frame, introducing a disturbing flickering effect that you can see in this video here. We have the same problem if we do the artistic style transfer, because there is no communication between the individual images of the video. The technique has no idea that most of the time we are looking at the same things, and if so, the artistic style would have to be applied the same way over and over to these regions. We are clearly lacking temporal coherence. Now, onto optical flows. Imagine a flying drone that takes a series of photographs while hovering and looking around above us. To write sophisticated navigation algorithms, the drone would have to know which object is which across many of these photographs. If it has slightly turned, most of what we see is the same and only a small part of this new image is new information. But the computer doesn't know that, as all it sees is a bunch of pixels. Optical flow algorithms help us achieve this by describing the possible motions that give us photograph B from photograph A. In this application, what this means is that there is some interframe communication. The algorithm will know that if I colored this person this way a moment ago, I cannot drastically change the style of that region on a whim. It is now easy to see why naively applying such techniques to many individual frames would be a flippant attempt to create beautiful smooth-looking videos. So now, it hopefully makes a bit more sense: this technique preserves temporal coherence when applying the artistic style by incorporating the optical flow of the input video. Such great progress in so little time. Last time I mentioned Cram from the comments section, and this time I'd like to commend Related Giraffe for his insightful comments. Thanks for being around, and I've definitely learned from you fellow scholars. 
I am really loving the respectful, high-quality discussions that take place in the comments section, and it is really cool that we can all learn from each other. Thanks for watching and for your generous support and I'll see you next time.
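For a rough picture of how optical flow enforces temporal coherence, here is a minimal sketch of the core idea: warp the previous stylized frame along the motion of the input video, and penalize the new stylized frame for deviating from it. The function names and the nearest-neighbor warping are my own simplifications, and real systems (including the one in the paper) additionally mask occluded and unreliable flow regions.

    import numpy as np

    def warp_backwards(image, flow):
        # Pull pixels from `image` along a dense flow field, assumed to give
        # the per-pixel (dy, dx) motion into the current frame. Nearest-neighbor
        # sampling keeps the sketch short; bilinear would be used in practice.
        H, W = flow.shape[:2]
        ys, xs = np.mgrid[0:H, 0:W]
        src_y = np.clip(np.rint(ys - flow[..., 0]).astype(int), 0, H - 1)
        src_x = np.clip(np.rint(xs - flow[..., 1]).astype(int), 0, W - 1)
        return image[src_y, src_x]

    def temporal_coherence_penalty(stylized_now, stylized_prev, flow):
        # Where the input video shows the same content as a moment ago, the
        # style must stay put: deviations from the motion-compensated previous
        # stylized frame are penalized during optimization.
        expected = warp_backwards(stylized_prev, flow)
        return np.mean((stylized_now - expected) ** 2)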
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two minute papers with Kato Ejona Ifehir."}, {"start": 4.4, "end": 14.6, "text": " We have previously talked about a technique that used a deep neural network to transfer the artistic style of a painting to an arbitrary image, for instance, to a photograph."}, {"start": 14.6, "end": 23.2, "text": " As always, if you're not familiar with some of these terms, we have discussed them in previous episodes and links are available in the description box, make sure to check them out."}, {"start": 23.2, "end": 34.8, "text": " Style transfer is possible on still images, as there's currently no technique to apply this to videos, it is hopefully abundantly clear that a lot of potential still lies dormant inside."}, {"start": 34.8, "end": 40.8, "text": " But can we apply this artistic style transfer to videos? Would it work if we would simply try?"}, {"start": 40.8, "end": 47.3, "text": " For an experienced researcher, it is flagrantly obvious that it's an understatement to say that it wouldn't work."}, {"start": 47.3, "end": 52.0, "text": " It would fail in a spectacular manner, as you can see here."}, {"start": 52.0, "end": 60.0, "text": " But with this technique, it apparently works quite well. To be frank, the results look gorgeous. So how does it work?"}, {"start": 60.0, "end": 65.6, "text": " Now, don't be afraid, you'll be presented with a concise but deliberately obscure statement."}, {"start": 65.6, "end": 74.2, "text": " This technique preserves temporal coherence when applying the artistic style by incorporating the optical flow of the input video."}, {"start": 74.2, "end": 79.2, "text": " Now, the only question is what temporal coherence and optical flow means?"}, {"start": 79.2, "end": 90.8, "text": " Temporal coherence is a term that was used by physicists to describe, for instance, how the behavior of a wave of light changes or stays the same if we observe it at different times."}, {"start": 90.8, "end": 109.0, "text": " In computer graphics, it is also an important term because oftentimes we have techniques that we can apply to one image, but not necessarily to a video, because the behavior of the technique changes drastically from frame to frame, introducing a disturbing flickering effect that you can see in this video here."}, {"start": 109.0, "end": 117.0, "text": " We have the same if we do the artistic style transfer because there is no communication between the individual images of the video."}, {"start": 117.0, "end": 128.0, "text": " The technique has no idea that most of the time we are looking at the same things and if so, the artistic style would have to be applied the same way over and over to these regions."}, {"start": 128.0, "end": 131.0, "text": " We are clearly lacking temporal coherence."}, {"start": 131.0, "end": 140.0, "text": " Now, onto optical flows. Imagine a flying drone that takes a series of photographs while hovering and looking around the bovus."}, {"start": 140.0, "end": 148.0, "text": " To write sophisticated navigation algorithms, the drone would have to know which object is which across many of these photographs."}, {"start": 148.0, "end": 155.0, "text": " If we have slightly turned, most of what we see is the same and only a small part of this new image is new information."}, {"start": 155.0, "end": 167.0, "text": " But the computer doesn't know that as all it sees is a bunch of pixels. 
Optical flow algorithms help us achieving this by describing the possible motions that give us photograph B from photograph A."}, {"start": 167.0, "end": 172.0, "text": " In this application, what this means is that there is some interframe communication."}, {"start": 172.0, "end": 181.0, "text": " The algorithm will know that if I color this person this way a moment ago, I cannot drastically change the style of that region on a whim."}, {"start": 181.0, "end": 190.0, "text": " It is now easy to see why naively applying such techniques to many individual frames would be a flippant attempt to create beautiful smooth-looking videos."}, {"start": 190.0, "end": 201.0, "text": " So now, it hopefully makes a bit more sense. This technique preserves temporal coherence when applying the artistic style by incorporating the optical flow of the input video."}, {"start": 201.0, "end": 205.0, "text": " Such great progress in so little time."}, {"start": 205.0, "end": 214.0, "text": " Last time I've mentioned Cram from the comments section and this time I'd like to comment related giraffe for his insightful comments."}, {"start": 214.0, "end": 218.0, "text": " Thanks for being around and I've definitely learned from you fellow scholars."}, {"start": 218.0, "end": 226.0, "text": " I am really loving the respectful and quality discussions that take place in the comments section and it is really cool that we can both learn from each other."}, {"start": 226.0, "end": 235.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=wBrwN4dS-DA
Deep Reinforcement Terrain Learning | Two Minute Papers #67
In this piece of work, a combination of deep learning and reinforcement learning is presented which has proven to be useful in solving many extremely difficult tasks. Google DeepMind built a system that can play Atari games at a superhuman level using this technique that is also referred to as Deep Q-Learning. This time, it was used to teach digital creatures to walk and overcome challenging terrain arrangements. __________________________ The paper "Terrain-Adaptive Locomotion Skills Using Deep Reinforcement Learning " is available here: http://www.cs.ubc.ca/~van/papers/2016-TOG-deepRL/index.html The implementation of the paper is also available here: https://github.com/xbpeng/DeepTerrainRL OpenAI's Gym project: https://gym.openai.com/ WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by Fulvio Spada - https://flic.kr/p/o7z8o1 Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This is a follow-up work to a technique we have talked about earlier. We have seen how different creatures learn to walk and their movement patterns happen to be robust to slight variations in the terrain. In this work, we imagine these creatures as a collection of joints and links, typically around 20 links. Depending on what actions we choose for these individual body parts in time, we can construct movements such as walking or leaping forward. However, this time, these creatures not only learn to walk, but they also monitor their surroundings, and are also taught to cope with immense difficulties that arise from larger terrain differences. This means that they learn both from character features, like where the center of mass is and what the velocities of the different body parts are, and from terrain features, such as the displacement of the slope we are walking up, or whether there's a wall ahead of us. The machinery used to achieve this is deep reinforcement learning. It is therefore a combination of a deep neural network and a reinforcement learning algorithm. The neural network learns the correspondence between these states and output actions, and the reinforcement learner tries to guess which action will lead to a positive reward, which is typically measured as our progress on how far we got through the level. In this footage, we can witness how a simple learning algorithm built from these two puzzle pieces can teach these creatures to modify their center of mass and adapt their movement to overcome more sophisticated obstacles and other kinds of adversities. And please note that the technique still supports a variety of different creature setups. One important limitation of this technique is that it is restricted to 2D. This means that the characters can walk around not in a 3D world, but on a plane. Whether we are shackled by the 2D-ness of the technique, or whether the results can be applied to 3D, remains to be seen. I'd like to note that candidly discussing limitations is immensely important in research. And the most important thing is often not what we can do at this moment, but the long-term potential of the technique, which I think this work has in abundance. It's very clear that in this research area, enormous leaps are made year by year, and there's lots to be excited about. As more papers are published on this locomotion problem, the authors also discussed that it would be great to have a unified physics system and some error metrics so that we can measure these techniques against each other on equal footing. I feel that such a work would provide fertile grounds for more exploration in this area, and if I see more papers akin to this one, I'll be a happy man. Thanks for watching and for your generous support, and I'll see you next time.
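As a rough sketch of the two puzzle pieces described above: the reward measures progress through the level, and the neural network is trained towards the standard Q-learning target. The function and parameter names here (`q_network`, the fall penalty, the discount factor) are illustrative assumptions, not the authors' actual setup.

    import numpy as np

    def reward(x_before, x_after, fell_over):
        # Progress through the level is the positive reward signal;
        # falling over is penalized (the penalty value is an assumption).
        return (x_after - x_before) - (10.0 if fell_over else 0.0)

    def q_learning_target(r, next_state, q_network, gamma=0.99, done=False):
        # The network should predict the immediate reward plus the discounted
        # value of the best action available in the next state.
        if done:
            return r
        return r + gamma * float(np.max(q_network(next_state)))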
[{"start": 0.0, "end": 4.96, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.96, "end": 8.620000000000001, "text": " This is a follow-up work to a technique we have talked about earlier."}, {"start": 8.620000000000001, "end": 13.02, "text": " We have seen how different creatures learn to walk and their movement patterns happen"}, {"start": 13.02, "end": 16.3, "text": " to be robust to slight variations in the terrain."}, {"start": 16.3, "end": 21.1, "text": " In this work, we imagine these creatures as a collection of joints and links, typically"}, {"start": 21.1, "end": 22.94, "text": " around 20 links."}, {"start": 22.94, "end": 27.400000000000002, "text": " Depending on what actions we choose for these individual body parts in time, we can"}, {"start": 27.4, "end": 31.32, "text": " construct movements such as walking or leaping forward."}, {"start": 31.32, "end": 36.56, "text": " However, this time, these creatures not only learn to walk, but they also monitor their"}, {"start": 36.56, "end": 41.879999999999995, "text": " surroundings, and are also taught to cope with immense difficulties that arise from larger"}, {"start": 41.879999999999995, "end": 43.56, "text": " terrain differences."}, {"start": 43.56, "end": 48.92, "text": " This means that they learn both on character features, like where the center of mass is,"}, {"start": 48.92, "end": 51.96, "text": " and what the velocity of different body parts are."}, {"start": 51.96, "end": 57.0, "text": " Different terrain features, such as what the displacement of the slope we are walking up on"}, {"start": 57.0, "end": 60.480000000000004, "text": " is, or if there's a wall ahead of us."}, {"start": 60.480000000000004, "end": 64.6, "text": " The use machinery to achieve this is deep reinforcement learning."}, {"start": 64.6, "end": 70.24000000000001, "text": " It is therefore a combination of a deep neural network and a reinforcement learning algorithm."}, {"start": 70.24000000000001, "end": 75.84, "text": " The neural network learns the correspondence between these states and output actions, and"}, {"start": 75.84, "end": 80.88, "text": " the reinforcement learner tries to guess which action will lead to a positive reward, which"}, {"start": 80.88, "end": 85.36, "text": " is typically measured as our progress on how far we got through the level."}, {"start": 85.36, "end": 90.44, "text": " In this footage, we can witness how a simple learning algorithm built from these two puzzle"}, {"start": 90.44, "end": 95.88, "text": " pieces can teach these creatures to modify their center of mass and adapt their movement"}, {"start": 95.88, "end": 103.24, "text": " to overcome more sophisticated obstacles, and other kinds of adversities."}, {"start": 103.24, "end": 108.28, "text": " And please note that the technique still supports a variety of different creature setups."}, {"start": 108.28, "end": 112.96000000000001, "text": " One important limitation of this technique is that it is restricted to 2D."}, {"start": 112.96000000000001, "end": 118.36, "text": " This means that the characters can walk around not in a 3D world, but on a plane."}, {"start": 118.36, "end": 123.52, "text": " A question whether we are shackled by the 2Dness of the technique, or if the results can"}, {"start": 123.52, "end": 126.44, "text": " be applied to 3D, remains to be seen."}, {"start": 126.44, "end": 132.24, "text": " I'd like to note that candidly discussing limitations is immensely important in 
research."}, {"start": 132.24, "end": 137.32, "text": " And the most important thing is often not what we can do at this moment, but the long-term"}, {"start": 137.32, "end": 141.32, "text": " potential of the technique which I think this work has in abundance."}, {"start": 141.32, "end": 146.4, "text": " It's very clear that in this research area, enormous leaps are made year by year, and"}, {"start": 146.4, "end": 148.6, "text": " there's lots to be excited about."}, {"start": 148.6, "end": 153.23999999999998, "text": " As more papers are published on this locomotion problem, the authors also discussed that"}, {"start": 153.23999999999998, "end": 158.64, "text": " it would be great to have a unified physics system and some error metrics so that we can"}, {"start": 158.64, "end": 162.72, "text": " measure these techniques against each other on equal footings."}, {"start": 162.72, "end": 167.68, "text": " I feel that such a work would provide fertile grounds for more exploration in this area,"}, {"start": 167.68, "end": 171.64, "text": " and if I see more papers akin to this one, I'll be a happy man."}, {"start": 171.64, "end": 201.2, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=72_iAlYwl0c
Separable Subsurface Scattering | Two Minute Papers #66
Separable Subsurface Scattering is a novel technique to add real-time subsurface light transport calculations for computer games and other real-time applications. ____________________________ The paper "Separable Subsurface Scattering" and its implementation is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ http://www.iryoku.com/separable-sss/ Recommended for you: Ray Tracing / Subsurface Scattering @ Function 2015 - https://www.youtube.com/watch?v=qyDUvatu5M8 Separable Subsurface Scattering Unofficial Talk - https://www.youtube.com/watch?v=mU-5CsaPfsE Separable Subsurface Scattering Implementation in Blender (thank you Lubos Lenco!): http://www.blendernation.com/2016/05/02/separable-subsurface-scattering-game-engine-cycles/ http://luboslenco.com/notes/ssss/ WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Image credits: Leaves - https://flic.kr/p/fGie2L Snail - https://flic.kr/p/8wXFiC Skin: Wikipedia Extended credits (copied from the Acknowledgements section of the mentioned paper): The authors want to thank the reviewers for their insightful comments; Infinity Realities, in particular Lee Perry-Smith, for his head model and for the Lauren model; the Institute of Creative Technologies at USC, in particular Paul Debevec, for the Ari and Bernardo models; and Bernardo Antoniazzi for letting us use his likeness. Furthermore, we want to thank the Stanford University Computer Graphics Laboratory for the Dragon model, and the following contributors from Blend Swap under CC-BY licence: longrender for the Dish model, metalix for the Green apple model, betomo16 for the Plant model, and PickleJones for the Grapes model. We also thank Felícia Fehér for editing the figures. This research has been partially funded by the European Commission, 7th Framework Programme, through projects GOLEM and VERVE, the Spanish Ministry of Economy and Competitiveness through project LIGHTSLICE, and project TAMA, and the Austrian Science Fund (FWF) through project no. P23700-N23. The thumbnail background image was taken from the corresponding paper linked above. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Subsurface scattering means that a portion of incoming light penetrates the surface of a material. Our skin is a little known but nonetheless great example of that. But so are plant leaves, marble, milk, or snails, to have a wackier example. Subsurface scattering looks unbelievably beautiful, but at the same time it is very expensive to compute, because we have to simulate up to thousands and thousands of light scattering events for every ray of light. And we have to do this for millions of rays. It really takes forever. The lack of subsurface scattering is the reason why we have seen so many lifeless, rubber-looking human characters in video games and animated movies for decades now. This technique is a collaboration between the Activision Blizzard game development company, the University of Zaragoza in Spain, and the Technical University of Vienna in Austria. And it can simulate this kind of subsurface light transport in half a millisecond per image. Let's stop for a minute and think about this. Earlier we talked about subsurface scattering techniques that were really awesome, but still took at least, let's say, four hours on a scene before they became useful. This one is half a millisecond per image. Almost nothing. In one second, it can do this calculation 2,000 times. Now, this has to be a completely different approach than just simulating many millions of rays of light, right? We can't simply take a four hour long algorithm, do some magic, and get something like this. The first key thought is that we can set up some cool experiment where we play around with light sources and big blocks of translucent materials, and record how light bounces off of these materials. Cool thing number one: we only need to do it once per material. Number two: the results can be stored in an image. This is what we call a diffusion profile, and this is what it looks like. So we have an image of the diffusion profile and one image of the material that we would like to add subsurface scattering to. This is a convolution-based technique, which means that it enables us not to add these two images together, but to mix them together in a way that the optical properties of the diffusion profile are carried over to the image. If we add the optical properties of an apple to a human face, it will look more like a face that has been carved out of a giant apple. A less asinine application is, of course, if we mix it with the appropriate skin profile image, then we'll get photorealistic looking faces, as is demonstrated quite aptly by this animation. This apple to skin example, by the way, you can actually try for yourself, as the source code and an executable demo is also freely available for everyone to experiment with. Convolutions have so many cool applications, I don't even know where to start. In fact, I think we should have an episode solely on that. Can't wait, it's going to be a lot of fun. These convolution computations are great, but they are still too expensive for real-time video games. What this work gives us is a set of techniques that are able to compute this convolution not on these original images, but on much smaller, tiny, tiny strips, which are much cheaper. But the results of the computations look barely distinguishable. Another cool thing is that the quality of the results is not only scientifically provable, but this technique also opens up the possibility of artistic manipulation. 
It is done in a way that we can start out with a physically plausible result and tailor it to our liking. You can see some exaggerated examples of that. The entire technique is so simple that a computer program that executes it can fit on your business card. It also seems to have appeared in Blender recently. Also, a big hello and shoutout to the awesome people at Intel who recently invited my humble self to chat a bit about this technique. If you would like to hear more about the details of how this algorithm works, I've put some videos in the description box. The most important take-home message from this project, at least for me, is that it is possible to conduct academic research projects together with companies and create results that can make it to multimillion dollar computer games, while also producing proven results that are useful for the scientific community. Thanks for watching and for your generous support, and I'll see you next time.
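The word "separable" in the title is where the speed comes from: instead of convolving the image with a full two-dimensional diffusion profile, which costs about k*k operations per pixel for a k-wide kernel, the profile is approximated so it can be applied as two one-dimensional passes costing about 2k. A minimal sketch of that idea with a made-up five-tap kernel (the paper's actual profiles and screen-space details are more involved):

    import numpy as np
    from scipy.ndimage import convolve1d

    k = np.array([0.06, 0.24, 0.40, 0.24, 0.06])  # assumed 1D profile weights
    image = np.random.rand(256, 256)              # stand-in for a rendered frame

    # Two cheap 1D passes (horizontal, then vertical)...
    blurred = convolve1d(convolve1d(image, k, axis=1), k, axis=0)

    # ...are equivalent to one expensive 2D convolution with the outer product:
    kernel_2d = np.outer(k, k)  # 5x5, i.e. 25 taps per pixel instead of 10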
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.64, "end": 10.44, "text": " Subsurface scattering means that the portion of incoming light penetrates the surface of a material."}, {"start": 10.44, "end": 14.32, "text": " Our skin is a little known but nonetheless great example of that."}, {"start": 14.32, "end": 20.6, "text": " But so are plant leaves, marble, milk, or snails, to have a wackier example."}, {"start": 20.6, "end": 24.0, "text": " Subsurface scattering looks unbelievably beautiful,"}, {"start": 24.0, "end": 33.28, "text": " but at the same time it is very expensive to compute because we have to simulate up to thousands and thousands of light scattering events for every ray of light."}, {"start": 33.28, "end": 37.84, "text": " And we have to do this for millions of rays. It really takes forever."}, {"start": 37.84, "end": 47.92, "text": " The lack of subsurface scattering is the reason why we have seen so many lifeless rubber-looking human characters in video games and animated movies for decades now."}, {"start": 47.92, "end": 52.8, "text": " This technique is a collaboration between the Activision Blizzard Game Development Company,"}, {"start": 52.8, "end": 58.4, "text": " the University of Zaragoza in Spain, and the Technical University of Vienna in Austria."}, {"start": 58.4, "end": 64.92, "text": " And it can simulate this kind of subsurface light transport in half a millisecond per image."}, {"start": 64.92, "end": 67.39999999999999, "text": " Let's stop for a minute and think about this."}, {"start": 67.39999999999999, "end": 71.6, "text": " Earlier we talked about subsurface scattering techniques that were really awesome,"}, {"start": 71.6, "end": 76.64, "text": " but still took at least, let's say, four hours on a scene before they became useful."}, {"start": 76.64, "end": 80.75999999999999, "text": " This one is half a millisecond per image."}, {"start": 80.76, "end": 86.92, "text": " Almost nothing. In one second, it can do this calculation 2,000 times."}, {"start": 86.92, "end": 93.32000000000001, "text": " Now, this has to be a completely different approach than just simulating many millions of rays of light, right?"}, {"start": 93.32000000000001, "end": 98.68, "text": " We can take a four hour long algorithm, do some magic, and get something like this."}, {"start": 98.68, "end": 106.92, "text": " The first key thought is that we can set up some cool experiment where we play around with light sources and big blocks of translucent materials,"}, {"start": 106.92, "end": 111.0, "text": " and record how light bounces off of these materials."}, {"start": 111.0, "end": 115.32000000000001, "text": " Cool thing number one. 
We only need to do it once per material."}, {"start": 115.32000000000001, "end": 119.56, "text": " Number two, the results can be stored in an image."}, {"start": 119.56, "end": 123.68, "text": " This is what we call a diffusion profile, and this is how it looks like."}, {"start": 123.68, "end": 130.76, "text": " So we have an image of the diffusion profile and one image of the material that we would like to add subsurface scattering to."}, {"start": 130.76, "end": 137.0, "text": " This is a convolution based technique, which means that it enables us not to add these two images together,"}, {"start": 137.0, "end": 144.51999999999998, "text": " but to mix them together in a way that the optical properties of the diffusion profiles are carried to the image."}, {"start": 144.51999999999998, "end": 152.76, "text": " If we add the optical properties of an apple to a human face, it will look more like a face that has been carved out of a giant apple."}, {"start": 152.76, "end": 158.68, "text": " A less S-9 application is, of course, if we mix it with the appropriate skin profile image,"}, {"start": 158.68, "end": 164.44, "text": " then we'll get photorealistic looking faces as it is demonstrated quite aptly by this animation."}, {"start": 164.44, "end": 168.76000000000002, "text": " This apple to skin example, by the way, you can actually try for yourself,"}, {"start": 168.76000000000002, "end": 174.84, "text": " as the source code and an executable demo is also freely available for everyone to experiment with."}, {"start": 174.84, "end": 179.56, "text": " Convolutions have so many cool applications, I don't even know where to start."}, {"start": 179.56, "end": 182.92000000000002, "text": " In fact, I think we should have an episode solely on that."}, {"start": 182.92000000000002, "end": 185.72, "text": " Can't wait, it's going to be a lot of fun."}, {"start": 185.72, "end": 191.16, "text": " These convolution computations are great, but they are still too expensive for real-time video games."}, {"start": 191.16, "end": 195.56, "text": " What this work gives us is a set of techniques that are able to compute this convolution"}, {"start": 195.56, "end": 201.72, "text": " not on these original images, but much smaller, tiny, tiny strips, which are much cheaper."}, {"start": 201.72, "end": 205.56, "text": " But the result of the computations look barely distinguishable."}, {"start": 205.56, "end": 210.6, "text": " Another cool thing is that the quality of the results is not only scientifically provable,"}, {"start": 210.6, "end": 215.24, "text": " but this technique also opens up the possibility of artistic manipulation."}, {"start": 215.24, "end": 219.4, "text": " It is done in a way that we can start out with a physically plausible result"}, {"start": 219.4, "end": 221.24, "text": " and tailor it to our liking."}, {"start": 221.24, "end": 224.20000000000002, "text": " You can see some exaggerated examples of that."}, {"start": 224.20000000000002, "end": 231.16, "text": " The entire technique is so simple, a computer program that executes it can fit on your business card."}, {"start": 231.16, "end": 234.92000000000002, "text": " It also seems to have appeared in Blender recently."}, {"start": 234.92000000000002, "end": 238.84, "text": " Also, a big hello and shoutout for the awesome people at Intel"}, {"start": 238.84, "end": 242.68, "text": " who recently invited my humble self to chat a bit about this technique."}, {"start": 242.68, "end": 246.36, "text": " If you would like to hear more about 
the details on how this algorithm works,"}, {"start": 246.36, "end": 248.36, "text": " I've put some videos in the description box."}, {"start": 249.08, "end": 253.32, "text": " The most important take home message from this project, at least for me,"}, {"start": 253.32, "end": 258.04, "text": " is that it is possible to conduct academic research projects together with companies"}, {"start": 258.04, "end": 262.36, "text": " and create results that can make it to multimillion dollar computer games,"}, {"start": 262.36, "end": 267.0, "text": " but also having proven results that are useful for the scientific community."}, {"start": 267.0, "end": 273.16, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=SC0D7aJOySY
Real-Time Shading With Area Light Sources | Two Minute Papers #65
The paper "Real-Time Polygonal-Light Shading with Linearly Transformed Cosines" is available here: https://eheitzresearch.wordpress.com/415-2/ WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background was taken from the paper mentioned above. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In computer graphics, we use the term shading to describe the process of calculating the appearance of a material. This is the heart and soul of most graphical systems that visualize something on our screen. Let the blue sphere be the object to be shaded, and the red patch be the light source illuminating it. The question is: in this configuration, how should the blue sphere look in reality? In order to obtain high-quality images, we need to calculate how much of the red patch is visible from the blue sphere. This describes the object's relation to the light source. Is it close by, or is it far away? Is it facing the object or not? What shape is the light source? These factors determine how much light will arrive at the surface of the blue sphere. This is what mathematicians like to call an integration problem. However, beyond this calculation, we also have to take into consideration the reflectance of the material that the blue sphere is made of. Whether we have a white wall surface or an orange makes a great deal of difference and throws a wrench in our already complex calculations. The final shading is the product of this visibility situation and the material properties of the sphere. Needless to say, the mathematical description of many materials can get extremely complex, which makes our calculations really time-consuming. In this piece of work, a technique is proposed that can approximate these two factors in real time. The paper contains a very detailed demonstration of the difference between this and the analytical computations that give us the perfect results but take extremely long. In short, this technique very closely matches the analytic results, but it is doing it in real time. I really don't know what to say. We are used to waiting for hours to obtain images like this, and now it's 15 milliseconds per frame. What a hefty value proposition for a paper. Absolutely spectacular. Some of the results really remind me of topological calculations. Topology is a subfield of mathematics that studies which properties of different shapes are preserved when these shapes undergo deformations. It's super useful because, for instance, if we can prove that light behaves in some way when the light source has the shape of a disk, then if we are interested in other shapes, topology can help us determine whether all these enormous books full of theorems on other shapes are going to apply to this shape or not. It may be that we don't need to invent anything and can just use this vast existing knowledge base. Some of the authors of this paper work at Unity, which means that we can expect these awesome results to appear in the video games of the future. Some code and demos are also available on their website, which I've linked in the description box. Make sure to check them out. Thanks for watching and for your generous support. I'll see you next time.
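To make the "integration problem" above concrete, here is a tiny Monte Carlo reference that estimates the diffuse shading at a surface point lit by a rectangular area light, by averaging the contribution of many random points on the light. This is the slow, brute-force baseline that the paper's linearly transformed cosines approximate in closed form; all geometry below is made up for the example.

```python
# Sketch: brute-force estimate of irradiance from a rectangular area light.
import numpy as np

rng = np.random.default_rng(0)

shade_point = np.array([0.0, 0.0, 0.0])    # point on the blue sphere
normal      = np.array([0.0, 0.0, 1.0])    # its surface normal
# Rectangular light (the "red patch"): a corner plus two edge vectors.
light_corner = np.array([-0.5, -0.5, 2.0])
edge_u = np.array([1.0, 0.0, 0.0])
edge_v = np.array([0.0, 1.0, 0.0])
light_normal = np.array([0.0, 0.0, -1.0])
area = np.linalg.norm(np.cross(edge_u, edge_v))

def irradiance(n_samples=20_000):
    total = 0.0
    for _ in range(n_samples):
        # Pick a uniformly random point on the light.
        p = light_corner + rng.random() * edge_u + rng.random() * edge_v
        d = p - shade_point
        dist2 = d @ d
        w = d / np.sqrt(dist2)
        cos_surf  = max(w @ normal, 0.0)          # cosine at the receiver
        cos_light = max(-w @ light_normal, 0.0)   # cosine at the light
        total += cos_surf * cos_light / dist2
    return total * area / n_samples

print(irradiance())  # for a diffuse material, multiply by albedo / pi
```

Doing thousands of such samples per pixel is why the analytic reference takes hours, while the paper replaces this whole loop with a few closed-form operations per pixel.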
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.6000000000000005, "end": 9.68, "text": " In computer graphics, we use the term shading to describe the process of calculating the"}, {"start": 9.68, "end": 11.72, "text": " appearance of a material."}, {"start": 11.72, "end": 17.64, "text": " This gives the heart and soul of most graphical systems that visualize something on our screen."}, {"start": 17.64, "end": 23.240000000000002, "text": " Let the blue sphere be the object to be shaded, and the red patch be the light source illuminating"}, {"start": 23.240000000000002, "end": 24.240000000000002, "text": " it."}, {"start": 24.240000000000002, "end": 28.68, "text": " The question is in this configuration, how should the blue sphere look in reality?"}, {"start": 28.68, "end": 33.6, "text": " In order to obtain high-quality images, we need to calculate how much of the red patch"}, {"start": 33.6, "end": 35.72, "text": " is visible from the blue sphere."}, {"start": 35.72, "end": 38.8, "text": " This describes the object's relation to the light source."}, {"start": 38.8, "end": 40.92, "text": " Is it close or is it nearby?"}, {"start": 40.92, "end": 43.2, "text": " Is it facing the object or not?"}, {"start": 43.2, "end": 45.04, "text": " What shape is the light source?"}, {"start": 45.04, "end": 49.56, "text": " These factors determine how much light will arrive to the surface of the blue sphere."}, {"start": 49.56, "end": 53.68, "text": " This is what mathematicians like to call an integration problem."}, {"start": 53.68, "end": 58.76, "text": " However, beyond this calculation, we also have to take into consideration the reflectance"}, {"start": 58.76, "end": 61.519999999999996, "text": " of the material that the blue sphere is made of."}, {"start": 61.519999999999996, "end": 65.96000000000001, "text": " Whether we have a white wall surface or an orange makes a great deal of difference and"}, {"start": 65.96000000000001, "end": 69.28, "text": " throws a wrench in our already complex calculations."}, {"start": 69.28, "end": 74.28, "text": " The final shading is the product of this visibility situation and the material properties of"}, {"start": 74.28, "end": 75.28, "text": " the sphere."}, {"start": 75.28, "end": 81.08, "text": " Needless to say that the mathematical description of many materials can get extremely complex,"}, {"start": 81.08, "end": 84.16, "text": " which makes our calculations really time-consuming."}, {"start": 84.16, "end": 88.84, "text": " In this piece of work, a technique is proposed that can approximate these two factors in real"}, {"start": 88.84, "end": 89.84, "text": " time."}, {"start": 89.84, "end": 94.08, "text": " The paper contains a very detailed demonstration of the difference between this and the"}, {"start": 94.08, "end": 99.6, "text": " analytical computations that give us the perfect results, but take extremely long."}, {"start": 99.6, "end": 104.6, "text": " In short, this technique is very closely matching the analytic results, but it is doing it"}, {"start": 104.6, "end": 106.52, "text": " in real time."}, {"start": 106.52, "end": 108.28, "text": " I really don't know what to say."}, {"start": 108.28, "end": 113.72, "text": " We are used to wait for hours to obtain images like this, and now 15 milliseconds per"}, {"start": 113.72, "end": 114.88, "text": " frame."}, {"start": 114.88, "end": 118.16, "text": " What a hefty value proposition for a paper."}, 
{"start": 118.16, "end": 120.36, "text": " Absolutely spectacular."}, {"start": 120.36, "end": 124.4, "text": " Some of the results really remind me of topological calculations."}, {"start": 124.4, "end": 129.6, "text": " Topology is a subfield of mathematics that studies what properties of different shapes"}, {"start": 129.6, "end": 133.6, "text": " are preserved when these shapes are undergoing deformations."}, {"start": 133.6, "end": 138.92, "text": " It's super useful because, for instance, if we can prove that light behaves in some way"}, {"start": 138.92, "end": 144.16, "text": " when the light source has the shape of a disk, then if we are interested in other shapes,"}, {"start": 144.16, "end": 149.76, "text": " topology can help us determine whether all these enormous books, full of theorems on other"}, {"start": 149.76, "end": 153.28, "text": " shapes, are going to apply to this shape or not."}, {"start": 153.28, "end": 158.68, "text": " It may be that we don't need to invent anything and can just use this vast existing knowledge"}, {"start": 158.68, "end": 159.84, "text": " base."}, {"start": 159.84, "end": 164.64000000000001, "text": " Some of the authors of this paper work at Unity, which means that we can expect these awesome"}, {"start": 164.64000000000001, "end": 167.76, "text": " results to appear in the video games of the future."}, {"start": 167.76, "end": 172.28, "text": " Some code and demos are also available on their website, which I've linked in the description"}, {"start": 172.28, "end": 173.28, "text": " box."}, {"start": 173.28, "end": 174.28, "text": " Make sure to check them out."}, {"start": 174.28, "end": 176.6, "text": " Thanks for watching and for your generous support."}, {"start": 176.6, "end": 206.56, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=5PSWr2ovBvU
Deep Learning and Cancer Research | Two Minute Papers #64
A few quite exciting applications of deep learning in cancer research have appeared recently. This new algorithm can recognize cancer cells by looking at blood samples without introducing any intrusive chemicals in the process. Amazing results ahead. :) _________________________ The paper "Deep Learning in Label-free Cell Classification" is available here: http://www.nature.com/articles/srep21471 The link from Healthline: http://www.healthline.com/health/cancer/ovarian-cancer-facts-statistics-infographic#10 Recommended for you: Two+ Minute Papers - Overfitting and Regularization For Deep Learning - https://www.youtube.com/watch?v=6aF9sJrzxaM WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image background was created by zhouxuan12345678 (CC BY-SA 2.0). Some blood cells were removed. - https://flic.kr/p/9ATvC1 Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Let's try to assess the workflow of this piece of work in the shortest possible form. The input is images of cells, and the output of the algorithm is a decision that tells us which of these are cancer cells. As the pipeline of the entire experiment is quite elaborate, we'll confine ourselves to discussing the deep learning-related step at the very end. Techniques prior to this one involved adding chemicals to blood samples. The problem is that these techniques were not so reliable, and they also destroyed the cells, so it was not possible to check the samples later. As the title of the paper says, this is a label-free technique; therefore, it can recognize cancer cells without any intrusive changes to the samples. The analysis happens by simply looking at them. To even have a chance at saying anything about these cells, domain experts have designed a number of features that help us make an educated decision. For instance, they like to look at refractive indices, which tell us how much light slows down when passing through cells. Light absorption and scattering properties are also recognized by the algorithm. Morphological features are also quite important, as they describe the shape of the cells, and they are among the most useful features for the detection procedure. So, the input is an image, then come the high-level features, and the neural networks help locate the cancer cells by learning exactly which values of these high-level features point to cancer cells. The proposed technique is significantly more accurate and consistent in the detection than previous techniques. It is of utmost importance that we are able to do something like this on a mass scale, because the probability of curing cancer depends greatly on the phase in which we can identify it. One of the most important factors is early detection, and this is exactly where deep learning can aid us. To demonstrate how important early detection is, have a look at this chart of ovarian cancer survival rates as a function of how early the detection takes place. I think the numbers speak for themselves, but let's bluntly state the obvious: it goes from almost surely surviving to almost surely dying. By the way, they were using L2 regularization to prevent overfitting in the network. We have talked about what each of these terms means in a previous episode; I've put a link to that in the description box. A 95% success rate with a throughput of millions of cells per second. Wow! Bravo! A real, Two Minute Papers-style hat tip to the authors of the paper. It is really amazing to see different people from so many areas working together to defeat this terrible disease. Engineers create instruments to be able to analyze blood samples, doctors choose the most important features, and computer scientists try to find the relation between the features and illnesses. Great strides have been made in the last few years, and I am super happy to see that even if you're not a doctor and you haven't studied medicine, you can still help in this process. That's quite amazing. A big shout-out to Kram, who has been watching Two Minute Papers since the very first episodes and has always contributed insightful comments. Thanks for being around, and also thanks for watching and for your generous support, and I'll see you next time.
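To illustrate the classification step and the L2 regularization mentioned in this episode, here is a minimal sketch: a logistic regression trained with an L2 penalty on hand-crafted cell features, standing in for the paper's actual deep network. Everything here, including the feature meanings, the synthetic data, and the penalty strength, is invented for illustration.

```python
# Sketch: L2-regularized logistic regression on synthetic "cell feature" data.
import numpy as np

rng = np.random.default_rng(1)
# Fake dataset: rows are cells, columns stand in for features such as
# refractive index, light absorption, and a morphological shape descriptor.
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 1.0]) + 0.3 * rng.normal(size=500) > 0).astype(float)

w, b, lam, lr = np.zeros(3), 0.0, 0.01, 0.1   # lam is the L2 penalty strength
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted cancer probability
    grad_w = X.T @ (p - y) / len(y) + lam * w # the L2 term shrinks the weights
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The `lam * w` term is the whole trick: it penalizes large weights, which discourages the model from memorizing the training samples, exactly the overfitting problem discussed in the linked episode.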
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.64, "end": 9.76, "text": " Let's try to assess the workflow of this piece of work in the shortest possible form."}, {"start": 9.76, "end": 17.76, "text": " The input is images of cells, and the output of the algorithm is a decision that tells us which one of these are cancer cells."}, {"start": 17.76, "end": 26.0, "text": " As the pipeline of the entire experiment is quite elaborate, we'll confine ourselves to discuss the deep learning-related step at the variant."}, {"start": 26.0, "end": 30.48, "text": " Techniques prior to this one involved adding chemicals to blood samples."}, {"start": 30.48, "end": 38.64, "text": " The problem is that these techniques were not so reliable and that they also destroyed the cells, so it was not possible to check the samples later."}, {"start": 38.64, "end": 47.04, "text": " As the title of the paper says, it is a label-free technique, therefore it can recognize cancer cells without any intrusive changes to the samples."}, {"start": 47.04, "end": 50.16, "text": " The analysis happens by simply looking at them."}, {"start": 50.16, "end": 59.44, "text": " To even have a chance at saying anything about these cells, domain experts have designed a number of features that help us making an educated decision."}, {"start": 59.44, "end": 66.72, "text": " For instance, they like to look at refractive indices that tell us how much light slows down when passing through cells."}, {"start": 66.72, "end": 71.36, "text": " Light absorption and scattering properties are also recognized by the algorithm."}, {"start": 71.36, "end": 80.56, "text": " Morphological features are also quite important as they describe the shape of the cells and they are among the most useful features for the detection procedure."}, {"start": 80.56, "end": 94.72, "text": " So, the input is an image, then comes the high-level features, and the neural networks help locating the cancer cells by learning the relation of exactly what values for these high-level features lead to cancer cells."}, {"start": 94.72, "end": 100.96000000000001, "text": " The proposed technique is significantly more accurate and consistent in the detection than previous techniques."}, {"start": 100.96, "end": 111.91999999999999, "text": " It is of utmost importance that we are able to do something like this on a mass scale, because the probability of curing cancer depends greatly on which phase we can identify it."}, {"start": 111.91999999999999, "end": 118.0, "text": " One of the most important factors is early detection, and this is exactly how deep learning can aid us."}, {"start": 118.0, "end": 128.48, "text": " To demonstrate how important early detection is, have a look at this chart of the ovarian cancer survival rates as a function of how early the detection takes place."}, {"start": 128.48, "end": 132.88, "text": " I think the numbers speak for themselves, but let's bluntly state the obvious."}, {"start": 132.88, "end": 137.51999999999998, "text": " It goes from almost surely surviving to almost surely dying."}, {"start": 137.51999999999998, "end": 142.39999999999998, "text": " By the way, they were using L2 regularization to prevent overfitting in the network."}, {"start": 142.39999999999998, "end": 146.32, "text": " We have talked about what each of these terms mean in a previous episode."}, {"start": 146.32, "end": 148.79999999999998, "text": " I've put a link for that in a 
description box."}, {"start": 148.79999999999998, "end": 154.95999999999998, "text": " 95% success rate with a throughput of millions of cells per second."}, {"start": 154.95999999999998, "end": 156.95999999999998, "text": " Wow! Bravo!"}, {"start": 156.96, "end": 161.28, "text": " A real, two-minute paper style head tip for the authors of the paper."}, {"start": 161.28, "end": 169.20000000000002, "text": " It is really amazing to see different people from so many areas working together to defeat this terrible disease."}, {"start": 169.20000000000002, "end": 173.92000000000002, "text": " Engineers create instruments to be able to analyze blood samples."}, {"start": 173.92000000000002, "end": 182.56, "text": " Doctors choose the most important features, and computer scientists try to find out the relation between the features and illnesses."}, {"start": 182.56, "end": 190.8, "text": " Great strides have been made in the last few years, and I am super happy to see that even if you're not a doctor and you haven't studied medicine,"}, {"start": 190.8, "end": 192.8, "text": " you can still help in this process."}, {"start": 192.8, "end": 194.56, "text": " That's quite amazing."}, {"start": 194.56, "end": 203.44, "text": " A big shout out to Kram, who has been watching two-minute paper since the very first episodes, and his presence has always been ample with insightful comments."}, {"start": 203.44, "end": 213.92, "text": " Thanks for being around, and also thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=_S1lyQbbJM4
Face2Face: Real-Time Facial Reenactment
In computer animation, animating human faces is an art itself, but transferring expressions from one human to someone else is an even more complex task. One has to take into consideration the geometry, the reflectance properties, pose, and the illumination of both faces, and make sure that mouth movements and wrinkles are transferred properly. The fact that the human eye is very keen on catching artificial changes makes the problem even more difficult. This paper describes a real-time solution to this animation problem. ______________________ The paper "Face2Face: Real-time Face Capture and Reenactment of RGB Videos" is available here: http://www.graphics.stanford.edu/~niessner/thies2016face.html Recommended for you: Real-Time Facial Expression Transfer (previous work) - https://www.youtube.com/watch?v=mkI6qfpEJmI WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The background of the thumbnail image was created by DonkeyHotey (CC BY 2.0) - https://flic.kr/p/aPiKLe Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/ #Deepfake #Face2Face
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. There was a previous episode on a technique where the inputs were a source video of ourselves and a target actor, and the output was a video of this target actor with our facial gestures. With such an algorithm, one can edit pre-recorded videos in real time, and the current version only needs a consumer webcam to do that. This new version addresses two major shortcomings. One, the previous work relied on depth information, which means that we needed to know how far different parts of the image were from the camera. This newer version relies only on color information and does not need anything beyond that. Whoa! And two, previous techniques often resorted to copying footage of the mouth and adding synthetic proxies for teeth; not anymore with this one. I tip my hat to the authors, who came up with a vastly improved version of their previous method so quickly, and it is probably needless to say that the ramifications of such a technique are far-reaching, and are hopefully pointed in a positive direction. However, we should bear in mind that from now on, we may be one step closer to an era where a video of something happening won't be taken as proper evidence. I wonder how this will affect legal decision-making in the future. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.38, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.38, "end": 9.76, "text": " There was a previous episode on a technique where the inputs were a source video of ourselves"}, {"start": 9.76, "end": 16.240000000000002, "text": " and a target actor, and the output was a video of this target actor with our facial gestures."}, {"start": 16.240000000000002, "end": 20.96, "text": " With such an algorithm, one can edit pre-recorded videos in real time,"}, {"start": 20.96, "end": 24.6, "text": " and the current version only needs a consumer webcam to do that."}, {"start": 24.6, "end": 27.96, "text": " This new version addresses two major shortcomings."}, {"start": 27.96, "end": 31.560000000000002, "text": " One, the previous work relied on depth information,"}, {"start": 31.560000000000002, "end": 36.88, "text": " which means that we needed to know how far different parts of the image were from the camera."}, {"start": 36.88, "end": 42.760000000000005, "text": " This newer version only relies on color information and does not need anything beyond that."}, {"start": 42.760000000000005, "end": 44.2, "text": " Whoa!"}, {"start": 44.2, "end": 49.28, "text": " And two, previous techniques often resulted to copying the footage from the mouth"}, {"start": 49.28, "end": 53.96, "text": " and adding synthetic proxies for teeth, not anymore with this one."}, {"start": 53.96, "end": 60.76, "text": " A tip my hat to the authors who came up with a vastly improved version of their previous method so quickly,"}, {"start": 60.76, "end": 66.96000000000001, "text": " and it is probably needless to say that the ramifications of such an existing technique are far reaching,"}, {"start": 66.96000000000001, "end": 69.76, "text": " and are hopefully pointed in a positive direction."}, {"start": 69.76, "end": 75.4, "text": " However, we should bear in mind that from now on, we may be one step closer to an era"}, {"start": 75.4, "end": 79.28, "text": " where a video of something happening won't be taken as proper evidence."}, {"start": 79.28, "end": 82.96000000000001, "text": " I wonder how this will affect legal decision-making in the future."}, {"start": 82.96, "end": 87.55999999999999, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=LhhEv1dMpKE
Training Deep Neural Networks With Dropout | Two Minute Papers #62
In this episode, we discuss the bane of many machine learning algorithms - overfitting. It is also explained why it is an undesirable way to learn and how to combat it via dropout. _____________________ The paper "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" is available here: https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf Andrej Karpathy's autoencoder is available here: http://cs.stanford.edu/people/karpathy/convnetjs/demo/autoencoder.html Recommended for you: Overfitting and Regularization For Deep Learning - https://www.youtube.com/watch?v=6aF9sJrzxaM Decision Trees and Boosting, XGBoost - https://www.youtube.com/watch?v=0Xc9LIb_HTw A full playlist with machine learning and deep learning-related Two Minute Papers videos - https://www.youtube.com/watch?v=V1eYniJ0Rnk&list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2 WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image background was created by Norma (CC BY 2.0) - https://flic.kr/p/ejXPXt Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A quick recap for the Fellow Scholars out there who missed some of our earlier episodes: a neural network is a machine learning technique that was inspired by the human brain. It is not a brain simulation by any stretch of the imagination, but it was inspired by the inner workings of the human brain. We can train it on input and output pairs, like images and descriptions of whether the images depict a mug or a bus. The goal is that after training, we can give unknown images to the network and expect it to recognize whether there is a mug or a bus on them. It may happen that during training, it seems that the neural network is doing quite well, but when we provide the unknown images, it falters and almost never gets the answer right. This is the problem of overfitting, and intuitively, it is a bit like students who are not preparing for an exam by obtaining useful knowledge, but who prepare by memorizing answers from the textbook instead. No wonder their results will be rubbish on a real exam. But no worries, because we have dropout, which is a spectacular way of creating diligent students. This is a technique where we create a network where each of the neurons has a chance to be activated or disabled, a network that is filled with unreliable units. And I really want you to think about this. If we could have a system with perfectly reliable units, we should probably never go for one that is built from less reliable units instead. What's more, this piece of work proposes that we should cripple our systems and seemingly make them worse on purpose. This sounds like a travesty. Why would anyone want to try anything like this? And what is really amazing is that these unreliable units can potentially build a much more useful system that is less prone to overfitting. If we want to win competitions, we have to train many models and average them, as we have seen with the Netflix prize-winning algorithm in an earlier episode. It also relates back to the committee-of-doctors example: a committee is usually more useful than just asking one doctor. And the absolutely amazing thing is that this is exactly what dropout gives us. It gives the average of a very large number of possible neural networks, and we only have to train one network that we cripple here and there to obtain that. This procedure without dropout would normally take years and similarly exorbitant timeframes to compute, and would also raise all kinds of pesky problems we really don't want to deal with. To exercise some modesty, let's say that if we are struggling with overfitting, we could do a lot worse than using dropout. It indeed teaches slacking students how to do their homework properly. Please keep in mind that using dropout also leads to longer training times. My experience has been between two to ten times, but of course it heavily depends on other external factors. So it is indeed true that dropout is slow compared to training one network, but it is blazing fast at what it actually approximates, which is training an exponential number of models. I think dropout is one of the greatest examples of the beauty and the perils of research, where sometimes the most counterintuitive ideas give us the best results. Thanks for watching and for your generous support and I'll see you next time.
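For concreteness, here is a minimal sketch of (inverted) dropout on one layer's activations: during training, each unit is disabled with probability p, and the survivors are scaled up so the expected activation stays the same; at test time the layer is left untouched, which approximates averaging the exponentially many "crippled" subnetworks described above. The layer sizes and keep probability are arbitrary example values.

```python
# Sketch: inverted dropout applied to a layer of activations.
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, p=0.5, training=True):
    if not training:
        return activations                     # test time: use the full network
    mask = rng.random(activations.shape) >= p  # keep each unit with prob 1 - p
    return activations * mask / (1.0 - p)      # rescale to preserve the mean

h = np.ones((4, 8))           # pretend these are hidden-layer activations
print(dropout(h, p=0.5))      # roughly half the units zeroed, the rest doubled
```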
[{"start": 0.0, "end": 5.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zonai Fahir, a quick recap for the"}, {"start": 5.5600000000000005, "end": 9.200000000000001, "text": " Fellow Scholars out there who missed some of our earlier episodes."}, {"start": 9.200000000000001, "end": 14.18, "text": " A neural network is a machine learning technique that was inspired by the human brain."}, {"start": 14.18, "end": 19.36, "text": " It is not a brain simulation by any stretch of the imagination, but it was inspired by"}, {"start": 19.36, "end": 21.62, "text": " the inner workings of the human brain."}, {"start": 21.62, "end": 27.04, "text": " We can train it on input and output pairs like images and descriptions whether the images"}, {"start": 27.04, "end": 29.12, "text": " depict a mug or a bus."}, {"start": 29.12, "end": 34.18, "text": " The goal is that after training, we would give unknown images to the network and expect"}, {"start": 34.18, "end": 38.1, "text": " it to recognize whether there is a mug or a bus on them."}, {"start": 38.1, "end": 42.94, "text": " It may happen that during training, it seems that the neural network is doing quite well,"}, {"start": 42.94, "end": 48.620000000000005, "text": " but when we provide the unknown images, it falters and almost never gets the answer right."}, {"start": 48.620000000000005, "end": 51.84, "text": " This is the problem of overfitting and intuitively."}, {"start": 51.84, "end": 57.1, "text": " It is a bit like students who are not preparing for an exam by obtaining useful knowledge,"}, {"start": 57.1, "end": 62.04, "text": " but students who prepare by memorizing answers from the textbook instead."}, {"start": 62.04, "end": 65.3, "text": " No wonder their results will be rubbish on a really exam."}, {"start": 65.3, "end": 70.58, "text": " But no worries because we have dropout which is a spectacular way of creating diligent"}, {"start": 70.58, "end": 71.58, "text": " students."}, {"start": 71.58, "end": 75.68, "text": " This is a technique where we create a network where each of the neurons have a chance"}, {"start": 75.68, "end": 81.92, "text": " to be activated or disabled, a network that is filled with unreliable units."}, {"start": 81.92, "end": 84.62, "text": " And I really want you to think about this."}, {"start": 84.62, "end": 89.7, "text": " If we could have a system with perfectly reliable units, we should probably never go for one"}, {"start": 89.7, "end": 92.7, "text": " that is built from less reliable units instead."}, {"start": 92.7, "end": 98.24000000000001, "text": " What is even more, these piece of work proposes that we should cripple our systems and seemingly"}, {"start": 98.24000000000001, "end": 100.58000000000001, "text": " make them worse on purpose."}, {"start": 100.58000000000001, "end": 102.26, "text": " This sounds like a travesty."}, {"start": 102.26, "end": 105.64, "text": " Why would anyone want to try anything like this?"}, {"start": 105.64, "end": 111.26, "text": " And what is really amazing is that these unreliable units can potentially build a much more"}, {"start": 111.26, "end": 114.58000000000001, "text": " useful system that is less prone to overfitting."}, {"start": 114.58, "end": 119.28, "text": " If we want to win competitions, we have to train many models and average them."}, {"start": 119.28, "end": 123.64, "text": " As we have seen with the Netflix prize winning algorithm in an earlier episode."}, {"start": 123.64, "end": 128.5, "text": " It also relates back 
to the committee of doctors example that is usually more useful than"}, {"start": 128.5, "end": 130.3, "text": " just asking one doctor."}, {"start": 130.3, "end": 135.52, "text": " And the absolutely amazing thing is that this is exactly what dropout gives us."}, {"start": 135.52, "end": 140.94, "text": " It gives the average of a very large number of possible neural networks and we only have"}, {"start": 140.94, "end": 145.22, "text": " to train one network that we cripple here and there to obtain that."}, {"start": 145.22, "end": 150.26, "text": " This procedure without dropout would normally take years and such exorbitant timeframes to"}, {"start": 150.26, "end": 155.02, "text": " compute and would also raise all kinds of pesky problems we really don't want to deal"}, {"start": 155.02, "end": 156.02, "text": " with."}, {"start": 156.02, "end": 160.18, "text": " To engage a modesty, let's say that if we are struggling with overfitting, we could"}, {"start": 160.18, "end": 162.34, "text": " do a lot worse than using dropout."}, {"start": 162.34, "end": 166.74, "text": " It indeed teaches slacking students how to do their homework properly."}, {"start": 166.74, "end": 170.98000000000002, "text": " This keeps in mind that using dropout also leads to longer training times."}, {"start": 170.98000000000002, "end": 175.86, "text": " My experience has been between two to ten X, but of course it heavily depends on other"}, {"start": 175.86, "end": 177.54000000000002, "text": " external factors."}, {"start": 177.54000000000002, "end": 184.10000000000002, "text": " So it is indeed true that dropout is slow compared to training one network, but it is blazing"}, {"start": 184.10000000000002, "end": 190.14000000000001, "text": " fast at what it actually approximates which is training an exponential number of models."}, {"start": 190.14000000000001, "end": 195.54000000000002, "text": " I think dropout is one of the greatest examples of the beauty and the perils of research,"}, {"start": 195.54, "end": 200.26, "text": " where sometimes the most counterintuitive ideas give us the best results."}, {"start": 200.26, "end": 228.7, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=nfPBT71xYVQ
Narrow Band Liquid Simulations | Two Minute Papers #61
We continue our journey in the land of fluid simulations and discuss a really cool FLIP-based technique that uses both particles and grids to create very high quality footage at a much more reasonable cost than previous works. ____________________ The paper "Narrow Band FLIP for Liquid Simulations" is available here: https://wwwcg.in.tum.de/research/research/publications/2016/narrow-band-flip-for-liquid-simulations.html Yearning for more fluids? :) A Two Minute Papers playlist of fluid and cloth simulation-related topics is available here: https://www.youtube.com/playlist?list=PLujxSBD-JXgnnd16wIjedAcvfQcLw0IJI WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was taken from the corresponding paper. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Our endeavors in creating amazingly detailed fluid simulations are often hamstrung by the fact that we need to simulate the motion of tens of millions of particles. Needless to say, this means excruciatingly long computation times and large memory consumption. This piece of work tries to alleviate the problem by confining the usage of particles to a narrow band close to the liquid surface, and thus decimating the number of particles used in the simulation. The rest of the simulation is computed on a very coarse grid, where we compute quantities of the fluid, like velocity and pressure, in grid points, and instead of computing them everywhere, we try to guess what is happening between these grid points. The drawback of this is that we may miss a lot of details because of it, and the brilliant part of this new technique is that we only use the cheap, sparse grid where there is not a lot happening, and use the expensive particles only near the surface, where there are a lot of details we can capture well. The FLIP term that you see in the video means fluid implicit particle, a popular way of simulating fluids that uses both grids and particles. In this scene, the old method uses 24 million particles, while the new technique uses only 1 million and creates closely matching results. You can see a lot of excess particles in the footage with the classical simulation technique, while the other version shows the proposed new, more efficient algorithm. Creating such a technique is anything but trivial. Unless special measures are taken, the simulation may have robustness issues, which means that there are situations where it does not produce a sensible result. This is demonstrated in a few examples where, with the naive version of the technique, a piece of fluid never ever comes to rest, or it may exhibit behaviors that are clearly unstable. It also takes approximately half as much time to run the simulation and uses half as much memory, which is a huge relief for visual effects artists. I don't know about you, Fellow Scholars, but I see a flood of amazing fluid papers coming in the near future, and I'm having quite a bit of trouble containing my excitement. Exciting times are ahead indeed. Thanks for watching and for your generous support and I'll see you next time.
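To give a feel for the narrow-band idea, here is a toy sketch: given a signed distance to the liquid surface (negative inside the fluid), we keep the expensive particles only in a thin band around the surface and leave the deep interior to the coarse grid. The spherical liquid blob, the band width, and the particle counts are all invented for the example; a real FLIP solver would also reseed and delete particles as the band moves.

```python
# Sketch: keeping particles only in a narrow band around the liquid surface.
import numpy as np

def sphere_sdf(points, center=(0.5, 0.5, 0.5), radius=0.4):
    """Signed distance to a spherical blob of liquid (negative = inside)."""
    return np.linalg.norm(points - np.array(center), axis=1) - radius

rng = np.random.default_rng(7)
particles = rng.random((100_000, 3))   # candidate particle positions in [0,1]^3
band_width = 0.05                      # how thick the particle band is

phi = sphere_sdf(particles)
inside = phi < 0.0                     # particles that are in the liquid at all
near_surface = phi > -band_width       # within the band below the surface
keep = inside & near_surface           # only these stay as particles

print(f"kept {keep.sum()} of {inside.sum()} liquid particles")
```

The ratio printed at the end is the point of the paper: most liquid particles sit deep inside the volume, where the cheap grid is good enough, so discarding them saves both time and memory.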
[{"start": 0.0, "end": 5.16, "text": " Dear Fellow Scholars, this is two-minute papers with Karoizona Ifehir."}, {"start": 5.16, "end": 10.88, "text": " Our endeavors in creating amazingly detailed fluid simulations is often hamstrung by the"}, {"start": 10.88, "end": 15.44, "text": " fact that we need to simulate the motion of tens of millions of particles."}, {"start": 15.44, "end": 21.28, "text": " Needless to say, this means excruciatingly long computation times and large memory consumption."}, {"start": 21.28, "end": 26.12, "text": " This piece of work tries to alleviate the problem by confining the usage of particles to"}, {"start": 26.12, "end": 31.84, "text": " a narrow band close to the liquid surface and thus decimating the number of particles"}, {"start": 31.84, "end": 33.56, "text": " used in the simulation."}, {"start": 33.56, "end": 38.8, "text": " The rest of the simulation is computed on a very coarse grid where we compute quantities"}, {"start": 38.8, "end": 44.400000000000006, "text": " of the fluid like velocity and pressure in grid points and instead of computing them"}, {"start": 44.400000000000006, "end": 48.92, "text": " everywhere we try to guess what is happening between these grid points."}, {"start": 48.92, "end": 53.620000000000005, "text": " The drawback of this is that we may miss a lot of details because of that and the"}, {"start": 53.62, "end": 59.22, "text": " brilliant part of this new technique is that we only use a cheap sparse grid where there"}, {"start": 59.22, "end": 64.62, "text": " is not a lot of things happening and use the expensive particles only near the surface"}, {"start": 64.62, "end": 68.02, "text": " where there are a lot of details we can capture well."}, {"start": 68.02, "end": 73.86, "text": " The flip term that you see in the video means fluid implicit particle, a popular way of"}, {"start": 73.86, "end": 77.74, "text": " simulating fluids that uses both grids and particles."}, {"start": 77.74, "end": 83.78, "text": " In this scene, the old method uses 24 million particles while the new technique uses only"}, {"start": 83.78, "end": 87.66, "text": " 1 million and creates closely matching results."}, {"start": 87.66, "end": 92.61999999999999, "text": " You can see a lot of excess particles in the footage with the classical simulation technique"}, {"start": 92.61999999999999, "end": 97.74, "text": " and the phoenix looking version is the proposed new, more efficient algorithm."}, {"start": 97.74, "end": 100.5, "text": " Creating such a technique is anything but trivial."}, {"start": 100.5, "end": 105.38, "text": " Unless special measures are taken, the simulation may have robustness issues which means that"}, {"start": 105.38, "end": 109.69999999999999, "text": " there are situations where it does not produce a sensible result."}, {"start": 109.69999999999999, "end": 114.17999999999999, "text": " This is demonstrated in a few examples where with the naive version of the technique, a piece"}, {"start": 114.17999999999999, "end": 120.78, "text": " of fluid never ever comes to rest or it may exhibit behaviors that are clearly unstable."}, {"start": 120.78, "end": 125.47999999999999, "text": " It also takes approximately half as much time to run the simulation and uses half as"}, {"start": 125.47999999999999, "end": 129.34, "text": " much memory which is a huge relief for visual effects artists."}, {"start": 129.34, "end": 134.98, "text": " I don't know about you fellow scholars but I see a flood of amazing fluid papers coming"}, {"start": 
134.98, "end": 140.06, "text": " in the near future and I'm having quite a bit of trouble containing my excitement."}, {"start": 140.06, "end": 141.73999999999998, "text": " Exciting times are ahead indeed."}, {"start": 141.74, "end": 171.3, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=OV3Xcv42JSw
No Such Thing As Artificial Intelligence | Two Minute Papers #60
What is happening with the neural networks in this video? You'll find the answers in these videos: 1. How Does Deep Learning Work? - https://www.youtube.com/watch?v=He4t7Zekob0&index=5&list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2 2. Overfitting and Regularization For Deep Learning - https://www.youtube.com/watch?v=6aF9sJrzxaM&index=18&list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2 In this episode, we discuss the perils of debating whether different existing techniques can be deemed artificial intelligence or not. ____________________ And here is a full playlist of our videos related to machine learning and deep learning: https://www.youtube.com/watch?v=V1eYniJ0Rnk&list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2 Tensorflow playground: http://playground.tensorflow.org/ The A* algorithm: https://en.wikipedia.org/wiki/A*_search_algorithm A neat demo application that runs in your browser: http://www.briangrinstead.com/blog/astar-search-algorithm-in-javascript Link to the experiment designed by arthomas: https://www.reddit.com/r/MachineLearning/comments/4eila2/tensorflow_playground/d20noqu WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by jjmusgrove (CC BY 2.0) - https://flic.kr/p/ewPWSk Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Whether a technique can be deemed artificial intelligence or not is a question that I would like to see exiled from future debates and argumentations. Of course, anyone may take part in any debate of their liking. I would, however, like to point out the futility of such endeavors. Let me explain why. Ever heard a parent and a son having an argument about whether the son is an adult or not? You are not an adult because adults don't behave like this, and arguments like that. The argument is not really about whether a person is an adult, but about the very definition of an adult. Do we define an adult as someone who has common sense and behaves responsibly, or is it enough to be 18 or 21 years old to be an adult? Once we decide which definition we go for, the scaffolding for the entire argument crumbles, because it is built upon a term whose definition is not agreed upon. I feel that we have the same situation with artificial intelligence in many debates. The definition of artificial intelligence, or at least one possible definition, is the following: artificial intelligence is the intelligence exhibited by machines or software. It is a bit of a cop-out, so we have to go and check the definition of intelligence. There are multiple definitions, but for the sake of argument, we are going to let this one slip. One possible definition of intelligence is the ability to learn or understand things, or to deal with new or difficult situations. Now, this sentence is teeming with ill-defined terms, such as learn, understand things, deal with new situations, difficult situations. So if we have a shaky definition of artificial intelligence, it is quite possibly pointless to argue whether self-driving cars can be deemed artificially intelligent or not. Imagine two physicists arguing whether a material is ferromagnetic, while neither of them has the slightest idea of what magnetism means. If we look at it like this, it is very easy to see the futility of such arguments. If we had definitions in physics as poorly crafted as the ones we have for intelligence, magnetism would be defined as stuff pulling on other stuff. This is the first part of the argument. The second part is that artificial intelligence is imagined to be a mystical thing that only exists in the future, or it may exist in the present, but it has to be shrouded in mystery. Let me give you an example. The A-star algorithm used to be called AI, and was, and still is, widely taught in AI courses at many universities. A-star is used in many path-finding situations where we seek to go from A to B on a map in the presence of possible obstacles. It is widely used in robotics and computer games. Nowadays, calling a path-finding algorithm AI is simply preposterous. It is a simple, well-understood technique that does something we are used to. Imagine someone waving their GPS device, claiming that there is AI in there. But back then, when it was new, hazy, and poorly understood, we put it in a drawer with the label AI on it. As soon as people start to understand it, they pull it out from this drawer and dismissively claim: well, this is not AI, it's just a graph algorithm. Graphs are not AI, that's just mathematics. It is important to note that none of the techniques that we see today are mysterious in any sense. The entirety of deep learning and everything else is a series of carefully prescribed mathematical operations. Let me briefly restate the two arguments.
Arguments about AI are not about the algorithms they seem to be discussing, but about the very definition of AI, which is ill-defined at best. And AI is imagined to be a mystical thing that only exists in the future, or it may exist in the present, but it has to be in some way shrouded in mystery. The good news is that using this knowledge, we can easily defuse such futile arguments. If someone says that deep learning is not artificial intelligence because all it does is matrix algebra, we can ask: OK, what is your definition of artificial intelligence? If this person defines AI as a sentient, learning being akin to humans, then we have immediately arrived at the conclusion that deep learning is not AI. Let us not fool ourselves into thinking that we are arguing about things when we are simply arguing about definitions. As soon as the definition is agreed upon, the conclusion emerges effortlessly. Thanks for watching and for your generous support, and I'll see you next time.
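Since this episode uses A-star as its running example, here is a compact sketch of the classic textbook algorithm: finding a shortest path from A to B on a grid map with obstacles. This is the standard algorithm, not anything from a specific paper, and the grid below is invented.

```python
# Sketch: A* shortest-path search on a small grid with obstacles.
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle. Returns path length or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start)]          # entries are (f = g + h, g, cell)
    best_g = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                if g + 1 < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = g + 1
                    heapq.heappush(frontier, (g + 1 + h((r, c)), g + 1, (r, c)))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # -> 8 steps around the wall
```

The heuristic is what separates A-star from plain Dijkstra: it steers the search toward the goal, which is exactly the kind of well-understood machinery that once lived in the drawer labeled AI.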
[{"start": 0.0, "end": 4.38, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 4.38, "end": 13.56, "text": " Whether a technique can be deemed as artificial intelligence or not is a question that I would like to see exiled from future debates and argumentations."}, {"start": 13.56, "end": 17.22, "text": " Of course, anyone may take part in any debate of their liking."}, {"start": 17.22, "end": 21.22, "text": " I would, however, like to point out the futility of such endeavors."}, {"start": 21.22, "end": 22.42, "text": " Let me explain why."}, {"start": 22.42, "end": 27.86, "text": " Ever heard a parent and a son having an argument whether the son is an adult or not?"}, {"start": 27.86, "end": 32.66, "text": " You are not an adult because adults don't behave like this, and arguments like that."}, {"start": 32.66, "end": 39.26, "text": " The argument is not really about whether a person is an adult, but it is about the very definition of an adult."}, {"start": 39.26, "end": 44.46, "text": " Do we define an adult as someone who has common sense and behaves responsibly,"}, {"start": 44.46, "end": 48.86, "text": " or is it enough to be of 18 or 21 years old to be an adult?"}, {"start": 48.86, "end": 54.56, "text": " If we decide which definition we go for, the scaffolding for the entire argument crumbles,"}, {"start": 54.56, "end": 58.96, "text": " because it is built upon a term for which the definition is not agreed upon."}, {"start": 58.96, "end": 63.760000000000005, "text": " I feel that we have it the same with artificial intelligence in many debates."}, {"start": 63.760000000000005, "end": 69.36, "text": " The definition of artificial intelligence or at least one possible definition is the following."}, {"start": 69.36, "end": 74.46000000000001, "text": " Artificial intelligence is the intelligence exhibited by machines or software."}, {"start": 74.46000000000001, "end": 79.16, "text": " It is a bit of a cop-out, so we have to go and check the definition of intelligence."}, {"start": 79.16, "end": 83.96000000000001, "text": " There are multiple definitions, but for the sake of argument, we are going to let this one slip."}, {"start": 83.96, "end": 89.55999999999999, "text": " One possible definition for intelligence is the ability to learn or understand things"}, {"start": 89.55999999999999, "end": 92.55999999999999, "text": " or to deal with new or difficult situations."}, {"start": 92.55999999999999, "end": 99.16, "text": " Now, this sentence is teeming with ill-defined terms, such as learn, understand things,"}, {"start": 99.16, "end": 102.86, "text": " deal with new situations, difficult situations."}, {"start": 102.86, "end": 106.36, "text": " So if we have a shaky definition of artificial intelligence,"}, {"start": 106.36, "end": 113.25999999999999, "text": " it is quite possibly pointless to argue whether self-driving cars can be deemed artificially intelligent or not."}, {"start": 113.26, "end": 118.06, "text": " Imagine two physicists arguing whether a material is ferromagnetic,"}, {"start": 118.06, "end": 121.86, "text": " but none of them has the slightest idea of what magnetism means."}, {"start": 121.86, "end": 126.86, "text": " If we look at it like this, it is very easy to see the futility of such arguments."}, {"start": 126.86, "end": 131.86, "text": " If we had as poorly crafted definitions in physics, as we have for intelligence,"}, {"start": 131.86, "end": 136.66, "text": " magnetism would be defined as stuff pulling on other 
stuff."}, {"start": 136.66, "end": 138.76, "text": " This is the first part of the argument."}, {"start": 138.76, "end": 143.95999999999998, "text": " The second part is that artificial intelligence is imagined to be a mystical thing"}, {"start": 143.95999999999998, "end": 147.85999999999999, "text": " that only exists in the future, or it may exist in the present,"}, {"start": 147.85999999999999, "end": 150.06, "text": " but it has to be shrouded in mystery."}, {"start": 150.06, "end": 151.56, "text": " Let me give you an example."}, {"start": 151.56, "end": 160.06, "text": " The A-star algorithm used to be called AI, and was, and still is, widely taught in AI courses at many universities."}, {"start": 160.06, "end": 166.06, "text": " A-star is used in many path-finding situations where we seek to go from A to B on a map"}, {"start": 166.06, "end": 168.26, "text": " in the presence of possible obstacles."}, {"start": 168.26, "end": 171.56, "text": " It is widely used in robotics and computer games."}, {"start": 171.56, "end": 176.56, "text": " Nowadays, calling a path-finding algorithm AI is simply preposterous."}, {"start": 176.56, "end": 180.76, "text": " It is a simple, well-understood technique that does something we are used to."}, {"start": 180.76, "end": 185.76, "text": " Imagine someone waving their GPS device, claiming that there is AI in there."}, {"start": 185.76, "end": 189.45999999999998, "text": " But back then, when it was new, hazy, and poorly understood,"}, {"start": 189.45999999999998, "end": 192.45999999999998, "text": " we put it in a drawer with the label AI on it."}, {"start": 192.46, "end": 197.96, "text": " As soon as people start to understand it, they pull it out from this drawer and discussively claim,"}, {"start": 197.96, "end": 201.26000000000002, "text": " well, this is not AI, it's just a graph algorithm."}, {"start": 201.26000000000002, "end": 204.26000000000002, "text": " Graphs are not AI, that's just mathematics."}, {"start": 204.26000000000002, "end": 209.76000000000002, "text": " It is important to note that none of the techniques that we see today are mysterious in any sense."}, {"start": 209.76000000000002, "end": 216.26000000000002, "text": " The entirety of deep learning and everything else is a series of carefully prescribed mathematical operations."}, {"start": 216.26000000000002, "end": 219.76000000000002, "text": " I will try to briefly assess the two arguments."}, {"start": 219.76, "end": 224.26, "text": " Arguments about AI are not about the algorithms they seem to be discussing,"}, {"start": 224.26, "end": 228.95999999999998, "text": " but about the very definition of AI, which is ill-defined at best."}, {"start": 228.95999999999998, "end": 233.06, "text": " AI is imagined to be a mystical thing that only exists in the future,"}, {"start": 233.06, "end": 238.85999999999999, "text": " or it may exist in the present, but it has to be in some way shrouded in mystery."}, {"start": 238.85999999999999, "end": 244.26, "text": " The good news is that using this knowledge, we can easily diffuse such futile arguments."}, {"start": 244.26, "end": 247.85999999999999, "text": " If someone says that deep learning is not artificial intelligence"}, {"start": 247.86, "end": 251.26000000000002, "text": " because all it does is matrix algebra, we can ask,"}, {"start": 251.26000000000002, "end": 254.86, "text": " OK, what is your definition of artificial intelligence?"}, {"start": 254.86, "end": 259.96000000000004, "text": " If this person defines AI as being a 
sentient learning being akin to humans,"}, {"start": 259.96000000000004, "end": 264.86, "text": " then we have immediately arrived to a conclusion that deep learning is not AI."}, {"start": 264.86, "end": 268.86, "text": " Let us not fool ourselves by thinking that we are arguing about things"}, {"start": 268.86, "end": 272.16, "text": " when we are simply arguing about definitions."}, {"start": 272.16, "end": 276.96000000000004, "text": " As soon as the definition is agreed upon, the conclusion emerges effortlessly."}, {"start": 276.96, "end": 281.96, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=aKSILzbAqJs
10 Even Cooler Deep Learning Applications | Two Minute Papers #59
For the third time, we present another round of incredible deep learning applications! ___________________ 1. Geolocation - http://arxiv.org/abs/1602.05314 2. Super-resolution - http://arxiv.org/pdf/1511.04491v1.pdf 3. Neural Network visualizer - http://experiments.mostafa.io/public/ffbpann/ 4. Recurrent neural network for sentence completion: http://www.cs.toronto.edu/~ilya/fourth.cgi 5. Human-in-the-loop and Doctor-in-the-loop: http://link.springer.com/article/10.1007/s40708-016-0036-4 6. Emoji suggestions for images - https://emojini.curalate.com/ 7. MNIST handwritten numbers in HD - http://blog.otoro.net/2016/04/01/generating-large-images-from-latent-vectors/ 8. Deep Learning solution to the Netflix prize -https://karthkk.wordpress.com/2016/03/22/deep-learning-solution-for-netflix-prize/ 9. Curating works of art - http://cs231n.stanford.edu/reports2016/210_Report.pdf 10. More robust neural networks against adversarial examples - http://cs231n.stanford.edu/reports2016/103_Report.pdf The Keras library: http://keras.io/ https://github.com/fchollet/keras Recommended for you: Two Minute Papers Machine Learning Playlist - https://www.youtube.com/playlist?list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2 WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image background was created by Steven S. (CC BY 2.0) - https://flic.kr/p/sdUQ7 Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is the third episode in our series of deep learning applications. I have mixed in some recurrent neural networks for your, and honestly, my own, enjoyment. I think the series of applications shows what an amazingly versatile tool we have been blessed with in deep learning. And I know you Fellow Scholars have been quite excited for this one. Let's get started. This piece of work accomplishes geolocation for photographs. This means that we toss in a photograph and it tells us exactly where it was taken. Super resolution is a hot topic where we show a coarse, heavily pixelated image to a system, and it tries to guess what it depicts and increase the resolution of it. If we have a tool that accomplishes this, we can zoom into images way more than the number of megapixels of our camera would allow. It is really cool to see that deep learning has also made an appearance in this subfield. This handy little tool visualizes the learning process in a neural network with the classical forward and backward propagation steps. This recurrent neural network continues our sentences in a way that kind of makes sense. Well, kind of. Human-in-the-loop techniques seek to create a bidirectional connection between humans and machine learning techniques so they can both learn from each other. I think it definitely is an interesting direction. At first, DeepMind's AlphaGo also learned the basics of Go from amateurs and then took off like a hermit to learn on its own and came back with guns blazing. We usually have at least one remarkably rigorous and scientific application of deep learning in every collection episode. This time, I'd like to show you this marvelous little program that suggests emojis for your images. It does so well that nowadays, even computer algorithms are more hip than I am. This application is akin to the previous one we have seen about super resolution. Here, we see beautiful high resolution images of digits created from these tiny, extremely pixelated inputs. Netflix is an online video streaming service. The Netflix prize was a competition where participants wrote programs to estimate how a user would enjoy a given set of movies based on this user's previous preferences. The competition was won by an ensemble algorithm, which is essentially a mixture of many existing techniques. And by many, I mean 107. It is not a surprise that some contemptuously use the term abomination instead of ensemble because of their egregious complexity. In this blog post, a simple neural network implementation is described that achieves quite decent results, and the core of the solution fits in no more than 20 lines of code. The code has been written using Keras, which also happens to be one of my favorite deep learning libraries. Wholeheartedly recommended for everyone who likes to code, and a big shout out to François, the developer of the mentioned library. Convolutional neural networks have also started curating works of art by assigning a score to how aesthetic they are. Oh, sorry Leonardo. Earlier, we talked about adversarial techniques that add a very specific type of noise to images to completely destroy the accuracy of previously existing image classification programs. The arms race has officially started and new techniques are popping up to prevent this behavior. If you find some novel applications of deep learning, just send a link my way in the comments section.
Thanks for watching and for your generous support, and I'll see you next time.
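The Netflix-prize solution mentioned in the transcript above is only linked, not shown, so the following is a minimal sketch of the kind of ~20-line Keras model the blog post describes: an embedding-based rating predictor. All layer sizes, hyperparameters, and the toy data are illustrative assumptions, not the blog author's exact code.

```python
# Hypothetical sketch of an embedding-based rating predictor in Keras,
# in the spirit of the Netflix-prize blog post; dimensions and data are
# made up for illustration.
import numpy as np
from keras.layers import Input, Embedding, Flatten, Dot
from keras.models import Model

n_users, n_movies, latent_dim = 10000, 5000, 32

user_in = Input(shape=(1,), dtype="int32")
movie_in = Input(shape=(1,), dtype="int32")

# Map each user and each movie to a small learned latent vector.
u = Flatten()(Embedding(n_users, latent_dim)(user_in))
m = Flatten()(Embedding(n_movies, latent_dim)(movie_in))

# The predicted rating is the dot product of the two latent vectors.
rating = Dot(axes=1)([u, m])

model = Model(inputs=[user_in, movie_in], outputs=rating)
model.compile(optimizer="adam", loss="mse")

# Train on (user id, movie id) -> rating triplets (random toy data here).
users = np.random.randint(0, n_users, size=(1024, 1))
movies = np.random.randint(0, n_movies, size=(1024, 1))
ratings = np.random.uniform(1, 5, size=(1024, 1))
model.fit([users, movies], ratings, epochs=1, batch_size=64)
```

The appeal of this formulation is exactly what the transcript notes: a matrix-factorization-style recommender reduces to a handful of lines once embeddings and a dot product do the heavy lifting.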
[{"start": 0.0, "end": 4.44, "text": " Dear Fellow Scholars, this is Two Minute Papers with Kato Ejona Ifehir."}, {"start": 4.44, "end": 8.6, "text": " This is the third episode in our series of deep learning applications."}, {"start": 8.6, "end": 13.76, "text": " I have mixed in some recurrent neural networks for your, and honestly, my own enjoyment."}, {"start": 13.76, "end": 20.8, "text": " I think the series of applications shows what an amazingly versatile tool we have been blessed with with deep learning."}, {"start": 20.8, "end": 24.48, "text": " And I know you Fellow Scholars have been quite excited for this one."}, {"start": 24.48, "end": 25.44, "text": " Let's get started."}, {"start": 25.44, "end": 29.28, "text": " This piece of work accomplishes geolocation for photographs."}, {"start": 29.28, "end": 34.0, "text": " This means that we toss in a photograph and it tells us exactly where it was made."}, {"start": 37.120000000000005, "end": 42.88, "text": " Super Resolution is a hot topic where we show a course, heavily pixelated image to a system,"}, {"start": 42.88, "end": 47.28, "text": " and it tries to guess what it depicts and increase the resolution of it."}, {"start": 47.28, "end": 54.52, "text": " If we have a tool that accomplishes this, we consume into images way more than the number of megapixels of our camera would allow."}, {"start": 54.52, "end": 59.400000000000006, "text": " It is really cool to see that deep learning has also made an appearance in this subfield."}, {"start": 59.400000000000006, "end": 69.0, "text": " This handy little tool visualizes the learning process in a neural network with the classical forward and backward propagation steps."}, {"start": 69.0, "end": 80.60000000000001, "text": " This recurrent neural network continues our sentences in a way that kind of makes sense."}, {"start": 80.60000000000001, "end": 82.76, "text": " Well, kind of."}, {"start": 82.76, "end": 89.32000000000001, "text": " Human in the loop techniques seek to create a bidirectional connection between humans and machine learning techniques"}, {"start": 89.32000000000001, "end": 91.32000000000001, "text": " so they can both learn from each other."}, {"start": 91.32000000000001, "end": 94.44, "text": " I think it definitely is an interesting direction."}, {"start": 94.44, "end": 105.24000000000001, "text": " At first, DeepMind's AlphaGo also learned the basics of Go from Amateurs and then took off like a hermit to learn on its own and came back with gunsplazing."}, {"start": 105.24000000000001, "end": 111.88000000000001, "text": " We usually have at least one remarkably rigorous and scientific application of deep learning in every collection episode."}, {"start": 111.88, "end": 118.03999999999999, "text": " This time, I'd like to show you this marvelous little program that suggests emojis for your images."}, {"start": 118.03999999999999, "end": 123.24, "text": " It does so well that nowadays, even computer algorithms are more hip than I am."}, {"start": 127.72, "end": 131.96, "text": " This application is akin to the previous one we have seen about super resolution."}, {"start": 131.96, "end": 143.08, "text": " Here, we see beautiful high resolution images of digits created from these tiny, extremely pixelated inputs."}, {"start": 143.08, "end": 146.12, "text": " Netflix is an online video streaming service."}, {"start": 146.12, "end": 155.96, "text": " The Netflix prize was a competition where participants wrote programs to estimate how a user would enjoy a given set of 
movies based on this user's previous preferences."}, {"start": 155.96, "end": 162.84, "text": " The competition was won by an ensemble algorithm which is essentially a mixture of many existing techniques."}, {"start": 162.84, "end": 165.72, "text": " And by many, I mean 107."}, {"start": 165.72, "end": 173.4, "text": " It is not a surprise that some contemptuously use the term abomination instead of ensemble because of their egregious complexity."}, {"start": 173.4, "end": 183.88, "text": " In this blog post, a simple neural network implementation is described that achieves quite decent results and the core of the solution fits in no more than 20 lines of code."}, {"start": 183.88, "end": 189.88, "text": " The code has been written using Keras, which also happens to be one of my favorite deep learning libraries."}, {"start": 189.88, "end": 197.0, "text": " Wholeheartedly recommended for everyone who likes to code and the big shout out to Fran\u00e7ois, the developer of the mentioned library."}, {"start": 197.0, "end": 204.44, "text": " Convolution on your own networks also have started curating works of art by assigning a score to how aesthetic they are."}, {"start": 207.96, "end": 209.72, "text": " Oh, sorry Leonardo."}, {"start": 209.72, "end": 221.72, "text": " Earlier, we talked about adversarial techniques that add a very specific type of noise to images to completely destroy the accuracy of previously existing image classification programs."}, {"start": 221.72, "end": 227.72, "text": " The arms race has officially started and new techniques are popping up to prevent this behavior."}, {"start": 227.72, "end": 231.72, "text": " If you find some novel applications of deep learning, just send a link my way in the comments section."}, {"start": 231.72, "end": 241.72, "text": " Thanks for watching, get for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=4Y7RIAgOpn0
The Dunning-Kruger Effect | Two Minute Papers #58
The Dunning-Kruger effect describes a phenomenon where incompetent people assess their skills way higher than it is. We will talk about this phenomenon, its connection to impostor syndrome, and most importantly, why we should not use this knowledge to condemn others but to improve ourselves. __________________________ The paper "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments" is available here. It is a really easy and enjoyable read, make sure you give it a shot! http://www.nottingham.ac.uk/~ntzcl1/literature/metacognition/kruger.pdf Recommended for you: What Is Impostor Syndrome? - https://www.youtube.com/watch?v=YPpIWQnufu8 WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The background of the thumbnail image was created by samuelrodgers752 (CC BY 2.0) - https://flic.kr/p/rjdQyY Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This episode is about a classic, the Dunning-Kruger effect. I wonder how we could go on for almost 60 Two Minute Papers episodes without the Dunning-Kruger effect. Here's the experiment. Participants were tested in different subjects. Their test scores were computed and at the same time, without seeing the scores, they were asked to assess their perceived performance. The tested subjects were humor, grammar, and logic. Things, of course, everyone excels at. Or do they? And here is the historic plot with the results. Such a simple plot, yet it tells us so much about people. From left to right, people were ordered by their test score, as you see with the dotted line. And the other line with the squares shows their perceived score, what they thought their scores would be. People from the bottom 10%, the absolute worst performers, were convinced that they were well above the average. Competent people, on the other hand, seemed to underestimate their skills. Because the test was easy for them, they assumed that it was easy for everyone else. The extreme to the left is often referred to as the Dunning-Kruger effect. And the extreme to the right, maybe if you imagine the lines extending way, way further to the right, is a common example of impostor syndrome. By the way, everyone thinks they are above average, which is a neat mathematical anomaly. We would expect that people who perform poorly should know that they perform poorly, and people who are doing great should know that they are doing great. And one of the conclusions is that this is not the case, not the case at all. The fact that incompetent people are completely ignorant about their own ineptitude at first sounds like such a surprising conclusion. But if we think about it, we find there is nothing surprising about this. The more skilled we are, the more adept we are at estimating our skill level. By gaining more competence, incompetent people also obtain the skill to recognize their own shortcomings. A fish in the world of poker means an inadequate player who is to be exploited by the more experienced. Someone asked how to recognize who the fish is at the poker table. The answer is a classic. If you don't know who the fish is at the table, it is you. The knowledge of the Dunning-Kruger effect is such a tempting tool to condemn other people for their ineptitude. But please try to resist the temptation. Remember, it doesn't help. That's the point of the paper. It is a much more effective tool for our own development if we attempt to use it on ourselves. Does it hurt a bit more? Oh yes, it does. The results of this paper solidify the argument that we need to be very vigilant about our own shortcomings. This knowledge endows you with a shield against ignorance. Use it wisely. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is 2 Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.48, "end": 8.32, "text": " This episode is about a classic, the Dunning Kruger Effect."}, {"start": 8.32, "end": 14.48, "text": " I wonder how we could go on for almost 62 Minute Papers episodes without the Dunning Kruger Effect."}, {"start": 14.48, "end": 15.72, "text": " Here's the experiment."}, {"start": 15.72, "end": 18.6, "text": " Participants were tested in different subjects."}, {"start": 18.6, "end": 26.240000000000002, "text": " Their test scores were computed and at the same time, without the scores, they were asked to assess their perceived performance."}, {"start": 26.24, "end": 29.919999999999998, "text": " The test subjects were humor, grammar, and logic."}, {"start": 29.919999999999998, "end": 32.72, "text": " Things, of course, everyone excels at."}, {"start": 32.72, "end": 33.92, "text": " Or do they?"}, {"start": 33.92, "end": 37.36, "text": " And here is the historic plot with the results."}, {"start": 37.36, "end": 41.599999999999994, "text": " Such a simple plot yet it tells us so much about people."}, {"start": 41.599999999999994, "end": 46.72, "text": " From left to right, people were ordered by their test score, as you see with the dotted line."}, {"start": 46.72, "end": 52.8, "text": " And the other line with the squares shows their perceived score what they thought their scores would be."}, {"start": 52.8, "end": 60.08, "text": " People from the bottom 10% the absolute worst performers are convinced that they were well above the average."}, {"start": 60.08, "end": 64.72, "text": " Competent people, on the other hand, seemed to underestimate their skills."}, {"start": 64.72, "end": 69.67999999999999, "text": " Because the test was easy for them, they assumed that it was easy for everyone else."}, {"start": 69.67999999999999, "end": 74.56, "text": " The extreme to the left is often referred to as the Dunning Kruger Effect."}, {"start": 74.56, "end": 80.8, "text": " And the extreme to the right, maybe if you imagine the lines extending way, way further to the right,"}, {"start": 80.8, "end": 83.52, "text": " is a common example of imposter syndrome."}, {"start": 83.52, "end": 88.39999999999999, "text": " By the way, everyone thinks they are above average, which is a neat mathematical anomaly."}, {"start": 88.39999999999999, "end": 93.36, "text": " We would expect that people who perform poorly should know that they perform poorly,"}, {"start": 93.36, "end": 96.72, "text": " and people who are doing great should know that they are doing great."}, {"start": 96.72, "end": 101.6, "text": " And one of the conclusions is that this is not the case, not the case at all."}, {"start": 101.6, "end": 106.72, "text": " The fact that incompetent people are completely ignorant about their own inodexity"}, {"start": 106.72, "end": 110.24, "text": " at first sounds like such a surprising conclusion."}, {"start": 110.24, "end": 114.47999999999999, "text": " But if we think about it, we find there is nothing surprising about this."}, {"start": 114.47999999999999, "end": 118.8, "text": " The more skilled we are, the more adapt we are at estimating our skill level."}, {"start": 118.8, "end": 125.11999999999999, "text": " By gaining more competence, incompetent people also obtain the skill to recognize their own shortcomings."}, {"start": 125.11999999999999, "end": 131.84, "text": " A fish in the world of poker means an inadequate player who is to be 
extorted by the more experienced."}, {"start": 131.84, "end": 135.92, "text": " Someone asked how to recognize who the fish is at the poker table."}, {"start": 135.92, "end": 137.68, "text": " The answer is a classic."}, {"start": 137.68, "end": 141.20000000000002, "text": " If you don't know who the fish is at the table, it is you."}, {"start": 141.20000000000002, "end": 146.56, "text": " The knowledge of the Dunning Kruger effect is such a tempting tool to condemn other people"}, {"start": 146.56, "end": 148.0, "text": " for their inodexity."}, {"start": 148.0, "end": 150.56, "text": " But please try to resist the temptation."}, {"start": 150.56, "end": 152.32, "text": " Remember, it doesn't help."}, {"start": 152.32, "end": 154.0, "text": " That's the point of the paper."}, {"start": 154.0, "end": 159.92000000000002, "text": " It is a much more effective tool for our own development if we attempt to use it on ourselves."}, {"start": 159.92000000000002, "end": 161.68, "text": " Does it hurt a bit more?"}, {"start": 161.68, "end": 163.12, "text": " Oh yes, it does."}, {"start": 163.12, "end": 169.6, "text": " The results of this paper solidify the argument that we need to be very vigilant about our own shortcomings."}, {"start": 169.6, "end": 173.76, "text": " This knowledge endows you with a shield against ignorance."}, {"start": 173.76, "end": 175.04, "text": " Use it wisely."}, {"start": 175.04, "end": 202.48, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=jMZqxfTls-0
From Doodles To Paintings With Deep Learning | Two Minute Papers #57
This technique uses deep learning to create beautiful paintings from terribly drawn sketches. The results look so great that many people called this work out to be an April Fools' day joke! _________________________________ The paper 'Semantic Style Transfer and Turning Two-Bit Doodles into Fine Artwork' and its implementation is available here: https://github.com/alexjc/neural-doodle http://arxiv.org/pdf/1603.01768v1.pdf A playlist with our neural network and deep learning-related videos: https://www.youtube.com/playlist?list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2 WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: John Stockton Slow Drag by Chris Zabriskie is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Source: http://chriszabriskie.com/uvp/ Artist: http://chriszabriskie.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is the first paper in Two Minute Papers that showcased such stunning results that people called it out to be an April Fools' Day joke. It is based on a deep neural network and the concept is very simple. You choose an artistic style, you make a terrible drawing, and it creates a beautiful painting out of it. If you would like to know more about deep neural networks, we've had a ton of fun with them in previous episodes; I've put a link to them in the description box. I expect an onslaught of magnificent results with this technique to appear very soon. It is important to note that one needs to create a semantic map for each artistic style, so that the algorithm learns the correspondence between the painting and the semantics. However, these maps have to be created only once and can be used forever, so I expect quite a few of them to show up in the near future, which greatly simplifies the workflow. After that, these annotations can be changed at will, you press a button, and the rest is history. Woah! Wicked results. Some of these neural art results look so good that we should be creating a new class of Turing tests for paintings. This means that we are presented with two images, one of them is painted by a human and one by a computer. We need to click the ones that we think were painted by a human. Damn, curses. As always, these techniques are new and heavily experimental, and this usually means that they take quite a bit of time to compute. The presentation video you have seen was sped up considerably. If these works are worthy of further attention, and I definitely think they are, then we can expect great strides towards interactivity in follow-up papers very soon. I am really looking forward to it, and we fellow scholars will have a ton of fun with these tools in the future. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karoji Zorna Ifehir."}, {"start": 4.28, "end": 9.92, "text": " This is the first paper in Two Minute Papers that showcased so stunning results that people"}, {"start": 9.92, "end": 12.88, "text": " called it out to be an April Fool's Day joke."}, {"start": 12.88, "end": 16.8, "text": " It is based on a deep neural network and the concept is very simple."}, {"start": 16.8, "end": 22.48, "text": " You choose an artistic style, you make a terrible drawing, and it creates a beautiful painting"}, {"start": 22.48, "end": 23.48, "text": " out of it."}, {"start": 23.48, "end": 27.32, "text": " If you would like to know more about deep neural networks, we've had a ton of fun with"}, {"start": 27.32, "end": 31.12, "text": " them in previous episodes, I've put a link to them in the description box."}, {"start": 31.12, "end": 36.4, "text": " I expect an onslaught of magnificent results with this technique to appear very soon."}, {"start": 36.4, "end": 41.6, "text": " It is important to note that one needs to create a semantic map for each artistic style, so"}, {"start": 41.6, "end": 46.64, "text": " that the algorithm learns the correspondence between the painting and the semantics."}, {"start": 46.64, "end": 51.92, "text": " However, these maps have to be created only once and can be used forever, so I expect"}, {"start": 51.92, "end": 56.760000000000005, "text": " quite a few of them to show up in the near future, which greatly simplifies the workflow."}, {"start": 56.76, "end": 61.96, "text": " After that, these annotations can be changed at will, you press a button, and the rest is"}, {"start": 61.96, "end": 82.0, "text": " history."}, {"start": 82.0, "end": 83.0, "text": " Woah!"}, {"start": 83.0, "end": 84.88, "text": " Wicked results."}, {"start": 84.88, "end": 89.72, "text": " Some of these neural art results look so good that we should be creating a new class"}, {"start": 89.72, "end": 91.92, "text": " of touring tests for paintings."}, {"start": 91.92, "end": 96.72, "text": " This means that we are presented with two images, one of them is painted by a human and"}, {"start": 96.72, "end": 98.47999999999999, "text": " one by a computer."}, {"start": 98.47999999999999, "end": 103.12, "text": " We need to click the ones that we think were painted by a human."}, {"start": 103.12, "end": 106.84, "text": " Damn, curses."}, {"start": 106.84, "end": 111.19999999999999, "text": " As always, these techniques are new and heavily experimental, and this usually means that"}, {"start": 111.19999999999999, "end": 113.67999999999999, "text": " they take quite a bit of time to compute."}, {"start": 113.68, "end": 117.16000000000001, "text": " The presentation video you have seen was sped up considerably."}, {"start": 117.16000000000001, "end": 121.96000000000001, "text": " If these works are worthy of further attention, and I definitely think they are, then we can"}, {"start": 121.96000000000001, "end": 126.60000000000001, "text": " expect great strides towards interactivity in follow-up papers very soon."}, {"start": 126.60000000000001, "end": 131.36, "text": " I am really looking forward to it, and we fellow scholars will have a ton of fun with"}, {"start": 131.36, "end": 132.8, "text": " these tools in the future."}, {"start": 132.8, "end": 143.76000000000002, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=6aF9sJrzxaM
Overfitting and Regularization For Deep Learning | Two Minute Papers #56
In this episode, we discuss the bane of many machine learning algorithms - overfitting. It is also explained why it is an undesirable way to learn and how to combat it via L1 and L2 regularization. _____________________________ The paper "Regression Shrinkage and Selection via the Lasso" is available here: http://statweb.stanford.edu/~tibs/lasso/lasso.pdf Andrej Karpathy's excellent lecture notes on neural networks and regularization: http://cs231n.github.io/neural-networks-1/ The neural network demo is available here: http://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html A playlist with our neural network and deep learning-related videos: https://www.youtube.com/playlist?list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2 WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image background was created by Tony Hisgett (CC BY 2.0). It has undergone recoloring. - https://flic.kr/p/5dkbNV Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In machine learning, we often encounter classification problems where we have to decide whether an image depicts a dog or a cat. We'll have an intuitive but simplified example where we imagine that the red dots represent dogs and the green ones are the cats. We first start learning on a training set, which means that we get a bunch of images that are points on this plane. And from these points we try to paint the parts of the plane red and green. This way we can specify which regions correspond to the concept of dogs and cats. And after that we'll get new points that we don't know anything about, and we'll ask the algorithm, for instance a neural network, to classify these unknown images so it tells us whether it thinks that it is a dog or a cat. This is what we call a test set. We have had lots of fun with neural networks and deep learning in previous Two Minute Papers episodes. I've put some links in the description box, check them out. In this example it is reasonably easy to tell that the reds roughly correspond to the left and the greens to the right. However, if we just jumped on the deep learning hype train and don't know much about neural networks, we may get extremely poor results like this. What we see here is the problem of overfitting. Overfitting means that our beloved neural network does not learn the concept of dogs or cats; it just tries to adapt as much as possible to the training set. As an intuition, think of poorly made real life exams. We have a textbook where we can practice with exercises, so this textbook is our training set. Our test set is the exam. The goal is to learn from the textbook and obtain knowledge that proves to be useful at the exam. Overfitting means that we simply memorize parts of the textbook instead of obtaining real knowledge. If you are on page 5 and you see a bus, then the right answer is B. Memorizing patterns like this is not real learning. The worst case is if the exam questions are also from the textbook, because then you can get a great grade just by overfitting. So this kind of overfitting has been a big looming problem in many education systems. Now the question is, which kind of neural network do we want? Something that works like a lazy student, or one that can learn many complicated concepts? If we are aiming for the latter, we have to combat overfitting, which is the bane of so many machine learning techniques. Now there are several ways of doing that, but today we're going to talk about one possible solution by the name of L1 and L2 regularization. The intuition of our problem is that the deeper and bigger neural networks we train, the more potent they are, but at the same time they get more prone to overfitting. The smarter the student is, the more patterns he can memorize. One solution is to hurl a smaller neural network at the problem. If this smaller version is powerful enough to take on the problem, we're good. A student who cannot afford to memorize all the examples is forced to learn the actual underlying concepts. However, it is very possible that this smaller neural network is not powerful enough to solve the problem, so we need to use a bigger one. But bigger network, more overfitting. Damn, so what do we do? And here is where L1 and L2 regularization comes to save the day. It is a tool to favor simpler models instead of complicated ones. The idea is that the simpler the model is, the better it transfers the textbook knowledge to the exam, and that's exactly what we're looking for.
Here you see images of the same network with different regularization strengths. The first one barely helps at all, and as you can see, overfitting is still rampant. With a stronger L2 regularization, you see that the model is simplified substantially and is likely to perform better on the exam. However, if we add more regularization, it might be that we simplify the model too much, and it is almost the same as a smaller neural network that is not powerful enough to grasp the underlying concepts of the exam. Keep your neural network as simple as possible, but not simpler. One has to find the right balance, which is an art by itself, and it shows that training deep neural networks takes a bit of expertise. It is more than just a plug and play tool that solves every problem by magic. If you want to play with the neural networks you've seen in this video, just click on the link in the description box. I hope you'll have at least as much fun with it as I had. Thanks for watching and for your generous support, and I'll see you next time.
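The demo linked in the description is interactive, but the same experiment is easy to reproduce in code. Below is a minimal sketch, assuming Keras: the small 2-D point classifier from the transcript built twice, once without and once with an L2 weight penalty. The layer sizes and the 0.01 regularization strength are illustrative choices, not values taken from the video.

```python
# Minimal sketch: the same 2-D point classifier with and without L2
# regularization. The penalty adds lambda * sum(w^2) to the loss, which
# pushes the network toward simpler decision boundaries.
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2

def make_classifier(l2_strength=0.0):
    model = Sequential([
        Dense(64, activation="relu", input_shape=(2,),
              kernel_regularizer=l2(l2_strength)),
        Dense(64, activation="relu",
              kernel_regularizer=l2(l2_strength)),
        Dense(1, activation="sigmoid"),  # probability that the point is a cat
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

overfit_prone = make_classifier(0.0)   # free to memorize the training set
regularized = make_classifier(0.01)    # penalized for large weights
```

Sweeping the strength from 0.0 upward reproduces the progression described above: no penalty overfits, a moderate penalty smooths the decision boundary, and too large a penalty underfits like a too-small network.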
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.32, "end": 12.0, "text": " In machine learning, we often encounter classification problems where we have to decide whether an image depicts a dog or a cat."}, {"start": 12.0, "end": 19.68, "text": " We'll have an intuitive but simplified example where we imagine that the red dots represent dogs and the green ones are the cats."}, {"start": 19.68, "end": 26.080000000000002, "text": " We first start learning on a training set which means that we get a bunch of images that are points on this plane."}, {"start": 26.08, "end": 30.72, "text": " And from these points we try to paint the parts of the plane red and green."}, {"start": 30.72, "end": 35.92, "text": " This way we can specify which regions correspond to the concept of dogs and cats."}, {"start": 35.92, "end": 43.2, "text": " And after that we'll get new points that we don't know anything about and we'll ask the algorithm for instance a neural network"}, {"start": 43.2, "end": 48.72, "text": " to classify these unknown images so it tells us whether it thinks that it is a dog or a cat."}, {"start": 48.72, "end": 50.8, "text": " This is what we call a test set."}, {"start": 50.8, "end": 55.839999999999996, "text": " We have had lots of fun with neural networks and deep learning in previous two-minute papers episodes."}, {"start": 55.839999999999996, "end": 58.879999999999995, "text": " I've put some links in the description box, check them out."}, {"start": 58.879999999999995, "end": 66.32, "text": " In this example it is reasonably easy to tell that the reds roughly correspond to the left and the greens to the right."}, {"start": 66.32, "end": 74.64, "text": " However, if we just jumped on the deep learning hype train and don't know much about neural networks we may get extremely poor results like this."}, {"start": 74.64, "end": 77.52, "text": " What we see here is the problem of overfitting."}, {"start": 77.52, "end": 86.96, "text": " Overfitting means that our beloved neural network does not learn the concept of dogs or cats it just tries to adapt as much as possible to the training set."}, {"start": 86.96, "end": 90.64, "text": " As an intuition think of poorly made real life exams."}, {"start": 90.64, "end": 96.08, "text": " We have a textbook where we can practice with exercises so this textbook is our training set."}, {"start": 96.08, "end": 98.16, "text": " Our test set is the exam."}, {"start": 98.16, "end": 103.75999999999999, "text": " The goal is to learn from the textbook and obtain knowledge that proves to be useful at the exam."}, {"start": 103.76, "end": 109.52000000000001, "text": " Overfitting means that we simply memorize parts of the textbook instead of obtaining real knowledge."}, {"start": 109.52000000000001, "end": 114.0, "text": " If you are on page 5 and you see a bus then the right answer is B."}, {"start": 114.0, "end": 117.52000000000001, "text": " Memorizing patterns like this is not real learning."}, {"start": 117.52000000000001, "end": 124.96000000000001, "text": " The worst cases if the exam questions are also from the textbook because you can get a great grade just by overfitting."}, {"start": 124.96000000000001, "end": 129.92000000000002, "text": " So this kind of overfitting has been a big looming problem in many education systems."}, {"start": 129.92000000000002, "end": 133.44, "text": " Now the question is which kind of neural network do we want?"}, {"start": 133.44, "end": 
138.96, "text": " Something that works like a lazy student or one that can learn many complicated concepts."}, {"start": 138.96, "end": 146.0, "text": " If we are aiming for the latter we have to combat overfitting which is the bane of so many machine learning techniques."}, {"start": 146.0, "end": 154.72, "text": " Now there's several ways of doing that but today we're going to talk about one possible solution by the name L1 and L2 regularization."}, {"start": 154.72, "end": 162.48, "text": " The intuition of our problem is that the deeper and bigger neural networks we train the more potent they are but at the same time"}, {"start": 162.48, "end": 168.39999999999998, "text": " they get more prone to overfitting. The smarter the student is the more patterns he can memorize."}, {"start": 168.39999999999998, "end": 172.07999999999998, "text": " One solution is to hurl a smaller neural network at the problem."}, {"start": 172.07999999999998, "end": 175.76, "text": " If this smaller version is powerful enough to take on the problem we're good."}, {"start": 175.76, "end": 182.39999999999998, "text": " A student who cannot afford to memorize all the examples is forced to learn the actual underlying concepts."}, {"start": 182.39999999999998, "end": 189.92, "text": " However it is very possible that this smaller neural network is not powerful enough to solve the problem so we need to use a bigger one."}, {"start": 189.92, "end": 195.11999999999998, "text": " But bigger network more overfitting. Damn so what do we do?"}, {"start": 195.11999999999998, "end": 199.2, "text": " And here is where L1 and L2 regularization comes to save the day."}, {"start": 199.2, "end": 203.83999999999997, "text": " It is a tool to favor simpler models instead of complicated ones."}, {"start": 203.83999999999997, "end": 212.07999999999998, "text": " The idea is that the simpler the model is the better it transfers the textbook knowledge to the exam and that's exactly what we're looking for."}, {"start": 212.07999999999998, "end": 216.32, "text": " Here you see images of the same network with different regularization strengths."}, {"start": 216.32, "end": 221.2, "text": " The first one barely helps anything and as you can see overfitting is still rampant."}, {"start": 221.2, "end": 228.95999999999998, "text": " With a stronger L2 regularization you see that the model is simplified substantially and is likely to perform better on the exam."}, {"start": 228.95999999999998, "end": 242.0, "text": " However if we add more regularization it might be that we simplify the model too much and it is almost the same as a smaller neural network that is not powerful enough to grasp the underlying concepts of the exam."}, {"start": 242.0, "end": 246.07999999999998, "text": " Keep your neural network as simple as possible but not simpler."}, {"start": 246.08, "end": 253.52, "text": " One has to find the right balance which is an art by itself and it shows that training deep neural networks takes a bit of expertise."}, {"start": 253.52, "end": 257.92, "text": " It is more than just a plug and play tool that solves every problem by magic."}, {"start": 257.92, "end": 263.04, "text": " If you want to play with the neural networks you've seen in this video just click on the link in the description box."}, {"start": 263.04, "end": 266.08000000000004, "text": " I hope you'll have at least as much fun with it as I had."}, {"start": 266.08, "end": 276.08, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Ee9vF5eChhU
How The Witness Teaches Scientific Thinking
The Witness is an amazing computer game that helps you improve your scientific thinking and reasoning skills. We discuss exactly how, and its relation to a gorgeous book by the name The Art of Learning from Josh Waitzkin. ______________________________ The Witness (the keyword entered to Google to avoid linking to a seller): https://goo.gl/QOhEIV The Art of Learning (again, Google): https://goo.gl/5wdAx7 WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background is a courtesy of Jonathan Blow (we have applied edits to it). Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, I'd like to draw your attention to The Witness, which is a computer game that helps you improve your scientific thinking and reasoning skills. It is a first-person game and if you like solving puzzles, well, it has them in abundance. Before we start, I'd like to note for transparency that if you're worried about spoilers, everything shown in this video is taken from the first 5 to 10 minutes of the game. Also, this is not a sponsored video, I'm not paid for any of this, I'm just absolutely spellbound by this game. After completing it, I felt that I learned so much and felt so much smarter, which is something that I did not feel playing through any other game. I remember that I felt sorry after beating every single puzzle, as there was less and less left of the game. I just absolutely love how this game approaches the concept of teaching. It never says a word; it guides you by creating appropriate puzzles for you to overcome. It teaches without teaching. First, we're shown that we have to reach the exit from a starting point. However, there are many different ways of doing that, so to keep it challenging, we have to put constraints on the problem. It seems that these white and black blocks have to be in separate regions. We now have seemingly the same puzzle, but we realize that by changing something as simple as the position of the exit, our previous plans fall apart and we often have to think outside of the box to overcome these new challenges. I see an empty block over there; who knows how it will react if I lock it together with a white block. There's an opportunity to try it; for educational purposes I'll ignore it for now. Another slightly changed puzzle, another time when we need a complete redesign. This series of puzzles beautifully displays how the slightest change to a problem can require us to completely rethink our approach. In this last puzzle, we need to use these empty blocks to create one big region with all the white markers. We had the opportunity to learn about the empty blocks before, but even if we missed it, the game makes sure we understand this concept by the end of this challenge. Beautiful design. These were, of course, tutorial-level educational puzzles from the very start of the game. Later, the search space will be too large to just guess randomly, so we have to systematically put constraints on the problem and eliminate a large number of solutions. If we do this more and more, we inevitably end up with the right solution. This is exactly the kind of thinking that is required for scientific breakthroughs. Some of the teachings really remind me of the book by the name The Art of Learning by Josh Waitzkin. He discussed that improvement in almost any field comes by challenging and overcoming dogma. Dogma means a set of principles that are so deeply entrenched in our minds that we take them for granted and are unable to challenge them. For example, in karate, fighting on the ground is deemed to be not honorable, therefore they don't practice it. A wrestler would exploit this weakness, use it to his advantage, and smash them into the ground. The author of the book defeated his opponents in chess and martial arts by not only finding out their dogma, but inserting dogma into their heads and using it against them. One great example of that in martial arts is when you are facing a weaker opponent who pushes you. If you didn't want to, you could remain unwavering, but what is even better is pretending to move when he pushes you.
After a few times, your opponent will be undoubtedly sure that when he pushes you, you will move. And in the decisive moment when he pushes, you will remain still and in balance and counterattack. Your opponent will be swiftly defeated and won't have any idea what really happened. This book is such a great read. Check it out. The Witness often tries to do the same. It inserts dogma in your head without you noticing it and will ask you to break through it to overcome new challenges. And it does it in the most beautiful way I have seen. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 9.4, "text": " Dear Fellow Scholars, I'd like to draw your attention to the witness, which is a computer game that helps you improve your scientific thinking and reasoning skills."}, {"start": 9.4, "end": 14.9, "text": " It is a first-person game and if you like solving puzzles, well, it has them in abundance."}, {"start": 14.9, "end": 23.900000000000002, "text": " Before we start, I'd like to note for transparency that if you're worried about spoilers, everything shown in this video is taken from the first 5 to 10 minutes of the game."}, {"start": 23.9, "end": 30.599999999999998, "text": " Also, this is not a sponsored video, I'm not paid for any of this, I'm just absolutely spellbound by this game."}, {"start": 30.599999999999998, "end": 38.9, "text": " After completing it, I felt that I learned so much and felt so much smarter, which is something that I did not feel playing through any other game."}, {"start": 38.9, "end": 45.099999999999994, "text": " I remember that I felt sorry after beating every single puzzle, as there was less and less left from the game."}, {"start": 45.099999999999994, "end": 49.5, "text": " I just absolutely love how this game approaches the concept of teaching."}, {"start": 49.5, "end": 57.7, "text": " It never says a word, it guides you by creating appropriate puzzles for you to overcome. It teaches without teaching."}, {"start": 57.7, "end": 61.9, "text": " First, we're shown that we have to reach the exit from a starting point."}, {"start": 61.9, "end": 68.4, "text": " However, there is many different ways of doing that, so to keep it challenging, we have to put constraints on the problem."}, {"start": 68.4, "end": 85.7, "text": " It seems that these white and black blocks have to be in separate regions."}, {"start": 85.7, "end": 92.7, "text": " We now have seemingly the same puzzle, but we realize that by changing something as simple as the position of the exit,"}, {"start": 92.7, "end": 103.3, "text": " our previous plans fall apart and we often have to think outside of the box to overcome these new challenges."}, {"start": 103.3, "end": 108.80000000000001, "text": " I see an empty block over there, who knows how it will react if I lock it together with a white block."}, {"start": 108.80000000000001, "end": 114.30000000000001, "text": " There's an opportunity to try it, for educational purposes I'll ignore it for now."}, {"start": 114.30000000000001, "end": 119.60000000000001, "text": " Another slightly changed puzzle, another time when we need a complete redesign."}, {"start": 119.6, "end": 128.6, "text": " This series of puzzles beautifully displays how the slightest change to a problem can require us to completely rethink our approach."}, {"start": 128.6, "end": 134.9, "text": " In this last puzzle, we need to use these empty blocks to create one big region with all the white markers."}, {"start": 134.9, "end": 144.4, "text": " We had the opportunity to learn about the empty blocks before, but even if we missed it, the game makes sure we understand this concept by the end of this challenge."}, {"start": 144.4, "end": 146.79999999999998, "text": " Beautiful design."}, {"start": 146.8, "end": 152.10000000000002, "text": " These were, of course, tutorial-level educational puzzles from the very start of the game."}, {"start": 152.10000000000002, "end": 162.10000000000002, "text": " Later, the search space will be too large to just guess randomly, so we have to systematically put constraints on the problem and eliminate a large number 
of solutions."}, {"start": 162.10000000000002, "end": 166.20000000000002, "text": " If we do this more and more, we inevitably end up with the right solution."}, {"start": 166.20000000000002, "end": 171.10000000000002, "text": " This is exactly the kind of thinking that is required for scientific breakthroughs."}, {"start": 171.1, "end": 177.1, "text": " Some of the teachings really remind me of the book by the name The Art of Learning by Josh Wetskin."}, {"start": 177.1, "end": 183.2, "text": " He discussed that improvement in almost any field comes by challenging and overcoming dogma."}, {"start": 183.2, "end": 192.0, "text": " Dogma means a set of principles that are so deeply entrenched in our minds that we take them for granted and are unable to challenge them."}, {"start": 192.0, "end": 199.1, "text": " For example, in Karate, fighting on the ground is deemed to be not honorable, therefore they don't practice it."}, {"start": 199.1, "end": 204.9, "text": " A wrestler would exploit this weakness and use it to his advantage and smash them into the ground."}, {"start": 204.9, "end": 215.4, "text": " The author of the book defeated his opponents in chess and martial arts by not only finding out their dogma, but inserting dogma into their heads and using it against them."}, {"start": 215.4, "end": 220.7, "text": " One great example of that in martial arts is when you are facing a weaker opponent who pushes you."}, {"start": 220.7, "end": 227.79999999999998, "text": " If you didn't want to, you could remain unwavering, but what is even better is pretending to move when he pushes you."}, {"start": 227.8, "end": 233.8, "text": " After a few times, your opponent will be undoubtedly sure that when he pushes you, you will move."}, {"start": 233.8, "end": 239.5, "text": " And in the decisive moment when he pushes, you will remain still and in balance and counterattack."}, {"start": 239.5, "end": 244.20000000000002, "text": " Your opponent will be swiftly defeated and won't have any idea what really happened."}, {"start": 244.20000000000002, "end": 246.8, "text": " This book is such a great read. Check it out."}, {"start": 246.8, "end": 255.60000000000002, "text": " The witness often tries to do the same. It inserts dogma in your head without you noticing it and will ask you to break through it to overcome new challenges."}, {"start": 255.6, "end": 258.8, "text": " And it does it in the most beautiful way I have seen."}, {"start": 258.8, "end": 288.6, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=0Xc9LIb_HTw
Decision Trees and Boosting, XGBoost | Two Minute Papers #55
A decision tree is a great tool to help making good decisions from a huge bunch of data. In this episode, we talk about boosting, a technique to combine a lot of weak decision trees into a strong learning algorithm. Please note that gradient boosting is a broad concept and this is only one possible application of it! __________________________________ Our Patreon page is available here: https://www.patreon.com/TwoMinutePapers If you don't want to spend a dime or you can't afford it, it's completely okay, I'm very happy to have you around! And please, stay with us and let's continue our journey of science together! The paper "Experiments with a new boosting algorithm" is available here: http://www.public.asu.edu/~jye02/CLASSES/Fall-2005/PAPERS/boosting-icml.pdf Another great introduction to tree boosting: http://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. The thumbnail image background was created by John Voo (CC BY 2.0), content-aware filling has been applied - https://flic.kr/p/BLphju Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePap... Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. A decision tree is a great tool to help make good decisions from a huge bunch of data. The classical example is when we have a bunch of information about people and would like to find out whether they like computer games or not. Note that this is a toy example for educational purposes. We can build the following tree: if the person in question is over 15, the person is less likely to like computer games. If the subject is under 15 and is a male, he is quite likely to like video games; if she is female, less likely. Note that the output of the tree can be a decision like yes or no, but in our case we will assign positive and negative scores instead. You'll see in a minute why that's beneficial. But this tree was just one possible way of approaching the problem, and admittedly not a spectacular one. A different decision tree could simply ask whether this person uses a computer daily or not. Individually, these trees are quite shallow, and we call them weak learners. This term means that individually they are quite inaccurate, but slightly better than random guessing. And now comes the cool part. The concept of tree boosting means that we take many weak learners and combine them into a strong learner. Using the mentioned scoring system instead of hard decisions also makes this process easy and straightforward to implement. Boosting is similar to what we do with illnesses: if a doctor says that I have a rare condition, I will make sure to ask at least a few more doctors to arrive at a more educated decision about my health. The cool thing is that the individual trees don't have to be great. If they give you decisions that are just a bit better than random guessing, using a lot of them will produce strong learning results. If we go back to the analogy with doctors: if the individual doctors know just enough not to kill the patient, a well-chosen committee will be able to put together an accurate diagnosis for the patient. An even cooler, adaptive version of this technique brings in new doctors to the committee according to the deficiencies of the existing members. One other huge advantage of boosted trees over neural networks is that we can actually see why and how the computer arrives at a decision. This is a remarkably simple method that leads to results of very respectable accuracy. A well-known software library called XGBoost has been responsible for winning a staggering number of machine learning competitions on Kaggle. I'd like to take a second and thank you Fellow Scholars for your amazing support on Patreon and for making two-minute papers possible. Creating these episodes is a lot of hard work, and your support has been invaluable so far. Thank you so much! We used to have three categories for supporters. Undergrad students get access to a Patreon-only activity feed and get to know well in advance the topics of the new episodes. PhD students, who are addicted to two-minute papers, get a chance to see every episode up to 24 hours in advance. Talking about committees in this episode: full professors form a committee to decide the order of the next few episodes. And now, we introduce a new category, the Nobel Laureate. Supporters in this category can literally become part of two-minute papers and will be listed in the video description box in the upcoming episodes. Plus, all of the above. Thanks for watching and for your generous support, and I'll see you next time!
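To make the scoring idea above concrete, here is a minimal sketch of boosting with depth-1 trees. This is an illustration, not the exact algorithm from the paper or from XGBoost: the toy dataset, the number of rounds and the learning rate are all made up, and the loop shown is plain least-squares gradient boosting.

```python
# A minimal sketch of boosting: many shallow "weak learners" whose
# scores are summed into one strong learner. Toy data and parameters
# are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeRegressor

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
y_signed = 2.0 * y - 1.0            # map labels {0, 1} to scores {-1, +1}

trees, learning_rate = [], 0.1
scores = np.zeros(len(y))           # additive score, starts at zero

for _ in range(100):
    residual = y_signed - scores    # what the committee still gets wrong
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)
    trees.append(stump)
    scores += learning_rate * stump.predict(X)

accuracy = ((scores > 0).astype(int) == y).mean()
print(f"training accuracy with {len(trees)} weak learners: {accuracy:.3f}")
```

Each stump on its own is barely better than random guessing; it is the summed committee that becomes accurate. In practice, one would reach for xgboost.XGBClassifier, which layers regularization and clever split finding on top of this idea.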
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Ifahir."}, {"start": 4.4, "end": 9.52, "text": " A decision tree is a great tool to help making good decisions from a huge bunch of data."}, {"start": 9.52, "end": 13.44, "text": " The classical example is when we have a bunch of information about people"}, {"start": 13.44, "end": 17.44, "text": " and would like to find out whether they like computer games or not."}, {"start": 17.44, "end": 20.96, "text": " Note that this is a toy example for educational purposes."}, {"start": 20.96, "end": 22.8, "text": " We can build the following tree."}, {"start": 22.8, "end": 28.72, "text": " If the person's age in question is over 15, the person is less likely to like computer games."}, {"start": 28.72, "end": 33.839999999999996, "text": " If the subject is under 15 and is a male, he is quite likely to like video games"}, {"start": 33.839999999999996, "end": 36.64, "text": " if she's female than less likely."}, {"start": 36.64, "end": 40.64, "text": " Note that the output of the tree can be a decision like yes or no,"}, {"start": 40.64, "end": 45.12, "text": " but in our case we will assign positive and negative scores instead."}, {"start": 45.12, "end": 47.36, "text": " You'll see in a minute why that's beneficial."}, {"start": 47.36, "end": 51.120000000000005, "text": " But this tree was just one possible way of approaching the problem"}, {"start": 51.120000000000005, "end": 53.84, "text": " and admittedly not a spectacular one."}, {"start": 53.84, "end": 57.84, "text": " A different decision tree could be simply asking whether this person"}, {"start": 57.84, "end": 59.84, "text": " uses a computer daily or not."}, {"start": 60.64, "end": 65.52000000000001, "text": " Individually these trees are quite shallow and we call them weak learners."}, {"start": 65.52000000000001, "end": 72.16, "text": " This term means that individually they are quite inaccurate but slightly better than random guessing."}, {"start": 72.16, "end": 74.08000000000001, "text": " And now comes the cool part."}, {"start": 74.08000000000001, "end": 78.4, "text": " The concept of tree boosting means that we take many weak learners"}, {"start": 78.4, "end": 81.36, "text": " and combine them into a strong learner."}, {"start": 81.36, "end": 86.16, "text": " Using the mentioned scoring system instead of decisions also makes this process easy"}, {"start": 86.16, "end": 87.92, "text": " and straightforward to implement."}, {"start": 88.47999999999999, "end": 91.36, "text": " Boosting is similar to what we do with illnesses."}, {"start": 91.36, "end": 94.0, "text": " If a doctor says that I have a rare condition,"}, {"start": 94.0, "end": 97.2, "text": " I will make sure and ask at least a few more doctors"}, {"start": 97.2, "end": 100.24, "text": " to make a more educated decision about my health."}, {"start": 100.24, "end": 104.0, "text": " The cool thing is that the individual trees don't have to be great."}, {"start": 104.0, "end": 108.24, "text": " If they give you decisions that are just a bit better than random guessing,"}, {"start": 108.24, "end": 111.75999999999999, "text": " using a lot of them will produce strong learning results."}, {"start": 111.75999999999999, "end": 114.32, "text": " If we go back to the analogy with doctors,"}, {"start": 114.32, "end": 119.11999999999999, "text": " then if the individual doctors know just enough not to kill the patient,"}, {"start": 119.11999999999999, "end": 124.72, "text": " a well 
chosen committee will be able to put together an accurate diagnosis for the patient."}, {"start": 124.72, "end": 129.68, "text": " An even cooler, adaptive version of this technique brings in new doctors to the committee,"}, {"start": 129.68, "end": 132.64, "text": " according to the deficiencies of the existing members."}, {"start": 133.35999999999999, "end": 136.88, "text": " One other huge advantage of boosted trees over neural networks"}, {"start": 136.88, "end": 141.44, "text": " is that we actually see why and how the computer arrives to a decision."}, {"start": 141.44, "end": 147.12, "text": " This is a remarkably simple method that leads to results of very respectable accuracy."}, {"start": 147.12, "end": 150.56, "text": " A well-known software library called XG Boost"}, {"start": 150.56, "end": 155.44, "text": " has been responsible for winning a staggering amount of machine learning competitions in Kaggle."}, {"start": 155.44, "end": 158.88, "text": " I'd like to take a second and thank you Fellow Scholars"}, {"start": 158.88, "end": 163.12, "text": " for your amazing support on Patreon and making two-minute papers possible."}, {"start": 163.12, "end": 166.07999999999998, "text": " Creating these episodes is a lot of hard work"}, {"start": 166.07999999999998, "end": 168.64, "text": " and your support has been invaluable so far."}, {"start": 168.64, "end": 170.07999999999998, "text": " Thank you so much!"}, {"start": 170.08, "end": 172.88000000000002, "text": " We used to have three categories for supporters."}, {"start": 172.88000000000002, "end": 176.64000000000001, "text": " Undergrad students get access to a Patreon-only activity feed"}, {"start": 176.64000000000001, "end": 180.24, "text": " and get to know well in advance the topics of the new episodes."}, {"start": 180.24, "end": 187.84, "text": " PhD students who are addicted to two-minute papers get a chance to see every episode up to 24 hours in advance."}, {"start": 187.84, "end": 190.16000000000003, "text": " Talking about committees in this episode,"}, {"start": 190.16000000000003, "end": 195.36, "text": " full professors form a committee to decide the order of the next few episodes."}, {"start": 195.36, "end": 199.60000000000002, "text": " And now, we introduce a new category, the Noble Laureate."}, {"start": 199.6, "end": 204.0, "text": " Supporters in this category can literally become part of two-minute papers"}, {"start": 204.0, "end": 207.68, "text": " and will be listed in the video description box in the upcoming episodes."}, {"start": 207.68, "end": 209.44, "text": " Plus, all of the above."}, {"start": 209.44, "end": 211.84, "text": " Thanks for watching and for your generous support,"}, {"start": 211.84, "end": 238.56, "text": " and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=ZolWxY4f9wc
3D Depth From a Single Photograph | Two Minute Papers #54
This piece of work tries to estimate depth information from an input photograph. This means that it looks at the photo and tries to tell how far away parts of the image are from the camera, and the final goal is that we provide a photograph for which the depth information is completely unknown and we ask the algorithm to provide it for us. _______________________________ The paper "3-D Depth Reconstruction from a Single Still Image" is available here: http://www.cs.cornell.edu/~asaxena/learningdepth/saxena_ijcv07_learningdepth.pdf The source of the shown video at the end: https://www.youtube.com/watch?v=GWWIn29ZV4Q WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image background was created by Willy Verhulst (CC BY 2.0) - https://flic.kr/p/pZj8KD Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This piece of work tries to estimate depth information from an input photograph. This means that it looks at the photo and tries to tell how far away parts of the image are from the camera. An example output looks like this: on the left, there's an input photograph, and on the right, you see a heat map with the true distance information. This is what we are trying to approximate. This means that we collect a lot of indoor and outdoor images with their true depth information and try to learn the correspondence, how they relate to each other. Sidewalks, forests, buildings, you name it. These image and depth pairs can be captured by mounting 3D scanners on this awesome custom-built vehicle. And gentlemen, that is one heck of a way of spending research funds. The final goal is that we provide a photograph for which the depth information is completely unknown and we ask the algorithm to provide it for us. Here you can see some results. The first image is the input photograph. The second shows the true depth information. The third image is the depth information that was created by this technique. And here's a bunch of results for images downloaded from the internet. It probably does at least as well as a human would. Spectacular. This sounds like an easy task for humans and a perilous journey for computers, to say the least. What is quite remarkable is that these relations can be learned by a computer algorithm. What can we use this for? Well, a number of different things, one of which is to create multiple views of this 2D photograph using the guessed depth information. It can also be super helpful in building robots that can wander about reliably with inexpensive consumer cameras mounted on them. Thanks for watching and for your generous support, and I'll see you next time.
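For readers who want to see the shape of this learning setup in code, here is a hedged sketch. The paper itself learns depth from hand-crafted image features with a Markov random field; the stand-in below uses a tiny convolutional network instead, and the random tensors are placeholders for real photograph and depth-map pairs.

```python
# A toy sketch of supervised depth estimation: fit a model on
# (photo, true depth) pairs, then ask it for depth on new photos.
# The architecture and random data are placeholders, not the paper's
# actual MRF-based method.
import torch
import torch.nn as nn

model = nn.Sequential(                 # RGB image in, 1-channel depth map out
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(8, 3, 64, 64)      # stand-in for real photographs
depths = torch.rand(8, 1, 64, 64)      # stand-in for scanner ground truth

for step in range(100):                # minimize squared depth error
    loss = nn.functional.mse_loss(model(images), depths)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```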
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejolene Fehr."}, {"start": 4.28, "end": 9.44, "text": " This piece of work tries to estimate depth information from an input photograph."}, {"start": 9.44, "end": 15.96, "text": " This means that it looks at the photo and tries to tell how far away parts of the image are from the camera."}, {"start": 15.96, "end": 18.04, "text": " An example output looks like this."}, {"start": 18.04, "end": 24.240000000000002, "text": " On the left, there's an input photograph and on the right, you see a heat map with true distance information."}, {"start": 24.240000000000002, "end": 26.32, "text": " This is what we are trying to approximate."}, {"start": 26.32, "end": 32.0, "text": " This means that we collect a lot of indoor and outdoor images with their true depth information"}, {"start": 32.0, "end": 36.16, "text": " and we try to learn the correspondence how they relate to each other."}, {"start": 36.16, "end": 39.56, "text": " Sidewalks, forests, buildings, you name it."}, {"start": 39.56, "end": 46.2, "text": " These images and depth pairs can be captured by mounting 3D scanners on this awesome custom build vehicle."}, {"start": 46.2, "end": 50.8, "text": " And gentlemen, that is one heck of a way of spending research funds."}, {"start": 50.8, "end": 56.44, "text": " The final goal is that we provide a photograph for which the depth information is completely unknown"}, {"start": 56.44, "end": 59.599999999999994, "text": " and we ask the algorithm to provide it for us."}, {"start": 59.599999999999994, "end": 61.239999999999995, "text": " Here you can see some results."}, {"start": 61.239999999999995, "end": 63.44, "text": " The first image is the input photograph."}, {"start": 63.44, "end": 66.47999999999999, "text": " The second shows the true depth information."}, {"start": 66.48, "end": 80.92, "text": " The third image is the depth information that was created by this technique."}, {"start": 80.92, "end": 84.88000000000001, "text": " And here's a bunch of results for images downloaded from the internet."}, {"start": 84.88000000000001, "end": 88.4, "text": " It probably does at least as good as a human would."}, {"start": 88.4, "end": 89.76, "text": " Spectacular."}, {"start": 89.76, "end": 95.84, "text": " This sounds like a sensory or problem for humans and a perilous journey for computers to say the least."}, {"start": 95.84, "end": 101.52000000000001, "text": " What is quite remarkable is that these relations can be learned by a computer algorithm."}, {"start": 101.52000000000001, "end": 102.92, "text": " What can we use this for?"}, {"start": 102.92, "end": 107.80000000000001, "text": " Well, a number of different things, one of which is to create multiple views of this 2D"}, {"start": 107.80000000000001, "end": 110.72, "text": " photograph using the guest depth information."}, {"start": 110.72, "end": 116.2, "text": " It can also be super helpful in building robots that can wander about reliably with inexpensive"}, {"start": 116.2, "end": 118.64, "text": " consumer cameras mounted on them."}, {"start": 118.64, "end": 126.24, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=a-ovvd_ZrmA
How DeepMind's AlphaGo Defeated Lee Sedol | Two Minute Papers #53
This time around, Google DeepMind embarked on a journey to write an algorithm that plays Go. Go is an ancient Chinese board game where the opposing players try to capture each other's stones on the board. Behind the veil of this deceptively simple ruleset lies an enormous layer of depth and complexity. As scientists like to say, the search space of this problem is significantly larger than that of chess. So large that one often has to rely on human intuition to find a suitable next move, therefore it is not surprising that playing Go on a high level is, or maybe was, widely believed to be intractable for machines. The result is Google DeepMind's AlphaGo, the deep learning technique that defeated a professional player and world champion, Lee Sedol. What is also important to note is that the techniques used in this algorithm are general, and can be used for a large number of different tasks. By this, I mean not AlphaGo specifically, but the Monte Carlo Tree Search, the value network and deep neural networks. ______________________ The paper "Mastering the Game of Go with Deep Neural Networks and Tree Search" is available here: https://storage.googleapis.com/deepmind-data/assets/papers/deepmind-mastering-go.pdf http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html A great Go analysis video by Brady Daniels. Make sure to check it out and subscribe if you like what you see there! https://www.youtube.com/watch?v=dOQsYWxMNJQ The mentioned post on the Go reddit: https://www.reddit.com/r/baduk/comments/49y17z/the_true_strength_of_alphago/ Some clarification on what part of the algorithm is specific to Go and how: https://news.ycombinator.com/item?id=11280744 Go board image credits (all CC BY 2.0): Renato Ganoza - https://flic.kr/p/7nX4kK Jaro Larnos - https://flic.kr/p/dDeQU9 Luis de Bethencourt - https://flic.kr/p/4c5RaR WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The background of the thumbnail image is the property of Google DeepMind. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. A few months ago, AlphaGo played and defeated Fan Hui, a two-dan master and European champion player in the game of Go. However, the next opponent, Lee Sedol, is a nine-dan master and world champion player. Just to give an intuition of the difference, Lee Sedol is expected to beat Fan Hui 97 times out of 100 games. Google DeepMind had six months of preparation for this bout. Five matches were played over five days. In my time zone, the match started around 4am, and the results would usually pop up exactly a few minutes after I woke up. It was amazing. I could barely fall asleep. I was so excited for the results. And when I woke up, I kissed my daughter and immediately ran to my computer to see what was going on. Most people were convinced that Lee Sedol was going to beat the machine 5 to 0, and I was stunned to see that AlphaGo triumphed over Lee Sedol in the first match, and then the second, and then the third. Huge respect both for Google DeepMind, for putting together such a spectacular algorithm, and for Lee Sedol, who played extremely well under enormous pressure. He is indeed a true champion. The game of Go has a stupendously large search space that makes it completely impossible to check every move and choose the best one. What is also not often talked about is that processing through many moves is one thing, but judging which move is advantageous and which is not is just as difficult as the search itself. The definition of the best move is not clear-cut by any stretch of the imagination. We also have to look into the future and simulate the moves of the opponent. I think it is easy to see that the difficulty of this problem is completely out of this world. A neural network is a crude approximation of the human brain, just like a stick figure is a crude approximation of a human being. In this work, neural networks are used to reduce the size of the search space, and value networks are used to predict the expected outcome of a move. This value network basically tries to determine who will win if a sequence of moves is made. To defeat AlphaGo or any computer opponent, playing non-traditional moves that it surely hasn't practiced sounds like a great idea. However, there is no database involved per se. This technique simulates the moves until the very end of the game, so non-traditional, weird moves won't throw it off. It is also very important to note that the structure of AlphaGo is not like Deep Blue for chess. Deep Blue was specifically designed to maximize metrics that are likely to lead to victory, such as pawn advantage, king safety, tempo and more. AlphaGo doesn't do any of that. It is a general technique that can learn to solve a large number of different problems. I cannot overstate the significance of this. Almost the entirety of computer science research revolves around creating algorithms that are specifically tailored to one task. Different tasks, different research projects, different algorithms. Imagine how empowering it would be to have a general algorithm that can solve a large number of problems. It's incredible. Just as people who don't speak a word of Chinese can write an artificial intelligence program to recognize handwritten Chinese text, someone who hasn't played more than a few games can write a chess or Go program that is beyond the skill of most professional players. This is a wonderful testament to the power of mathematics and science. It was quite surprising to see that AlphaGo played seemingly suboptimal moves when it was ahead, to reduce the variance and maximize its chance of victory. Take a look at DeepMind's other technique by the name of Deep Q-Learning that plays Space Invaders at a superhuman level. This shot, at first, looks like a blunder, but if you wait it out, you'll see how brilliant it really is. A move that seems like a blunder at the time may be the optimal move in the grand scheme of things. It is not a blunder. It is a move from someone whose brilliance is way beyond the capabilities of even the best human players. There is an excellent analysis of this phenomenon on the Go subreddit. I've put a link in the description box. Check it out. I'd like to emphasize that the technique learns at first by looking at a large number of games by amateurs. But the question is, how can it get beyond the level of amateurs? After looking at these games, it will learn the basics and will play millions of games against itself and learn from them. And to be emphasized: nothing in this algorithm is specific to Go. Nothing. It can be used to solve a number of different problems without significant changes. It would be immensely difficult to overstate the significance of that. Shout-out to Brady Daniels, who has an excellent Go educational channel. He has very fluid, enjoyable, and understandable explanations. Highly recommended. Check it out. There's a link to one of his videos in the description box. It is a possibility that the first Go grandmaster to reach 10 dan may not be a human, but a computer. My mind is officially blown. Insanity. One more cobblestone has been laid on the path to artificial general intelligence. This achievement I find to be of equivalent magnitude to landing on the moon. And this is just the beginning. I can't wait to see this technique being used for research in medicine. Huge respect for Demis Hassabis and Lee Sedol, who were both respectful and humble, both in victory and in defeat. They are true champions of their craft. Thanks so much to DeepMind for creating this rivetingly awesome event. My daughter Yasmin was born one day before this glorious day. What an exciting time to be alive. Thanks for watching and for your generous support, and I'll see you next time.
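The "simulating the moves until the very end of the game" idea mentioned above is the Monte Carlo rollout at the heart of the tree search. Below is a toy sketch of pure Monte Carlo move selection. The Game interface (legal_moves, apply, is_over, winner, current_player) is hypothetical, and real AlphaGo guides these rollouts with its policy and value networks rather than picking moves uniformly at random.

```python
# Toy pure-Monte-Carlo move selection: for each legal move, play many
# random games to the end and keep the move that wins most often.
# The `game` object and its methods are hypothetical placeholders.
import random

def best_move(game, n_rollouts=200):
    move_scores = {}
    for move in game.legal_moves():
        wins = 0
        for _ in range(n_rollouts):
            sim = game.apply(move)                 # state after playing `move`
            while not sim.is_over():               # random playout to the end
                sim = sim.apply(random.choice(sim.legal_moves()))
            wins += sim.winner() == game.current_player()
        move_scores[move] = wins / n_rollouts
    return max(move_scores, key=move_scores.get)   # highest estimated win rate
```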
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.32, "end": 10.72, "text": " A few months ago, AlphaGoal played and defeated Fan Hui, a two-dan master and European champion"}, {"start": 10.72, "end": 12.32, "text": " player in the game of Goal."}, {"start": 12.32, "end": 18.16, "text": " However, the next opponent, Lys Adal, is a nine-dan master and world champion player."}, {"start": 18.16, "end": 24.48, "text": " Just to give an intuition of the difference, Lys Adal is expected to beat Fan Hui 97 times"}, {"start": 24.48, "end": 26.16, "text": " out of 100 games."}, {"start": 26.16, "end": 29.76, "text": " Google DeepMind had six months of preparation for this bout."}, {"start": 29.76, "end": 32.64, "text": " Five matches were played over five days."}, {"start": 32.64, "end": 38.44, "text": " In my time zone, the match started around 4am and the results would usually pop up exactly"}, {"start": 38.44, "end": 40.480000000000004, "text": " a few minutes after I woke up."}, {"start": 40.480000000000004, "end": 41.64, "text": " It was amazing."}, {"start": 41.64, "end": 43.160000000000004, "text": " I could barely fall asleep."}, {"start": 43.160000000000004, "end": 45.24, "text": " I was so excited for the results."}, {"start": 45.24, "end": 49.68000000000001, "text": " And when I woke up, I kissed my daughter and immediately ran to my computer to see what"}, {"start": 49.68000000000001, "end": 50.68000000000001, "text": " was going on."}, {"start": 50.68000000000001, "end": 55.92, "text": " Most people were convinced that Lys Adal was going to beat the machine 5 to 0 and I was"}, {"start": 55.92, "end": 62.480000000000004, "text": " stunned to see that AlphaGoal triumphed over Lys Adal in the first match and then the second"}, {"start": 62.480000000000004, "end": 63.88, "text": " and then the third."}, {"start": 63.88, "end": 69.04, "text": " Huge respect for both Google DeepMind for putting together such a spectacular algorithm"}, {"start": 69.04, "end": 72.84, "text": " and for Lys Adal who played extremely well under enormous pressure."}, {"start": 72.84, "end": 74.84, "text": " He is indeed a true champion."}, {"start": 74.84, "end": 80.04, "text": " The game of Goal has a stupendously large search space that makes it completely impossible"}, {"start": 80.04, "end": 82.68, "text": " to check every move and choose the best."}, {"start": 82.68, "end": 87.4, "text": " What is also not often talked about is that processing through many moves is one thing,"}, {"start": 87.4, "end": 92.72000000000001, "text": " but judging which move is advantageous and which is not is just as difficult as the search"}, {"start": 92.72000000000001, "end": 93.72000000000001, "text": " itself."}, {"start": 93.72000000000001, "end": 98.32000000000001, "text": " The definition of the best move is not clear cut by any stretch of the imagination."}, {"start": 98.32000000000001, "end": 102.48, "text": " We also have to look into the future and simulate the moves of the opponent."}, {"start": 102.48, "end": 108.04, "text": " I think it is easy to see that the difficulty of this problem is completely out of this world."}, {"start": 108.04, "end": 112.84, "text": " A neural network is a crude approximation of the human brain just like a stick figure"}, {"start": 112.84, "end": 115.32000000000001, "text": " is a crude approximation of a human being."}, {"start": 115.32000000000001, "end": 120.72, "text": " In this work, neural networks are 
used to reduce the size of the search space and value networks"}, {"start": 120.72, "end": 124.36000000000001, "text": " are used to predict the expected outcome of a move."}, {"start": 124.36000000000001, "end": 129.84, "text": " This value network basically tries to determine who will win if a sequence of moves is made."}, {"start": 129.84, "end": 134.84, "text": " To defeat AlphaGoal or any computer opponent, playing non-traditional moves that it surely"}, {"start": 134.84, "end": 137.72, "text": " hasn't practiced sounds like a great idea."}, {"start": 137.72, "end": 140.84, "text": " However, there is no database involved per se."}, {"start": 140.84, "end": 145.6, "text": " This technique is simulating the moves until the very end of the game, so non-traditional"}, {"start": 145.6, "end": 148.24, "text": " weird moves won't throw it off."}, {"start": 148.24, "end": 153.12, "text": " It is also very important to note that the structure of AlphaGo is not like deep blue for"}, {"start": 153.12, "end": 154.12, "text": " chess."}, {"start": 154.12, "end": 159.48, "text": " Deep blue was specifically designed to maximize metrics that are likely to lead to victory,"}, {"start": 159.48, "end": 163.84, "text": " such as pawn advantage, king safety, tempo and more."}, {"start": 163.84, "end": 165.68, "text": " AlphaGo doesn't do any of that."}, {"start": 165.68, "end": 170.48000000000002, "text": " It is a general technique that can learn to solve a large number of different problems."}, {"start": 170.48000000000002, "end": 173.52, "text": " I cannot overstate the significance of this."}, {"start": 173.52, "end": 178.56, "text": " Almost the entirety of computer science research revolves around creating algorithms that"}, {"start": 178.56, "end": 181.8, "text": " are specifically tailored to one task."}, {"start": 181.8, "end": 185.56, "text": " Different tasks, different research projects, different algorithms."}, {"start": 185.56, "end": 190.20000000000002, "text": " Imagine how empowering it would be to have a general algorithm that can solve a large"}, {"start": 190.20000000000002, "end": 191.84, "text": " amount of problems."}, {"start": 191.84, "end": 193.20000000000002, "text": " It's incredible."}, {"start": 193.2, "end": 197.95999999999998, "text": " Just as people who don't speak a word of Chinese can write an artificial intelligence program"}, {"start": 197.95999999999998, "end": 203.39999999999998, "text": " to recognize handwritten Chinese text, someone who hasn't played more than a few games can"}, {"start": 203.39999999999998, "end": 208.92, "text": " write a chess or go program that is beyond the skill of most professional players."}, {"start": 208.92, "end": 213.16, "text": " This is a wonderful testament of the power of mathematics and science."}, {"start": 213.16, "end": 218.28, "text": " It was quite surprising to see that AlphaGo played seemingly suboptimal moves when it was"}, {"start": 218.28, "end": 222.76, "text": " a head to reduce the variance and maximize this chance of victory."}, {"start": 222.76, "end": 227.44, "text": " Take a look at DeepMind's other technique by the name DeepQ Learning that plays space"}, {"start": 227.44, "end": 230.44, "text": " invaders on the superhuman level."}, {"start": 230.44, "end": 235.48, "text": " This shot, at first, looks like a blunder, but if you wait it out, you'll see how brilliant"}, {"start": 235.48, "end": 236.48, "text": " it really is."}, {"start": 236.48, "end": 241.07999999999998, "text": " A move that seems like a 
blunder at a time may be the optimal move in the grand scheme"}, {"start": 241.07999999999998, "end": 242.07999999999998, "text": " of things."}, {"start": 242.07999999999998, "end": 243.39999999999998, "text": " It is not a blunder."}, {"start": 243.39999999999998, "end": 248.28, "text": " It is a move from someone whose brilliance is way beyond the capabilities of even the"}, {"start": 248.28, "end": 250.04, "text": " best human players."}, {"start": 250.04, "end": 253.44, "text": " There is an excellent analysis of this phenomenon on the Go Reddit."}, {"start": 253.44, "end": 255.4, "text": " I've put a link in the description box."}, {"start": 255.4, "end": 256.4, "text": " Check it out."}, {"start": 256.4, "end": 261.08, "text": " I'd like to emphasize that the technique learns at first by looking at a large number"}, {"start": 261.08, "end": 262.92, "text": " of games by Amateurs."}, {"start": 262.92, "end": 267.0, "text": " But the question is, how can it get beyond the level of Amateurs?"}, {"start": 267.0, "end": 271.76, "text": " After looking at these games, it will learn the basics and will play millions of games"}, {"start": 271.76, "end": 274.52, "text": " against itself and learn from them."}, {"start": 274.52, "end": 279.12, "text": " And to be emphasized, nothing in this algorithm is specific to Go."}, {"start": 279.12, "end": 279.96, "text": " Nothing."}, {"start": 279.96, "end": 284.47999999999996, "text": " It can be used to solve a number of different problems without significant changes."}, {"start": 284.47999999999996, "end": 288.64, "text": " It would be immensely difficult to overstate the significance of that."}, {"start": 288.64, "end": 293.2, "text": " Shout out to Brady Daniels, who has an excellent Go educational channel."}, {"start": 293.2, "end": 297.35999999999996, "text": " He has very fluid, enjoyable, and understandable explanations."}, {"start": 297.35999999999996, "end": 298.35999999999996, "text": " Highly recommended."}, {"start": 298.35999999999996, "end": 299.35999999999996, "text": " Check it out."}, {"start": 299.35999999999996, "end": 301.84, "text": " There's a link to one of his videos in the description box."}, {"start": 301.84, "end": 307.52, "text": " It is a possibility that the first Go Grandmaster to reach 10 dance may not be a human, but"}, {"start": 307.52, "end": 308.52, "text": " a computer."}, {"start": 308.52, "end": 310.76, "text": " My mind is officially blown."}, {"start": 310.76, "end": 311.76, "text": " Insanity."}, {"start": 311.76, "end": 317.28, "text": " One more cobblestone has been laid on the path to artificial general intelligence."}, {"start": 317.28, "end": 322.47999999999996, "text": " This achievement I find to be of equivalent magnitude to landing on the moon."}, {"start": 322.47999999999996, "end": 323.84, "text": " And this is just the beginning."}, {"start": 323.84, "end": 328.32, "text": " I can't wait to see this technique being used for research in medicine."}, {"start": 328.32, "end": 333.47999999999996, "text": " Huge respect for Demis Hassabis and Lisadol who were both respectful and humble, both"}, {"start": 333.47999999999996, "end": 335.44, "text": " in victory and in defeat."}, {"start": 335.44, "end": 337.71999999999997, "text": " They are true champions of their craft."}, {"start": 337.72, "end": 342.32000000000005, "text": " Thanks so much for DeepMind for creating this rivetingly awesome event."}, {"start": 342.32000000000005, "end": 347.08000000000004, "text": " My daughter Yasmin was born one 
day before this glorious day."}, {"start": 347.08000000000004, "end": 349.12, "text": " Was an exciting time to be alive."}, {"start": 349.12, "end": 376.56, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=hPKJBXkyTKM
10 More Cool Deep Learning Applications | Two Minute Papers #52
In this episode, we present another round of incredible deep learning applications! _________________________ 1. Colorization - http://tinyclouds.org/colorize/ 2. RNN Music on Bob Sturm's YouTube channel - https://www.youtube.com/watch?v=RaO4HpM07hE 3. Flow Machines by Sony - https://www.youtube.com/watch?v=buXqNqBFd6E 4. RNN Passwords - https://github.com/gehaxelt/RNN-Passwords 5. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding - http://arxiv.org/abs/1510.00149 6. Right Whale Kaggle Competition - http://felixlaumon.github.io/2015/01/08/kaggle-right-whale.html 7. Improving YouTube video thumbnails - http://youtube-eng.blogspot.hu/2015/10/improving-youtube-video-thumbnails-with_8.html 8. Celebrity super-resolution: https://github.com/mikesj-public/dcgan-autoencoder 9. Convolutional Neural Network visualization - http://scs.ryerson.ca/~aharley/vis/conv/ + Paper: http://scs.ryerson.ca/~aharley/vis/harley_vis_isvc15.pdf 10. DarkNet RNN writes in the style of George RR Martin - http://pjreddie.com/darknet/rnns-in-darknet/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz WE'D LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. The thumbnail image background was created by Dan Ruscoe (CC BY 2.0) - https://flic.kr/p/deHtEb Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. To all of you Fellow Scholars out there who are yearning for some more deep learning action like I do: here goes the second package. Buckle up, amazing applications await you. As always, links to every one of these works are available in the description box. This convolutional neural network can learn how to colorize by looking at the same images both in color and in black and white. The first image is the black-and-white input, the second is how the algorithm colorized it, and the third is how the image originally looked in color. Insanity. Recurrent neural networks are able to learn and produce sequences of data, and they are getting better and better at music generation. Nowadays, people are experimenting with human-aided music generation with pretty amazing results. Sony has also been working on such a solution with spectacular results. One can also train a network on a large database of leaked human passwords and try to crack new accounts building on that knowledge. Deep neural networks take a substantial amount of time to train, and the final contents of each of the neurons have to be stored, which takes a lot of space. New techniques are being explored to compress the information content of these networks. There's another application where endangered whale species are recognized by convolutional neural networks. Some of them have a worldwide population of less than 500, and this is where machine learning steps in to try to save them. Awesome. YouTube has a huge database full of information on what kind of video thumbnails are the ones that people end up clicking on. They use deep learning to automatically find and suggest the most appealing images for your videos. There is also this crazy application where a network was trained on a huge dataset of images of celebrities. A low-quality image is given, and the algorithm creates a higher-resolution version building on this knowledge. The leftmost images are the true high-resolution images, the second one is the grainy, low-resolution input, and the third is the neural network's attempt to reconstruct the original. This application takes your handwriting of a number and visualizes how a convolutional neural network understands and classifies it. Apparently, George R.R. Martin is late with writing the next book of Game of Thrones, but luckily we have recurrent neural networks that can generate text in his style. An infinite amount, so beware George, winter is coming. I mean, the machines are coming. It is truly amazing what these techniques are capable of, and as machine learning is a remarkably fast-moving field, new applications pop up pretty much every day. I am quite enthused to do at least one more batch of these. Of course, provided that you liked this one. Let me know. Thanks for watching and for your generous support, and I'll see you next time.
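As a taste of how the colorization entry above is trained, here is a rough sketch: the network only ever sees the grayscale version of a training photo and is penalized for straying from the original colors. The tiny architecture and the random stand-in data are made up for illustration; the linked project uses a far larger network trained on real images.

```python
# Sketch of colorization as supervised learning: grayscale in, color out,
# with the original color photo as the training target. Placeholder
# architecture and data, not the linked project's actual model.
import torch
import torch.nn as nn

colorizer = nn.Sequential(                 # 1 gray channel in, 3 RGB out
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(colorizer.parameters(), lr=1e-3)

color = torch.rand(16, 3, 32, 32)          # stand-in color photographs
gray = color.mean(dim=1, keepdim=True)     # their grayscale versions

for step in range(200):                    # penalize wrong colors
    loss = nn.functional.mse_loss(colorizer(gray), color)
    opt.zero_grad()
    loss.backward()
    opt.step()
```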
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r,"}, {"start": 4.72, "end": 10.36, "text": " to all of you Fellow Scholars out there who are yearning for some more deep learning action like I do."}, {"start": 10.36, "end": 11.92, "text": " Here goes the second package."}, {"start": 11.92, "end": 14.44, "text": " Buckle up amazing applications await you."}, {"start": 14.44, "end": 18.92, "text": " As always, links to every one of these works are available in the description box."}, {"start": 18.92, "end": 26.76, "text": " This convolutional neural network can learn how to colorize by looking at the same images both in color and black and white."}, {"start": 26.76, "end": 32.08, "text": " The first image is the black and white input. The second is how the algorithm colorized it,"}, {"start": 32.08, "end": 35.68, "text": " and the third is how the image originally looked like in color."}, {"start": 35.68, "end": 36.88, "text": " Insanity."}, {"start": 37.84, "end": 42.160000000000004, "text": " Recurrent neural networks are able to learn and produce sequences of data,"}, {"start": 42.160000000000004, "end": 45.480000000000004, "text": " and they are getting better and better at music generation."}, {"start": 45.48, "end": 60.8, "text": " Nowadays, people are experimenting with human-added music generation with pretty amazing results."}, {"start": 60.8, "end": 70.8, "text": " Sony has also been working on such a solution with spectacular results."}, {"start": 90.8, "end": 113.84, "text": " One can also run a network on a large database of leaked human passwords and try to crack new accounts building on that knowledge."}, {"start": 113.84, "end": 117.6, "text": " Deep neural networks take a substantial amount of time to train,"}, {"start": 117.6, "end": 123.32, "text": " and the final contents of each of the neurons have to be stored, which takes a lot of space."}, {"start": 123.32, "end": 128.07999999999998, "text": " New techniques are being explored to compress the information content of these networks."}, {"start": 128.07999999999998, "end": 133.95999999999998, "text": " There's another application where endangered whale species are recognized by convolutional neural networks."}, {"start": 133.95999999999998, "end": 137.51999999999998, "text": " Some of them have a worldwide population of less than 500,"}, {"start": 137.51999999999998, "end": 140.76, "text": " and this is where machine learning steps in to try to save them."}, {"start": 141.76, "end": 142.95999999999998, "text": " Awesome."}, {"start": 142.96, "end": 150.08, "text": " YouTube has a huge database full of information on what kind of video thumbnails are the ones that people end up clicking on."}, {"start": 150.08, "end": 156.44, "text": " They use deep learning to automatically find and suggest the most appealing images for your videos."}, {"start": 156.44, "end": 163.28, "text": " There is also this crazy application where a network was trained on a huge dataset with images of celebrities."}, {"start": 163.28, "end": 169.04000000000002, "text": " A low-quality image is given where the algorithm creates a higher resolution version building on this knowledge."}, {"start": 169.04, "end": 175.12, "text": " The leftmost images are the true high resolution images, the second one is the grainy, low resolution input,"}, {"start": 175.12, "end": 180.48, "text": " and the third is the neural network's attempt to reconstruct the original."}, {"start": 180.48, 
"end": 188.48, "text": " This application takes your handwriting of a number and visualizes how a convolutional neural network understands and classifies it."}, {"start": 188.48, "end": 202.39999999999998, "text": " Apparently, George R.R. Martin is late with writing the next book of Game of Thrones,"}, {"start": 202.39999999999998, "end": 206.79999999999998, "text": " but luckily we have recurrent neural networks that can generate text in his style."}, {"start": 206.79999999999998, "end": 210.88, "text": " An infinite amount, so beware George, winter is coming."}, {"start": 210.88, "end": 213.44, "text": " I mean, the machines are coming."}, {"start": 213.44, "end": 220.4, "text": " It is truly amazing what these techniques are capable of, and as machine learning is a remarkably fast moving field,"}, {"start": 220.4, "end": 223.35999999999999, "text": " new applications pop up pretty much every day."}, {"start": 223.35999999999999, "end": 226.72, "text": " I am quite enthused to do at least one more batch of these."}, {"start": 226.72, "end": 228.96, "text": " Of course, provided that you liked this one."}, {"start": 228.96, "end": 229.84, "text": " Let me know."}, {"start": 229.84, "end": 244.72, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=fpd_wiOsgDk
5000 Fellow Scholars Special! | Two Minute Papers
We have reached 5000 Fellow Scholars on Two Minute Papers! In this video I share my delight and talk a bit about our future plans with the series. ____________________ WE'D LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by Werwin15 (CC BY 2.0) - https://flic.kr/p/6uUa4p Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two minute papers with Károly Zsolnai-Fehér. We just hit 5,000 subscribers. More than 5,000 fellow scholars who wish to join us on our journey of science. It really shows that everyone loves science. They just don't know about it yet. About six months ago, we were celebrating 250 subscribers. The growth of the channel has been nothing short of incredible, and this is all thanks to you. Without you, fellow scholars, this series would be nothing but a crazy person sitting at home, talking into a microphone and having way too much fun. Thank you so much for hanging in there. I love doing this, and I'm delighted to have each of you in our growing club of fellow scholars. We have also hit half a million views. Holy cow! If we substituted one human being for every view, we would be close to 6% of the population of Austria, or almost 30% of the population of the beautiful Vienna. This is equivalent to about 60% of the population of San Francisco. This is way beyond the number of people I could ever reach by teaching at the university. It is a true privilege to teach so many people from all around the world. We have so many plans to improve the series in different directions. We have recently switched to 60 frames per second for beautiful, smooth and silky animations, and closed captions are also now uploaded for most episodes to improve the clarity of the presentations. We are also looking at adding more Patreon perks in the future. There are also tons of amazing research works up the sleeve that you will see very soon in the upcoming videos. Graphics guys, I got your back. Machine learners, this way please. Other spicy topics will also be showcased to keep it fresh and exciting. My wife Felícia is also preparing some incredible artwork for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 7.0, "text": " Dear Fellow Scholars, this is two minute papers with Karoji Zolnai-Fehir. We just hit 5,000 subscribers."}, {"start": 7.0, "end": 12.0, "text": " More than 5,000 fellow scholars who wish to join us on our journey of science."}, {"start": 12.0, "end": 17.0, "text": " It really shows that everyone loves science. They just don't know about it yet."}, {"start": 17.0, "end": 21.0, "text": " About six months ago, we were celebrating 250 subscribers."}, {"start": 21.0, "end": 27.0, "text": " The growth of the channel has been nothing short of incredible and this is all attributed to you."}, {"start": 27.0, "end": 32.0, "text": " Without you, fellow scholars, this series would be nothing but a crazy person sitting at home,"}, {"start": 32.0, "end": 36.0, "text": " talking into a microphone and having way too much fun."}, {"start": 36.0, "end": 43.0, "text": " Thank you so much for hanging in there. I love doing this and I'm delighted to have each of you in our growing club of fellow scholars."}, {"start": 43.0, "end": 46.0, "text": " We have also hit half a million views."}, {"start": 46.0, "end": 51.0, "text": " Holy cow, if we would substitute one human being for every view,"}, {"start": 51.0, "end": 58.0, "text": " we would be close to 6% of the population of Austria or almost 30% of the population of the beautiful Vienna."}, {"start": 58.0, "end": 62.0, "text": " This is equivalent to about 60% of the population of San Francisco."}, {"start": 62.0, "end": 67.0, "text": " This is way beyond the amount of people I could ever reach by teaching at the university."}, {"start": 67.0, "end": 72.0, "text": " It is a true privilege to teach so many people from all around the world."}, {"start": 72.0, "end": 76.0, "text": " We have so many plans to improve the series in different directions."}, {"start": 76.0, "end": 81.0, "text": " We have recently switched to 60 frames per second for beautiful smooth and silky animations"}, {"start": 81.0, "end": 87.0, "text": " and closed captions are also now uploaded for most episodes to improve the clarity of the presentations."}, {"start": 87.0, "end": 90.0, "text": " We are also looking at adding more Patreon perks in the future."}, {"start": 90.0, "end": 96.0, "text": " There are also tons of amazing research works up the sleeve that you will see very soon in the upcoming videos."}, {"start": 96.0, "end": 101.0, "text": " Graphic skies, I got your back. Machine learners, this way please."}, {"start": 101.0, "end": 105.0, "text": " Other spicy topics will also be showcased to keep it fresh and exciting."}, {"start": 105.0, "end": 109.0, "text": " My wife Felicia is also preparing some incredible artwork for you."}, {"start": 109.0, "end": 138.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=_ZLXKt4L-AA
Awesome Research For Everyone! - Two Minute Papers Channel Trailer
Two Minute Papers is a series where the most recent and awesome scientific works are discussed in a simple and enjoyable way, two minutes at a time. Give it a try! A full playlist with every episode is available here: https://www.youtube.com/playlist?list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz ______________________ WE'D LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. The thumbnail image was created by NASA (CC BY 2.0) - https://flic.kr/p/7GGgdx Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Research is a glimpse of the future. Computer algorithms are capable of making digital creatures walk, painting in the style of famous artists, creating photorealistic images of virtual objects, simulating the motion of fluids, and a ton of other super exciting works. However, scientific papers are meant to communicate ideas between experts. They involve lots of mathematics and terminology. In two-minute papers, I try to explain these incredible scientific works in a language that is not only understandable, but enjoyable, two minutes at a time. Papers are for experts, but two-minute papers is for you. If you are interested, let's celebrate science together. There are two new science videos coming every week. Give it a shot, you'll love it. Thanks for watching, and I'm looking forward to greeting you in our growing club of fellow scholars. Cheers!
[{"start": 0.0, "end": 8.0, "text": " Research is a glimpse of the future. Computer algorithms are capable of making digital creatures walk,"}, {"start": 8.0, "end": 12.0, "text": " paint in the style of famous artists,"}, {"start": 14.0, "end": 19.0, "text": " create photorealistic images of virtual objects,"}, {"start": 20.0, "end": 25.0, "text": " simulate the motion of fluids and a tom of other super exciting works."}, {"start": 25.0, "end": 30.0, "text": " However, scientific papers are meant to communicate ideas between experts."}, {"start": 30.0, "end": 33.0, "text": " They involve lots of mathematics and terminology."}, {"start": 33.0, "end": 40.0, "text": " In two-minute papers, I try to explain these incredible scientific works in a language that is not only understandable,"}, {"start": 40.0, "end": 43.0, "text": " but enjoyable, two minutes at a time."}, {"start": 43.0, "end": 47.0, "text": " Papers are for experts, but two-minute papers is for you."}, {"start": 47.0, "end": 50.0, "text": " If you are interested, let's celebrate science together."}, {"start": 50.0, "end": 55.0, "text": " There are two new science videos coming every week. Give it a shot, you'll love it."}, {"start": 55.0, "end": 60.0, "text": " Thanks for watching, and I'm looking forward to greeting you in a growing club of fellow scholars."}, {"start": 60.0, "end": 80.0, "text": " Cheers!"}]
Two Minute Papers
https://www.youtube.com/watch?v=4h0uC9FPVMQ
How To Get Started With Machine Learning? | Two Minute Papers #51
I get a lot of messages from you Fellow Scholars that you would like to get started in machine learning and are looking for materials. Below you find a ton of resources to get you started! __________________________ The AI Revolution: The Road to Superintelligence on Wait But Why: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html Superintelligence by Nick Bostrom: https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies Courses: Welch Labs - https://www.youtube.com/playlist?list=PLiaHhY2iBX9hdHaRr6b7XevZtgZRa1PoU Andrew Ng on Coursera - https://class.coursera.org/ml-005/lecture Andrew Ng (YouTube playlist) - https://www.youtube.com/playlist?list=PLA89DCFA6ADACE599 Nando de Freitas (UBC) - https://www.youtube.com/playlist?list=PLE6Wd9FR--Ecf_5nCbnSQMHqORpiChfJf Nando de Freitas (Oxford) - https://www.youtube.com/playlist?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu Nando de Freitas (more) - https://www.youtube.com/playlist?list=PLE6Wd9FR--EdyJ5lbFl8UuGjecvVw66F6 https://www.youtube.com/watch?v=PlhFWT7vAEw&list=PLjK8ddCbDMphIMSXn-w1IjyYpHU3DaUYw One more at Caltech - https://work.caltech.edu/telecourse.html Andrej Karpathy - https://www.youtube.com/playlist?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC UC Berkeley - https://www.youtube.com/channel/UCshmLD2MsyqAKBx8ctivb5Q/videos Geoffrey Hinton - https://www.coursera.org/course/neuralnets Machine Learning specialization at Coursera - https://www.coursera.org/specializations/machine-learning MIT - http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/lecture-videos/ Mathematicalmonk's course: https://www.youtube.com/watch?v=yDLKJtOVx5c&list=PLD0F06AA0D2E8FFBA&index=0 "Pattern Recognition and Machine Learning" by Christoper Bishop: http://research.microsoft.com/en-us/um/people/cmbishop/prml/ "Algorithms for Reinforcement Learning" by Csaba Szepesvári: http://www.ualberta.ca/~szepesva/papers/RLAlgsInMDPs.pdf A great talk on deep learning libraries: https://www.youtube.com/watch?v=Vf_-OkqbwPo&feature=youtu.be Two great sources to check for new papers: http://gitxiv.com/top http://www.arxiv-sanity.com/top Recent machine learning papers on the arXiv: http://arxiv.org/list/stat.ML/recent The Machine Learning Reddit: http://www.reddit.com/r/MachineLearning/ One more great post on how to get started with machine learning: https://www.quora.com/How-do-I-get-started-in-machine-learning-both-theory-and-programming/answer/Sebastian-Raschka-1 A great blog post on how to get started with Keras: http://swanintelligence.com/first-steps-with-neural-nets-in-keras.html A website with lots of intuitive articles on deep learning: http://neuralnetworksanddeeplearning.com/ A free book on deep learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville: http://www.deeplearningbook.org/ WE'D LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Vinay S. Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by C_osett - https://flic.kr/p/sDTYmm Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I get a lot of messages from you Fellow Scholars that you would like to get started in machine learning and are looking for materials. Words fail to describe how great the feeling is that the series inspires many of you to start your career in research. At this point, we are not only explaining the work of research scientists but creating new research scientists. Machine learning is an amazing field of research that provides us with incredible tools that help us solve problems that were previously impossible to solve. Neural networks can paint in the style of famous artists or recognize images, and are capable of so many other things that it simply blows my mind. However, bear in mind that machine learning is not an easy field. This field fuses together the beauty, rigor and precision of mathematics with the useful applications of engineering. It is also a fast-moving field: on almost any given day, ten new scientific papers pop up in the repositories. For everything that I mention in this video there is a link in the description box, and more, so make sure to dive in and check them out. If you have other materials that helped you understand some of the more difficult concepts, please let me know in the comments section and I'll include them in the text below. First, some non-scientific texts to get you in the mood: I recommend The Road to Superintelligence on a fantastic blog by the name of Wait But Why. This is a frighteningly long article for many, but I guarantee that you won't be able to stop reading it. Beware. Nick Bostrom's Superintelligence is also a fantastic read, after which you'll probably be convinced that it doesn't make sense to work on anything else but machine learning. There is a previous Two Minute Papers episode on artificial superintelligence if you're looking for a teaser for this book. Now let's get a bit more technical with some of the better video series and courses out there. Welch Labs is an amazing YouTube channel with a very intuitive introduction to the concept of neural networks. Andrew Ng is the Chief Scientist at Baidu Research, working in deep learning. His wonderful course is widely regarded as the pinnacle of all machine learning courses and is therefore highly recommended. Nando de Freitas is a professor at the University of Oxford and has also worked with DeepMind. The course that he held at the University of British Columbia covers many of the more advanced concepts in machine learning. Regarding books, I recommend reading my favorite holy tome of machine learning, which goes by the name of Pattern Recognition and Machine Learning by Christopher Bishop. A sample chapter is available from the book if you wish to take a look. It has beautiful typesetting, lots of intuition and crystal clear presentation. Definitely worth every penny of the price. I'd like to note that I am not paid for any of the book endorsements in the series. When I recommend a book, I genuinely think that it provides great value to you Fellow Scholars. About software libraries: usually, in most fields, the main problem is that implementations of many state-of-the-art techniques are severely lacking. Well, luckily, in the machine learning community we have them in abundance. I've linked a great talk on what libraries are available and the strengths and weaknesses of each of them. At this point, you'll probably have an idea of which direction you're most excited about.
Start searching for keywords, make sure to read the living hell out of the Machine Learning subreddit to stay up to date, and the best part is yet to come: starting to explore on your own. Thanks for watching and for your generous support, and I'll see you next time.
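As a quick taste of the software libraries mentioned above, here is a minimal sketch of training a first neural network, in the spirit of the Keras getting-started post linked in the description. It assumes Keras and NumPy are installed; the toy XOR dataset, the architecture, and the hyperparameters are illustrative choices only, not taken from any of the courses or papers above.

```python
# A minimal sketch of a first neural network, assuming Keras and NumPy
# are installed (pip install keras numpy). Everything here is an
# illustrative choice, not a prescription.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy dataset: learn XOR, a classic non-linearly-separable problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([[0], [1], [1], [0]], dtype="float32")

model = Sequential()
model.add(Dense(8, input_dim=2, activation="relu"))   # one hidden layer
model.add(Dense(1, activation="sigmoid"))             # binary output

model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])
model.fit(X, y, epochs=2000, verbose=0)  # tiny data, so many epochs

print(model.predict(X).round())  # should approximate [[0], [1], [1], [0]]
```

Once a toy example like this runs, swapping in a real dataset and a deeper architecture is mostly a matter of changing the data loading and the layer list.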
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejolene Ifehir."}, {"start": 4.6000000000000005, "end": 9.48, "text": " I get a lot of messages from you fellow scholars that you would like to get started in machine"}, {"start": 9.48, "end": 12.16, "text": " learning and are looking for materials."}, {"start": 12.16, "end": 17.400000000000002, "text": " Words fail to describe how great the feeling is that the series inspires many of you to"}, {"start": 17.400000000000002, "end": 19.44, "text": " start your career in research."}, {"start": 19.44, "end": 24.400000000000002, "text": " At this point we are not only explaining the work of research scientists but creating new"}, {"start": 24.400000000000002, "end": 26.44, "text": " research scientists."}, {"start": 26.44, "end": 31.28, "text": " Machine learning is an amazing field of research that provides us with incredible tools that"}, {"start": 31.28, "end": 35.56, "text": " help us solve problems that were previously impossible to solve."}, {"start": 35.56, "end": 41.0, "text": " Neural networks can paint in the style of famous artists or recognize images and are capable"}, {"start": 41.0, "end": 45.0, "text": " of so many other things that it simply blows my mind."}, {"start": 45.0, "end": 49.040000000000006, "text": " However, bear in mind that machine learning is not an easy field."}, {"start": 49.040000000000006, "end": 54.24, "text": " This field fuses together the beauty, rigor and preciseness of mathematics with the useful"}, {"start": 54.24, "end": 55.92, "text": " applications of engineering."}, {"start": 55.92, "end": 62.0, "text": " It is also a fast moving field on almost any given day 10 new scientific papers pop"}, {"start": 62.0, "end": 63.84, "text": " up in the repositories."}, {"start": 63.84, "end": 67.4, "text": " For everything that I mentioned in this video there is a link in the description box"}, {"start": 67.4, "end": 70.24000000000001, "text": " and more so make sure to dive in and check them out."}, {"start": 70.24000000000001, "end": 74.92, "text": " If you have other materials that help you understand some of the more difficult concepts,"}, {"start": 74.92, "end": 78.92, "text": " please let me know in the comments section and I'll include them in the text below."}, {"start": 78.92, "end": 84.36, "text": " First, some non-scientific texts to get you in the mood are recommending the road to"}, {"start": 84.36, "end": 89.24, "text": " superintelligence on a fantastic blog by the name WeightButWhy."}, {"start": 89.24, "end": 93.96, "text": " This is a frighteningly long article for many but I guarantee that you won't be able"}, {"start": 93.96, "end": 95.36, "text": " to stop reading it."}, {"start": 95.36, "end": 96.36, "text": " Beware."}, {"start": 96.36, "end": 101.4, "text": " Nick Bastrom's superintelligence is also a fantastic read after which you'll probably"}, {"start": 101.4, "end": 106.52, "text": " be convinced that it doesn't make sense to work on anything else but machine learning."}, {"start": 106.52, "end": 110.76, "text": " There is a previous two minute papers episode on artificial superintelligence if you're"}, {"start": 110.76, "end": 112.96000000000001, "text": " looking for a teaser for this book."}, {"start": 112.96, "end": 117.75999999999999, "text": " Now let's get a bit more technical with some of the better video series and courses out"}, {"start": 117.75999999999999, "end": 118.75999999999999, "text": " there."}, {"start": 
118.75999999999999, "end": 124.88, "text": " Welch Labs is an amazing YouTube channel with a very intuitive introduction to the concept"}, {"start": 124.88, "end": 126.67999999999999, "text": " of neural networks."}, {"start": 126.67999999999999, "end": 130.76, "text": " Andrew Inc. is a chief scientist at BIDO research in deep learning."}, {"start": 130.76, "end": 135.92, "text": " His wonderful course is widely regarded as the pinnacle of all machine learning courses"}, {"start": 135.92, "end": 138.16, "text": " and is therefore highly recommended."}, {"start": 138.16, "end": 144.07999999999998, "text": " Nando De Freitas is a professor at the University of Oxford and has also worked with DeepMind."}, {"start": 144.07999999999998, "end": 148.72, "text": " His course that he held at the University of British Columbia covers many of the more advanced"}, {"start": 148.72, "end": 150.92, "text": " concepts in machine learning."}, {"start": 150.92, "end": 156.24, "text": " Regarding books, I recommend reading my favorite holy tomb of machine learning that goes"}, {"start": 156.24, "end": 160.8, "text": " by the name of pattern recognition and machine learning by Christopher Bischop."}, {"start": 160.8, "end": 164.4, "text": " A sample chapter is available from the book if you wish to take a look."}, {"start": 164.4, "end": 169.72, "text": " It has beautiful typesetting, lots of intuition and crystal clear presentation."}, {"start": 169.72, "end": 171.96, "text": " Definitely worth every penny of the price."}, {"start": 171.96, "end": 176.32, "text": " I'd like to note that I am not paid for any of the book endorsements in the series."}, {"start": 176.32, "end": 181.08, "text": " When I recommend a book, I genuinely think that it provides great value to you fellow"}, {"start": 181.08, "end": 182.32, "text": " scholars."}, {"start": 182.32, "end": 187.84, "text": " About software libraries, usually in most fields, the main problem is that the implementation"}, {"start": 187.84, "end": 191.20000000000002, "text": " of many state of the art techniques are severely lacking."}, {"start": 191.2, "end": 195.2, "text": " Well, luckily in the machine learning community we have them in abundance."}, {"start": 195.2, "end": 199.72, "text": " I've linked a great talk on what libraries are available and the strengths and weaknesses"}, {"start": 199.72, "end": 201.32, "text": " for each of them."}, {"start": 201.32, "end": 206.35999999999999, "text": " At this point, you'll probably have an idea of which direction you're most excited about."}, {"start": 206.35999999999999, "end": 211.2, "text": " Start searching for keywords, make sure to read the living hell out of the machine learning"}, {"start": 211.2, "end": 217.0, "text": " ready to stay up to date, and the best part is yet to come, starting to explore on your"}, {"start": 217.0, "end": 218.0, "text": " own."}, {"start": 218.0, "end": 221.6, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=-dbkE4FFPrI
Interactive Photo Recoloring | Two Minute Papers #50
Image and color editing is an actively researched topic with really cool applications that you will see in a second. Most of the existing solutions are either easy to use but lack expressiveness, or they are expressive, but too complex for novices to use. Computation time is also an issue as some of the operations in Photoshop can take more than a minute to carry out. Using a naive color transfer technique would destroy a sizeable part of the dynamic range of the input image, and with it legitimate features, which are all preserved if we use this algorithm instead. ______________________________ The paper "Palette-based Photo Recoloring" is available here: http://gfx.cs.princeton.edu/pubs/Chang_2015_PPR/index.php The thumbnail background image was created by zoutedrop (CC BY 2.0) - https://flic.kr/p/5E32Cc Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Image and color editing is an actively researched topic with really cool applications that you will see in a second. Most of the existing solutions are either easy to use but lacking in expressiveness, or they are expressive but too complex for novices to use. Computation time is also an issue, as some of the operations in Photoshop can take more than a minute to carry out. For color editing, the workflow is very simple. The program extracts the dominant colors of an image, which we can interactively edit ourselves. An example use case would be recoloring the girl's blue sweater to turquoise, or changing the overall tone of the image to orange. Existing tools that can do this are usually either too slow or only accessible to adept users. It is also important to note that it is quite easy to take these great results for granted. Using a naive color transfer technique would destroy a sizable part of the dynamic range of the image, and with it legitimate features, which are all preserved if we use this algorithm instead. One can also use masks to selectively edit different parts of the image. The technique executes really quickly, opening up the possibility of not real-time, but interactive recoloring of animated sequences. Or you can also leverage the efficiency of the method to edit not one, but a collection of images in one go. The paper also contains a rigorous evaluation against existing techniques. For instance, they show that this method executed 3 to 20 times faster than the one implemented in Adobe Photoshop. Thanks for watching and for your generous support, and I'll see you next time.
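To give a feel for the palette-extraction step described above, here is a rough sketch that pulls dominant colors from an image with plain k-means clustering. This is only a stand-in: the paper's own palette extraction is more refined, and scikit-learn, Pillow, and the 'photo.jpg' file name are assumptions of this sketch, not part of the paper.

```python
# A rough sketch of "extract the dominant colors of an image" using plain
# k-means. Assumes scikit-learn and Pillow are installed; the paper's
# palette extraction is more sophisticated than this.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def dominant_colors(path, k=5):
    img = Image.open(path).convert("RGB").resize((128, 128))  # downsample for speed
    pixels = np.asarray(img, dtype=np.float64).reshape(-1, 3)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    return km.cluster_centers_.astype(int)  # k RGB palette entries

# Hypothetical usage; 'photo.jpg' is a placeholder file name.
# print(dominant_colors("photo.jpg"))
```

In the paper, editing one of these palette entries then propagates smoothly to all pixels associated with it, which is where the interactive recoloring comes from.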
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejol Naifahir."}, {"start": 4.8, "end": 11.28, "text": " Image and color editing is an actively researched topic with really cool applications that you will see in a second."}, {"start": 11.28, "end": 15.84, "text": " Most of the existing solutions are either easy to use but lacking expressiveness,"}, {"start": 15.84, "end": 19.52, "text": " or they are expressive but too complex for novices to use."}, {"start": 19.52, "end": 26.0, "text": " Computation time is also an issue as some of the operations in Photoshop can take more than a minute to carry out."}, {"start": 26.0, "end": 31.92, "text": " For color editing, the workflow is very simple. The program extracts the dominant colors of an image,"}, {"start": 31.92, "end": 34.4, "text": " which we can interactively edit ourselves."}, {"start": 34.4, "end": 39.120000000000005, "text": " An example use case would be recaloring the girl's blue sweater to turquoise,"}, {"start": 39.120000000000005, "end": 42.08, "text": " or changing the overall tone of the image to orange."}, {"start": 42.08, "end": 47.760000000000005, "text": " Existing tools that can do this are usually either too slow or only accessible to adept users."}, {"start": 49.28, "end": 54.64, "text": " It is also important to note that it is quite easy to take these great results for granted."}, {"start": 54.64, "end": 60.480000000000004, "text": " Using a Naif color transfer technique would destroy a sizable part of the dynamic range of the image,"}, {"start": 60.480000000000004, "end": 65.84, "text": " and hence legitimate features which are all preserved if we use this algorithm instead."}, {"start": 65.84, "end": 69.92, "text": " One can also use masks to selectively edit different parts of the image."}, {"start": 75.92, "end": 80.48, "text": " The technique executes really quickly, opening up the possibility of not real time,"}, {"start": 80.48, "end": 85.36, "text": " but interactive recaloring of animated sequences."}, {"start": 91.84, "end": 96.08, "text": " Or you can also leverage the efficiency of the method to edit not one,"}, {"start": 96.08, "end": 98.48, "text": " but a collection of images in one go."}, {"start": 98.48, "end": 102.56, "text": " The paper also contains a rigorous evaluation against existing techniques."}, {"start": 102.56, "end": 110.56, "text": " For instance, they show that this method executed 3 to 20 times faster than the one implemented in Adobe Photoshop."}, {"start": 110.56, "end": 140.4, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=UGAzi1QBVEg
Deep Learning Program Learns to Paint | Two Minute Papers #49
Artificial neural networks were inspired by the human brain and simulate how neurons behave when they are shown a sensory input (e.g., images, sounds, etc.). They are known to be excellent tools for image recognition, and many other problems beyond that - they also excel at weather predictions, breast cancer cell mitosis detection, brain image segmentation and toxicity prediction among many others. Deep learning means that we use an artificial neural network with multiple layers, making it even more powerful for more difficult tasks. This time they have been shown to be apt at reproducing the artistic style of many famous painters, such as Vincent Van Gogh and Pablo Picasso among many others. All the user needs to do is provide an input photograph and a target image from which the artistic style will be learned. _______________________ The paper "Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis" is available here: http://arxiv.org/pdf/1601.04589v1.pdf Previous work - the paper "A Neural Algorithm of Artistic Style" is available here http://arxiv.org/pdf/1508.06576v2.pdf Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Recommended for you: Deep Neural Network Learns Van Gogh's Art - https://www.youtube.com/watch?v=-R9bJGNHltQ&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=42 Artificial Neural Networks and Deep Learning - https://www.youtube.com/watch?v=rCWTOOgVXyE&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=31 How Does Deep Learning Work? - https://www.youtube.com/watch?v=He4t7Zekob0&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=39 9 Cool Deep Learning Applications - https://www.youtube.com/watch?v=Bui3DWs02h4&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=36 The shown website with neural art: http://deepart.io/ Check this one out too (it is a different implementation)! https://deepforger.com/ https://twitter.com/deepforger Thumbnail image: Andreas Achenbach - Clearing Up Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In a previous episode, we discussed how a machine learning technique called a convolutional neural network could paint in the style of famous artists. The key thought is that we are not interested in individual details. We want to teach the neural network the high-level concept of artistic style. A convolutional neural network is a fantastic tool for this, since it does not only recognize images well, but the deeper we go in the layers, the more high-level concepts the neurons will encode, and therefore the better idea the algorithm will have of the artistic style. In an earlier example, we've shown that the neurons in the first hidden layer create edges as a combination of the input pixels of the image. The next layer is a combination of edges that creates object parts. One layer deeper, a combination of object parts creates object models, and this is what makes convolutional neural networks so useful in recognizing them. In this follow-up paper, the authors use a very deep, 19-layer convolutional network that they mix together with Markov random fields, a popular technique in image and texture synthesis. The resulting algorithm retains the important structures of the input image significantly better than the previous work, which is also awesome, by the way. Failure cases are also reported in the paper, which was a joy to read. Make sure to take a look if you're interested. We also have a ton of video resources in the description box that you can voraciously consume for more information. There is already a really cool website where you either wait quite a bit and get results for free, or you pay someone to compute it and get results almost immediately. If any of you are in the mood to do some neural art of something Two Minute Papers related, make sure to show it to me. I'd love to see that. As a criticism, I've heard people saying that the technique takes forever on an HD image, which is absolutely true. But please bear in mind that the most exciting research is not speeding up something that runs slowly. The most exciting thing about research is making something possible that was previously impossible. If the work is worthy of attention, it doesn't matter if it's slow. Three follow-up papers later, it will be done in a matter of seconds. In summary, the results are nothing short of amazing. I was absolutely ecstatic when I first saw them. This is insanity, and it's only been a few months since the initial algorithm was published. I always say this, but we are living in amazing times indeed. Thanks for watching and for your generous support, and I'll see you next time.
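To make the layer hierarchy above a bit more tangible, here is a minimal sketch of extracting intermediate activations from a pretrained 19-layer VGG19, the kind of network both this paper and the previous work build their style representations on. It assumes Keras with its applications module (ImageNet weights are downloaded on first run); the choice of 'block3_conv1' is an arbitrary mid-depth layer for illustration, not the authors' specific configuration.

```python
# A minimal sketch of pulling intermediate activations out of a
# pretrained 19-layer VGG19. Assumes Keras with the 'applications'
# module; 'block3_conv1' is an arbitrary mid-depth layer choice.
import numpy as np
from keras.applications.vgg19 import VGG19, preprocess_input
from keras.models import Model

base = VGG19(weights="imagenet", include_top=False)
features = Model(inputs=base.input,
                 outputs=base.get_layer("block3_conv1").output)

img = np.random.rand(1, 224, 224, 3) * 255.0   # stand-in for a real image
acts = features.predict(preprocess_input(img))
print(acts.shape)  # deeper layers encode increasingly high-level structure
```

Style-transfer methods like the two papers above compare statistics (or, here, Markov random field patches) of such activations between the content image and the style image, rather than comparing raw pixels.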
[{"start": 0.0, "end": 4.92, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Ifehir."}, {"start": 4.92, "end": 9.74, "text": " In a previous episode, we discussed how a machine learning technique called a convolutional"}, {"start": 9.74, "end": 14.08, "text": " neural network could paint in the style of famous artists."}, {"start": 14.08, "end": 18.12, "text": " The key thought is that we are not interested in individual details."}, {"start": 18.12, "end": 22.72, "text": " We want to teach the neural network the high level concept of artistic style."}, {"start": 22.72, "end": 27.72, "text": " A convolutional neural network is a fantastic tool for this, since it does not only recognize"}, {"start": 27.72, "end": 32.92, "text": " images well, but the deeper we go in the layers, the more high level concepts neurons will"}, {"start": 32.92, "end": 38.16, "text": " uncode, therefore the better idea the algorithm will have of the artistic style."}, {"start": 38.16, "end": 43.04, "text": " In an earlier example, we've shown that the neurons in the first hidden layer will create"}, {"start": 43.04, "end": 46.64, "text": " edges as a combination of the input pixels of the image."}, {"start": 46.64, "end": 51.28, "text": " The next layer is a combination of edges that create object parts."}, {"start": 51.28, "end": 56.56, "text": " One layer deeper, a combination of object parts create object models, and this is what makes"}, {"start": 56.56, "end": 60.24, "text": " convolutional neural networks so useful in recognizing them."}, {"start": 60.24, "end": 65.92, "text": " In this follow-up paper, the authors use a very deep 19 layer convolutional network"}, {"start": 65.92, "end": 71.04, "text": " that they mix together with Markov random fields, a popular technique in image and texture"}, {"start": 71.04, "end": 72.04, "text": " synthesis."}, {"start": 72.04, "end": 76.30000000000001, "text": " The resulting algorithm retains the important structures of the input image significantly"}, {"start": 76.30000000000001, "end": 80.64, "text": " better than the previous work, which is also awesome, by the way."}, {"start": 80.64, "end": 84.52000000000001, "text": " Failure cases are also reported in the paper, which was a joy to read."}, {"start": 84.52, "end": 86.56, "text": " Make sure to take a look if you're interested."}, {"start": 86.56, "end": 91.11999999999999, "text": " We also have a ton of video resources in the description box that you can voraciously"}, {"start": 91.11999999999999, "end": 93.28, "text": " consume for more information."}, {"start": 93.28, "end": 97.8, "text": " There is already a really cool website where you either wait quite a bit and get results"}, {"start": 97.8, "end": 102.28, "text": " for free or you pay someone to compute it and get results almost immediately."}, {"start": 102.28, "end": 106.52, "text": " If any of you are in the mood of doing some neural art of something two minute papers"}, {"start": 106.52, "end": 108.47999999999999, "text": " related, make sure to show it to me."}, {"start": 108.47999999999999, "end": 109.92, "text": " I'd love to see that."}, {"start": 109.92, "end": 114.94, "text": " As a criticism, I've heard people saying that the technique takes forever on an HD image,"}, {"start": 114.94, "end": 116.48, "text": " which is absolutely true."}, {"start": 116.48, "end": 121.2, "text": " But please bear in mind that the most exciting research is not speeding up something that"}, {"start": 121.2, "end": 122.52, "text": " runs 
slowly."}, {"start": 122.52, "end": 127.1, "text": " The most exciting thing about research is making something possible that was previously"}, {"start": 127.1, "end": 128.2, "text": " impossible."}, {"start": 128.2, "end": 131.64, "text": " If the work is worthy of attention, it doesn't matter if it's slow."}, {"start": 131.64, "end": 135.92000000000002, "text": " Three follow-up papers later, it will be done in a matter of seconds."}, {"start": 135.92000000000002, "end": 139.4, "text": " In summary, the results are nothing short of amazing."}, {"start": 139.4, "end": 142.36, "text": " I was full of ecstatically when I first seen them."}, {"start": 142.36, "end": 147.48000000000002, "text": " This is insanity and it's only been a few months since the initial algorithm was published."}, {"start": 147.48000000000002, "end": 151.32, "text": " I always say this, but we are living amazing times indeed."}, {"start": 151.32, "end": 178.44, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=K-0KJtk07YU
Artistic Manipulation of Caustics | Two Minute Papers #48
A caustic is a beautiful phenomenon in nature where curved surfaces reflect or refract light, thereby concentrating it to a relatively small area. Since we humans are pretty bad at estimating how exactly caustics should look, one can manipulate them to be more in line with an artistic vision. __________________________________ The paper "Stylized Caustics: Progressive Rendering of Animated Caustics" is available here: http://vc.cs.ovgu.de/files/publications/2016/Guenther_2016_EGb.pdf CynicatPro's channel is available here: https://www.youtube.com/user/CynicatPro/videos Recommended for you: Manipulating Photorealistic Renderings - https://www.youtube.com/watch?v=L7MOeQw47BM Ray Tracing / Subsurface Scattering @ Function 2015 - https://www.youtube.com/watch?v=qyDUvatu5M8 Metropolis Light Transport - https://www.youtube.com/watch?v=f0Uzit_-h3M The background of the thumbnail image was created by woodleywonderworks (CC BY 2.0) - https://flic.kr/p/2tKhPY Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A caustic is a beautiful phenomenon in nature, where curved surfaces reflect or refract light, thereby concentrating it to a relatively small area. If it's not your favorite visual phenomenon in nature yet, which is almost impossible, then you absolutely have to watch this episode. If it is, all the better, because you're going to love what's coming up now. Imagine that we have a photorealistic rendering program that simulates the path of light rays in a scene that we put together and creates beautiful imagery of our caustics. However, since we humans are pretty bad at estimating how exactly caustics should look, one can manipulate them to be more in line with one's artistic vision. Previously we had an episode on a technique which made it possible to pull the caustic patterns to be more visible, but this paper offers a much more sophisticated toolset to shape these caustic patterns to our liking. We can specify a target pattern that we would like to see and obtain a blend between what would normally happen in physics and what we imagined to appear there. It also supports animated sequences. Artists who use these tools are just as skilled in their trade as the scientists who created this algorithm, so I can only imagine the miracles they will create with such a technique. If you are interested in diving into photorealistic rendering, material modeling and all that cool stuff, there are completely free and open source tools out there, like Blender, that you can use. If you would like to get started, check out CynicatPro's YouTube channel, which has tons of really great material. Here's a quick teaser of his channel. Thanks, Károly. Over on CynicatPro, I do a bunch of art related videos, and I also, a while back, did a physically-based shading series talking about how light interacts with materials and how we can model that in Blender Cycles, which is a renderer that Károly used on a couple of occasions to show off ray tracing. It's actually a really fun program and worth checking out, so if 3D art or anything in that vicinity is your jam, maybe come check me out. And yeah, thanks for the time, Károly, and back to you. Thanks. There's a link to his channel in the description box. Make sure to check it out and subscribe if you like what you see there. I just realized that the year has barely started and it is already lavishing us with beautiful papers. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.5200000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zona Ifehir."}, {"start": 4.5200000000000005, "end": 11.120000000000001, "text": " A caustic is a beautiful phenomenon in nature, where curved surfaces reflect or refract light,"}, {"start": 11.120000000000001, "end": 14.280000000000001, "text": " thereby concentrating it to a relatively small area."}, {"start": 14.280000000000001, "end": 19.2, "text": " If it's not your favorite visual phenomenon in nature yet, which is almost impossible,"}, {"start": 19.2, "end": 21.72, "text": " then you absolutely have to watch this episode."}, {"start": 21.72, "end": 25.560000000000002, "text": " If it is, all the better because you're gonna love what's coming up now."}, {"start": 25.56, "end": 30.56, "text": " Imagine that we have a photorealistic rendering program that simulates the path of light rays"}, {"start": 30.56, "end": 35.839999999999996, "text": " in a scene that we put together and creates beautiful imagery of our caustics."}, {"start": 35.839999999999996, "end": 41.4, "text": " However, since we, humans, are pretty bad at estimating how exactly caustics should look"}, {"start": 41.4, "end": 45.8, "text": " like, one can manipulate them to be more in line with their artistic vision."}, {"start": 45.8, "end": 50.599999999999994, "text": " Previously we had an episode on a technique which made it possible to pull the caustic patterns"}, {"start": 50.599999999999994, "end": 55.44, "text": " to be more visible, but this paper offers a much more sophisticated toolset to talk"}, {"start": 55.44, "end": 58.68, "text": " to torment these caustic patterns to our liking."}, {"start": 58.68, "end": 63.56, "text": " We can specify a target pattern that we would like to see and obtain a blend between what"}, {"start": 63.56, "end": 68.36, "text": " would normally happen in physics and what we imagined to appear there."}, {"start": 68.36, "end": 72.28, "text": " It also supports animated sequences."}, {"start": 72.28, "end": 77.16, "text": " Artists who use these tools are just as skilled in their trade as the scientists who created"}, {"start": 77.16, "end": 82.28, "text": " this algorithm, so I can only imagine the miracles they will create with such a technique."}, {"start": 82.28, "end": 86.96000000000001, "text": " If you are interested in diving into photorealistic rendering, material modeling and all that"}, {"start": 86.96000000000001, "end": 91.88, "text": " cool stuff, there are completely free and open source tools out there like blender that"}, {"start": 91.88, "end": 92.88, "text": " you can use."}, {"start": 92.88, "end": 97.4, "text": " If you would like to get started, check out Cinecad Pro's YouTube channel that has tons"}, {"start": 97.4, "end": 99.16, "text": " of really great material."}, {"start": 99.16, "end": 101.4, "text": " Here's a quick teaser of his channel."}, {"start": 101.4, "end": 102.4, "text": " Thanks, Scott."}, {"start": 102.4, "end": 107.92, "text": " Over on Cinecad Pro, I do a bunch of art related videos and I also, a while back, did a physically"}, {"start": 107.92, "end": 112.88, "text": " evasuating series talking about how light interact with materials and how we can model that"}, {"start": 112.88, "end": 117.64, "text": " in blender cycles, which is a render that Catois used on a couple occasions to show off"}, {"start": 117.64, "end": 119.12, "text": " ray tracing."}, {"start": 119.12, "end": 124.2, "text": " It's actually a really fun program and 
worth checking out if 3D art or anything in that"}, {"start": 124.2, "end": 126.8, "text": " vicinity is your jam, maybe come check me out."}, {"start": 126.8, "end": 131.12, "text": " And yeah, thanks for the time, Catois, and back to you."}, {"start": 131.12, "end": 132.12, "text": " Thanks."}, {"start": 132.12, "end": 134.16, "text": " There's a link to his channel in the description box."}, {"start": 134.16, "end": 137.56, "text": " Make sure to check it out and subscribe if you like what you see there."}, {"start": 137.56, "end": 143.2, "text": " I just realized that the year has barely started and it is already lavishing in beautiful"}, {"start": 143.2, "end": 144.2, "text": " papers."}, {"start": 144.2, "end": 170.95999999999998, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=58tsN03IXlw
Should You Take the Stairs at Work? (For Weight Loss) | Two Minute Papers #47
Let's find out the answer to the ever-recurring question - if you are aiming for weight loss, should you take the stairs at work? How many calories do you burn by running up a few flights of stairs? There are so many rumors floating around, let's see what the researchers say! Scientists set up a controlled experiment where over a hundred subjects climbed 11 stories of staircases, ascending a total of 27 meters vertically. Their oxygen consumption and heart rate were measured, and most importantly for us, the caloric cost of this undertaking. Hint: it is not so great for weight loss, but the oxygen and heart rate responses make it a wonderful way to refresh your body and keep it healthy. Keep climbing! :) _________________ The paper "Heart rate, oxygen uptake, and energy cost of ascending and descending the stairs" is available here: https://www.researchgate.net/publication/11432301_Heart_rate_oxygen_uptake_and_energy_cost_of_ascending_and_descending_the_stairs http://www.ncbi.nlm.nih.gov/pubmed/11932581 The source of the thumbnail image (CC0): https://www.pexels.com/photo/stairs-staircase-28188/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I am sure that every one of us has overheard conversations at the workplace where people talked about taking the stairs instead of the elevator and, as a result, getting leaner. There was also a running joke on the internet about the Arnold Classic, a famous bodybuilding competition slash festival, where I think it's fair to say that people tended to favor the escalator instead of the stairs. So, this is it. We're going to settle this here and now. Do we get lean from taking the stairs every day? Scientists set up a controlled experiment where over 100 subjects climbed 11 stories of staircases, ascending a total of 27 meters vertically. Their oxygen consumption and heart rate were measured, and most importantly for us, the caloric cost of this undertaking. They found that all this self-flagellation with ascending 11 stories of staircases burns a whopping 19.7 kilocalories. Each step is worth approximately one tenth of a kilocalorie if you're ascending. Descending is worth approximately half of that. Apparently these bodybuilders know what they are doing. The authors diplomatically noted that stair climbing exercise using a local public access staircase met the minimum requirements for cardiorespiratory benefits, and can therefore be considered a viable exercise for most people and suitable for the promotion of physical activity. Which sounds like the scientific equivalent of, basically, well, better than nothing. So does this mean that you shouldn't take the stairs at work if you're looking to get lean? Not a chance. However, if you're looking for a refreshing cardiovascular exercise in the morning that refreshes your body and makes you happier, start climbing. I do it all the time and I love it. We are exploring so far uncharted territories, and this makes the first episode on nutrition in the series. If you would like to hear more of this, let me know in the comments section. I'd also be happy to see your paper recommendations in nutrition as well. Thanks for watching and for your generous support, and I'll see you next time.
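For the curious, the caloric arithmetic quoted above is easy to play with. A tiny sketch, assuming only the per-step figures cited in the episode (about 0.1 kcal per step ascending, roughly half that descending):

```python
# The back-of-the-envelope stair arithmetic from the study quoted above.
KCAL_UP_PER_STEP = 0.1    # approximate value cited in the episode
KCAL_DOWN_PER_STEP = 0.05  # "descending is worth approximately half of that"

def stair_kcal(steps_up, steps_down=0):
    return steps_up * KCAL_UP_PER_STEP + steps_down * KCAL_DOWN_PER_STEP

# E.g., a 200-step round trip burns on the order of a single cookie bite:
print(stair_kcal(200, 200))  # ~30 kcal
```

Which makes the episode's conclusion concrete: the cardiovascular benefit is real, but the caloric cost is tiny compared to any meal.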
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 5.0, "end": 10.14, "text": " I am sure that every one of us have overheard conversations at the workplace where people"}, {"start": 10.14, "end": 16.740000000000002, "text": " talked about taking the stairs instead of the elevator and as a result getting leaner."}, {"start": 16.740000000000002, "end": 22.38, "text": " There was also a running joke on the internet about Arnold Classic, a famous bodybuilding competition"}, {"start": 22.38, "end": 28.86, "text": " slash festival where I think it's fair to say that people tended to favor the escalator"}, {"start": 28.86, "end": 30.86, "text": " instead of the stairs."}, {"start": 30.86, "end": 32.86, "text": " So, this is it."}, {"start": 32.86, "end": 35.6, "text": " We're going to settle this here and now."}, {"start": 35.6, "end": 39.36, "text": " Do we get lean from taking the stairs every day?"}, {"start": 39.36, "end": 45.379999999999995, "text": " Scientists set up a controlled experiment where over 100 subjects climbed 11 stories of"}, {"start": 45.379999999999995, "end": 50.36, "text": " staircases ascending a total of 27 meters vertically."}, {"start": 50.36, "end": 55.54, "text": " Their oxygen consumption and heart rate was measured and most importantly for us the amount"}, {"start": 55.54, "end": 58.32, "text": " of calorie cost of this undertaking."}, {"start": 58.32, "end": 65.36, "text": " They have found that all this self-legilation with ascending 11 stories of staircases burns"}, {"start": 65.36, "end": 70.7, "text": " a whopping 19.7 kilo calories."}, {"start": 70.7, "end": 75.98, "text": " Each step is worth approximately one-tenth of a kilo calorie if you're ascending."}, {"start": 75.98, "end": 79.38, "text": " Descending is worth approximately half of that."}, {"start": 79.38, "end": 82.58, "text": " Apparently these bodybuilders know what they are doing."}, {"start": 82.58, "end": 88.74, "text": " The authors diplomatically noted, stair climbing exercise using a local public access staircase"}, {"start": 88.74, "end": 94.94, "text": " met the minimum requirements for cardio-respiratory benefits and can therefore be considered a viable"}, {"start": 94.94, "end": 99.94, "text": " exercise for most people and suitable for promotion of physical activity."}, {"start": 99.94, "end": 105.62, "text": " Which sounds like the scientific equivalent of basically, well, better than nothing."}, {"start": 105.62, "end": 109.98, "text": " So does this mean that you shouldn't take the stairs at work if you're looking to get"}, {"start": 109.98, "end": 111.78, "text": " lean because of that?"}, {"start": 111.78, "end": 113.26, "text": " Not a chance."}, {"start": 113.26, "end": 118.5, "text": " However, if you're looking for a refreshing cardiovascular exercise in the morning that"}, {"start": 118.5, "end": 122.66, "text": " refreshes your body and makes you happier, start climbing."}, {"start": 122.66, "end": 125.78, "text": " I do it all the time and I love it."}, {"start": 125.78, "end": 131.26, "text": " We are exploring so far uncharted territories and this makes the first episode on nutrition"}, {"start": 131.26, "end": 132.5, "text": " in the series."}, {"start": 132.5, "end": 135.94, "text": " If you would like to hear more of this, let me know in the comment section."}, {"start": 135.94, "end": 140.58, "text": " I'd also be happy to see your paper recommendations in nutrition as well."}, {"start": 140.58, 
"end": 144.42000000000002, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=YPpIWQnufu8
What is Impostor Syndrome? | Two Minute Papers #46
Who, or what is an impostor? An impostor is a person who deceives others by pretending to be someone else. In this episode, we look into the mind of someone who suffers from impostor syndrome and see the fickle understanding they have of their own achievements. Researchers, academics and high-achieving women are especially vulnerable to this condition. There will also be a few words on how to treat it. __________________________ The paper "The imposter phenomenon in high achieving women: Dynamics and therapeutic intervention" is available here: http://www.suzanneimes.com/wp-content/uploads/2012/09/Imposter-Phenomenon.pdf An article on Hayden Christensen: http://www.vulture.com/2015/12/hayden-christensen-quit-movies-after-star-wars.html The filmography of Hayden Christensen: https://en.wikipedia.org/wiki/Hayden_Christensen Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Who, or what, is an impostor? Simple definition: a person who deceives others by pretending to be someone else. Full definition: one that assumes a false identity or title for the purpose of deception. Wow! The full definition is a whopping seven characters more. I don't even know what to do with this amount of time that I saved reading the simple definition first. Let's look into the mind of someone who suffers from impostor syndrome and see the fickle understanding they have of their own achievements. 98 points out of 100. This surely means that they mixed up my submission with someone else's, who was way smarter than I am. I went in for the next round of interviews, messed up big time, and I got hired with an incredible salary. This can, of course, only be a misunderstanding. I got elected for this prestigious award. I don't know how this could possibly have happened. Maybe someone who really likes me tried to pressure the prize committee to vote for me. I cannot possibly imagine any other way of this happening. I've been five years at this company now, and still, no one has found out that I'm a fraud. That's a disaster. Nothing can convince me that I'm not an impostor who fooled everyone else into believing I'm a bright person. As funny as it may sound, this is a very real problem. Researchers, academics, and high-achieving women are especially vulnerable to this condition. But it is indeed not limited to these professions. For instance, Hayden Christensen, the actor playing Anakin Skywalker in the beloved Star Wars series, appears to suffer from very similar symptoms. He said: I felt like I had this great thing in Star Wars that provided all the opportunities and gave me a career, but it all kind of felt a little too handed to me. I didn't want to go through life feeling like I was just riding a wave. So as a response, he hasn't really done any acting for four years. He also said: If this time away is going to be damaging to my career, then so be it. If I can come back afterward and claw my way back in, then maybe I'll feel like I earned it. The treatment of impostor syndrome includes group sessions where the patients discuss their lives and come to a sudden realization that they are not alone, and that this is not an individual case but a common pattern among high-achieving people. As they are also very keen on dismissing praise and kind words, they are instructed to be more vigilant about catching themselves doing that, and to try to take in all the nourishment they get from their colleagues. These are the more common ways to treat this serious condition that poisons so many people's minds. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 8.120000000000001, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Izuna Ifei, who or what is an imposter?"}, {"start": 8.120000000000001, "end": 9.36, "text": " Simple definition."}, {"start": 9.36, "end": 13.8, "text": " A person who deceives others by pretending to be someone else."}, {"start": 13.8, "end": 15.200000000000001, "text": " Full definition."}, {"start": 15.200000000000001, "end": 19.64, "text": " One that assumes false identity or title for the purpose of deception."}, {"start": 19.64, "end": 20.96, "text": " Wow!"}, {"start": 20.96, "end": 23.76, "text": " The full definition is a whopping seven characters more."}, {"start": 23.76, "end": 27.96, "text": " I don't even know what to do with this amount of time that I saved reading the simple"}, {"start": 27.96, "end": 29.28, "text": " definition first."}, {"start": 29.28, "end": 35.92, "text": " Let's look in the mind of someone who suffers from imposter syndrome and see the fickle understanding"}, {"start": 35.92, "end": 38.28, "text": " they have of their own achievements."}, {"start": 38.28, "end": 40.68, "text": " 98 points out of 100."}, {"start": 40.68, "end": 45.52, "text": " This surely means that they mixed up my submission with someone else's who was way smarter"}, {"start": 45.52, "end": 46.52, "text": " than I am."}, {"start": 46.52, "end": 51.32, "text": " I went in for the next round of interviews, messed up big time, and I got hired with an"}, {"start": 51.32, "end": 53.2, "text": " incredible salary."}, {"start": 53.2, "end": 57.0, "text": " This can, of course, only be a misunderstanding."}, {"start": 57.0, "end": 59.52, "text": " I got elected for this prestigious award."}, {"start": 59.52, "end": 62.52, "text": " I don't know how this could possibly have happened."}, {"start": 62.52, "end": 67.32, "text": " Maybe someone who really likes me tried to pressure the price committee to vote for me."}, {"start": 67.32, "end": 71.4, "text": " I cannot possibly imagine any other way of this happening."}, {"start": 71.4, "end": 76.8, "text": " I've been five years at this company now, and still, no one found out that I'm a fraud."}, {"start": 76.8, "end": 78.92, "text": " That's a disaster."}, {"start": 78.92, "end": 83.56, "text": " Nothing can convince me that I'm not an imposter who fooled everyone else for being a bright"}, {"start": 83.56, "end": 84.56, "text": " person."}, {"start": 84.56, "end": 88.04, "text": " If funny as it may sound, this is a very real problem."}, {"start": 88.04, "end": 92.96000000000001, "text": " Researchers, academics, and high-achieving women are especially vulnerable to this condition."}, {"start": 92.96000000000001, "end": 96.32000000000001, "text": " But it is indeed not limited to these professions."}, {"start": 96.32000000000001, "end": 101.96000000000001, "text": " For instance, Hayden Christensen, the actor playing Anakin Skywalker in the beloved Star Wars"}, {"start": 101.96000000000001, "end": 105.72, "text": " series, appears to suffer from very similar symptoms."}, {"start": 105.72, "end": 106.72, "text": " He said,"}, {"start": 106.72, "end": 111.32000000000001, "text": " I felt like I had this great thing in Star Wars that provided all the opportunities and"}, {"start": 111.32, "end": 115.24, "text": " gave me a career, but it all kind of felt a little too handed to me."}, {"start": 115.24, "end": 120.24, "text": " I didn't want to go through life feeling like I was just riding a wave."}, {"start": 120.24, "end": 128.76, 
"text": " So as a response, he hasn't really done any acting for four years."}, {"start": 128.76, "end": 129.76, "text": " He also said,"}, {"start": 129.76, "end": 133.72, "text": " If this time away is going to be damaging to my career, then so be it."}, {"start": 133.72, "end": 138.72, "text": " If I can come back afterward and claw my way back in, then maybe I'll feel like I earned"}, {"start": 138.72, "end": 139.72, "text": " it."}, {"start": 139.72, "end": 144.04, "text": " The treatment of imposter syndrome includes group sittings where the patients discuss their"}, {"start": 144.04, "end": 149.4, "text": " lives and come to a sudden realization that they are not alone, and this is not an individual"}, {"start": 149.4, "end": 153.2, "text": " case, but a common pattern among high-achieving people."}, {"start": 153.2, "end": 157.96, "text": " As they are also very keen on dismissing praise and kind words, they are instructed to be"}, {"start": 157.96, "end": 162.48, "text": " more vigilant about doing that and to try to take in all the nourishment they get from"}, {"start": 162.48, "end": 163.48, "text": " their colleagues."}, {"start": 163.48, "end": 168.44, "text": " These are the more common ways to treat this serious condition that poisons so many people's"}, {"start": 168.44, "end": 169.44, "text": " minds."}, {"start": 169.44, "end": 172.84, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=u3C4zkxNtok
Biophysical Skin Aging Simulations | Two Minute Papers #45
The faithful simulation of human skin is incredibly important both in computer games, the movie industry, and also in medical sciences. The appearance of our face is strongly determined by the underlying structure of our skin. Human skin changes significantly with age. Scientists at the University of Zaragoza came up with a really cool, fully-fledged biophysically-based model that opens up the possibility of simply specifying intuitive parameters like age, gender, skin type, and get, after some processing, a much lighter skin representation ready to generate photorealistic rendered results in real time. _________________________ The paper "A Biophysically-Based Model of the Optical Properties of Skin Aging" is available here: http://giga.cps.unizar.es/~ajarabo/pubs/skinAgingEG15/ The thumbnail image was taken from this work. Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The faithful simulation of human skin is incredibly important, both in computer games, the movie industry, and also in medical sciences. The appearance of our face is strongly determined by the underlying structure of our skin. Human skin changes significantly with age. It becomes thinner and more dry, while the concentration of chromophores, the main skin pigments, diminishes and becomes more irregular. These pigment concentrations are determined by our age, gender, skin type, and even external factors like, for example, exposure to UV radiation or our smoking habits. As we age, the outermost layer of our skin, the epidermis, thins. The melanin, hemoglobin, and water concentration levels drop over time. As you can imagine, having a plausible simulation considering all the involved factors is fraught with difficulties. Scientists at the University of Zaragoza came up with a really cool, fully-fledged biophysically-based model that opens up the possibility of simply specifying intuitive parameters like age, gender, and skin type, and getting, after some processing, a much lighter skin representation, ready to generate photorealistic rendered results in real time. Luckily, one can record diffusion profiles, also called scattering profiles, that tell us the color of light that is reflected by our skin. In the image above, you can see a rendered image and a diffusion profile of a 30- and an 80-year-old person. The idea is the following: you specify intuitive inputs like age and skin type, then run a detailed simulation once, which creates these diffusion profiles that you can use forever in your rendering program. And all this is done in a way that is biophysically impeccable. I was sure that there was some potential in this topic, but when I first saw these results, they completely crushed my expectations. Excellent piece of work. Thanks for watching and for your generous support, and I'll see you next time.
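The precompute-once, reuse-forever workflow described above can be sketched in code. To be clear, simulate_diffusion_profile() and its ageing coefficient below are hypothetical placeholders, not the authors' model; the real biophysical simulation in the paper is far more involved. This only illustrates the pattern of running an expensive simulation once per parameter set and caching the resulting lightweight profile.

```python
# Purely illustrative sketch of the precompute-once workflow above.
# The function body and the 0.008 ageing coefficient are made up for
# illustration; they do NOT reproduce the paper's biophysical model.
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def simulate_diffusion_profile(age, gender, skin_type, samples=64):
    # Stand-in for the expensive simulation: a reflectance falloff whose
    # width shrinks with age (epidermis thins, pigments diminish).
    width = max(0.2, 1.0 - 0.008 * age)            # hypothetical coefficient
    return tuple(math.exp(-(i / samples) / width)  # radial falloff samples
                 for i in range(samples))

profile = simulate_diffusion_profile(30, "f", "II")        # computed once...
profile_again = simulate_diffusion_profile(30, "f", "II")  # ...then cached
```

A renderer would then sample such a precomputed profile at every shading point instead of re-running the simulation, which is what makes the real-time results possible.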
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Karoizhola Ifehir."}, {"start": 4.72, "end": 10.76, "text": " The faithful simulation of human skin is incredibly important, both in computer games, the movie"}, {"start": 10.76, "end": 13.92, "text": " industry, and also in medical sciences."}, {"start": 13.92, "end": 19.6, "text": " The appearance of our face is strongly determined by the underlying structure of our skin."}, {"start": 19.6, "end": 22.44, "text": " Human skin changes significantly with age."}, {"start": 22.44, "end": 27.64, "text": " It becomes thinner and more dry, while the concentration of chromophores, the main skin"}, {"start": 27.64, "end": 31.080000000000002, "text": " pigments diminishes and becomes more irregular."}, {"start": 31.080000000000002, "end": 37.64, "text": " Those pigment concentrations are determined by our age, gender, skin type, and even external"}, {"start": 37.64, "end": 44.120000000000005, "text": " factors like, for example, exposition to UV radiation or our smoking habits."}, {"start": 44.120000000000005, "end": 48.68, "text": " As we age, the outermost layer of our skin, the epidermis, thins."}, {"start": 48.68, "end": 53.2, "text": " The melanin, hemoglobin, and water concentration levels drop over time."}, {"start": 53.2, "end": 58.440000000000005, "text": " As you could imagine, having a plausible simulation considering all the involved actors"}, {"start": 58.440000000000005, "end": 60.68000000000001, "text": " is fraught with difficulties."}, {"start": 60.68000000000001, "end": 66.48, "text": " Scientists at the University of Saragosa came up with a really cool, fully-fledged biophysically-based"}, {"start": 66.48, "end": 72.80000000000001, "text": " model that opens up the possibility of simply specifying intuitive parameters like age,"}, {"start": 72.80000000000001, "end": 78.92, "text": " gender, skin type, and get, after some processing, a much lighter skin representation, ready"}, {"start": 78.92, "end": 82.88, "text": " to generate photorealistic rendered results in real time."}, {"start": 82.88, "end": 88.72, "text": " Luckily, one can record diffusion profiles, also called scattering profiles, that tell"}, {"start": 88.72, "end": 92.39999999999999, "text": " us the color of light that is reflected by our skin."}, {"start": 92.39999999999999, "end": 97.88, "text": " In this image above, you can see a rendered image and a diffusion profile of a 30 and an"}, {"start": 97.88, "end": 99.75999999999999, "text": " 80-year-old person."}, {"start": 99.75999999999999, "end": 106.16, "text": " The idea is the following, you specify intuitive inputs like age and skin, then run a detailed"}, {"start": 106.16, "end": 111.88, "text": " simulation once that creates these diffusion profiles that you can use forever in your rendering"}, {"start": 111.88, "end": 112.88, "text": " program."}, {"start": 112.88, "end": 117.8, "text": " And all this is done in a way that is biophysically impeccable."}, {"start": 117.8, "end": 122.72, "text": " I was sure that there was some potential in this topic, but when I first saw these results,"}, {"start": 122.72, "end": 125.39999999999999, "text": " they completely crushed my expectations."}, {"start": 125.39999999999999, "end": 126.64, "text": " Excellent piece of work."}, {"start": 126.64, "end": 156.2, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=AHl2JjGsu0s
Extrapolations and Crowdfunded Research (Experiment) | Two Minute Papers #44
What is extrapolation? Extrapolation basically means continuing lines (or connecting dots, if you like this intuition better). A good example is when we have data for something from the last few days or years, and would like to have a forecast for the future. We will do some linear and nonlinear extrapolations (and learn what they mean) and try to find out the amount of money Experiment will raise for open research. Experiment is a cool new startup that is trying to accelerate progress in research by crowdsourcing it. ______________________ Logarithmic growth examples from the comments: - athletic training - at first, you make great improvements, then as you approach the limits of your endurance, progress slows down, and eventually stops (Morten Eriksen), - The approximate number of Olympic records on the men's 100 m sprint (RelatedGiraffe), - Bacterial growth. At first, there is a lot of sugar to feed bacteria but there simply aren't that many bacteria and they split as fast as they possibly can, roughly doubling each time step. But eventually the limits of the available sugar become apparent and newly born bacteria either don't find enough nutrition to split again or they outright starve. Inevitably you run into a balance where about as many bacteria die as are born and thus the population growth runs flat (Kram). Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Experiment, crowdsourcing research: https://experiment.com/ Links to Wolfram|Alpha to reproduce the experiments ( Linear fit: http://www.wolframalpha.com/input/?i=linear+fit+52700,+527197,+766924,+3856542 Quadratic fit: http://www.wolframalpha.com/input/?i=quadratic+fit+52700,+527197,+766924,+3856542 Logarithmic fit: http://www.wolframalpha.com/input/?i=logarithmic+fit+52700,+527197,+766924,+3856542 Plot ALL the functions! http://www.wolframalpha.com/input/?i=plot+sqrt%28x%29+and+x+and+x%5E2+and+e%5Ex-2+where+x+%3D+0..5+y%3D0..4 One more great image to explain the concept of sublinear and superlinear: http://deliveryimages.acm.org/10.1145/2720000/2719919/figs/f1.jpg The good old xkcd: https://xkcd.com/605/ The thumbnail image background was created by NIAID (CC BY 2.0): https://flic.kr/p/rg1p9H Animation at the start (MIT license): https://www.shadertoy.com/view/llXSD7 Pregnant lady image by Tobias Lindman (CC BY 2.0): https://flic.kr/p/nhZ7Yh Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. What is extrapolation? We hear the term a lot, so let's try to learn what's behind it. Despite the complicated definitions that are out there, extrapolation basically means continuing lines. A good example is when we have data for something from the last few days or years and would like to have a forecast for the future. We'll jump right into an example, just give me a second to build this up. It's going to make sense in the end, I promise. So in many fields of science, it is really difficult to get research projects funded. Experiment is a cool new startup that is trying to accelerate progress by crowdsourcing it. It doesn't get simpler than this system. Scientists pitch their research project plan and kindhearted people pledge a one-time donation to help their cause. It is like Kickstarter for research. Some of the newer funded projects include growing food in space, developing an open protocol for insulin production, and of course, a mandatory cat project that includes sequencing the genome of rare mutations. Crowdfunding research is such a terrific idea and I tell you, these guys are really doing it right. The startup was founded in 2012, and people pledged $52,000 that year. In the next year, 10 times that, and they have kept up a steady and quite impressive growth ever since. In 2015, they raised almost $4 million for open research. It's amazing. Okay, so, a nice extrapolation problem. How much can they expect to raise next year, in 2016? Before we start, we have to be careful to extrapolate only when we are reasonably sure about the nature of the trends and that they won't change significantly in the near future. With that out of the way, let's do a linear extrapolation. Linear means that growth follows a straight line. So, we put these dots on a paper and try to connect them with a line. Now, we take the mathematical description of this line and substitute something in it. Since we have four years of data, four dots, we would be interested in the location of the fifth point, which is the amount of raised money in 2016. So, let's do it. 10 to the sixth is 1 million, so this says that we can expect $4.2 million. But let's be more optimistic and do a super-linear extrapolation. Super-linear means that the rate of growth is not a straight line, but something that is accelerating in time. If this assumption is true, we can expect them to raise way more: $7.4 million. A bit more pessimistic solution would be a sublinear extrapolation. Sublinear means that growth slows down in time. This kind of growth is described well with, for instance, the logarithm function. This effect is also often called the effect of diminishing returns. A good example of this is the skill level of Google DeepMind's Artificial Intelligence program that plays Go. As we add more and more computational resources, the algorithm gets better and better at the game, but after a point, there's only so much one can learn, therefore progress slows down and eventually gets close to stopping. There are so many examples of this effect in our lives. If you have some great examples of logarithmic growth, let me know in the comments section. I'll include the best ones in the video description box. According to this logarithmic fit, we can expect the company to raise less than the previous estimations: $3.1 million next year. Sorry guys.
A common pitfall in popular media is that mathematically untrained minds almost always assume linear growth due to its simplicity. This can lead to hilariously wrong results. If you extrapolated the size of the belly of a pregnant woman after nine months, your conclusion would be to run, because she is going to explode, whereas we know that a baby is going to be born and she is going to get back in shape. If I had zero wives yesterday and it's my wedding day today, I will sure as hell have a couple dozen wives by next month. Many things are inherently non-linear, and doing a simple linear extrapolation often doesn't do justice to the problem at hand. Bear in mind that there are many different ways to connect a bunch of dots. Let's try to find out why we had wildly varying results. This is due to the fact that we only had four samples, that means four dots. If I plot these possible functions that we have been talking about, we get the following. It seems that the further we go, the more they diverge. However, in this case, if we have data only between zero and one, for instance, there is very little difference between a wild exponential function and a very conservative square root-based growth. You can also imagine the logarithm here. The more dots we have over a greater period of time, the more we can distinguish the nature of our growth. And an educated mind has to take into consideration that many phenomena are inherently non-linear. If you catch someone doing a linear extrapolation, always ask: are you sure that the process you're modeling is indeed linear, and do you have enough data to prove that? That's all for today. Thanks for watching and for your generous support, and I'll see you next time.
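The three estimates quoted above are easy to reproduce. A short sketch that fits the four yearly totals (the same numbers the Wolfram|Alpha links in the description use) with a linear, a quadratic and a logarithmic model, then evaluates each at year five:

import numpy as np

# Yearly totals pledged on Experiment, 2012-2015.
years = np.array([1.0, 2.0, 3.0, 4.0])
raised = np.array([52_700.0, 527_197.0, 766_924.0, 3_856_542.0])

# Linear fit: raised ~ a*year + b
a, b = np.polyfit(years, raised, 1)
linear_2016 = a * 5.0 + b

# Quadratic (super-linear) fit: raised ~ a*year^2 + b*year + c
qa, qb, qc = np.polyfit(years, raised, 2)
quadratic_2016 = qa * 25.0 + qb * 5.0 + qc

# Logarithmic (sub-linear) fit: raised ~ a*log(year) + b
la, lb = np.polyfit(np.log(years), raised, 1)
log_2016 = la * np.log(5.0) + lb

print(f"linear:      ${linear_2016 / 1e6:.1f}M")
print(f"quadratic:   ${quadratic_2016 / 1e6:.1f}M")
print(f"logarithmic: ${log_2016 / 1e6:.1f}M")

Running this gives roughly $4.2 million, $7.5 million and $3.1 million, matching the figures in the video up to rounding.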
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Efehir."}, {"start": 5.0, "end": 6.8, "text": " What is extrapolation?"}, {"start": 6.8, "end": 10.36, "text": " We hear the term a lot, so let's try to learn what's behind it."}, {"start": 10.36, "end": 15.120000000000001, "text": " Despite the complicated definitions that are out there, extrapolation basically means"}, {"start": 15.120000000000001, "end": 16.8, "text": " continuing lines."}, {"start": 16.8, "end": 22.0, "text": " A good example is when we have data for something from the last few days or years and would"}, {"start": 22.0, "end": 24.560000000000002, "text": " like to have a forecast for the future."}, {"start": 24.560000000000002, "end": 28.48, "text": " We'll jump right into an example, just give me a second to build this up."}, {"start": 28.48, "end": 31.04, "text": " It's going to make sense in the end, I promise."}, {"start": 31.04, "end": 36.68, "text": " So in many fields of science, it is really difficult to get research projects funded."}, {"start": 36.68, "end": 41.72, "text": " Experiment is a cool new startup that is trying to accelerate progress by crowdsourcing it."}, {"start": 41.72, "end": 44.4, "text": " It doesn't get simpler than this system."}, {"start": 44.4, "end": 48.84, "text": " Scientists pitch their research project plan and kindhearted people pledge a one-time"}, {"start": 48.84, "end": 51.0, "text": " donation to help their cause."}, {"start": 51.0, "end": 53.6, "text": " It is like Kickstarter for research."}, {"start": 53.6, "end": 59.24, "text": " Some of the newer funded projects include growing food in space, developing an open protocol"}, {"start": 59.24, "end": 65.54, "text": " for insulin production, and of course, a mandatory cat project that includes sequencing the"}, {"start": 65.54, "end": 67.84, "text": " genome of rare mutations."}, {"start": 67.84, "end": 72.68, "text": " Crowdfunding research is such a terrific idea and I tell you, these guys are really doing"}, {"start": 72.68, "end": 73.68, "text": " it right."}, {"start": 73.68, "end": 79.36, "text": " The startup has been founded in 2012 and people pledged $52,000 that year."}, {"start": 79.36, "end": 84.76, "text": " In the next year, 10 times that, and they have kept the study and quite impressive growth"}, {"start": 84.76, "end": 86.0, "text": " ever since."}, {"start": 86.0, "end": 91.03999999999999, "text": " In 2015, they raised almost $4 million for open research."}, {"start": 91.03999999999999, "end": 92.03999999999999, "text": " It's amazing."}, {"start": 92.03999999999999, "end": 95.08, "text": " Okay, so, a nice extrapolation problem."}, {"start": 95.08, "end": 99.72, "text": " How much can they expect to raise next year in 2016?"}, {"start": 99.72, "end": 105.0, "text": " Before we start, we have to be extremely sure to extrapolate only if we are reasonably sure"}, {"start": 105.0, "end": 110.0, "text": " about the nature of the trends and that they won't change significantly in the near future."}, {"start": 110.0, "end": 113.76, "text": " With that out of the way, let's do a linear extrapolation."}, {"start": 113.76, "end": 117.12, "text": " Linear means that growth follows a straight line."}, {"start": 117.12, "end": 121.96000000000001, "text": " So, we put these dots on a paper and try to connect them with a line."}, {"start": 121.96000000000001, "end": 127.24000000000001, "text": " Now, we take the mathematical description of this line and substitute 
something in it."}, {"start": 127.24000000000001, "end": 132.04, "text": " Since we have four years of data, four dots, we would be interested in the location of"}, {"start": 132.04, "end": 136.12, "text": " the fifth point, which is the amount of raised money in 2016."}, {"start": 136.12, "end": 138.56, "text": " So, let's do it."}, {"start": 138.56, "end": 145.56, "text": " 10 to the sixth is 1 million, so this says that we can expect $4.2 million."}, {"start": 145.56, "end": 150.56, "text": " But let's be more optimistic and do a super-linear extrapolation."}, {"start": 150.56, "end": 155.39999999999998, "text": " Super-linear means that the rate of growth is not a straight line, but something that is"}, {"start": 155.4, "end": 166.88, "text": " accelerating in time."}, {"start": 166.88, "end": 173.32, "text": " If this assumption is true, we can expect them to raise way more $7.4 million."}, {"start": 173.32, "end": 178.12, "text": " A bit more pessimistic solution would be a sublinear extrapolation."}, {"start": 178.12, "end": 181.76, "text": " Sublinear means that growth slows down in time."}, {"start": 181.76, "end": 186.79999999999998, "text": " This kind of growth is described well with, for instance, the logarithm function."}, {"start": 186.79999999999998, "end": 190.95999999999998, "text": " This effect is also often called the effect of diminishing returns."}, {"start": 190.95999999999998, "end": 195.64, "text": " A good example of this is the skill level of Google DeepMind's Artificial Intelligence"}, {"start": 195.64, "end": 197.48, "text": " program that plays go."}, {"start": 197.48, "end": 202.16, "text": " As we add more and more computational resources, the algorithm gets better and better at the"}, {"start": 202.16, "end": 207.28, "text": " game, but after a point, there's only so much one can learn, therefore progress slows"}, {"start": 207.28, "end": 210.44, "text": " down and eventually gets close to stopping."}, {"start": 210.44, "end": 213.44, "text": " There are so many examples of this effect in our lives."}, {"start": 213.44, "end": 217.6, "text": " If you have some great examples of logarithmic growth, let me know in the comments section."}, {"start": 217.6, "end": 221.28, "text": " I'll include the best ones in the video description box."}, {"start": 221.28, "end": 225.8, "text": " According to this logarithm, we can expect the company to raise less than the previous"}, {"start": 225.8, "end": 226.8, "text": " estimation."}, {"start": 226.8, "end": 229.92, "text": " $3.1 million next year."}, {"start": 229.92, "end": 232.84, "text": " Sorry guys."}, {"start": 232.84, "end": 238.48, "text": " A common pitfall in popular media is that the mathematically untrained minds almost always"}, {"start": 238.48, "end": 241.64, "text": " assume linear growth due to its simplicity."}, {"start": 241.64, "end": 244.32, "text": " This can lead to hilariously wrong results."}, {"start": 244.32, "end": 248.88, "text": " If you would extrapolate the size of the belly of a pregnant woman after nine months, your"}, {"start": 248.88, "end": 254.32, "text": " conclusion would be run because she is going to explode, whereas we know that a baby"}, {"start": 254.32, "end": 257.71999999999997, "text": " is going to be born and she is going to get back in shape."}, {"start": 257.71999999999997, "end": 263.0, "text": " If I had zero wives yesterday and it's my wedding day today, I will sure as hell have"}, {"start": 263.0, "end": 267.96, "text": " a couple dozen wives by next month."}, 
{"start": 267.96, "end": 272.88, "text": " Many things are inherently non-linear and doing a simple linear extrapolation often doesn't"}, {"start": 272.88, "end": 275.12, "text": " do justice to the problem at hand."}, {"start": 275.12, "end": 279.15999999999997, "text": " Bear in mind that there are many different ways to connect a bunch of dots."}, {"start": 279.15999999999997, "end": 282.47999999999996, "text": " Let's try to find out why we had wildly varying results."}, {"start": 282.47999999999996, "end": 287.56, "text": " This is due to the fact that we only had four samples, that means four dots."}, {"start": 287.56, "end": 295.88, "text": " If I plot these possible functions that we have been talking about, we get the following."}, {"start": 295.88, "end": 299.68, "text": " It seems that the further we go, the more they diverge."}, {"start": 299.68, "end": 305.04, "text": " However, in this case, if we have data only between zero and one, for instance, there is"}, {"start": 305.04, "end": 311.04, "text": " very little difference between a wild exponential function and a very conservative square root"}, {"start": 311.04, "end": 312.28, "text": " base growth."}, {"start": 312.28, "end": 314.56, "text": " You can also imagine your logarithm here."}, {"start": 314.56, "end": 319.12, "text": " The more dots we have over a greater period of time, the more we can distinguish the nature"}, {"start": 319.12, "end": 320.56, "text": " of our growth."}, {"start": 320.56, "end": 325.84, "text": " And an educated mind has to take into consideration that many phenomena are inherently non-linear"}, {"start": 325.84, "end": 331.35999999999996, "text": " \u2013 if you catch someone doing a linear extrapolation, always ask, are you sure that the process"}, {"start": 331.35999999999996, "end": 335.84, "text": " your modeling is indeed linear and do you have enough data to prove that?"}, {"start": 335.84, "end": 336.84, "text": " That's all for today."}, {"start": 336.84, "end": 364.84, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=j9FLOinaG94
Breaking Deep Learning Systems With Adversarial Examples | Two Minute Papers #43
Artificial neural networks are computer programs that try to approximate what the human brain does to solve problems like recognizing objects in images. In this piece of work, the authors analyze the properties of these neural networks and try to unveil what exactly makes them think that a paper towel is a paper towel, and, building on this knowledge, try to fool these programs. Carefully crafted adversarial examples can be used to fool deep neural networks reliably. _______________ The paper "Intriguing properties of neural networks" is available here: http://arxiv.org/abs/1312.6199 The paper "Explaining and Harnessing Adversarial Examples" is available here: http://arxiv.org/abs/1412.6572 Image credits: Thumbnail image - https://www.flickr.com/photos/healthblog/8384110298 (CC BY-SA 2.0) Shower cap - Code Words / Julia Evans - https://codewords.recurse.com/issues/five/why-do-neural-networks-think-a-panda-is-a-vulture MNIST - hxhl95 Andrej Karpathy's online convolutional neural network: http://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Artificial neural networks are computer programs that try to approximate what the human brain does to solve problems like recognizing objects in images. In this piece of work, the authors analyze the properties of these neural networks and try to unveil what exactly makes them think that a paper towel is a paper towel. And, building on this knowledge, try to fool these programs. Let's have a look at this example. One can grab this input image and this noise pattern and add these two images together, similarly as one would add two numbers together. The operation yields the image you see here. I think it's fair to say that the difference is barely perceptible to the human eye. Not so much for neural networks, because the input image we started with is classified correctly as a bus, and the image that you see on the right is classified as an ostrich. In simple terms, bus plus noise equals an ostrich. These two images look almost exactly the same, but the neural networks see them quite differently. We call these examples adversarial examples because they are designed to fool these image recognition programs. In machine learning research, there are common datasets to test different classification techniques on. One of the best known examples is the MNIST handwriting dataset. It is basically a bunch of images depicting handwritten numbers that machine learning algorithms have to recognize. Long ago, this used to be a difficult problem, but nowadays, any half-decent algorithm can guess the numbers correctly more than 99% of the time after learning for just a few seconds. Now we'll see that these adversarial examples are not created by chance. If we add a lot of random noise to these images, they get quite difficult to recognize. Let's engage in modesty and say that I, myself as a human, can recognize approximately half of them. But only if I look closely and maybe even squint. A neural network can guess these correctly approximately 50% of the time as well, which is a quite respectable result. Therefore, adding random noise is not really fooling the neural networks. However, if you look at these adversarial examples in the even columns, you see how carefully they are crafted, as they look very similar to the original images. But the classification accuracy of the neural network on these examples is 0%. You heard it correctly. It gets it wrong basically all the time. The take-home message is that carefully crafted adversarial examples can be used to fool deep neural networks reliably. You can watch them flounder on many hilarious examples to your enjoyment. My dear sir, the queen wears a shower cap, you say. I beg your pardon. If you would like to support Two Minute Papers, we are available on Patreon and offer really cool perks for our fellow scholars. For instance, you can watch each episode around 24 hours in advance or even decide the topic of the next episodes. How cool is that? If you're interested, just click on the box below on the screen. Thanks for watching and for your generous support, and I'll see you next time.
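To make it concrete how such noise patterns are crafted rather than sampled at random, here is a hedged sketch of the fast gradient sign method from the second linked paper, "Explaining and Harnessing Adversarial Examples". It is one simple recipe, not necessarily the exact procedure behind the figures in this episode; model, image and label are placeholders for any differentiable classifier and a correctly labeled input.

import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.007):
    # Nudge every pixel a tiny step in the direction that increases
    # the classification loss: "bus plus noise equals an ostrich",
    # where the noise is the sign of the loss gradient.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    noise = epsilon * image.grad.sign()
    return (image + noise).clamp(0.0, 1.0).detach()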
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zona Ifehir."}, {"start": 5.0, "end": 9.94, "text": " Artificial neural networks are computer programs that try to approximate what the human brain"}, {"start": 9.94, "end": 14.200000000000001, "text": " does to solve problems like recognizing objects in images."}, {"start": 14.200000000000001, "end": 18.72, "text": " In this piece of work, the authors analyze the properties of these neural networks and"}, {"start": 18.72, "end": 24.400000000000002, "text": " try to unveil what exactly makes them think that the paper towel is a paper towel."}, {"start": 24.400000000000002, "end": 27.8, "text": " And, building on this knowledge, try to fool these programs."}, {"start": 27.8, "end": 29.72, "text": " Let's have a look at this example."}, {"start": 29.72, "end": 35.54, "text": " One can grab this input image and this noise pattern and add these two images together"}, {"start": 35.54, "end": 38.8, "text": " similarly as one would add two numbers together."}, {"start": 38.8, "end": 41.12, "text": " The operation yields the image you see here."}, {"start": 41.12, "end": 46.08, "text": " I think it's fair to say that the difference is barely perceptible for the human eye."}, {"start": 46.08, "end": 50.739999999999995, "text": " Not so much for neural networks because the input image we started with is classified"}, {"start": 50.739999999999995, "end": 56.84, "text": " correctly as a bus and the image that you see on the right is classified as an ostrich."}, {"start": 56.84, "end": 61.36000000000001, "text": " In simple terms, bus plus noise equals an ostrich."}, {"start": 61.36000000000001, "end": 66.80000000000001, "text": " These two images look almost exactly the same but the neural networks see them quite differently."}, {"start": 66.80000000000001, "end": 72.32000000000001, "text": " We call these examples adversarial examples because they are designed to fool these image"}, {"start": 72.32000000000001, "end": 74.0, "text": " recognition programs."}, {"start": 74.0, "end": 78.44, "text": " In machine learning research, there are common datasets to test different classification"}, {"start": 78.44, "end": 79.52000000000001, "text": " techniques on."}, {"start": 79.52000000000001, "end": 83.48, "text": " One of the best known example is the M-nist handwriting dataset."}, {"start": 83.48, "end": 88.84, "text": " It is basically a bunch of images depicting handwritten numbers that machine learning algorithms"}, {"start": 88.84, "end": 90.52000000000001, "text": " have to recognize."}, {"start": 90.52000000000001, "end": 95.84, "text": " Long ago, this used to be a difficult problem but nowadays, any half of this algorithm"}, {"start": 95.84, "end": 101.24000000000001, "text": " can guess the numbers correctly more than 99% of the time after learning for just a few"}, {"start": 101.24000000000001, "end": 102.56, "text": " seconds."}, {"start": 102.56, "end": 107.56, "text": " Now we'll see that these adversarial examples are not created by chance."}, {"start": 107.56, "end": 113.04, "text": " If we add a lot of random noise to these images, they get quite difficult to recognize."}, {"start": 113.04, "end": 119.48, "text": " Let's engage in modesty and say that I, myself as a human, can recognize approximately half"}, {"start": 119.48, "end": 120.48, "text": " of them."}, {"start": 120.48, "end": 123.28, "text": " But only if I look closely and maybe even squint."}, {"start": 123.28, "end": 
128.76000000000002, "text": " A neural network can guess this correctly, approximately 50% of the time as well, which"}, {"start": 128.76000000000002, "end": 131.32, "text": " is a quite respectable result."}, {"start": 131.32, "end": 135.84, "text": " Therefore adding random noise is not really fooling the neural networks."}, {"start": 135.84, "end": 141.08, "text": " However, if you look at these adversarial examples in the even columns, you see how carefully"}, {"start": 141.08, "end": 144.88000000000002, "text": " they are crafted as they look very similar to the original images."}, {"start": 144.88000000000002, "end": 151.24, "text": " But the classification accuracy of the neural network on these examples is 0%."}, {"start": 151.24, "end": 152.44, "text": " You heard it correctly."}, {"start": 152.44, "end": 155.48000000000002, "text": " It gets it wrong basically all the time."}, {"start": 155.48000000000002, "end": 160.96, "text": " The take home message is that carefully crafted adversarial examples can be used to fool deep"}, {"start": 160.96, "end": 162.92000000000002, "text": " neural networks reliably."}, {"start": 162.92000000000002, "end": 167.52, "text": " You can watch them flounder on many hilarious examples to your enjoyment."}, {"start": 167.52, "end": 171.32000000000002, "text": " My dear sir, the queen wears a shower cap, you say."}, {"start": 171.32000000000002, "end": 176.92000000000002, "text": " I beg your pardon."}, {"start": 176.92000000000002, "end": 182.20000000000002, "text": " If you would like to support two minute papers, we are available on Patreon and offer really"}, {"start": 182.20000000000002, "end": 184.64000000000001, "text": " cool perks for our fellow scholars."}, {"start": 184.64000000000001, "end": 190.76000000000002, "text": " For instance, you can watch each episode around 24 hours in advance or even decide the topic"}, {"start": 190.76000000000002, "end": 192.28, "text": " of the next episodes."}, {"start": 192.28, "end": 193.52, "text": " How cool is that?"}, {"start": 193.52, "end": 196.76000000000002, "text": " If you're interested, just click on the box below on the screen."}, {"start": 196.76, "end": 200.44, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=IFmj5M5Q5jg
How DeepMind Conquered Go With Deep Learning (AlphaGo) | Two Minute Papers #42
This time around, Google DeepMind embarked on a journey to write an algorithm that plays Go. Go is an ancient chinese board game where the opposing players try to capture each other's stones on the board. Behind the veil of this deceptively simple ruleset, lies an enormous layer of depth and complexity. As scientists like to say, the search space of this problem is significantly larger than that of chess. So large, that one often has to rely on human intuition to find a suitable next move, therefore it is not surprising that playing Go on a high level is, or maybe was widely believed to be intractable for machines. The result is Google DeepMind's AlphaGo, the deep learning technique that defeated a professional player and European champion, Fan Hui. __________________ The paper "Mastering the Game of Go with Deep Neural Networks and Tree Search" is available here: https://storage.googleapis.com/deepmind-data/assets/papers/deepmind-mastering-go.pdf http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html Wired's coverage of AlphaGo: http://www.wired.com/2016/01/in-a-huge-breakthrough-googles-ai-beats-a-top-player-at-the-game-of-go/ Video coverage from DeepMind and Nature: https://www.youtube.com/watch?v=g-dKXOlsf98 https://www.youtube.com/watch?v=SUbqykXVx0A Myungwan Kim analysis: https://www.youtube.com/watch?v=NHRHUHW6HQE Photo credits: Watson - AP Photo/Jeopardy Productions, Inc. Fan Hui match photo - Google DeepMind - https://www.youtube.com/watch?v=SUbqykXVx0A Go board image credits (all CC BY 2.0): Renato Ganoza - https://flic.kr/p/7nX4kK Jaro Larnos (changes were applied, mostly recoloring) - https://flic.kr/p/dDeQU9 Luis de Bethencourt - https://flic.kr/p/4c5RaR Detailed analysis of the games against Fan Hui and some more speculation: https://www.reddit.com/r/MachineLearning/comments/43fl90/synopsis_of_top_go_professionals_analysis_of/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In 1997, the news took the world by storm. Garry Kasparov, World Champion and Grandmaster chess player, was defeated by an artificial intelligence program by the name Deep Blue. In 2011, IBM Watson won first place in the famous American quiz show, Jeopardy. In 2014, Google DeepMind created an algorithm that mastered a number of Atari games by working on a raw pixel input. This algorithm learned in a similar way as a human would. This time around, Google DeepMind embarked on a journey to write an algorithm that plays Go. Go is an ancient Chinese board game where the opposing players try to capture each other's stones on the board. Behind the veil of this deceptively simple ruleset lies an enormous layer of depth and complexity. As scientists like to say, the search space of this problem is significantly larger than that of chess. So large that one often has to rely on human intuition to find a suitable next move. Therefore, it is not surprising that playing Go on a high level is, or maybe was, widely believed to be intractable for machines. This chart shows the skill level of previous artificial intelligence programs. The green bar shows the skill level of a professional player used as a reference. The red bars mean that these older techniques required a significant starting advantage to be able to contend with human opponents. As you can see, DeepMind's new program's skill level is well beyond most professional players. An elite pro player and European champion, Fan Hui, was challenged to play AlphaGo, Google DeepMind's newest invention, and got defeated in all five matches they played together. During these games, it took approximately two seconds per turn for the algorithm to come up with the next move. An interesting detail is that these strange black bars show confidence intervals, which means that the smaller they are, the more confident one can be in the validity of the measurements. As one can see, these confidence intervals are much shorter for the artificial intelligence programs than for the human player, likely because one can fire up a machine and let it play a million games and get a great estimation of its skill level, while the human player can only play a very limited number of matches. There is still a lot left to be excited for. In March, the algorithm will play a world champion. The rate of improvement in artificial intelligence research is accelerating at a staggering pace. The only question that remains is not if something is possible, but when it will become possible. I wake up every day, excited to read the newest breakthroughs in the field. And of course, trying to add some leaves to the tree of knowledge with my own projects. I feel privileged to be alive in such an amazing time. As always, there are lots of references in the description box, make sure to check them out. Thanks for watching and for your generous support and I'll see you next time.
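The remark about confidence intervals can be made concrete: for a win rate estimated from n games, the interval narrows roughly with the square root of n, which is why a program that can play a million games gets a far tighter skill estimate than a human who plays a few hundred. A small sketch using the standard Wilson score interval; the 0.6 win rate and the game counts are made-up inputs for illustration.

import math

def wilson_interval_width(win_rate, n_games, z=1.96):
    # Width of a 95% Wilson score confidence interval for an
    # observed win rate after n_games games.
    denom = 1.0 + z**2 / n_games
    half = (z / denom) * math.sqrt(win_rate * (1.0 - win_rate) / n_games
                                   + z**2 / (4.0 * n_games**2))
    return 2.0 * half

print(wilson_interval_width(0.6, 200))        # human-scale sample
print(wilson_interval_width(0.6, 1_000_000))  # machine-scale sample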
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.76, "end": 8.68, "text": " In 1997, the news took the world by storm."}, {"start": 8.68, "end": 14.72, "text": " G\u00e1rikas P\u00e1rov, World Champion and Grandmaster chess player, was defeated by an Artificial Intelligence"}, {"start": 14.72, "end": 17.400000000000002, "text": " Program by the name Deep Blue."}, {"start": 17.400000000000002, "end": 24.2, "text": " In 2011, IBM Watson won first place in the famous American quiz show, Jeopardy."}, {"start": 24.2, "end": 30.88, "text": " In 2014, Google DeepMind created an algorithm that mastered a number of Atari games by working"}, {"start": 30.88, "end": 32.72, "text": " on a raw pixel input."}, {"start": 32.72, "end": 36.36, "text": " This algorithm learned in a similar way as a human would."}, {"start": 36.36, "end": 41.32, "text": " This time around, Google DeepMind embarked on a journey to write an algorithm that plays"}, {"start": 41.32, "end": 42.32, "text": " Go."}, {"start": 42.32, "end": 47.56, "text": " Go is an ancient Chinese board game where the opposing players try to capture each other's"}, {"start": 47.56, "end": 49.44, "text": " stones on the board."}, {"start": 49.44, "end": 56.519999999999996, "text": " And the veil of this deceptively simple rule set lies an enormous layer of depth and complexity."}, {"start": 56.519999999999996, "end": 61.76, "text": " As scientists like to say, the search space of this problem is significantly larger than"}, {"start": 61.76, "end": 63.16, "text": " that of chess."}, {"start": 63.16, "end": 68.96, "text": " So large that one often has to rely on human intuition to find a suitable next move."}, {"start": 68.96, "end": 75.16, "text": " Therefore, it is not surprising that Go on a high level is, or maybe was, widely believed"}, {"start": 75.16, "end": 77.8, "text": " to be intractable for machines."}, {"start": 77.8, "end": 82.52, "text": " This chart shows the skill level of previous artificial intelligence programs."}, {"start": 82.52, "end": 87.47999999999999, "text": " The green bar shows the skill level of a professional player used as a reference."}, {"start": 87.47999999999999, "end": 92.36, "text": " The red bars mean that these older techniques required a significant starting advantage"}, {"start": 92.36, "end": 95.28, "text": " to be able to contend with human opponents."}, {"start": 95.28, "end": 100.24, "text": " As you can see, DeepMind's new program skill level is well beyond most professional"}, {"start": 100.24, "end": 101.24, "text": " players."}, {"start": 101.24, "end": 107.2, "text": " An elite pro player and European champion, Fun Hui, was challenged to play AlphaGo, Google"}, {"start": 107.2, "end": 113.56, "text": " DeepMind's newest invention and got defeated in all five matches they played together."}, {"start": 113.56, "end": 118.44, "text": " During these games, each turn it took approximately two seconds for the algorithm to come up"}, {"start": 118.44, "end": 120.04, "text": " with the next move."}, {"start": 120.04, "end": 125.64, "text": " An interesting detail is that these strange black bars show confidence intervals, which"}, {"start": 125.64, "end": 131.72, "text": " means that the smaller they are, the more confident one can be in the validity of the measurements."}, {"start": 131.72, "end": 136.32, "text": " As one can see, these confidence intervals are much shorter for the artificial intelligence"}, {"start": 
136.32, "end": 141.48, "text": " programs than the human player, likely because one can fire up a machine and let it play"}, {"start": 141.48, "end": 146.76, "text": " a million games and get a great estimation of its skill level, while the human player can"}, {"start": 146.76, "end": 150.32, "text": " only play a very limited number of matches."}, {"start": 150.32, "end": 152.79999999999998, "text": " There is still a lot left to be excited for."}, {"start": 152.79999999999998, "end": 156.35999999999999, "text": " In March, the algorithm will play a world champion."}, {"start": 156.35999999999999, "end": 162.16, "text": " The rate of improvement in artificial intelligence research is accelerating at a staggering pace."}, {"start": 162.16, "end": 168.35999999999999, "text": " The only question that remains is not if something is possible, but when it will become possible."}, {"start": 168.35999999999999, "end": 173.35999999999999, "text": " I wake up every day, excited to read the newest breakthroughs in the field."}, {"start": 173.35999999999999, "end": 178.2, "text": " And of course, trying to add some leaves to the tree of knowledge with my own projects."}, {"start": 178.2, "end": 182.35999999999999, "text": " I feel privileged to be alive in such an amazing time."}, {"start": 182.35999999999999, "end": 186.35999999999999, "text": " As always, there is lots of references in the description box, make sure to check them"}, {"start": 186.35999999999999, "end": 187.35999999999999, "text": " out."}, {"start": 187.36, "end": 192.8, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ZaFqvM1IsP8
What Do Virtual Objects Sound Like? | Two Minute Papers #41
In many episodes about computer graphics, we explored works on how to simulate the motion and the collision of different bodies. However, sounds are just as important as visuals, and there are really cool techniques out there that take the geometry and material description of such objects and they simulate what smashing them together would sound like. What is really cool is that the technique also offers editing capabilities. You compute a simulation only once, and then, edit and explore as much as you desire. __________________________ The paper "Interactive Acoustic Transfer Approximation for Modal Sound " is available here: http://www.cs.columbia.edu/cg/transfer/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In many episodes about computer graphics, we explored works on how to simulate the motion and the collision of different bodies. However, sounds are just as important as visuals, and there are really cool techniques out there that take the geometry and material description of such objects, and they simulate what smashing them together would sound like. This one is a more sophisticated method, which is not only faster than previous works, but can simulate a greater variety of materials, and we can also edit the solutions without needing to recompute the expensive equations that yield the sound as a result. The faster part comes from a set of optimizations, most importantly something that is called mesh simplification. This means that the simulations are done not on the original, but on vastly simplified shapes. The result of this simplified simulation is close to indistinguishable from the real deal, but is considerably cheaper to compute. What is really cool is that the technique also offers editing capabilities. You compute the simulation only once, and then edit and explore as much as you desire. The stiffness and damping parameters can be edited without any additional work. And a few materials can be characterized with this. The model can also approximate a quite sophisticated phenomenon where the frequency of a sound is changing in time. One can, for instance, specify stiffness values that vary in time to produce these cool frequency-shifting effects. It is also possible to exaggerate or dampen different frequencies of these sound effects, and these results are given to you immediately. This is meeting all my standards. Amazing piece of work. Thanks for watching and for your generous support, and I'll see you next time.
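To make the stiffness and frequency-shifting remarks tangible: a single vibration mode of a struck object is essentially a damped sinusoid whose pitch follows the square root of the stiffness, so sliding the stiffness over time shifts the frequency. A toy sketch of one such mode, with unit mass assumed and every constant an illustrative placeholder rather than a value from the paper:

import numpy as np

def modal_sound(duration=1.0, sample_rate=44_100,
                stiffness_start=2.0e7, stiffness_end=1.0e7, damping=6.0):
    # One damped mode: amplitude decays with the damping parameter,
    # instantaneous frequency follows sqrt(stiffness) for unit mass.
    t = np.linspace(0.0, duration, int(duration * sample_rate))
    stiffness = np.linspace(stiffness_start, stiffness_end, t.size)
    freq = np.sqrt(stiffness) / (2.0 * np.pi)     # in Hz
    phase = 2.0 * np.pi * np.cumsum(freq) / sample_rate
    return np.exp(-damping * t) * np.sin(phase)

samples = modal_sound()  # write to a .wav file to hear the pitch glide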
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Karoi Zona Ifehir."}, {"start": 4.64, "end": 10.0, "text": " In many episodes about computer graphics, we explored works on how to simulate the motion"}, {"start": 10.0, "end": 12.52, "text": " and the collision of different bodies."}, {"start": 12.52, "end": 18.04, "text": " However, sounds are just as important as visuals, and there are really cool techniques out"}, {"start": 18.04, "end": 24.240000000000002, "text": " there that take the geometry and material description of such objects, and they simulate how smashing"}, {"start": 24.240000000000002, "end": 26.68, "text": " them together would sound like."}, {"start": 26.68, "end": 31.92, "text": " This one is a more sophisticated method, which is not only faster than previous works,"}, {"start": 31.92, "end": 37.28, "text": " but can simulate a greater variety of materials, and we can also edit the solutions without"}, {"start": 37.28, "end": 42.480000000000004, "text": " needing to recompute the expensive equations that yield the sound as a result."}, {"start": 42.480000000000004, "end": 48.44, "text": " The faster part comes from a set of optimizations, most importantly something that is called"}, {"start": 48.44, "end": 50.6, "text": " mesh simplification."}, {"start": 50.6, "end": 55.32, "text": " This means that the simulations are done not only on the original, but vastly simplified"}, {"start": 55.32, "end": 56.32, "text": " shapes."}, {"start": 56.32, "end": 60.88, "text": " The result of this simplified simulation is close to indistinguishable from the real"}, {"start": 60.88, "end": 69.04, "text": " deal, but is considerably cheaper to compute."}, {"start": 69.04, "end": 73.8, "text": " What is really cool is that the technique also offers editing capabilities."}, {"start": 73.8, "end": 79.6, "text": " You compute the simulation only once, and then edit and explore as much as you desire."}, {"start": 79.6, "end": 84.24000000000001, "text": " The stiffness and damping parameters can be edited without any additional work."}, {"start": 84.24, "end": 99.16, "text": " And a few materials can be characterized with this."}, {"start": 99.16, "end": 105.0, "text": " The model can also approximate a quite sophisticated phenomenon where the frequency of a sound is"}, {"start": 105.0, "end": 107.08, "text": " changing in time."}, {"start": 107.08, "end": 112.52, "text": " One can, for instance, specify stiffness values that vary in time to produce these cool"}, {"start": 112.52, "end": 114.8, "text": " frequency shifting effects."}, {"start": 114.8, "end": 120.19999999999999, "text": " It is also possible to exaggerate or dampen different frequencies of these sound effects,"}, {"start": 120.19999999999999, "end": 130.4, "text": " and these results are given to you immediately."}, {"start": 130.4, "end": 134.4, "text": " This is meeting all my standards."}, {"start": 134.4, "end": 135.6, "text": " Amazing piece of work."}, {"start": 135.6, "end": 145.2, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=KgIrnR2O8KQ
Simulating Viscosity and Melting Fluids | Two Minute Papers #40
In this series, we have studied fluid simulations extensively. But we haven't talked about one important quantity that describes a fluid, and this quantity is none other than viscosity. Viscosity means the resistance of a fluid against deformation. The large viscosity of honey makes it highly resistant to deformation, and this is responsible for its famous and beautiful coiling effect. Water, however, does not have a lot of objections against deformations, making it so easy to pour it into a glass. With this piece of work, it is possible to efficiently simulate the motion of fluids, and it supports the simulation of a large range of viscosities. __________________________ The paper "An Implicit Viscosity Formulation for SPH Fluids" is available here: http://cg.informatik.uni-freiburg.de/publications/2015_SIGGRAPH_viscousSPH.pdf Recommended for you: Painting with Fluid Simulations - https://www.youtube.com/watch?v=1aVSb-UbYWc Modeling Colliding and Merging Fluids - https://www.youtube.com/watch?v=uj8b5mu0P7Y Adaptive Fluid Simulations - https://www.youtube.com/watch?v=dH1s49-lrBk Video source: Smarter Every Day - https://www.youtube.com/watch?v=zz5lGkDdk78 The thumbnail image was created by Dino Giordano (CC BY 2.0) - https://flic.kr/p/4p9z4w Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we have studied fluid simulations extensively, but we haven't talked about one important quantity that describes a fluid, and this quantity is none other than viscosity. Viscosity means the resistance of a fluid against deformation. The large viscosity of honey makes it highly resistant to deformation, and this is responsible for its famous and beautiful coiling effect. Water, however, does not have a lot of objections against deformations, making it so easy to pour it into a glass. With this piece of work, it is possible to efficiently simulate the motion of fluids, and not only that, but it also supports the simulation of a large range of viscosities. These can also change in time: for instance, physicists know that raising the temperature will make the viscosity of fluids decrease, which leads to melting. Therefore, decreasing the viscosity in time will lead to a simulation result that looks exactly like melting. The technique also supports two-way coupling, where the objects have effects on the fluid and vice versa. One can also put multiple fluids with different densities and viscosities into the same domain, and see how they interact. This is exactly what people need in the industry: robust techniques that work for small and large scale simulations with multiple objects, and material settings that can possibly change in time. Thanks for watching and for your generous support, and I'll see you next time.
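The melting trick described above is simple to express in code: make the viscosity a decreasing function of temperature and hand it to the solver at every step. A toy Arrhenius-style sketch, with all constants as illustrative placeholders rather than values from the paper:

import numpy as np

def viscosity(temperature, mu_cold=200.0, k=0.04, t_ref=20.0):
    # Viscosity falls off exponentially as temperature rises,
    # which makes the simulated material appear to melt.
    return mu_cold * np.exp(-k * (temperature - t_ref))

# Heating a block from 20 to 120 degrees over the course of a simulation:
for temp in np.linspace(20.0, 120.0, 6):
    print(f"T = {temp:5.1f}  ->  viscosity = {viscosity(temp):8.2f}")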
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejolna Ifehir."}, {"start": 4.88, "end": 10.24, "text": " In this series, we have studied fluid simulations extensively, but we haven't talked about"}, {"start": 10.24, "end": 17.32, "text": " one important quantity that describes a fluid, and this quantity is none other than viscosity."}, {"start": 17.32, "end": 21.240000000000002, "text": " Viscosity means the resistance of a fluid against deformation."}, {"start": 21.240000000000002, "end": 26.88, "text": " The large viscosity of honey makes it highly resistant to deformation, and this is responsible"}, {"start": 26.88, "end": 30.36, "text": " for its famous and beautiful coiling effect."}, {"start": 30.36, "end": 36.36, "text": " Water, however, does not have a lot of objections against deformations, making it so easy to"}, {"start": 36.36, "end": 38.519999999999996, "text": " pour it into a glass."}, {"start": 38.519999999999996, "end": 43.480000000000004, "text": " With this piece of work, it is possible to efficiently simulate the motion of fluids,"}, {"start": 43.480000000000004, "end": 52.28, "text": " and not only that, but it also supports the simulation of a large range of viscositives."}, {"start": 52.28, "end": 57.44, "text": " This can also change in time, for instance, physicists know that raising the temperature"}, {"start": 57.44, "end": 61.760000000000005, "text": " will make the viscosity of fluids decrease, which leads to melting."}, {"start": 61.760000000000005, "end": 66.52, "text": " Therefore, decreasing the viscosity in time will lead to a simulation result that looks"}, {"start": 66.52, "end": 70.12, "text": " exactly like melting."}, {"start": 70.12, "end": 75.2, "text": " The technique also supports two-way coiling, where the objects have effects on the fluid"}, {"start": 75.2, "end": 78.68, "text": " and vice versa."}, {"start": 78.68, "end": 84.2, "text": " One can also put multiple fluids with different densities and viscositives into the same domain,"}, {"start": 84.2, "end": 86.72000000000001, "text": " and see how they you get out."}, {"start": 86.72000000000001, "end": 89.36000000000001, "text": " This is exactly what people need in the industry."}, {"start": 89.36000000000001, "end": 94.92000000000002, "text": " Robots techniques that work for small and large scale simulations with multiple objects,"}, {"start": 94.92000000000002, "end": 101.28, "text": " and material settings that can possibly change in time."}, {"start": 101.28, "end": 111.28, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=eI_QUtgJHH8
Interactive Editing of Subsurface Scattering | Two Minute Papers #39
Subsurface scattering is a technique to model light transport not between surfaces, but volumes - it therefore enables rendering digital images of human skin, marble, milk, and many other translucent materials. This piece of work takes this a step beyond, and offers a compelling solution to editing subsurface scattering. ______________________________ The paper "Interactive Albedo Editing in Path-Traced Volumetric Materials" is available here: http://graphics.berkeley.edu/papers/Milos-IAE-2013-02/ Recommended for you: More on subsurface scattering - https://www.youtube.com/watch?v=qyDUvatu5M8&feature=youtu.be&t=13m2s Image credits (CC-BY): https://flic.kr/p/9RCYEw https://flic.kr/p/38fLAH https://flic.kr/p/5EP5bw https://en.wikipedia.org/wiki/Subsurface_scattering#/media/File:Skin_Subsurface_Scattering.jpg Blender scene file for the burning flame: http://www.blendswap.com/blends/view/74722 Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Subsurface scattering means that a portion of the light that hits a translucent material does not bounce back from the surface, but penetrates it and scatters many, many times inside the material. Now, if you have a keen eye, you'll recognize that there are a lot of materials in real life that have subsurface scattering. Many don't know it, but our skin is a great example of that, and so are marble, milk, wax, plant leaves, apples and many others. If you would like to hear a bit more about subsurface scattering, check the second part of the video that you see recommended in the corner of this window, or just click it in the description box below. Subsurface scattering looks unbelievably beautiful, but at the same time it is very expensive, because we have to simulate up to thousands and thousands of scattering events for every ray of light. It really takes forever. And if you'd like to tweak your material settings just a bit because the result is not 100% up to your taste, you have to recreate, or as graphics people like to say, re-render these images. It's not really a convenient workflow. This piece of work offers a great solution where you have to wait a bit longer than you would wait for one image, but only once, because it runs a generalized light simulation. And after that, whatever changes you apply to your materials, you will see immediately. You can also paint the reflectance properties of the material, which we call albedos, and get results with full subsurface scattering immediately. Here's another interactive editing workflow where you get results instantaneously, and the result with this technique is indistinguishable from the real deal, which would be re-rendering this result image every time some adjustment is made. With this technique, you can really create the materials you thought up in a fraction of the time of the classical workflow. Spectacular work. Thanks for watching and for your generous support, and I'll see you next time.
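One way such a simulate-once, edit-forever scheme can work: a light path that scatters n times inside the material picks up a factor of the albedo raised to the n-th power, so if the renderer records, per pixel, how much path weight arrived after exactly n scattering events, re-shading for a new albedo reduces to evaluating a polynomial instead of re-rendering. A hedged sketch of that re-weighting step, assuming a grayscale image and a single homogeneous material:

import numpy as np

def reshade(scatter_histogram, new_albedo):
    # scatter_histogram: array of shape (height, width, max_bounces),
    # entry [..., n] holds the summed weight of paths that scattered
    # exactly n times. Each such path scales as new_albedo ** n.
    n = np.arange(scatter_histogram.shape[-1])
    return (scatter_histogram * new_albedo ** n).sum(axis=-1)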
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karojola Ifahir."}, {"start": 5.0, "end": 11.0, "text": " Subsurface scattering means that a portion of light that hits a translucent material does"}, {"start": 11.0, "end": 18.0, "text": " not bounce back from the surface but penetrates it and scatters many many times inside the material."}, {"start": 18.0, "end": 25.0, "text": " Now, if you have a keen eye, you recognize that there are a lot of materials in real life that have subsurface scattering."}, {"start": 25.0, "end": 35.0, "text": " Many don't know, but our skin is a great example of that and so is marble, milk, wax, plant leaves, apple and many others."}, {"start": 35.0, "end": 45.0, "text": " If you would like to hear a bit more about subsurface scattering, check the second part of the video that you see recommended in the corner of this window or just click it in the description box below."}, {"start": 45.0, "end": 58.0, "text": " Subsurface scattering looks unbelievably beautiful, but at the same time it is very expensive because we have to simulate up to thousands and thousands of scattering events for every ray of light."}, {"start": 58.0, "end": 60.0, "text": " It really takes forever."}, {"start": 60.0, "end": 72.0, "text": " And if you'd like to tweak your material settings just a bit because the result is not 100% up to your taste, you have to recreate or what graphics people like to say, re-render these images."}, {"start": 72.0, "end": 75.0, "text": " It's not really a convenient workflow."}, {"start": 75.0, "end": 85.0, "text": " This piece of work offers a great solution where you have to wait a bit longer than you would wait for one image, but only once because it runs a generalized light simulation."}, {"start": 85.0, "end": 91.0, "text": " And after that, whatever changes you apply to your materials, you will see immediately."}, {"start": 91.0, "end": 111.0, "text": " You can also paint the reflectance properties of this material that we call albedos and get results with full subsurface scattering immediately."}, {"start": 111.0, "end": 125.0, "text": " Here's another interactive editing workflow where you get results instantaneously, and the result with this technique is indistinguishable from the real deal, which would be re-rendering this result image every time some adjustment is made."}, {"start": 125.0, "end": 143.0, "text": " With this technique, you can really create the materials you thought up in a fraction of the time of the classical workflow."}, {"start": 143.0, "end": 146.0, "text": " Spectacular work."}, {"start": 146.0, "end": 156.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=_r-eIKkyAco
3D Printing Objects With Caustics | Two Minute Papers #38
What are caustics? A caustic is a beautiful phenomenon in nature where curved surfaces reflect or refract light, thereby concentrating it to a relatively small area. This technique makes it possible to essentially imagine any kind of caustic pattern, for instance, this brain pattern, and it will create the model that will cast caustics that look exactly like that. It also works with sunlight, and you can also choose different colors for your caustics. The authors found their simulations to be in good agreement with reality, therefore the desired caustic patterns can be fabricated faithfully. ___________________________ The paper "High-contrast Computational Caustic Design" is available here: http://chateaunoir.net/caustics.html The full Rendering course at the TU Wien is available here: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi More results from this project are available here: http://rayform.ch/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Image credits (all CC-BY): https://en.wikipedia.org/wiki/Caustic_(optics) https://www.flickr.com/photos/fdecomite/2486275725 https://flic.kr/p/pamCiP https://flic.kr/p/nD7Ex https://flic.kr/p/iJUi3 https://flic.kr/p/8DvPiz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. What are caustics? A caustic is a beautiful phenomenon in nature where curved surfaces reflect or refract light, thereby concentrating it to a relatively small area. It looks majestic, and it is the favorite effect of most light transport researchers. You can witness it around rings, plastic bottles, or when you're underwater, just to name a few examples. If you have a powerful algorithm at hand that can simulate many light transport effects, then you can expect to get some caustics forming in the presence of curved, refractive or reflective surfaces and small light sources. If you would like to know more about caustics, I am holding an entire university course at the Technical University of Vienna, the entirety of which we have recorded live on video for you. It is available to everyone free of charge. If you're interested, check it out; as always, a link is available in the description box. Now, the laws that lead to caustics are well understood by physicists. Therefore, we can not only put some objects on the table and enjoy the imagery of the caustics, we can also turn the whole thing around. This technique makes it possible to essentially imagine any kind of caustic pattern, for instance, this brain pattern, and it will create the model that casts caustics that look exactly like that. We can thereby design an object by its caustics. It also works with sunlight. And you can also choose different colors for your caustics. This result, with an extremely high-fidelity image of Albert Einstein and his signature, shows that first, a light transport simulation is run, and then the final solution can be 3D printed. I am always adamantly looking for research works where we have a simulation that relates to and tells us something new about the world around us. This is a beautiful example of that. Thanks for watching and for your generous support, and I'll see you next time.
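Since the whole effect rests on how a curved surface bends light, here is a minimal sketch of the single building block a caustic simulator evaluates millions of times: refracting a ray with Snell's law in vector form. The formula is standard; the directions and indices of refraction below are made-up example values.

import numpy as np

def refract(d, n, eta_i, eta_t):
    # Refract unit direction d at unit normal n (Snell's law, vector form).
    # Returns None on total internal reflection.
    cos_i = -np.dot(d, n)
    eta = eta_i / eta_t
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection, no refracted ray
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# A ray hitting a flat air-glass interface at 45 degrees (example values).
d = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)  # incoming direction
n = np.array([0.0, 1.0, 0.0])                  # surface normal
print(refract(d, n, eta_i=1.0, eta_t=1.5))     # bent toward the normal

A caustic design algorithm essentially inverts this step: it searches for the surface shape whose refracted rays land in the desired target pattern.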
[{"start": 0.0, "end": 5.1000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karojol Naifahir."}, {"start": 5.1000000000000005, "end": 6.8, "text": " What are caustics?"}, {"start": 6.8, "end": 13.6, "text": " A caustic is a beautiful phenomenon in nature, where curved surfaces reflect or refract light"}, {"start": 13.6, "end": 17.46, "text": " thereby concentrating it to a relatively small area."}, {"start": 17.46, "end": 23.46, "text": " It looks majestic and it is the favorite effect of most light transport researchers."}, {"start": 23.46, "end": 28.44, "text": " You can witness it around rings, plastic bottles or when you're underwater just to name"}, {"start": 28.44, "end": 29.92, "text": " a few examples."}, {"start": 29.92, "end": 34.84, "text": " If you have a powerful algorithm at hand that can simulate many light transport effects,"}, {"start": 34.84, "end": 40.56, "text": " then you can expect to get some caustics forming in the presence of curved, refractive or"}, {"start": 40.56, "end": 43.84, "text": " reflective surfaces and small light sources."}, {"start": 43.84, "end": 48.56, "text": " If you would like to know more about caustics, I am holding an entire university course"}, {"start": 48.56, "end": 54.2, "text": " at the Technical University of Vienna, the entirety of which we have recorded live on video"}, {"start": 54.2, "end": 55.2, "text": " for you."}, {"start": 55.2, "end": 60.6, "text": " It is available for everyone free of charge if you're interested, check it out, as always,"}, {"start": 60.6, "end": 62.92, "text": " a link is available in the description box."}, {"start": 62.92, "end": 68.24000000000001, "text": " Now, the laws that lead to caustics are well understood by physicists."}, {"start": 68.24000000000001, "end": 73.44, "text": " Therefore we can not only put some objects on the table and just enjoy the imagery of"}, {"start": 73.44, "end": 76.68, "text": " the caustics, but we can turn the whole thing around."}, {"start": 76.68, "end": 81.52000000000001, "text": " This technique makes it possible to essentially imagine any kind of caustic pattern."}, {"start": 81.52, "end": 86.36, "text": " For instance, this brain pattern and it will create the model that will cast caustics that"}, {"start": 86.36, "end": 88.52, "text": " look exactly like that."}, {"start": 88.52, "end": 96.84, "text": " We can thereby design an object by its caustics."}, {"start": 96.84, "end": 103.64, "text": " It also works with sunlight."}, {"start": 103.64, "end": 112.92, "text": " And you can also choose different colors for your caustics."}, {"start": 112.92, "end": 117.68, "text": " This result with an extremely high fidelity image of Albert Einstein and his signature"}, {"start": 117.68, "end": 123.44, "text": " shows that first, a light transport simulation is run and then the final solution can be 3D"}, {"start": 123.44, "end": 127.52, "text": " printed."}, {"start": 127.52, "end": 132.68, "text": " I am always adamantly looking for research works where we have a simulation that relates"}, {"start": 132.68, "end": 136.8, "text": " to and tells us something new about the world around us."}, {"start": 136.8, "end": 149.76000000000002, "text": " This is a beautiful example of that."}, {"start": 149.76, "end": 179.72, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ImIaoKsjgUE
Designing 3D Printable Robotic Creatures | Two Minute Papers #37
This episode covers a paper from Disney Research on how to design 3D printable robots. In order to get a robot from A to B, one has to specify scientific attributes like trajectories and angular velocities. But people don't think in angular velocities, they think in intuitive actions, like moving forward, sideways, or even the style of a desired movement. Specifying these things instead would be much more useful, but also, scientifically quite challenging. One can specify the design of the robot, for instance, different shapes, motor positions, and joints can be added, and the technique finds out a physically plausible way for them to walk and move around. ____________________________ The paper "Interactive Design of 3D Printable Robotic Creatures" is available here: https://www.disneyresearch.com/publication/interactive-design-of-3d-printable-robotic-creatures/ Recommended for you: - Hydrographic Printing (in 3D) https://www.youtube.com/watch?v=kLnG073NYtw - 3D Printing a Glockenspiel https://www.youtube.com/watch?v=2kOCTf8jIik Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was taken from the paper linked above. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. 3D printing is a rapidly progressing research field. One can create colorful patterns that we call textures on different figures with computational hydrographic printing, just to name one of many recent inventions. We can 3D print teeth, action figures, prosthetics, you name it. Ever thought of how cool it would be to design robots on your computer digitally and simply print them? Disney Research has just made this dream come true. I fondly remember my time working at Disney Research, where robots like these were walking about. I remember a specific guy that, well, wasn't really kind: it waved at me, yet blocked the path to one of the labs I had to enter. It might have been one of these guys in this project. Disney Research has an incredible atmosphere with so many talented people; it's an absolutely amazing place. So, in order to get a robot from A to B, one has to specify scientific attributes like trajectories and angular velocities. But people don't really think in angular velocities, they think in intuitive actions, like moving forward, sideways, or even the style of a desired movement. Specifying these things instead would be much more useful. That sounds great and all, but this is a quite difficult task. If one specifies a high-level action, like walking sideways, then the algorithm has to find out what body parts to move and how, which motors should be turned on and when, which joints to turn, where the center of pressure and the center of mass are, and many other factors have to be taken into consideration. This technique offers a really slick solution to this, where we don't just get a good result, but we can also have our say on what the order of steps should be. And even more, our stylistic suggestions are taken into consideration. One can also change the design of the robot; for instance, different shapes, motor positions and joints can be specified. The authors ran a simulation for these designs and constraints and found them to be in good agreement with reality. This means that whatever you design digitally can be 3D printed with off-the-shelf parts and brought to life just as you see them on the screen. The technique supports an arbitrary number of legs and is robust to a number of different robot designs. Amazing is as good of a word as I can find. The kids of the future will be absolutely spoiled with their toys. That's for sure, and I'm perfectly convinced that there will be many other applications, and these guys will help us solve problems that are currently absolutely inconceivable for us. Thanks for watching and for your generous support, and I'll see you next time.
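One of the constraints mentioned in the transcript, keeping the center of mass over the feet, can be illustrated with a tiny static-stability check. This is only a simplified sketch of one ingredient, not the paper's actual solver; the foot positions and center of mass below are made-up values, and the support region is assumed to be a convex polygon with vertices listed counter-clockwise.

def statically_stable(com_xy, support_polygon):
    # True if the ground projection of the center of mass lies inside
    # the support polygon (vertices in counter-clockwise order).
    x, y = com_xy
    n = len(support_polygon)
    for i in range(n):
        x1, y1 = support_polygon[i]
        x2, y2 = support_polygon[(i + 1) % n]
        # Cross product < 0 means the point is on the outer side of this edge.
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0.0:
            return False
    return True

# Made-up example: a four-legged design with feet at the corners.
feet = [(0.0, 0.0), (0.3, 0.0), (0.3, 0.5), (0.0, 0.5)]
print(statically_stable((0.15, 0.25), feet))  # True: the robot stands
print(statically_stable((0.45, 0.25), feet))  # False: it would tip over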
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.64, "end": 8.120000000000001, "text": " 3D printing is a rapidly progressing research field."}, {"start": 8.120000000000001, "end": 13.44, "text": " One can create colorful patterns that we call textures on different figures with computational"}, {"start": 13.44, "end": 17.76, "text": " hydrographic printing, just to name one of many recent inventions."}, {"start": 17.76, "end": 22.72, "text": " We can 3D print teeth, action figures, prosthetics, you name it."}, {"start": 22.72, "end": 28.32, "text": " Ever thought of how cool it would be to design robots on your computer digitally and simply"}, {"start": 28.32, "end": 29.84, "text": " printing them?"}, {"start": 29.84, "end": 33.28, "text": " And it's a Disney research had just made this dream come true."}, {"start": 33.28, "end": 38.92, "text": " I fondly remember my time working at Disney Research where robots like these were walking"}, {"start": 38.92, "end": 39.92, "text": " about."}, {"start": 39.92, "end": 45.82, "text": " I remember a specific guy that, well, wasn't really kind and waved at me, it has blocked"}, {"start": 45.82, "end": 48.92, "text": " the path to one of the labs I had to enter."}, {"start": 48.92, "end": 51.92, "text": " It might have been one of these guys in this project."}, {"start": 51.92, "end": 57.24, "text": " Disney Research has an incredible atmosphere with so many talented people, it's an absolutely"}, {"start": 57.24, "end": 59.2, "text": " amazing place."}, {"start": 59.2, "end": 64.8, "text": " So in order to get a robot from A to B, one has to specify scientific attributes like"}, {"start": 64.8, "end": 67.92, "text": " trajectories and angular velocities."}, {"start": 67.92, "end": 73.84, "text": " But people don't really think in angular velocities they think in intuitive actions, like"}, {"start": 73.84, "end": 79.44, "text": " moving forward sideways or even the style of a desired movement."}, {"start": 79.44, "end": 83.12, "text": " Specifying these things instead would be much more useful."}, {"start": 83.12, "end": 86.96000000000001, "text": " That sounds great and all, but this is a quite difficult task."}, {"start": 86.96, "end": 92.55999999999999, "text": " If one specifies a high level action, like walking sideways, then the algorithm has to"}, {"start": 92.55999999999999, "end": 98.67999999999999, "text": " find out what body parts to move, how, which motors should be turned on and when, which"}, {"start": 98.67999999999999, "end": 104.63999999999999, "text": " joins to turn, where is the center of pressure, the center of mass and many other factors"}, {"start": 104.63999999999999, "end": 107.11999999999999, "text": " have to be taken into consideration."}, {"start": 107.11999999999999, "end": 112.28, "text": " This technique offers a really slick solution to this, where we don't just get a good result,"}, {"start": 112.28, "end": 116.72, "text": " but we can also have our say on what should the order of steps be."}, {"start": 116.72, "end": 129.28, "text": " And even more, our stylistic suggestions are taken into consideration."}, {"start": 129.28, "end": 134.56, "text": " One can also change the design of the robot, for instance, different shapes, motor positions"}, {"start": 134.56, "end": 137.0, "text": " and joints can be specified."}, {"start": 137.0, "end": 141.96, "text": " The authors run a simulation for these designs and constraints and 
found them to be in good"}, {"start": 141.96, "end": 143.84, "text": " agreement with reality."}, {"start": 143.84, "end": 149.8, "text": " This means that whatever you design digitally can be 3D printed with off-the-shelf parts"}, {"start": 149.8, "end": 153.76, "text": " and brought to life just as you see them on the screen."}, {"start": 153.76, "end": 158.48000000000002, "text": " The technique supports an arbitrary number of legs and is robust to a number of different"}, {"start": 158.48000000000002, "end": 160.28, "text": " robot designs."}, {"start": 160.28, "end": 164.0, "text": " Amazing is as good of a word as I can find."}, {"start": 164.0, "end": 168.68, "text": " The kids of the future will be absolutely spoiled with their toys."}, {"start": 168.68, "end": 173.04, "text": " That's for sure and I'm perfectly convinced that there will be many other applications"}, {"start": 173.04, "end": 178.4, "text": " and these guys will help us solve problems that are currently absolutely inconceivable for"}, {"start": 178.4, "end": 179.4, "text": " us."}, {"start": 179.4, "end": 208.96, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=kMa_B3wLxAM
Designing Cities and Furnitures With Machine Learning | Two Minute Papers #36
Creating geometry for a computer game or a movie is a very long and arduous task. For instance, if we would like to populate a virtual city with buildings, it would cost a ton of time and money and of course, we would need quite a few artists. This piece of work solves this problem in a very elegant and convenient way: it learns the preference of the user, then creates and recommends a set of solutions that are expected to be desirable. The weapon of choice to accomplish this was Gaussian Process Regression. ___________________________________ The paper "Interactive Design of Probability Density Functions for Shape Grammars" is available here: http://lgg.epfl.ch/publications/2015/proman/index.php The thumbnail image was created by See-ming Lee (nice name, btw!) (CC BY 2.0) - https://flic.kr/p/oewqwn Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Creating geometry for a computer game or a movie is a very long and arduous task. For instance, if we would like to populate a virtual city with buildings, it would cost a ton of time and money, and of course, we would need quite a few artists. This piece of work solves this problem in a very elegant and convenient way. It learns the preferences of the user, then creates and recommends a set of solutions that are expected to be desirable. In this example, we are looking for tables with either one leg or crossing legs. A table should also be properly balanced; therefore, if we see any of these criteria met, we'll assign a high score to these models. These are the preferences that the algorithm should try to learn. The orange bars show the predicted score for new models created by the algorithm. A larger value means that the system expects the user to score this model highly, and the blue bars mean uncertainty. Generally, we are looking for solutions with large orange and small blue bars. This means that the algorithm is confident that a given model is in line with our preferences, and we get exactly what we were looking for: namely, balanced table designs with one leg or crossed legs. Interestingly, since we have these uncertainty values, one can also visualize contrary examples where the algorithm is not so sure, but would guess that we wouldn't like the model. It's super cool that it is aware of how horrendous these designs look. It may have a better eye than many of the contemporary art curators out there. There are also examples where the algorithm is very confident that we are going to hate a given example because of its legs or unbalancedness, and would never recommend such a model. So, indirectly, it also learns what a balanced piece of furniture should look like without ever learning the concept of gravity or doing any kind of architectural computation. The algorithm also works on buildings, and after learning our preferences, it can populate entire cities with geometry that is in line with our artistic vision. Excellent piece of work. Thanks for watching and for your generous support, and I'll see you next time.
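The description names Gaussian Process Regression as the weapon of choice, and the orange and blue bars correspond to its predictive mean and standard deviation. Here is a minimal, self-contained sketch of GP regression with an RBF kernel; the design feature vectors and user scores are made-up stand-ins for real table parameters.

import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between two sets of design feature vectors.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

# Made-up training data: 3 rated designs, 2 features each (say, leg
# placement and a balance measure), with the user's scores in [0, 1].
X = np.array([[0.1, 0.9], [0.5, 0.4], [0.9, 0.8]])
y = np.array([0.9, 0.2, 0.8])

Xq = np.array([[0.2, 0.85], [0.5, 0.1]])   # new candidate designs

noise = 1e-4
K = rbf_kernel(X, X) + noise * np.eye(len(X))
K_inv = np.linalg.inv(K)
Ks = rbf_kernel(Xq, X)

mean = Ks @ K_inv @ y                               # the "orange bar"
var = rbf_kernel(Xq, Xq).diagonal() - np.einsum(
    "ij,jk,ik->i", Ks, K_inv, Ks)                   # squared "blue bar"
print(mean, np.sqrt(np.maximum(var, 0.0)))

Designs with a high mean and low standard deviation get recommended; high-variance designs are the "not so sure" cases mentioned above.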
[{"start": 0.0, "end": 4.9, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Fahir."}, {"start": 4.9, "end": 10.34, "text": " Creating geometry for a computer game or a movie is a very long and arduous task."}, {"start": 10.34, "end": 15.24, "text": " For instance, if we would like to populate a virtual city with buildings, it would cost"}, {"start": 15.24, "end": 20.04, "text": " a ton of time and money, and of course, we would need quite a few artists."}, {"start": 20.04, "end": 24.5, "text": " This piece of work solves this problem in a very elegant and convenient way."}, {"start": 24.5, "end": 29.7, "text": " It learns the preference of the user then creates and recommends a set of solutions that"}, {"start": 29.7, "end": 32.0, "text": " are expected to be desirable."}, {"start": 32.0, "end": 37.6, "text": " In this example, we are looking for tables with either one leg or crossing legs."}, {"start": 37.6, "end": 43.519999999999996, "text": " It should also be properly balanced, therefore if we see any of these criteria, we'll assign"}, {"start": 43.519999999999996, "end": 46.28, "text": " a high score to these models."}, {"start": 46.28, "end": 56.0, "text": " These are the preferences that the algorithm should try to learn."}, {"start": 56.0, "end": 61.28, "text": " The orange bars show the predicted score for new models created by the algorithm."}, {"start": 61.28, "end": 66.94, "text": " A larger value means that the system expects the user to score this high and the blue bars"}, {"start": 66.94, "end": 68.58, "text": " mean uncertainty."}, {"start": 68.58, "end": 73.82, "text": " Generally, we are looking for solutions with a large orange and small blue bars."}, {"start": 73.82, "end": 79.32, "text": " This means that the algorithm is confident that a given model is in line with our preferences,"}, {"start": 79.32, "end": 82.1, "text": " and we get exactly what we were looking for."}, {"start": 82.1, "end": 87.69999999999999, "text": " Another balanced table designs with one leg or crossed legs."}, {"start": 87.69999999999999, "end": 94.82, "text": " Interestingly, since we have these uncertainty values, one can also visualize country examples"}, {"start": 94.82, "end": 100.1, "text": " where the algorithm is not so sure but would guess that we wouldn't like the model."}, {"start": 100.1, "end": 104.06, "text": " It's super cool that it is aware how horrendous these designs look."}, {"start": 104.06, "end": 110.53999999999999, "text": " It may have a better eye than many of the contemporary art curators out there."}, {"start": 110.54, "end": 115.62, "text": " There are also examples where the algorithm is very confident that we are going to hate"}, {"start": 115.62, "end": 121.98, "text": " a given example because of its legs or unbalancedness and would never recommend such a model."}, {"start": 121.98, "end": 127.62, "text": " So indirectly, it also learns how a balanced piece of furniture should look like without"}, {"start": 127.62, "end": 133.18, "text": " ever learning the concept of gravity or doing any kind of architectural computation."}, {"start": 133.18, "end": 138.38, "text": " The algorithm also works on buildings and after learning our preferences, it can populate"}, {"start": 138.38, "end": 144.14, "text": " entire cities with geometry that is in line with our artistic vision."}, {"start": 144.14, "end": 145.98, "text": " Excellent piece of work."}, {"start": 145.98, "end": 175.54, "text": " Thanks for watching and for your generous 
support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Bui3DWs02h4
9 Cool Deep Learning Applications | Two Minute Papers #35
Machine learning provides us an incredible set of tools. If you have a difficult problem at hand, you don't need to hand craft an algorithm for it. It finds out by itself what is important about the problem and tries to solve it on its own. In this video, you'll see a number of incredible applications of different machine learning techniques (neural networks, deep learning, convolutional neural networks and more). Note: the fluid simulation paper is using regression forests, which is a machine learning technique, but not strictly deep learning. There are variants of it that are though (e.g., Deep Neural Decision Forests). ________________________ The paper "Toxicity Prediction using Deep Learning" and "Prediction of human population responses to toxic compounds by a collaborative competition" are available here: http://arxiv.org/pdf/1503.01445.pdf http://www.nature.com/nbt/journal/v33/n9/full/nbt.3299.html The paper "A Comparison of Algorithms and Humans For Mitosis Detection" is available here: http://people.idsia.ch/~juergen/deeplearningwinsMICCAIgrandchallenge.html http://people.idsia.ch/~ciresan/data/isbi2014.pdf Kaggle-related things: http://kaggle.com https://www.kaggle.com/c/dato-native http://blog.kaggle.com/2015/12/03/dato-winners-interview-1st-place-mad-professors/ The paper "Deep AutoRegressive Networks" is available here: http://arxiv.org/pdf/1310.8499v2.pdf https://www.youtube.com/watch?v=-yX1SYeDHbg&feature=youtu.be&t=2976 The furniture completion paper, "Data-driven Structural Priors for Shape Completion" is available here: http://cs.stanford.edu/~mhsung/projects/structure-completion Data-driven fluid simulations using regression forests: https://graphics.ethz.ch/~sobarbar/papers/Lad15/DatadrivenFluids.mov https://www.inf.ethz.ch/personal/ladickyl/fluid_sigasia15.pdf Selfies and convolutional neural networks: http://karpathy.github.io/2015/10/25/selfie/ Multiagent Cooperation and Competition with Deep Reinforcement Learning: http://arxiv.org/abs/1511.08779 https://www.youtube.com/watch?v=Gb9DprIgdGw&index=2&list=PLfLv_F3r0TwyaZPe50OOUx8tRf0HwdR_u https://github.com/NeuroCSUT/DeepMind-Atari-Deep-Q-Learner-2Player Kaggle automatic essay scoring contest: https://www.kaggle.com/c/asap-aes http://www.vikparuchuri.com/blog/on-the-automated-scoring-of-essays/ Great talks on Kaggle: https://www.youtube.com/watch?v=9Zag7uhjdYo https://www.youtube.com/watch?v=OKOlO9nIHUE https://www.youtube.com/watch?v=R9QxucPzicQ The thumbnail image was created by Barn Images - https://flic.kr/p/xxBc94 Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. There are so many applications of deep learning, I was really excited to put together a short, but really cool list of some of the more recent results for you Fellow Scholars to enjoy. Machine learning provides us with an incredible set of tools. If you have a difficult problem at hand, you don't need to handcraft an algorithm for it. It finds out by itself what is important about the problem and tries to solve it on its own. In some problem domains, they perform better than human experts. What's more, some of these algorithms find out things that you could have earned a PhD with ten years ago. Here goes the first stunning application: toxicity detection for different chemical structures by means of deep learning. It is so efficient that it could find toxic properties that previously required decades of work by humans who are experts of their field. Next one: mitosis detection from large images. Mitosis means that cell nuclei are undergoing different transformations that are quite harmful and quite difficult to detect. The best techniques out there are using convolutional neural networks and are outperforming professional radiologists at their own task. Unbelievable. Kaggle is a company that is dedicated to connecting companies with large data sets and data scientists who write algorithms to extract insight from all this data. If you take only a brief look, you see an incredibly large swath of applications for learning algorithms. Almost all of these were believed to be only for humans, very smart humans. And learning algorithms, again, emerge triumphant on many of these. For instance, they had a great competition where learning algorithms would read a website and find out whether paid content is disguised there as real content. Next up on the list: hallucination, or sequence generation. It looks at different video games, tries to learn how they work, and generates new footage out of thin air by using a recurrent neural network. Because of the imperfection of 3D scanning procedures, many 3D-scanned furniture models are too noisy to be used as is. However, there are techniques that look at these really noisy models and try to figure out how they should look by learning the symmetries and other properties of real furniture. These algorithms can also do an excellent job at predicting how different fluids behave in time and are therefore expected to be super useful in physical simulation in the following years. And on the list of highly sophisticated scientific topics, there is this application that can find out what makes a good selfie, and how good your photos are, if you really want to know the truth. Here is another application where a computer algorithm that we call deep Q-learning plays pong against itself and eventually achieves expertise. The machines are also grading student essays. At first, one would think that this cannot possibly be a good idea. And as it turns out, their judgment is more consistent with the reference grades than any of the teachers who were tested. This could be an awesome tool for saving a lot of time and assisting the teachers to help their students learn. This kind of blows my mind. It would be great to take a look at an actual dataset, if it is public, and the issued grades. So if any of you fellow scholars have seen it somewhere, please let me know in the comment section. And these results are only from the last few years, and it's really just scratching the surface. There are literally hundreds more applications we haven't even talked about. We are living in extremely exciting times indeed. I am eager to see and perhaps be a small part of this progress. There are tons of reading and viewing materials in the description box. Check them out. Thanks for watching and for your generous support, and I'll see you next time.
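The pong example above uses deep Q-learning; stripped of the neural network, the core of Q-learning is a single update rule. A minimal tabular sketch of that rule follows — the toy states, actions, and rewards are invented for illustration, and a deep variant would replace the table with a network.

import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration
actions = ["up", "down", "stay"]
Q = defaultdict(float)                    # Q[(state, action)] -> expected return

def choose_action(state):
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # The Q-learning update: nudge the estimate toward the bootstrapped target.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One made-up transition: the paddle was below the ball, moving up paid off.
update(state="ball_above", action="up", reward=1.0, next_state="ball_level")
print(Q[("ball_above", "up")])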
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.8, "end": 10.120000000000001, "text": " There are so many applications of deep learning, I was really excited to put together a short,"}, {"start": 10.120000000000001, "end": 15.0, "text": " but really cool list of some of the more recent results for you Fellow Scholars to enjoy."}, {"start": 15.0, "end": 18.6, "text": " Machine Learning provides us an incredible set of tools."}, {"start": 18.6, "end": 22.96, "text": " If you have a difficult problem at hand, you don't need to handcraft an algorithm for"}, {"start": 22.96, "end": 23.96, "text": " it."}, {"start": 23.96, "end": 28.560000000000002, "text": " It finds out by itself what is important about the problem and tries to solve it on its"}, {"start": 28.560000000000002, "end": 29.560000000000002, "text": " own."}, {"start": 29.56, "end": 33.199999999999996, "text": " If the problem domains, they perform better than human experts."}, {"start": 33.199999999999996, "end": 38.6, "text": " What's more, some of these algorithms find out things that you could earn a PhD with"}, {"start": 38.6, "end": 39.8, "text": " 10 years ago."}, {"start": 39.8, "end": 42.76, "text": " Here goes the first stunning application."}, {"start": 42.76, "end": 46.879999999999995, "text": " Toxicity detection for different chemical structures by means of deep learning."}, {"start": 46.879999999999995, "end": 52.8, "text": " It is so efficient that it could find toxic properties that previously required decades"}, {"start": 52.8, "end": 56.68, "text": " of work by humans who are experts of their field."}, {"start": 56.68, "end": 60.44, "text": " First one, mitosis detection from large images."}, {"start": 60.44, "end": 66.0, "text": " Mitosis means that cell nuclei are undergoing different transformations that are quite harmful"}, {"start": 66.0, "end": 67.96, "text": " and quite difficult to detect."}, {"start": 67.96, "end": 73.88, "text": " The best techniques out there are using convolutional neural networks and are outperforming professional"}, {"start": 73.88, "end": 77.64, "text": " radiologists at their own task."}, {"start": 77.64, "end": 79.4, "text": " Unbelievably."}, {"start": 79.4, "end": 85.56, "text": " Kaggle is a company that is dedicated to connecting companies with large data sets and data scientists"}, {"start": 85.56, "end": 89.68, "text": " who write algorithms to extract insight from all this data."}, {"start": 89.68, "end": 95.08, "text": " If you take only a brief look, you see an incredibly large swath of applications for learning"}, {"start": 95.08, "end": 96.8, "text": " algorithms."}, {"start": 96.8, "end": 101.76, "text": " Almost all of these were believed to be only for humans, very smart humans."}, {"start": 101.76, "end": 106.6, "text": " And learning algorithms, again, emerge triumphant on many of these."}, {"start": 106.6, "end": 111.80000000000001, "text": " For instance, they had a great competition where learning algorithms would read a website"}, {"start": 111.8, "end": 121.75999999999999, "text": " and find out whether paid content is disguised there as real content."}, {"start": 121.75999999999999, "end": 125.24, "text": " Next up on the list, hallucination or sequence generation."}, {"start": 125.24, "end": 130.68, "text": " It looks at different video games, tries to learn how they work and generates new footage"}, {"start": 130.68, "end": 139.44, "text": " out of thin air by using a 
recurrent neural network."}, {"start": 139.44, "end": 147.88, "text": " Because of the imperfection of the 3D scanning procedures, many 3D scan furnitures are too"}, {"start": 147.88, "end": 150.2, "text": " noisy to be used as is."}, {"start": 150.2, "end": 154.96, "text": " However, there are techniques to look at these really noisy models and try to figure out"}, {"start": 154.96, "end": 162.52, "text": " how they should look by learning the symmetries and other properties of real furnitures."}, {"start": 162.52, "end": 167.07999999999998, "text": " These algorithms can also do an excellent job at predicting how different fluids behave"}, {"start": 167.08, "end": 172.36, "text": " in time and are therefore expected to be super useful in physical simulation in the following"}, {"start": 172.36, "end": 178.72000000000003, "text": " years."}, {"start": 178.72000000000003, "end": 184.12, "text": " And on the list of highly sophisticated scientific topics, there is this application that can"}, {"start": 184.12, "end": 191.04000000000002, "text": " find out what makes a good selfie and how good your photos are if you really want to know"}, {"start": 191.04000000000002, "end": 196.68, "text": " the truth."}, {"start": 196.68, "end": 201.16, "text": " Here is another application where a computer algorithm that we call deep-queue learning"}, {"start": 201.16, "end": 217.20000000000002, "text": " plays pong against itself and eventually achieves expertise."}, {"start": 217.20000000000002, "end": 220.24, "text": " The machines are also grading student essays."}, {"start": 220.24, "end": 226.44, "text": " At first one would think that this cannot possibly be a good idea."}, {"start": 226.44, "end": 231.0, "text": " And as it turns out, their judgment is more consistent with the reference grades than"}, {"start": 231.0, "end": 233.48, "text": " any of the teachers who were tested."}, {"start": 233.48, "end": 237.88, "text": " This could be an awesome tool for saving a lot of time and assisting the teachers to"}, {"start": 237.88, "end": 240.12, "text": " help their students learn."}, {"start": 240.12, "end": 242.64, "text": " This kind of blows my mind."}, {"start": 242.64, "end": 248.0, "text": " It would be great to take a look at an actual dataset if it is public and the issued grades."}, {"start": 248.0, "end": 252.07999999999998, "text": " So if any of you fellow scholars have seen it somewhere, please let me know in the comment"}, {"start": 252.07999999999998, "end": 253.16, "text": " section."}, {"start": 253.16, "end": 257.96, "text": " And these results are only from the last few years and it's really just scratching"}, {"start": 257.96, "end": 258.96, "text": " the surface."}, {"start": 258.96, "end": 263.32, "text": " There are literally hundreds of more applications we haven't even talked about."}, {"start": 263.32, "end": 266.71999999999997, "text": " We are living extremely exciting times indeed."}, {"start": 266.71999999999997, "end": 270.84, "text": " I am eager to see and perhaps be a small part of this progress."}, {"start": 270.84, "end": 274.24, "text": " There are tons of reading and viewing materials in the description box."}, {"start": 274.24, "end": 275.24, "text": " Check them out."}, {"start": 275.24, "end": 283.36, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=B70tT4WMyJk
Neural Programmer-Interpreters Learn To Write Programs | Two Minute Papers #34
In machine learning, we usually have a set of problems for which we are looking for solutions. For instance, "here is an image, please tell me what is seen on it". Or, "here is a computer game, please beat level three". One problem, one solution. In this case, we are not looking for one solution, we are looking for a computer program, an algorithm, that can solve any number of problems of the same kind. It can also learn how to rotate images of different cars around to obtain a frontal pose. This technique can learn from someone how to sort a set of 20 numbers and generalize its knowledge to much longer sequences. ______________________ The paper "Neural Programmer-Interpreters" is available here: http://www-personal.umich.edu/~reedscot/iclr_project.html The thumbnail image was created by Iwan Gabovitch (CC BY 2.0) - https://flic.kr/p/paxzB9 Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. What could be a more delightful way to celebrate New Year's Eve than reading about new breakthroughs in machine learning research? Let's talk about an excellent new paper from the Google DeepMind guys. In machine learning, we usually have a set of problems for which we are looking for solutions. For instance, here's an image, please tell me what is seen on it. Here's a computer game, please beat level 3. One problem, one solution. In this case, we are not looking for one solution, we are looking for a computer program, an algorithm that can solve any number of problems of the same kind. This work is based on a recurrent neural network, which we discussed in a previous episode. In short, it means that it tries to learn not one thing, but a sequence of things. And in this example, it learns to add two large numbers together, as a big number can be imagined as a sequence of digits. This can be done through a sequence of operations. It first reads the two input numbers and then carries out the addition, keeps track of the carried digits, and goes on to the next digit. On the right, you can see the individual commands executed in the computer program it came up with. It can also learn how to rotate images of different cars around to obtain a frontal pose. This is also a sequence of rotation actions until the desired output is reached. Learning more rudimentary sorting algorithms to put numbers in ascending order is also possible. One key difference between recurrent neural networks and this is that these neural programmer-interpreters are able to generalize better. What does this mean? It means that if the technique can learn from someone how to sort a set of 20 numbers, it can generalize its knowledge to much longer sequences. So it essentially tries to learn the algorithm behind sorting from a few examples. Previous techniques were unable to achieve this, and as we can see, it can deal with a variety of problems. I am absolutely spellbound by this kind of learning because it really behaves like a novice human user would: mimicking what experts do and trying to learn and understand the logic behind their actions. Happy New Year to all of you fellow scholars. May it be ample in joy and beautiful papers. May our knowledge grow according to Moore's law, and of course, may the force be with you. Thanks for watching and for your generous support, and I'll see you next year.
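To make the addition example concrete, here is a hand-written sketch of the kind of primitive-operation trace a neural programmer-interpreter is trained to emit for multi-digit addition. This is just an illustrative re-implementation of the algorithm being learned, not the model itself, and the operation names are invented.

def add_as_trace(a, b):
    # Add two digit sequences (least significant digit first) while
    # logging the primitive operations an NPI-style model would emit.
    trace, result, carry = [], [], 0
    for i in range(max(len(a), len(b))):
        d1 = a[i] if i < len(a) else 0
        d2 = b[i] if i < len(b) else 0
        trace.append(f"READ digits {d1}, {d2} and carry {carry}")
        s = d1 + d2 + carry
        result.append(s % 10)
        trace.append(f"WRITE {s % 10}")
        carry = s // 10
        if carry:
            trace.append("CARRY 1")
        trace.append("MOVE_PTR left")   # advance to the next digit column
    if carry:
        result.append(carry)
    return result, trace

digits, ops = add_as_trace([9, 5, 2], [8, 7])   # 259 + 78 = 337
print(digits)   # [7, 3, 3], i.e. 337 with the least significant digit first
for op in ops:
    print(op)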
[{"start": 0.0, "end": 5.12, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 5.12, "end": 10.28, "text": " What could be a more delightful way to celebrate New Year's Eve than reading about new breakthroughs"}, {"start": 10.28, "end": 12.0, "text": " in machine learning research?"}, {"start": 12.0, "end": 16.02, "text": " Let's talk about an excellent new paper from the Google DeepMind Guys."}, {"start": 16.02, "end": 21.48, "text": " In machine learning, we usually have a set of problems for which we are looking for solutions."}, {"start": 21.48, "end": 25.44, "text": " For instance, here's an image, please tell me what is seen on it."}, {"start": 25.44, "end": 28.36, "text": " Here's a computer game, please beat level 3."}, {"start": 28.36, "end": 30.12, "text": " One problem, one solution."}, {"start": 30.12, "end": 35.24, "text": " In this case, we are not looking for one solution, we are looking for a computer program,"}, {"start": 35.24, "end": 39.68, "text": " an algorithm that can solve any number of problems of the same kind."}, {"start": 39.68, "end": 44.36, "text": " This work is based on a recurrent neural network, which we discussed in a previous episode."}, {"start": 44.36, "end": 50.2, "text": " In short, it means that it tries to learn not to want something, but the sequence of things."}, {"start": 50.2, "end": 54.68, "text": " And in this example, it learns to add two large numbers together."}, {"start": 54.68, "end": 58.44, "text": " As a big number can be imagined as a sequence of digits."}, {"start": 58.44, "end": 61.24, "text": " This can be done through a sequence of operations."}, {"start": 61.24, "end": 66.12, "text": " It first reads the two input numbers and then carries out the addition, keeps track of the"}, {"start": 66.12, "end": 69.2, "text": " carrying digits and goes on to the next digit."}, {"start": 69.2, "end": 77.76, "text": " On the right, you can see the individual comments executed in the computer program it came up with."}, {"start": 77.76, "end": 83.4, "text": " It can also learn how to rotate images of different cars around to obtain a frontal pose."}, {"start": 83.4, "end": 92.12, "text": " This is also a sequence of rotation actions until the desired output is reached."}, {"start": 92.12, "end": 98.08000000000001, "text": " Learning more rudimentary sorting algorithms to put numbers in a sending order is also possible."}, {"start": 98.08000000000001, "end": 103.16000000000001, "text": " One key difference between recurrent neural networks and this is that these neural programmer"}, {"start": 103.16000000000001, "end": 106.56, "text": " interpreters are able to generalize better."}, {"start": 106.56, "end": 108.04, "text": " What does this mean?"}, {"start": 108.04, "end": 113.60000000000001, "text": " This means that if the technique can learn from someone how to sort a set of 20 numbers,"}, {"start": 113.60000000000001, "end": 117.4, "text": " it can generalize its knowledge to much longer sequences."}, {"start": 117.4, "end": 123.32000000000001, "text": " So it essentially tries to learn the algorithm behind sorting from a few examples."}, {"start": 123.32000000000001, "end": 128.48000000000002, "text": " Previous techniques were unable to achieve this and as we can see, it can deal with a variety"}, {"start": 128.48000000000002, "end": 129.72, "text": " of problems."}, {"start": 129.72, "end": 134.8, "text": " I am absolutely spellbound by this kind of learning because it really 
behaves like a"}, {"start": 134.8, "end": 137.0, "text": " novice human user would."}, {"start": 137.0, "end": 143.52, "text": " Making it what experts do and trying to learn and understand the logic behind their actions."}, {"start": 143.52, "end": 146.08, "text": " Happy new year to all of you fellow scholars."}, {"start": 146.08, "end": 149.44, "text": " May it be ample, enjoy and beautiful papers."}, {"start": 149.44, "end": 155.4, "text": " May our knowledge grow according to Moore's law and of course may the force be with you."}, {"start": 155.4, "end": 176.8, "text": " Thanks for watching and for your generous support and I'll see you next year."}]
Two Minute Papers
https://www.youtube.com/watch?v=zzwCbhI2iOA
Peer Review #1 [Audio only] | Two Minute Papers
I wish a merry Christmas to all of you Fellow Scholars! __________________ The technique "Separable Subsurface Scattering" was used to create this thumbnail image: https://cg.tuwien.ac.at/~zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ The thumbnail image was created by Christian Freude. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a quick report on what is going on with Two Minute Papers. We are living in extremely busy times, as I am still working full time as a doctoral researcher and we also have a baby on the way. We are currently a bit over 30 episodes in, and I am having an amazing time explaining these concepts and enjoying the ride tremendously. One of the most beautiful aspects of Two Minute Papers is the community forming around it, with extremely high quality comments and lots of civil, respectful discussions. I learned a lot from you Fellow Scholars. Thanks for that. Really awesome. The growth numbers are looking amazing for a YouTube channel of this size, and of course, any help in publicity is greatly appreciated. If you are a journalist and you feel that this is a worthy cause, please write about Two Minute Papers. If you are not a journalist, please try showing the series to them. Or just show it to your friends. I am sure that many, many more people would be interested in this, and sharing is a great way to reach out to new people. The Patreon page is also getting lots of generous support that I would only expect from much bigger channels. I don't even know if I deserve it. But thanks for hanging in there; I feel really privileged to have supporters like you Fellow Scholars. You're the best. And we have some amazing times ahead of us. So thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.04, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Fahir."}, {"start": 5.04, "end": 8.88, "text": " This is a quick report on what is going on with Two Minute Papers."}, {"start": 8.88, "end": 14.76, "text": " We are living extremely busy times as I am still working full time as a doctoral researcher"}, {"start": 14.76, "end": 16.8, "text": " and we also have a baby on the way."}, {"start": 16.8, "end": 22.240000000000002, "text": " We are currently a bit over 30 episodes in and I am having an amazing time explaining"}, {"start": 22.240000000000002, "end": 25.28, "text": " these concepts and enjoying the right tremendously."}, {"start": 25.28, "end": 30.44, "text": " One of the most beautiful aspects of Two Minute Papers is the community forming around it,"}, {"start": 30.44, "end": 35.52, "text": " with extremely high quality comments and lots of civil, respectful discussions."}, {"start": 35.52, "end": 37.52, "text": " I learned a lot from you Fellow Scholars."}, {"start": 37.52, "end": 38.52, "text": " Thanks for that."}, {"start": 38.52, "end": 39.68, "text": " Really awesome."}, {"start": 39.68, "end": 44.36, "text": " The growth numbers are looking amazing for a YouTube channel of this size and of course"}, {"start": 44.36, "end": 47.6, "text": " any help in publicity is greatly appreciated."}, {"start": 47.6, "end": 52.0, "text": " If you are a journalist and you feel that this is a worthy cause, please write about"}, {"start": 52.0, "end": 53.32, "text": " Two Minute Papers."}, {"start": 53.32, "end": 56.88, "text": " If you are not a journalist, please try showing the series to them."}, {"start": 56.88, "end": 58.4, "text": " Or just show it to your friends."}, {"start": 58.4, "end": 63.28, "text": " I am sure that many, many more people would be interested in this and sharing is a great"}, {"start": 63.28, "end": 65.36, "text": " way to reach out to new people."}, {"start": 65.36, "end": 70.36, "text": " The Patreon page is also getting lots of generous support that I would only expect from much"}, {"start": 70.36, "end": 71.36, "text": " bigger channels."}, {"start": 71.36, "end": 73.84, "text": " I don't even know if I deserve it."}, {"start": 73.84, "end": 78.8, "text": " But thanks for hanging in there, I feel really privileged to have supporters like you Fellow"}, {"start": 78.8, "end": 79.8, "text": " Scholars."}, {"start": 79.8, "end": 81.03999999999999, "text": " You're the best."}, {"start": 81.04, "end": 83.64, "text": " And we have some amazing times ahead of us."}, {"start": 83.64, "end": 111.76, "text": " So thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=1aVSb-UbYWc
Painting with Fluid Simulations | Two Minute Papers #33
As there is a lot of progress in simulating the motion of fluids, and paint is a fluid, then why not simulate the process of painting on a canvas? The simulations with this technique are so detailed that even the bristle interactions are taken into consideration, therefore one can capture artistic brush stroke effects like stabbing. Traditional techniques cannot even come close to simulating such sophisticated effects. ______________________ The paper "Wetbrush: GPU-based 3D painting simulation at the bristle level" is available here: http://web.cse.ohio-state.edu/~whmin/publications.html Recommended for you: Adaptive Fluid Simulations - https://www.youtube.com/watch?v=dH1s49-lrBk&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=1 Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image was taken from the mentioned paper. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Some people say that the most boring thing is watching paint dry. They have clearly not seen this amazing research work that makes it possible to simulate the entire process of painting on a canvas. We have covered plenty of papers on fluid simulations, and this is no exception. I admit that I am completely addicted and just can't help it. Maybe I should seek professional assistance. Although, as there is a lot of progress in simulating the motion of fluids, and paint is a fluid, why not simulate the process of painting on a canvas? The simulations with this technique are so detailed that even the bristle interactions are taken into consideration; therefore, one can capture artistic brush stroke effects like stabbing. Stabbing, despite the horrifying name, basically means shoving the brush into the canvas and rotating it around to get a cool effect. The fluid simulation part includes paint adhesion and is so detailed that it can capture the well-known impasto style, where paint is applied to the canvas in such large chunks that one can see all the strokes that have been made. And all this is done in real time. Amazing results. Traditional techniques cannot even come close to simulating such sophisticated effects. And as it has happened many times before in computer graphics, just put a powerful algorithm into the hands of great artists and enjoy the majestic creations they give birth to. Wow, a Two Minute Papers episode that's actually on time. Great. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.08, "text": " Dear Fellow Scholars, this is too many papers with Karojona Ifei."}, {"start": 5.08, "end": 9.78, "text": " Some people say that the most boring thing is watching paint dry."}, {"start": 9.78, "end": 14.48, "text": " They have clearly not seen this amazing research work that makes it possible to simulate the"}, {"start": 14.48, "end": 17.68, "text": " entire process of painting on a canvas."}, {"start": 17.68, "end": 22.0, "text": " We have covered plenty of papers in fluid simulations and this is no exception."}, {"start": 22.0, "end": 25.92, "text": " I admit that I am completely addicted and just can't help it."}, {"start": 25.92, "end": 28.36, "text": " Maybe I should seek professional assistance."}, {"start": 28.36, "end": 34.519999999999996, "text": " Although as there is a lot of progress in simulating the motion of fluids and paint is a fluid,"}, {"start": 34.519999999999996, "end": 38.08, "text": " then why not simulate the process of painting on a canvas?"}, {"start": 38.08, "end": 43.12, "text": " The simulations with this technique are so detailed that even the bristle interactions"}, {"start": 43.12, "end": 47.84, "text": " are taken into consideration, therefore one can capture artistic brush stroke effects"}, {"start": 47.84, "end": 49.32, "text": " like stabbing."}, {"start": 49.32, "end": 54.8, "text": " Stabbing despite the horrifying name basically means shoving the brush into the canvas and"}, {"start": 54.8, "end": 57.32, "text": " rotating it around to get a cool effect."}, {"start": 57.32, "end": 62.12, "text": " The fluid simulation part includes paint adhesion and is so detailed that it can capture"}, {"start": 62.12, "end": 67.96000000000001, "text": " the well-known impasto style where paint is applied to the canvas in such large chunks."}, {"start": 67.96000000000001, "end": 73.68, "text": " They are so thick that one can see all the strokes that have been made and all this is done"}, {"start": 73.68, "end": 75.72, "text": " in real time."}, {"start": 75.72, "end": 77.92, "text": " Amazing results."}, {"start": 77.92, "end": 84.0, "text": " Traditional techniques cannot even come close to simulating such sophisticated effects."}, {"start": 84.0, "end": 88.92, "text": " And as it happened many times before in computer graphics, just put a powerful algorithm into"}, {"start": 88.92, "end": 96.96000000000001, "text": " the hands of great artists and enjoy the majestic creations they give birth to."}, {"start": 96.96000000000001, "end": 109.76, "text": " Wow, a two minute paper sapisote that's actually on time."}, {"start": 109.76, "end": 113.56, "text": " Great."}, {"start": 113.56, "end": 117.08, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ziMHaGQJuSI
How Do Genetic Algorithms Work? | Two Minute Papers #32
Genetic algorithms are in the class of evolutionary algorithms that build on the principle of "survival of the fittest". By recombining the best solutions of a population and every now and then mutating them, one can solve remarkably difficult problems that would otherwise be hopelessly difficult to write programs for. One of the first works of genetic algorithms, "Adaptation in Natural and Artificial Systems" by John H. Holland: https://mitpress.mit.edu/books/adaptation-natural-and-artificial-systems _____________________ A parallel genetic algorithm for the Mona Lisa problem: https://cg.tuwien.ac.at/~zsolnai/gfx/mona_lisa_parallel_genetic_algorithm/ A parallel, console genetic algorithm for the 0-1 knapsack problem: https://cg.tuwien.ac.at/~zsolnai/gfx/knapsack_genetic/ John Henry Holland, the father of genetic algorithms: https://en.wikipedia.org/wiki/John_Henry_Holland Try this out, it's really fun! - http://boxcar2d.com The mentioned book is called "The Blind Watchmaker" by Richard Dawkins. The thumbnail background image was created by Karen Roe (CC BY 2.0) - https://flic.kr/p/ezxAbk Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Genetic algorithms help us solve problems that are very difficult, if not impossible, to otherwise write programs for. For instance, in this application, we have to build a simple car model to traverse this terrain. Put some number of wheels on it somewhere, add a set of triangles as a chassis, and off you go. This is essentially the DNA of a solution. The farther it goes, the better the car is, and the goal is to design the best car you possibly can. First, the algorithm will try random solutions, and as it has no idea about the concept of a car or gravity, it will create a lot of bad solutions that don't work at all. However, after a point, it will create something that is at least remotely similar to a car, which will immediately perform so much better than the other solutions in the population. A genetic algorithm then creates a new set of solutions, however, now, not randomly. It respects a rule that we call survival of the fittest, which means that the best existing solutions are taken and mixed together to breed new solutions that are also expected to do well. Like in evolution in nature, mutations can also happen, which means random changes are also applied to the DNA code of a solution. We know from nature that evolution works extraordinarily well, and the more we run this genetic optimization program, the better the solutions get. It's quite delightful for a programmer to see their own children trying vigorously and succeeding at solving a difficult task, even more so if the programmer wouldn't be able to solve this problem by himself. Let's run a quick example. We start with a set of solutions. The DNA of a solution is a set of zeros and ones, which can encode some decision about the solution, whether we turn left or right in a maze, or it can also be an integer or a real number. We then compute how good these solutions are according to our taste, in the example with cars, how far these designs can get. Then we take, for instance, the best three solutions and combine them together to create a new DNA. Some of the better solutions may remain in the population unchanged. Then, probabilistically, random mutations happen to some of the solutions, which help us explore the vast search space better. Rinse and repeat, and there you have it: genetic algorithms. I have also coded up a version of Roger Alsing's EvoLisa problem, where the famous Mona Lisa painting is to be reproduced by a computer program with a few tens of triangles. The goal is to paint a version that is as faithful to the original as possible. This would be quite a difficult problem for humans, but apparently a genetic algorithm can deal with this really well. The code is available for everyone to learn, experiment, and play with, and it's super fun. And if you're interested in the concept of evolution, maybe read the excellent book The Blind Watchmaker by Richard Dawkins. Thanks for watching and for your generous support, and I'll see you next time.
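The select-recombine-mutate loop described above fits in a few lines of Python. Here is a minimal sketch of a genetic algorithm on bitstring DNA with a deliberately simple, made-up fitness (count the ones); a car-evolving version would only swap in a different DNA encoding and fitness function.

import random

DNA_LEN, POP, GENERATIONS, MUT_RATE = 32, 50, 100, 0.01

def fitness(dna):
    # Toy objective: the more ones, the better. For the car example this
    # would instead simulate the design and return the distance traveled.
    return sum(dna)

def crossover(p1, p2):
    cut = random.randrange(1, DNA_LEN)          # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(dna):
    return [1 - g if random.random() < MUT_RATE else g for g in dna]

population = [[random.randint(0, 1) for _ in range(DNA_LEN)] for _ in range(POP)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                   # survival of the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children             # elites carry over unchanged
print(fitness(max(population, key=fitness)))    # approaches DNA_LEN over time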
[{"start": 0.0, "end": 5.0600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zona Ifehir."}, {"start": 5.0600000000000005, "end": 9.98, "text": " Genetic algorithms help us solve problems that are very difficult if not impossible to"}, {"start": 9.98, "end": 12.120000000000001, "text": " otherwise write programs for."}, {"start": 12.120000000000001, "end": 16.72, "text": " For instance, in this application we have to build a simple car model to traverse this"}, {"start": 16.72, "end": 17.72, "text": " terrain."}, {"start": 17.72, "end": 22.28, "text": " Put some number of wheels on it somewhere, head a set of triangles as a chassis and off"}, {"start": 22.28, "end": 23.28, "text": " you go."}, {"start": 23.28, "end": 25.92, "text": " This is essentially the DNA of a solution."}, {"start": 25.92, "end": 30.720000000000002, "text": " The farther it goes, the better the car is and the goal is to design the best car you"}, {"start": 30.720000000000002, "end": 32.04, "text": " possibly can."}, {"start": 32.04, "end": 37.84, "text": " First, the algorithm will try random solutions and as it has no idea about the concept of"}, {"start": 37.84, "end": 43.040000000000006, "text": " a car or gravity, it will create a lot of bad solutions that don't work at all."}, {"start": 43.040000000000006, "end": 49.44, "text": " However, after a point it will create something that is at least remotely similar to a car"}, {"start": 49.44, "end": 54.480000000000004, "text": " which will immediately perform so much better than the other solutions in the population."}, {"start": 54.48, "end": 60.48, "text": " A genetic algorithm then creates a new set of solutions, however, now, not randomly."}, {"start": 60.48, "end": 65.84, "text": " It respects a rule that we call survival of the fittest, which means that the best existing"}, {"start": 65.84, "end": 70.84, "text": " solutions are taken and mixed together to breed new solutions that are also expected to"}, {"start": 70.84, "end": 72.03999999999999, "text": " do well."}, {"start": 72.03999999999999, "end": 77.03999999999999, "text": " Like in evolution in nature, mutations can also happen, which means random changes are"}, {"start": 77.03999999999999, "end": 79.92, "text": " also applied to the DNA code of a solution."}, {"start": 79.92, "end": 85.76, "text": " We know from nature that evolution works extraordinarily well and the more we run this genetic optimization"}, {"start": 85.76, "end": 88.08, "text": " program, the better the solutions get."}, {"start": 88.08, "end": 92.72, "text": " It's quite delightful for a programmer to see their own children trying vigorously and"}, {"start": 92.72, "end": 97.68, "text": " succeeding at solving a difficult task, even more so if the programmer wouldn't be able"}, {"start": 97.68, "end": 99.72, "text": " to solve this problem by himself."}, {"start": 99.72, "end": 101.56, "text": " Let's run a quick example."}, {"start": 101.56, "end": 103.68, "text": " We start with a set of solutions."}, {"start": 103.68, "end": 108.88, "text": " The DNA of a solution is a set of zeros and ones which can encode some decision about"}, {"start": 108.88, "end": 114.32, "text": " the solution whether we turn left or right in a maze or it can also be an integer or"}, {"start": 114.32, "end": 115.52, "text": " an unreal number."}, {"start": 115.52, "end": 120.36, "text": " We then compute how good these solutions are according to our taste in the example with"}, {"start": 120.36, "end": 123.36, "text": " 
cars how far these designs can get."}, {"start": 123.36, "end": 128.35999999999999, "text": " Then we take, for instance, the best three solutions and combine them together to create"}, {"start": 128.35999999999999, "end": 134.6, "text": " a new DNA."}, {"start": 134.6, "end": 138.8, "text": " Some of the better solutions may remain in the population unchanged."}, {"start": 138.8, "end": 143.68, "text": " Then, probabilistically, random mutations happen to some of the solutions which help us"}, {"start": 143.68, "end": 146.56, "text": " explore the vast search space better."}, {"start": 146.56, "end": 150.88000000000002, "text": " Reans and repeat and there you have it, genetic algorithms."}, {"start": 150.88000000000002, "end": 156.36, "text": " I have also coded up a version of Roger Allsings' Evo Liza problem where the famous Monalisa"}, {"start": 156.36, "end": 161.92000000000002, "text": " painting is to be reproduced by a computer program with a few tens of triangles."}, {"start": 161.92000000000002, "end": 166.60000000000002, "text": " The goal is to paint a version that is as faithful to the original as possible."}, {"start": 166.6, "end": 171.04, "text": " This would be quite a difficult problem for humans but apparently a genetic algorithm"}, {"start": 171.04, "end": 172.88, "text": " can deal with this really well."}, {"start": 172.88, "end": 178.72, "text": " The code is available for everyone to learn, experiment and play with and it's super fun."}, {"start": 178.72, "end": 183.07999999999998, "text": " And if you're interested in the concept of evolution, maybe read the excellent book,"}, {"start": 183.07999999999998, "end": 185.84, "text": " The Blind Watchmaker by Richard Dawkins."}, {"start": 185.84, "end": 197.92000000000002, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=AbcRlDBnwjM
OpenAI - Non-profit AI company by Elon Musk and Sam Altman
Elon Musk and Sam Altman founded a non-profit artificial intelligence research company that they call OpenAI. The funders have committed over one billion dollars for this cause. Their goal with OpenAI is to make progress towards superintelligence, leveraging their non-profit nature to make sure that such a breakthrough will be done in a controlled and beneficial way. ______________________ News source: https://medium.com/backchannel/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a OpenAI website: https://openai.com/blog/introducing-openai/ Recommended for you: Artificial Superintelligence - https://www.youtube.com/watch?v=08V_F19HUfI&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=22 Are We Living In a Computer Simulation? https://www.youtube.com/watch?v=ATN9oqMF_qk&index=10&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e The thumbnail image background was created by Steve Jurvetson(CC BY 2.0) - https://flic.kr/p/5uzuFL Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, we have some delightful news. Elon Musk and Sam Altman founded a nonprofit artificial intelligence research company that they call OpenAI. The funders have committed over $1 billion for this cause. Their goal is to make progress towards superintelligence, leveraging their nonprofit nature to make sure that such a breakthrough will be done in a controlled and beneficial way. As of the current state of things, most of the bigger companies with strong AI groups publish their work regularly, but as we get closer to artificial general intelligence and superintelligence, it is an open question how much they will share. We have talked already about how enormously powerful a superintelligence could become, and how important it is to make sure that it is developed in a safe way. Make sure to check that video out, a link is in the description box. It is really mind-blowing. So, everything they create will be open. Therefore, the first question that came to my mind: is it really good that anyone will be able to create an AI? What about people who are interested in doing it in a way that is harmful and dangerous to others? This was subject to a lot of debate, and one of the conclusions is that most people are sensible, and they expect that the number of friendly AIs will overpower the bad guys. We don't know if it's the best-case scenario, but it is definitely better than the case of one company owning the only superintelligence. At OpenAI they already have researchers of their own, and their research projects will be completely open. This is amazing because it rarely happens with companies, as they usually want to retain the intellectual property of their projects. Amazon Web Services is also donating a huge amount of resources to the company. The fact that, just like at research institutions, the researchers can publicly share their work may be a big deciding factor when recruiting, which is, according to the founders, already going really well. So, delightful news indeed. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 6.82, "text": " Dear Fellow Scholars, we have some delightful news. Elon Musk and Sam Altman founded a nonprofit"}, {"start": 6.82, "end": 12.26, "text": " artificial intelligence research company that they call OpenAI. The funders have committed"}, {"start": 12.26, "end": 18.96, "text": " over $1 billion for this cause. Their goal is to make progress towards super intelligence,"}, {"start": 18.96, "end": 22.78, "text": " leveraging their nonprofit nature to make sure that such a breakthrough will be done in"}, {"start": 22.78, "end": 27.48, "text": " the controlled and beneficial way. As of the current state of things, most of the bigger"}, {"start": 27.48, "end": 33.0, "text": " companies with strong AI groups publish their work regularly, but as we get closer to artificial"}, {"start": 33.0, "end": 38.32, "text": " general intelligence and super intelligence, it is a question how much they will share."}, {"start": 38.32, "end": 43.08, "text": " We have talked already about how enormously powerful a super intelligence could become,"}, {"start": 43.08, "end": 47.6, "text": " and how important it is to make sure that it is developed in a safe way, make sure to"}, {"start": 47.6, "end": 52.36, "text": " check that video out, a link is in the description box. It is really mind blowing."}, {"start": 52.36, "end": 56.88, "text": " So everything they create will be open, therefore the first question that came to my mind,"}, {"start": 56.88, "end": 61.88, "text": " is it really good that anyone will be able to create an AI? What about users who are"}, {"start": 61.88, "end": 67.16, "text": " interested in doing it in a way that is harmful and dangerous to others? This was subject"}, {"start": 67.16, "end": 72.48, "text": " to a lot of debate and one of the conclusions is that at the same time most people are"}, {"start": 72.48, "end": 78.08, "text": " sensible and they expect that the number of friendly AI's will overpower the bad guys."}, {"start": 78.08, "end": 82.12, "text": " We don't know if it's the best case scenario, but it is definitely better than the case"}, {"start": 82.12, "end": 88.28, "text": " of one company owning the only super intelligence. At OpenAI they already have researchers of"}, {"start": 88.28, "end": 93.52000000000001, "text": " their own and their research projects will be completely open. This is amazing because"}, {"start": 93.52000000000001, "end": 98.12, "text": " it rarely happens with companies as they usually want to retain the intellectual property"}, {"start": 98.12, "end": 103.48, "text": " of their projects. Amazon web services are also donating a huge amount of resources for"}, {"start": 103.48, "end": 109.16, "text": " the company. The fact that just like at research institutions, the researchers can publicly"}, {"start": 109.16, "end": 113.8, "text": " share their work may be a big deciding factor when recruiting, which is according to the"}, {"start": 113.8, "end": 119.8, "text": " founders already going really well. So, the light will know indeed. Thanks for watching"}, {"start": 119.8, "end": 148.32, "text": " and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=674DL39dOOQ
Randomness and Bell's Inequality [Audio only] | Two Minute Papers #31
In this episode, we discuss what makes an event random, and the incredible things Bell's theorem (or inequality) has to say about truly random events. Note: "local" means that information from the hidden variable doesn't travel faster than light. __________________________ The paper "On the Einstein Podolsky Rosen Paradox" is available here: http://www.drchinese.com/David/Bell_Compact.pdf http://homepages.physik.uni-muenchen.de/~vondelft/Lehre/09qm/lec21-22-BellInequalities/Bell1964.pdf Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image was created by Giovanni Arteaga (CC BY 2.0) - https://flic.kr/p/8M11b6 Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When using cryptography, we'd like to safely communicate over the internet in the presence of third parties. To be able to do this, and for many other important applications, we need random numbers. But what does it exactly mean that something is random? Randomness is the lack of any patterns and predictability. People usually use coin flips as random events. But is a coin flip really random? If we had a really smart physicist who could model all the forces that act upon the coin, he would easily find out whether it's going to be heads or tails. Strictly speaking, a coin flip is therefore not random. What about random numbers generated with computers? Computers are a collection of processing units that run programs. If one knows the program code that generates the random numbers, they are not random anymore, because they don't happen by chance and are possible to predict. John von Neumann famously said: "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number. There are only methods to produce random numbers, and a strict arithmetic procedure, of course, is not such a method." Some websites offer high-quality random numbers that are generated from atmospheric noise. Practically speaking, this, of course, sounds adequate enough. If someone wants to break the encryption of our communications, they would have to be able to model the physics and initial conditions of every single thunderbolt, which means processing millions of discharges per day. This is practically impossible. So it seems reasonable to say that random events are considered random because of our ignorance, not because they are, strictly speaking, unpredictable. You just need to be smart enough, and the notion of randomness fades away in the light of your intelligence. Or so it seemed to physicists for a long time. Imagine if someone who has never heard about magnetism were to see many magnets attracting each other and some added magnet powder. This person would most definitely say it's magic happening. However, if you know about magnetism, you know that things don't happen randomly; there are very simple laws that can predict all this movement. In this case, the magnetic force is what we can loosely call a hidden variable. So we have a phenomenon that we cannot predict, and we are keen to say it's random. In reality, it is not. There is just a hidden variable that we don't know of that is responsible for this behavior. We have the very same phenomenon if we look inside of an atom. Quantum-level effects happen according to the physics of extremely small things, and we again find behaviors that seem completely random. We know some of the trends, just like we know which roads in our city are expected to have a huge traffic jam every morning, but we cannot predict where every single individual car is heading. We have it the same way with extremely small particles. We are keen to say that a behavior seems completely random because nothing that we know or measure would explain it. Other people would immediately say: wait, you don't know everything. Maybe these quantum effects are not random, as there may be hidden things, hidden variables that you don't know of, which account for the behavior. We can't just say this or that is random. It is much, much more likely that our knowledge is insufficient to predict what is happening, just as electromagnetic forces seemed magical to scientists a few hundred years ago. So is quantum mechanics completely random, or does it only seem random? It is probably one of the most difficult questions ever asked. How can you find out that something you measure that seems random is really completely random, and not just the action of forces that you don't know of? And hold on to your chair, because this is going to blow your mind. A simple and intuitive statement of Bell's theorem is that no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. This means that Bell proved that the behaviors scientists observe in quantum mechanics are really random. They cannot be explained by any theory you could possibly make up. Simple or complicated, it doesn't matter. This discovery is absolutely insane. You can definitely prove that a crappy theory someone quickly made up doesn't explain a behavior. But how can you prove that it is completely impossible to build such a theory that does? No matter how hard you try, no matter how smart you are, you can't do it. This is such a mind-bogglingly awesome theorem. And please note that we definitely lose out on some details and generality because we use intuitive words to discuss these results, as opposed to the original derivation with covariances between measurements. On our imaginary list of the wonders of the world, monuments created not by the hands but by the minds of humans, this should definitely be among the best of them. Thanks for watching and for your generous support and I'll see you next time.
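For the mathematically curious: the most common concrete form of Bell's theorem is the CHSH inequality, which says that any local hidden-variable model must satisfy |E(a,b) - E(a,b') + E(a',b) + E(a',b')| <= 2, while quantum mechanics predicts values up to 2*sqrt(2). The Python sketch below brute-forces all deterministic local strategies to confirm the bound of 2 and evaluates the quantum singlet-state prediction at the textbook optimal angles; it illustrates the inequality only, not Bell's full derivation with covariances:

```python
from itertools import product
from math import cos, pi, sqrt

# A deterministic local hidden-variable strategy fixes Alice's answers
# A(a), A(a') and Bob's answers B(b), B(b') in advance, each +1 or -1.
# Brute-force all 16 such strategies and record the best CHSH value.
best_local = 0
for A1, A2, B1, B2 in product([-1, 1], repeat=4):
    S = A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2
    best_local = max(best_local, abs(S))

# Quantum prediction for an entangled singlet pair: E(x, y) = -cos(x - y),
# evaluated at the standard optimal measurement angles.
def E(x, y):
    return -cos(x - y)

a, a2, b, b2 = 0.0, pi / 2, pi / 4, 3 * pi / 4
S_quantum = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print("best any local hidden-variable model can do:", best_local)  # 2
print("quantum mechanics:", S_quantum)                             # ~2.828
print("2*sqrt(2) =", 2 * sqrt(2))
```

No matter how the hidden variable is distributed, the expected CHSH value of a local theory is a mixture of these 16 deterministic cases, so it can never exceed 2; the quantum value of about 2.83 is exactly what no local theory can reproduce.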
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Carlos Zona Ifehir."}, {"start": 4.6000000000000005, "end": 9.34, "text": " When using cryptography, we'd like to safely communicate over the internet in the presence"}, {"start": 9.34, "end": 11.040000000000001, "text": " of third parties."}, {"start": 11.040000000000001, "end": 16.32, "text": " To be able to do this and many other important applications, we need random numbers."}, {"start": 16.32, "end": 20.080000000000002, "text": " But what does it exactly mean that something is random?"}, {"start": 20.080000000000002, "end": 23.92, "text": " Randomness is the lack of any patterns and predictability."}, {"start": 23.92, "end": 27.04, "text": " People usually use coin flips as random events."}, {"start": 27.04, "end": 29.32, "text": " That is a coin flip really random."}, {"start": 29.32, "end": 34.64, "text": " If we had a really smart physicist who can model all the forces that act upon the coin, he"}, {"start": 34.64, "end": 38.64, "text": " would easily find out whether it's going to be heads or tails."}, {"start": 38.64, "end": 42.04, "text": " Strictly speaking, a coin flip is therefore not random."}, {"start": 42.04, "end": 45.519999999999996, "text": " What about random numbers generated with computers?"}, {"start": 45.519999999999996, "end": 50.0, "text": " Computers are a collection of processing units that run programs."}, {"start": 50.0, "end": 54.68, "text": " If one knows the program code that generates the random numbers, they are not random anymore"}, {"start": 54.68, "end": 59.48, "text": " because it doesn't happen by chance and it is possible to predict."}, {"start": 59.48, "end": 61.6, "text": " John von Neumann famously said,"}, {"start": 61.6, "end": 68.22, "text": " Anyone who considers erythematical methods of producing random digits is, of course, in"}, {"start": 68.22, "end": 69.44, "text": " a state of sin."}, {"start": 69.44, "end": 75.0, "text": " For, as has been pointed out several times, there is no such thing as a random number."}, {"start": 75.0, "end": 80.56, "text": " There are only methods to produce random numbers and a strict erythmatic procedure, of course,"}, {"start": 80.56, "end": 82.32, "text": " is not such a method."}, {"start": 82.32, "end": 87.88, "text": " Some websites offer high quality random numbers that are generated from atmospheric noise."}, {"start": 87.88, "end": 91.47999999999999, "text": " Practically speaking, this, of course, sounds adequate enough."}, {"start": 91.47999999999999, "end": 95.36, "text": " If someone wants to break the encryption of our communications, they would have to be"}, {"start": 95.36, "end": 101.24, "text": " able to model the physics and initial conditions of every single thunderbolt, which means processing"}, {"start": 101.24, "end": 104.0, "text": " millions of discharges per day."}, {"start": 104.0, "end": 106.35999999999999, "text": " This is practically impossible."}, {"start": 106.35999999999999, "end": 111.8, "text": " So it seems reasonable to say that random events are considered random because of our ignorance,"}, {"start": 111.8, "end": 115.28, "text": " but because they are, strictly speaking, unpredictable."}, {"start": 115.28, "end": 120.44, "text": " You just need to be smart enough and the notion of randomness fades away in the light of your"}, {"start": 120.44, "end": 121.44, "text": " intelligence."}, {"start": 121.44, "end": 124.92, "text": " Or so it seemed for physicists for a 
long time."}, {"start": 124.92, "end": 130.16, "text": " Imagine if someone who has never heard about magnetism would see many magnets attracting"}, {"start": 130.16, "end": 133.07999999999998, "text": " each other and some added magnet powder."}, {"start": 133.07999999999998, "end": 136.64, "text": " This person would most definitely say it's magic happening."}, {"start": 136.64, "end": 141.6, "text": " However, if you know about magnetism, you know that things don't happen randomly, there"}, {"start": 141.6, "end": 145.12, "text": " are very simple laws that can predict all this movement."}, {"start": 145.12, "end": 150.16, "text": " In this case, magnetic forces we can loosely call a hidden variable."}, {"start": 150.16, "end": 154.79999999999998, "text": " So we have a phenomenon that we cannot predict and we are keen to say it's random."}, {"start": 154.79999999999998, "end": 156.44, "text": " In reality, it is not."}, {"start": 156.44, "end": 161.24, "text": " There is just a hidden variable that we don't know of that is responsible for this behavior."}, {"start": 161.24, "end": 165.24, "text": " We have the very same phenomenon if we look inside of an atom."}, {"start": 165.24, "end": 170.24, "text": " Quantum level effects happen according to the physics of extremely small things and we"}, {"start": 170.24, "end": 174.12, "text": " again find behaviors that seem completely random."}, {"start": 174.12, "end": 179.12, "text": " We know some of the trends just like we know which roads in our city are expected to have"}, {"start": 179.12, "end": 183.8, "text": " a huge traffic jam every morning, but we cannot predict where every single individual"}, {"start": 183.8, "end": 185.0, "text": " car is heading."}, {"start": 185.0, "end": 188.48000000000002, "text": " We have it the same way with extremely small particles."}, {"start": 188.48000000000002, "end": 193.60000000000002, "text": " We are keen to say that a behavior seems completely random because nothing that we know"}, {"start": 193.60000000000002, "end": 195.76000000000002, "text": " or measure would explain it."}, {"start": 195.76000000000002, "end": 199.88, "text": " Other people would immediately say, wait, you don't know everything."}, {"start": 199.88, "end": 204.76, "text": " Maybe these quantum effects are not random as there may be hidden things, hidden variables"}, {"start": 204.76, "end": 207.79999999999998, "text": " that you don't know of which make up for the behavior."}, {"start": 207.79999999999998, "end": 210.32, "text": " We can't just say this or that is random."}, {"start": 210.32, "end": 216.04, "text": " It is much, much more likely that our knowledge is insufficient to predict what is happening"}, {"start": 216.04, "end": 221.35999999999999, "text": " as electromagnetic forces seemed magical to scientists a few hundred years ago."}, {"start": 221.35999999999999, "end": 226.32, "text": " So is quantum mechanics completely random or does it only seem random?"}, {"start": 226.32, "end": 230.07999999999998, "text": " It is probably one of the most difficult questions ever asked."}, {"start": 230.07999999999998, "end": 235.35999999999999, "text": " How can you find out that something you measure that seems random is really completely random"}, {"start": 235.35999999999999, "end": 239.2, "text": " and not just the act of forces that you don't know of?"}, {"start": 239.2, "end": 242.88, "text": " And hold on to your chair because this is going to blow your mind."}, {"start": 242.88, "end": 249.56, "text": " A 
simple and intuitive statement of Bell's theorem states that no physical theory of local"}, {"start": 249.56, "end": 255.16, "text": " hidden variables can ever reproduce all of the predictions of quantum mechanics."}, {"start": 255.16, "end": 260.12, "text": " This means that he proved that the behavior scientist experience in quantum mechanics"}, {"start": 260.12, "end": 262.12, "text": " are really random."}, {"start": 262.12, "end": 266.68, "text": " They cannot be explained by any theory you could possibly make up."}, {"start": 266.68, "end": 269.6, "text": " Simple one or complicated doesn't matter."}, {"start": 269.6, "end": 272.88, "text": " This discovery is absolutely insane."}, {"start": 272.88, "end": 277.24, "text": " You can definitely prove that the crappy theory someone quickly made up doesn't explain"}, {"start": 277.24, "end": 278.24, "text": " a behavior."}, {"start": 278.24, "end": 284.15999999999997, "text": " But how can you prove that it is completely impossible to build such a theory that does?"}, {"start": 284.16, "end": 288.64000000000004, "text": " No matter how hard you try, how smart you are, you can't do it."}, {"start": 288.64000000000004, "end": 291.84000000000003, "text": " This is such a mind-bogglingly awesome theorem."}, {"start": 291.84000000000003, "end": 296.24, "text": " And please note that we definitely lose out on some details and generality because of"}, {"start": 296.24, "end": 301.84000000000003, "text": " the fact that we use intuitive words to discuss these results as opposed to the original derivation"}, {"start": 301.84000000000003, "end": 304.32000000000005, "text": " with covariances between measurements."}, {"start": 304.32000000000005, "end": 310.08000000000004, "text": " On our imaginary list of the wonders of the world, monuments created not by the hands,"}, {"start": 310.08000000000004, "end": 311.88, "text": " but the minds of humans."}, {"start": 311.88, "end": 314.48, "text": " This should definitely be among the best of them."}, {"start": 314.48, "end": 344.04, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9wOBkJJ-w2s
Automatic Parameter Control for Metropolis Light Transport | Two Minute Papers #30
Photorealistic rendering (also called global illumination) enables us to see how digital objects would look in real life. It is an amazingly powerful tool in the hands of a professional artist, who can create breathtaking images or animations with it. Metropolis light transport is an advanced photorealistic rendering technique that is remarkably effective at finding the brighter regions of a scene and building many light paths that target these regions. The resulting algorithm is more efficient than traditional random path building algorithms, such as path tracing. This algorithm endeavors to choose an optimal mixture between naive random path sampling techniques (such as path tracing and bidirectional path tracing) and Metropolis Light Transport. ___________________ The paper "Automatic Parameter Control for Metropolis Light Transport" is available here: https://cg.tuwien.ac.at/~zsolnai/gfx/adaptive_metropolis/ We thank Kai Schwebke for providing LuxTime, Vlad Miller for the Spheres, Giulio Jiang for the Chess, Aaron Hill for the Cornell Box, Andreas Burmberger for the Cherry Splash and Glass Ball scenes. I held a course on photorealistic rendering at the Technical University of Vienna. Here you can learn how the physics of light works and how to write programs like this: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We'll start with a quick recap on Metropolis light transport and then discuss a cool technique that builds on top of it. If we would like to see how digitally modeled objects would look in real life, we would create a 3D model of the desired scene, assign material models to the objects within, and use a photorealistic rendering algorithm to finish the job. It simulates rays of light that connect the camera to the light sources in the scene and computes the flow of energy between them. Initially, after a few rays, we will only have a rough idea of how the image should look; therefore, our initial results will contain a substantial amount of noise. We can get rid of this by simulating the paths of millions and millions of rays that will eventually clean up our image. This process, where a noisy image gets clearer and clearer, we call convergence, and the problem is that this can take excruciatingly long, even up to hours, to get a perfectly clear image. With the simple algorithms out there, we generate these light paths randomly. This technique we call path tracing. However, in the scene that you see here, most random paths can't connect the camera and the light source, because this wall is in the way, obstructing many of them. Light paths like these don't contribute anything to our calculations and are ultimately a waste of time and resources. After generating hundreds of random light paths, we finally find a path that connects the camera with the light source without any obstructions. When generating the next path, it would be a crime not to use this knowledge to our advantage. A technique called Metropolis light transport will make sure to use this valuable knowledge, and upon finding a bright light path, it will explore other paths that are nearby to have the best shot at creating valid, unobstructed connections. If we have a difficult scene at hand, Metropolis light transport gives us way better results than traditional, completely random path sampling techniques such as path tracing. This scene is extremely difficult in the sense that the only source of light is coming from the upper left, and after the light goes through multiple glass spheres, most of the light paths that we generate will be invalid. As you can see, this is a valiant effort with random path tracing that yields really dreadful results. Metropolis light transport is extremely useful in these cases and therefore should always be the weapon of choice. However, it is more expensive to compute than traditional random sampling. This means that if we have an easy scene on our hands, this smart Metropolis sampling doesn't pay off and performs worse than a naive technique in the same amount of time. So: on easy scenes, traditional random sampling; on difficult scenes, Metropolis sampling. Super simple, super intuitive, but the million-dollar question is how to mathematically formulate and measure what an easy and what a difficult scene is. This problem is considered extremely difficult and was left open in the Metropolis light transport paper in 2002. Even if we knew what to look for, we would likely get an answer by creating a converged image of the scene, which, without the knowledge of which algorithm to use, may take up to days to complete. But if we have created the image, it's too late; we would need this information before we start the rendering process. This way we can choose the right algorithm on the first try. With this technique that came more than 10 years after the Metropolis paper, it is possible to mathematically formalize and quickly decide whether a scene is easy or difficult. The key insight is that in a difficult scene, we often experience that a completely random ray is very likely to be invalid. This insight, with two other simple metrics, gives us all the knowledge we need to decide whether a scene is easy or difficult. And the algorithm tells us exactly what mixture of the two sampling techniques we need to use to get beautiful images quickly. The more complex light transport algorithms get, the more efficient they become, but at the same time, we are wallowing in parameters that we need to set up correctly to get adequate results quickly. This way we have an algorithm that doesn't take any parameters: you just fire it up and forget about it. Like a good employee, it knows when to work smart and when a dumb solution with a lot of firepower is better. It was tested on a variety of scenes and found close-to-optimal settings. Implementing this technique is remarkably easy. Someone who is familiar with the basics of light transport can do it in less than half an hour. Thanks for watching and for your generous support and I'll see you next time.
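As a toy illustration of that key insight, the Python sketch below estimates scene difficulty as the failure rate of completely random paths and derives a mixture weight from it. The scene model and the blending rule here are invented stand-ins for illustration; the paper uses more refined metrics and an actual light transport simulation:

```python
import random

def random_path_is_valid(scene_openness):
    # Stand-in for tracing one completely random light path: in an "open"
    # scene most paths reach a light source, in an occluded one few do.
    return random.random() < scene_openness

def estimate_difficulty(scene_openness, n_probes=10_000):
    valid = sum(random_path_is_valid(scene_openness) for _ in range(n_probes))
    return 1.0 - valid / n_probes      # high failure rate => difficult scene

def metropolis_share(difficulty):
    # Toy blending rule: easy scenes lean on cheap independent sampling,
    # difficult ones lean on the costlier Metropolis-style mutations.
    return difficulty

for openness in (0.9, 0.3, 0.01):      # easy, medium, heavily occluded scene
    d = estimate_difficulty(openness)
    print(f"openness={openness:.2f}  difficulty={d:.2f}  "
          f"Metropolis share={metropolis_share(d):.2f}")
```

The point of the probing step is that it is cheap: a few thousand random paths cost a tiny fraction of a full render, yet already reveal whether the smarter sampler is worth its overhead.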
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehair."}, {"start": 4.32, "end": 9.0, "text": " We'll start with a quick recap on metropolis light transport and then discuss a cool technique"}, {"start": 9.0, "end": 10.52, "text": " that builds on top of it."}, {"start": 10.52, "end": 14.56, "text": " If we would like to see how digitally modeled objects would look like in real life, we"}, {"start": 14.56, "end": 20.0, "text": " would create a 3D model of the desired scene, assign material models to the objects within,"}, {"start": 20.0, "end": 23.44, "text": " and use a photorealistic rendering algorithm to finish the job."}, {"start": 23.44, "end": 28.16, "text": " It simulates rays of light that connect the camera to the light sources in the scene and"}, {"start": 28.16, "end": 30.96, "text": " compute the flow of energy between them."}, {"start": 30.96, "end": 35.0, "text": " Initially, after a few rays will only have a rough idea on how the image should look"}, {"start": 35.0, "end": 39.76, "text": " like, therefore our initial results will contain a substantial amount of noise."}, {"start": 39.76, "end": 44.84, "text": " We can get rid of this by simulating the path of millions and millions of rays that will"}, {"start": 44.84, "end": 47.04, "text": " eventually clean up our image."}, {"start": 47.04, "end": 52.28, "text": " This process, where a noisy image gets clearer and clearer, we call convergence, and the problem"}, {"start": 52.28, "end": 57.879999999999995, "text": " is that this can take excruciatingly long, even up to hours, to get a perfectly clear"}, {"start": 57.88, "end": 59.04, "text": " image."}, {"start": 59.04, "end": 63.2, "text": " With the simple algorithms out there, we generate these light paths randomly."}, {"start": 63.2, "end": 65.36, "text": " This technique we call path tracing."}, {"start": 65.36, "end": 70.4, "text": " However, in the scene that you see here, most random paths can't connect the camera"}, {"start": 70.4, "end": 75.24000000000001, "text": " and the light source because this wall is in the way obstructing many of them."}, {"start": 75.24000000000001, "end": 80.44, "text": " Light paths like these don't contribute anything to our calculations and are ultimately a waste"}, {"start": 80.44, "end": 82.88, "text": " of time and resources."}, {"start": 82.88, "end": 88.16, "text": " After generating hundreds of random light paths, we finally found a path that connects the"}, {"start": 88.16, "end": 91.6, "text": " camera with the light source without any obstructions."}, {"start": 91.6, "end": 95.88, "text": " When generating the next path, it would be a crime not to use this knowledge to our"}, {"start": 95.88, "end": 96.88, "text": " advantage."}, {"start": 96.88, "end": 101.52, "text": " A technique called metropolis light transport will make sure to use this valuable knowledge"}, {"start": 101.52, "end": 106.44, "text": " and upon finding a bright light path, it will explore other paths that are nearby to"}, {"start": 106.44, "end": 111.08, "text": " have the best shot at creating valid, unobstructed connections."}, {"start": 111.08, "end": 116.16, "text": " If we have a difficult scene at hand, metropolis light transport gives us way better results"}, {"start": 116.16, "end": 121.4, "text": " than traditional, completely random paths sampling techniques such as path tracing."}, {"start": 121.4, "end": 126.44, "text": " This scene is extremely difficult in a sense that the only source 
of light is coming from"}, {"start": 126.44, "end": 131.16, "text": " the upper left and after the light goes through multiple glass spheres, most of the light"}, {"start": 131.16, "end": 135.4, "text": " paths that we generate will be invalid."}, {"start": 135.4, "end": 140.56, "text": " As you can see, this is a valiant effort with random path tracing that yields really dreadful"}, {"start": 140.56, "end": 141.92000000000002, "text": " results."}, {"start": 141.92000000000002, "end": 146.2, "text": " Metropolis light transport is extremely useful in these cases and therefore should always"}, {"start": 146.2, "end": 147.96, "text": " be the weapon of choice."}, {"start": 147.96, "end": 152.4, "text": " However, it is more expensive to compute than traditional random sampling."}, {"start": 152.4, "end": 156.96, "text": " This means that if we have an easy scene on our hands, this smart metropolis sampling"}, {"start": 156.96, "end": 162.56, "text": " doesn't pay off and performs worse than a naive technique in the same amount of time."}, {"start": 162.56, "end": 168.48000000000002, "text": " So, on easy scenes, traditional random sampling, difficult scenes, metropolis sampling, super"}, {"start": 168.48, "end": 174.2, "text": " simple, super intuitive, but the million dollar question is how to mathematically formulate"}, {"start": 174.2, "end": 178.51999999999998, "text": " and measure what an easy and what a difficult scene is."}, {"start": 178.51999999999998, "end": 183.28, "text": " This problem is considered extremely difficult and was left open in the metropolis light transport"}, {"start": 183.28, "end": 185.23999999999998, "text": " paper in 2002."}, {"start": 185.23999999999998, "end": 189.32, "text": " Even if we knew what to look for, we would likely get an answer by creating a converged"}, {"start": 189.32, "end": 195.0, "text": " image of the scene, which, without the knowledge of what algorithm to use, may take up to days"}, {"start": 195.0, "end": 196.0, "text": " to complete."}, {"start": 196.0, "end": 201.56, "text": " But, if we have created the image, it's too late, we would need this information before"}, {"start": 201.56, "end": 203.76, "text": " we start this rendering process."}, {"start": 203.76, "end": 207.32, "text": " This way we can choose the right algorithm on the first try."}, {"start": 207.32, "end": 211.92, "text": " With this technique that came more than 10 years after the metropolis paper, it is possible"}, {"start": 211.92, "end": 217.48, "text": " to mathematically formalize and quickly decide whether a scene is easy or difficult."}, {"start": 217.48, "end": 222.36, "text": " The key insight is that in a difficult scene, we often experience that a completely random"}, {"start": 222.36, "end": 225.04, "text": " ray is very likely to be invalid."}, {"start": 225.04, "end": 229.67999999999998, "text": " This insight, with two other simple metrics, gives us all the knowledge we need to decide"}, {"start": 229.67999999999998, "end": 232.2, "text": " whether a scene is easy or difficult."}, {"start": 232.2, "end": 237.04, "text": " And the algorithm tells us what mixture of the two sampling techniques we exactly need"}, {"start": 237.04, "end": 240.2, "text": " to use to get beautiful images quickly."}, {"start": 240.2, "end": 244.84, "text": " The more complex light transport algorithms get, the more efficient they become, but at"}, {"start": 244.84, "end": 250.6, "text": " the same time, we are wallowing in parameters that we need to set up correctly to get 
adequate"}, {"start": 250.6, "end": 252.16, "text": " results quickly."}, {"start": 252.16, "end": 256.96, "text": " This way we have an algorithm that doesn't take any parameters, you just fire it up and"}, {"start": 256.96, "end": 258.2, "text": " forget about it."}, {"start": 258.2, "end": 262.84, "text": " Like a good employee, it knows when to work smart and when a dumb solution with a lot of"}, {"start": 262.84, "end": 265.04, "text": " firepower is better."}, {"start": 265.04, "end": 270.64, "text": " And it was tested on a variety of scenes and found close to optimal settings."}, {"start": 270.64, "end": 273.2, "text": " Implementing this technique is remarkably easy."}, {"start": 273.2, "end": 276.8, "text": " Someone who is familiar with the basics of light transport can do it in less than half"}, {"start": 276.8, "end": 277.8, "text": " an hour."}, {"start": 277.8, "end": 281.32, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=08V_F19HUfI
Artificial Superintelligence [Audio only] | Two Minute Papers #29
Humanity is getting closer and closer to creating human-level intelligence. The question nowadays is not if it will happen, but when it will happen. Through recursive self-improvement, machine intelligence may quickly surpass the level of humans, creating an artificial superintelligent entity. The intelligence of such an entity is so unfathomable that we cannot even wrap our head around what it would be capable of, just as ants cannot grasp the concept of radio waves. Elon Musk compares creating an artificial superintelligence to "summoning the demon", and he offered 10 million dollars to research a safe way to develop this technology. ___________________________ Recommended for you: Are We Living In a Computer Simulation? - https://www.youtube.com/watch?v=ATN9oqMF_qk&index=9&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e A great article on Superintelligence on Wait But Why (there are two parts): http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html A talk from Tim Urban, author of Wait But Why: https://www.youtube.com/watch?v=O7xfJVvlqdE One more excellent article reflecting on the article above: http://lukemuehlhauser.com/a-reply-to-wait-but-why-on-machine-superintelligence/ Nick Bostrom - Artificial Superintelligence: http://www.amazon.com/gp/product/0199678111?ref_=cm_sw_r_awd_fkm-tb0J07SSW Elon Musk's $10 million for ethical AI research: http://www.forbes.com/sites/ericmack/2015/01/15/elon-musk-puts-down-10-million-to-fight-skynet/ A neat study from the Machine Intelligence Research Institute (MIRI): https://intelligence.org/files/CEV.pdf Nick Bostrom's poll on when we will achieve superintelligence: http://sophia.de/pdf/2014_PT-AI_polls.pdf A science paper claims that our knowledge about the genetic human-mammal differences may be misguided: http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.1004525 Excellent discussions on superintelligence: https://www.youtube.com/watch?v=MnT1xgZgkpk https://www.youtube.com/watch?v=pywF6ZzsghI https://www.youtube.com/watch?v=h9NB0EQ9iQg Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Two CC0 images were edited together for the thumbnail screen. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Now I'll eat my head if this is going to be two minutes, but I really hope you fellow scholars are going to like this little discussion. Neil deGrasse Tyson described a cool thought experiment in one of his talks. He mentioned that the difference between the human and the monkey DNA is really small, a one-digit percentage. For simplicity, let's say it is 1%. For this 1% difference, there's a huge difference in the intellect of humans and apes. The smartest chimpanzee you can imagine can do tasks like clapping his hands to a given simple rhythm or striking a match. Compared to the average chimpanzee, such an animal would be the equivalent of Einstein or John von Neumann. He can clap his hands. What is that for humans? Children can do that. Even before they start studying, they can effortlessly do something that rivals the brightest minds monkeys could ever produce. Imagine if there were a species that is the same 1% difference away from us humans, in the same direction. What could they be capable of? Their small children would be composing beautiful symphonies, perfect harmonization for hundreds of instruments. Or they would be deriving everything in the history of physics, from Newton's laws to quantum electrodynamics. And their parents would be like, oh, look at what little Jimmy did. That's adorable. And they would put it on the fridge with a magnet, just like we do with the adorable little scribbles of our children. Just thinking about the possibilities gives me chills. Now, let's transition into neural networks. An artificial neural network is a crude approximation of the human brain that we can simulate on a computer to recognize images, paint in the style of famous artists, learn to play video games, and a number of other very useful things. The number of connections that we can simulate on the graphics card of our computer grows close to what's predicted by Moore's law, which means that the computing capacity that we have in our home computers doubles every few years. It's pretty crazy if you think about it, but most of you fellow scholars have phones in your pockets that have more computing capacity than NASA had to land on the Moon. As years go by, there will be more and more connections in these artificial neural networks, and they don't have to adhere to stringent constraints like our brains do, such as fitting into the human cranium. A computer can be the size of a building, or even bigger. Computers also transmit data at the speed of light, which is way faster than the transfer capabilities of the human brain. Nick Bostrom asked a lot of leading AI researchers about the speed of progress in this field, and the conclusion of the study was basically that the question is not whether we can achieve human-level intelligence, but when we will achieve it. However, the number of connections is not everything, as an artificial neural network is by far not a one-to-one copy of the human brain. We need something more than this. A very promising possible next frontier to conquer is called recursive self-improvement. Recursive self-improvement means that instead of telling the program to work on an ordinary task, like doing better image recognition, we order it to work on improving its own intelligence. We ask the program to rewrite its own code to be more efficient and more general.
So we have a program with a ton of computational resources working on getting smarter, and as it suddenly gets just a bit smarter, we then have a smarter machine that can again be asked to improve its own intelligence. But it is now more capable of doing that; therefore, if we do this many times, the leaps are going to get bigger and bigger, as an intelligent mind can do more to improve itself than an insect can. This way, we may end up with an intelligence explosion, which means a possible exponential increase in capabilities. And if this is the case, talking about human-level intelligence is completely irrelevant. During this process, given enough resources, the system may go from the intelligence of an insect to something way beyond the capabilities of the most intelligent person who ever lived, in about a second or less. It would come up with way better solutions in milliseconds than anything you've seen on Two Minute Papers, and there are plenty of brilliant works out there. And of course, it could also develop never-before-seen superweapons to unleash unprecedented destruction on Earth. We wouldn't know if it would do it, but it is capable of doing that, which is quite alarming. I am not surprised that Elon Musk compares creating an artificial superintelligence to summoning the demon. And he offered $10 million to research a safe way to develop this technology, which is obviously not nearly enough, but it is an excellent way to raise awareness. Now, the classical argument on how to curb such a superintelligence, if one recognizes that it is up to no good, is that people say: well, I'll unplug it, or maybe lock it away from the internet. The problem is that people assume that they can do it. We can lock it up in any way we can think of, but there's only so much we can do, because as Neil deGrasse Tyson argued, even the smartest human who ever lived would be a blabbering, drooling idiot compared to such an intelligence. How easy is it for a grown adult to fool a child? A piece of cake. The intelligence gap between us and the superintelligence is more than a thousand times that. It would be even more pathetic than a child, or even a dog, trying to fool us. We humans can anticipate threats like wielding weapons or locking dangerous animals into cages. Superintelligent beings can also anticipate our threats, only way better. It can trick you by pretending to be broken, and when the engineer goes there to fix the code, the manipulation can begin. It could also communicate with gravitational waves or any kind of thing that we cannot even fathom, just as an ant has no idea about our radio waves. And we don't even need to characterize superintelligent beings as an adversary. The road to hell is paved with good intentions. It may very well be possible that we assign it a completely benign task that anyone could agree with, and it would end up in a disaster in a way we cannot anticipate. Imagine assigning it the task of maximizing the number of paperclips. Nick Bostrom argues that it would at first maybe create better blueprints and factory lines. And after some point, it may run out of resources on Earth. Then, in order to maximize the number of paperclips, it would recognize that humans contain lots of useful atoms. So eradicating humanity would only be logical to maximize the number of paperclips. Think about another task: creating the best approximation of the number pi. One can approximate it to more decimals by using more resources, and to have more resources, one builds more and bigger computers.
At some point, it runs out of space and eradicates humans because they are in the way of creating more computers. Or it may eradicate humans way before that, because it knows that they are capable of shutting it down. And if it gets shut down, there are going to be fewer digits or paperclips. So again, it's only logical to kill them. The task will be done, but no one will be there anymore to say thank you. It is a bit like a movie where there's an intelligent car, and the driver is in a car-chase situation, shouting: we're too slow and fuel is running out, please throw out all excess useless weight. And along with some empty bottles, the person would subsequently be ejected from the vehicle. We don't know what is going to be the next invention of mankind, but we know what's going to be the last one: artificial superintelligence. It has the potential to either eradicate humanity or solve all of its problems. It is both the deadliest weapon that will ever exist and the key to eternal life. We need to be vigilant about the fact that we have tons of money invested in artificial intelligence research, but barely any to make sure we are doing it in a controlled and ethical way. This task needs some of the brightest minds of our generation, and perhaps even the next one. And this needs to happen before we get there. When we are there, it's already too late. I highly recommend an absolutely fantastic article on Wait But Why about this, or Nick Bostrom's amazing book, Superintelligence. There are tons of other reading materials in the description box for the more curious fellow scholars out there. Thanks for watching and for your generous support and I'll see you next time.
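A deliberately crude numerical toy makes the "bigger and bigger leaps" argument concrete: if the rate of self-improvement grows with current capability, capability does not merely grow exponentially, it blows up in finite time. The growth law dc/dt = k*c^2 and all constants below are arbitrary assumptions chosen only to show the shape of the argument, not a model of any real AI system:

```python
# Toy intelligence-explosion model: capability c improves at a rate that
# itself grows with c (dc/dt = k * c**2), integrated with simple Euler steps.
k, c, dt = 0.5, 1.0, 0.001   # improvement constant, starting capability, time step
t = 0.0
while c < 1e9:               # "way beyond the most intelligent person who ever lived"
    c += k * c * c * dt      # a smarter system improves itself faster
    t += dt
print(f"capability passed 1e9 at t = {t:.3f}")
# The exact solution c(t) = c0 / (1 - k*c0*t) diverges at t = 1/(k*c0) = 2.0,
# so nearly all of the growth happens in a final, vanishingly short burst.
```

Compare this with plain exponential growth (dc/dt = k*c), which would need about 41 time units to reach 1e9 from the same starting point; the self-reinforcing version gets there in about 2.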
[{"start": 0.0, "end": 7.32, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zonai-Fehir."}, {"start": 7.32, "end": 11.72, "text": " Now I'll eat my head if this is going to be two minutes, but I really hope you fellow"}, {"start": 11.72, "end": 14.4, "text": " scholars are going to like this little discussion."}, {"start": 14.4, "end": 18.56, "text": " Neil deGress Tyson described a cool thought experiment in one of his talks."}, {"start": 18.56, "end": 23.96, "text": " He mentioned that the difference of the human and the monkey DNA is really small, a one-digit"}, {"start": 23.96, "end": 25.16, "text": " percentage."}, {"start": 25.16, "end": 28.16, "text": " For simplicity, let's say it is 1%."}, {"start": 28.16, "end": 33.92, "text": " For this 1% difference, there's a huge difference in the intellect of humans and apes."}, {"start": 33.92, "end": 39.36, "text": " The smartest chimpanzee you can imagine can do tasks like clapping his hands to a given"}, {"start": 39.36, "end": 41.88, "text": " simple rhythm or strike a match."}, {"start": 41.88, "end": 46.519999999999996, "text": " Compared to the average chimpanzee, such an animal would be an equivalent of Einstein"}, {"start": 46.519999999999996, "end": 48.0, "text": " or John von Neumann."}, {"start": 48.0, "end": 49.6, "text": " He can clap his hands."}, {"start": 49.6, "end": 51.36, "text": " What is that for humans?"}, {"start": 51.36, "end": 52.6, "text": " Children can do that."}, {"start": 52.6, "end": 57.24, "text": " Even before they start studying, they can effortlessly do something that rivals the brightest"}, {"start": 57.24, "end": 59.92, "text": " minds monkeys could ever produce."}, {"start": 59.92, "end": 65.68, "text": " Imagine if there were species that is the same 1% difference away from us humans in the"}, {"start": 65.68, "end": 67.16, "text": " same direction."}, {"start": 67.16, "end": 68.8, "text": " What could they be capable of?"}, {"start": 68.8, "end": 74.48, "text": " Their small children would be composing beautiful symphonies, perfect harmonization for hundreds"}, {"start": 74.48, "end": 75.48, "text": " of instruments."}, {"start": 75.48, "end": 80.4, "text": " Or they would be deriving everything in the history of physics from Newton's laws to quantum"}, {"start": 80.4, "end": 82.0, "text": " electrodynamics."}, {"start": 82.0, "end": 85.64, "text": " And their parents would be like, oh, look at what little Jimmy did."}, {"start": 85.64, "end": 86.96000000000001, "text": " That's adorable."}, {"start": 86.96, "end": 91.24, "text": " And they would put it on the fridge with a magnet, just like we do with the adorable little"}, {"start": 91.24, "end": 93.39999999999999, "text": " scribbles of our children."}, {"start": 93.39999999999999, "end": 96.6, "text": " Just thinking about the possibilities gives me chills."}, {"start": 96.6, "end": 99.83999999999999, "text": " Now, let's transition into neural networks."}, {"start": 99.83999999999999, "end": 104.47999999999999, "text": " An artificial neural network is a crude approximation of the human brain that we can simulate on"}, {"start": 104.47999999999999, "end": 109.6, "text": " a computer to recognize images, paint in the style of famous artists or learn to play"}, {"start": 109.6, "end": 113.52, "text": " video games and a number of different very useful things."}, {"start": 113.52, "end": 118.24, "text": " The number of connections that we can simulate on a graphical card of our computer grows closely"}, {"start": 118.24, "end": 
122.8, "text": " to what's predicted in Moore's law, which means that the computing capacity that we have"}, {"start": 122.8, "end": 126.24, "text": " in our home computer doubles every few years."}, {"start": 126.24, "end": 130.28, "text": " It's pretty crazy if you think about it, but most of your fellow scholars have phones"}, {"start": 130.28, "end": 135.4, "text": " in your pockets that have more computing capacity than NASA had to lend on the moon."}, {"start": 135.4, "end": 140.12, "text": " As years go by, there will be more and more connections in these artificial neural networks,"}, {"start": 140.12, "end": 144.96, "text": " and they don't have to adhere to stringent constraints like our brains do, such as fitting"}, {"start": 144.96, "end": 146.28, "text": " into the human cranium."}, {"start": 146.28, "end": 150.28, "text": " A computer can be the size of a building or even bigger."}, {"start": 150.28, "end": 154.68, "text": " Computers also transmit data with a speed of light, which is way faster than the transfer"}, {"start": 154.68, "end": 156.64000000000001, "text": " capabilities of the human brain."}, {"start": 156.64000000000001, "end": 162.24, "text": " Nick Bostrom asked a lot of leading AI researchers on the speed of progress in this field, and"}, {"start": 162.24, "end": 167.16, "text": " the conclusion of the study was basically that the question is not can we achieve human"}, {"start": 167.16, "end": 170.64, "text": " level intelligence, but when we will achieve it."}, {"start": 170.64, "end": 175.44, "text": " However, the number of connections is not everything, as an artificial neural network"}, {"start": 175.44, "end": 179.32, "text": " is by far not a one-on-one copy of the human brain."}, {"start": 179.32, "end": 181.12, "text": " We need something more than this."}, {"start": 181.12, "end": 187.84, "text": " A very promising possible next frontier to conquer is called recursive self-improvement."}, {"start": 187.84, "end": 192.56, "text": " Recursive self-improvement means that we tell the program to instead of work on an ordinary"}, {"start": 192.56, "end": 198.0, "text": " task like do better image recognition, we would order it to work on improving its own"}, {"start": 198.0, "end": 199.6, "text": " intelligence."}, {"start": 199.6, "end": 205.16, "text": " Ask the program itself to rewrite its code to be more efficient and more general."}, {"start": 205.16, "end": 211.04, "text": " So we have a program with a ton of computational resources working on getting smarter, and"}, {"start": 211.04, "end": 216.88, "text": " as it suddenly gets just a bit smarter, we then have a smarter machine that can again"}, {"start": 216.88, "end": 219.56, "text": " be asked to improve its own intelligence."}, {"start": 219.56, "end": 225.0, "text": " But it is now more capable of doing that, therefore if we do this many times, leaps are going"}, {"start": 225.0, "end": 229.6, "text": " to get bigger and bigger as an intelligent mind can do more to improve itself than an"}, {"start": 229.6, "end": 231.2, "text": " insect can."}, {"start": 231.2, "end": 236.52, "text": " This way, we may end up with an intelligence explosion, which means a possible exponential"}, {"start": 236.52, "end": 238.44, "text": " increase in capabilities."}, {"start": 238.44, "end": 244.28, "text": " And if this is the case, talking about human level intelligence is completely irrelevant."}, {"start": 244.28, "end": 249.08, "text": " During this process, given enough resources, the system may go from 
the intelligence of an"}, {"start": 249.08, "end": 255.84, "text": " insect to something way beyond the capabilities of the most intelligent person who ever lived"}, {"start": 255.84, "end": 258.52, "text": " in about a second or less."}, {"start": 258.52, "end": 263.08, "text": " It would come up with way better solutions in milliseconds than anything you've seen"}, {"start": 263.08, "end": 267.0, "text": " on two minute papers and there's plenty of brilliant works out there."}, {"start": 267.0, "end": 273.2, "text": " And of course, it could also develop never before seeing superweapons to unleash an unprecedented"}, {"start": 273.2, "end": 275.4, "text": " destruction on Earth."}, {"start": 275.4, "end": 279.92, "text": " We wouldn't know if it would do it, but it is capable of doing that, which is quite alarming."}, {"start": 279.92, "end": 285.36, "text": " I am not surprised that Elon Musk compares creating an artificial superintelligence to summoning"}, {"start": 285.36, "end": 286.64, "text": " the demon."}, {"start": 286.64, "end": 291.8, "text": " And he offered $10 million to research a safe way to develop this technology, which is"}, {"start": 291.8, "end": 297.0, "text": " obviously not nearly enough, but it is an excellent way to raise awareness."}, {"start": 297.0, "end": 302.03999999999996, "text": " Now the classical argument on how to curb such a superintelligence if one recognizes that"}, {"start": 302.04, "end": 307.16, "text": " it is up to no good, people say that, well, I'll unplug it, or maybe lock it away from"}, {"start": 307.16, "end": 308.16, "text": " the internet."}, {"start": 308.16, "end": 311.28000000000003, "text": " The problem is that people assume that they can do it."}, {"start": 311.28000000000003, "end": 316.16, "text": " We can lock it up in any way we can think of, but there's only so much we can do because"}, {"start": 316.16, "end": 321.76, "text": " as Neil deGrasse Tyson argued, even the smartest human who ever lived would be a blabbering,"}, {"start": 321.76, "end": 325.44, "text": " drooling idiot compared to such an intelligence."}, {"start": 325.44, "end": 328.88, "text": " How easy is it for a grown adult to fool a child?"}, {"start": 328.88, "end": 330.20000000000005, "text": " A piece of cake."}, {"start": 330.2, "end": 335.76, "text": " The intelligence gap between us and the superintelligence is more than a thousand times that."}, {"start": 335.76, "end": 340.92, "text": " It's even more pathetic than a child or even a dog who tries to fool us."}, {"start": 340.92, "end": 346.4, "text": " We humans can anticipate threats like wielding weapons or locking dangerous animals into"}, {"start": 346.4, "end": 347.48, "text": " cages."}, {"start": 347.48, "end": 351.91999999999996, "text": " And so can superintelligent beings also anticipate our threats."}, {"start": 351.91999999999996, "end": 353.2, "text": " Only way better."}, {"start": 353.2, "end": 357.88, "text": " It can trick you by pretending to be broken and when the engineer goes there to fix the"}, {"start": 357.88, "end": 360.88, "text": " code, the manipulation can begin."}, {"start": 360.88, "end": 365.68, "text": " It could also communicate with gravitational waves or any kind of thing that we cannot"}, {"start": 365.68, "end": 371.08, "text": " even fathom, just as an ant has no idea about our radio waves."}, {"start": 371.08, "end": 375.2, "text": " And we don't need to characterize superintelligent beings as an adversary."}, {"start": 375.2, "end": 378.68, "text": " The road 
to hell is paved with good intentions."}, {"start": 378.68, "end": 383.6, "text": " It may very well be possible that we assign it a completely benign task that anyone could"}, {"start": 383.6, "end": 389.28000000000003, "text": " agree with and it would end up in a disaster in a way we cannot anticipate."}, {"start": 389.28000000000003, "end": 393.52000000000004, "text": " Imagine assigning it the task of maximizing the number of paperclips."}, {"start": 393.52000000000004, "end": 399.48, "text": " Nick Basrum argues that it would at first maybe create better blueprints and factory lines."}, {"start": 399.48, "end": 402.96000000000004, "text": " And after some point it may run out of resources on earth."}, {"start": 402.96000000000004, "end": 407.44, "text": " Then in order to maximize the number of paperclips, it would recognize that humans contain"}, {"start": 407.44, "end": 409.12, "text": " lots of useful atoms."}, {"start": 409.12, "end": 414.28000000000003, "text": " So eradicating humanity would only be logical to maximize the number of paperclips."}, {"start": 414.28000000000003, "end": 419.68, "text": " Think about another task, creating the best approximation of the number pi."}, {"start": 419.68, "end": 424.68, "text": " One can approximate to the most decimals by using more resources, to have more resources"}, {"start": 424.68, "end": 427.64, "text": " one builds more and bigger computers."}, {"start": 427.64, "end": 432.92, "text": " At some point it runs out of space and eradicate humans because they are in the way of creating"}, {"start": 432.92, "end": 434.36, "text": " more computers."}, {"start": 434.36, "end": 439.44, "text": " Or it may eradicate humans way before that because it knows that they are capable of shutting"}, {"start": 439.44, "end": 440.44, "text": " you down."}, {"start": 440.44, "end": 444.44, "text": " And if you get shut down, there's going to be less digits or paperclips."}, {"start": 444.44, "end": 447.08000000000004, "text": " So again, it's only logical to kill them."}, {"start": 447.08000000000004, "end": 451.24, "text": " The task will be done but no one will be there anymore to say thank you."}, {"start": 451.24, "end": 455.88, "text": " It is a bit like a movie where there's an intelligent car and the driver is in a car-chase"}, {"start": 455.88, "end": 459.92, "text": " situation, shouting, we're too slow and fuel is running out."}, {"start": 459.92, "end": 463.04, "text": " Please throw out all excessive useless weights."}, {"start": 463.04, "end": 470.52000000000004, "text": " And along some empty bottles, the person would be subsequently ejected from the vehicle."}, {"start": 470.52000000000004, "end": 474.8, "text": " We don't know what is going to be the next invention of mankind, but we know what's"}, {"start": 474.8, "end": 479.16, "text": " going to be the last one, artificial superintelligence."}, {"start": 479.16, "end": 484.84000000000003, "text": " It has the potential to either eradicate humanity or solve all of its problems."}, {"start": 484.84000000000003, "end": 491.40000000000003, "text": " It is both the deadliest weapon that will ever exist and the key to eternal life."}, {"start": 491.4, "end": 495.76, "text": " We need to be vigilant about the fact that we have tons of money invested in artificial"}, {"start": 495.76, "end": 500.52, "text": " intelligence research, but barely any to make sure we are doing it in a controlled and"}, {"start": 500.52, "end": 502.03999999999996, "text": " ethical way."}, {"start": 
502.03999999999996, "end": 506.71999999999997, "text": " This task needs some of the brightest minds of our generation and perhaps even the next"}, {"start": 506.71999999999997, "end": 507.71999999999997, "text": " one."}, {"start": 507.71999999999997, "end": 510.23999999999995, "text": " And this needs to happen before we get there."}, {"start": 510.23999999999995, "end": 513.36, "text": " When we are there, it's already too late."}, {"start": 513.36, "end": 519.0, "text": " I highly recommend an absolutely fantastic article on Wade Batwai about this or Nick"}, {"start": 519.0, "end": 522.12, "text": " Bastram's amazing book, Superintelligence."}, {"start": 522.12, "end": 525.84, "text": " There are tons of other reading materials in the description box for the more curious"}, {"start": 525.84, "end": 527.44, "text": " fellow scholars out there."}, {"start": 527.44, "end": 557.4000000000001, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ATN9oqMF_qk
Are We Living In a Computer Simulation? | Two Minute Papers #28
It's time to set foot in the wonderful landscape of philosophy in Two Minute Papers! We have never discussed a philosophy paper before, so what better opportunity than to talk about the possibility that we're living in a computer simulation? There are many interesting debates among philosophers on crazy elusive topics, like "prove to me that I'm not in a dream", or "I'm not just a brain in a bottle somewhere that is being fed sensory inputs." In his paper, the philosopher Nick Bostrom offers us a refreshing take on the simulation argument, and argues that at least one of these three propositions is true: - almost all advanced civilizations go extinct before achieving technological maturity, - there is a strong convergence among technologically mature civilizations in that none of them are interested in creating ancestor simulations, - we are living in a simulation There is no conclusion to the simulation argument at the moment - no one really knows what the answer is, this is open to debate, and this is what makes it super interesting. ____________________________ The paper "Are we living in a computer simulation?" by Nick Bostrom is available here: http://www.simulation-argument.com/simulation.pdf http://www.simulation-argument.com/simulation.html Is War Over? — A Paradox Explained by Kurzgesagt: https://www.youtube.com/watch?v=NbuUW9i-mHs Recommended for you: Google DeepMind's Deep Q-Learning & Superhuman Atari Gameplays - https://www.youtube.com/watch?v=Ih8EfvOzBOY&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=10 The cover image was made by Tyler Hebert (CC BY 2.0, modifications: flipped, darkened, added lens flare and content-aware fill) - https://flic.kr/p/fo8vBn Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér, and it is time for some minds to be blown. We're going to talk about a philosophy paper. Before we start, a quick definition: an ancestor simulation is a hypothetical computer simulation that is detailed enough that the entities living within it are conscious. Imagine a computer game that you play that doesn't contain mere digital characters, but fully conscious beings with feelings, aspirations, and memories. There are many interesting debates among philosophers on crazy, elusive topics like "prove to me that I'm not in a dream", or "prove that I'm not just a brain in a bottle somewhere that is being fed sensory inputs". Well, good luck. In his paper, the philosopher Nick Bostrom offers us a refreshing take on this topic and argues that at least one of these three propositions is true. One: almost all advanced civilizations go extinct before achieving technological maturity. Two: there is a strong convergence among technologically mature civilizations in that none of them are interested in creating ancestor simulations. And here's the bomb. Three: we are living in a simulation. At least one of these propositions is true, so if you say no to the first two, then the third is automatically true. You cannot categorically reject all three of these, because if two are false, then the third follows. Also, the theory doesn't tell us which of the three is true. Let's talk briefly about the first one. The argument is not that we go extinct before being technologically advanced enough to create such simulations; it is that all civilizations do. This is a very sad case, and even though there is research suggesting that war is receding (there is a clear trend that we have less warfare than we had hundreds of years ago; I've linked a video on this from Kurzgesagt), it is still possible that humanity eradicates itself before reaching technological maturity. The proposition is even stronger: maybe all civilizations do. Such a crazy proposition. Second point: all technologically mature civilizations categorically reject ancestor simulations. Maybe they have laws against it because it is too cruel and unethical to play with sentient beings. But this would mean that not one person in any civilization in any age ever creates such a simulation, not one criminal mastermind anywhere, ever. This also sounds pretty crazy. And if neither of these is true, then there is at least one civilization that can run a stupendously large number of ancestor simulations. The future nerd guy just goes home, grabs a beer, starts his computer in the basement and fires up not a simple computer game, but a complete universe. If so, then there are many more simulated universes than real ones, and then, with a really large probability, we're one of the simulated ones. Richard Dawkins says that if this is the case, we have a really disciplined nerd guy, because the laws of physics are not changing on a whim; we have no experience of everyone suddenly being able to fly. And as the closing words of the paper state with graceful eloquence, "in the dark forest of our current ignorance, it seems sensible to apportion one's credence roughly evenly between 1, 2 and 3." Please note that this discussion is a slightly simplified version of the manuscript, so it's definitely worth reading the paper if you're interested. Give it a go. As always, I've put a link in the description box. There is no conclusion here; no one really knows what the answer is. This is open to debate, and this is what makes it super interesting. And now, my personal opinion. It's just an opinion; it may not be true, it may not make sense, and it may not even matter. Just my opinion. I'd go with the second. The reason for that is that we already have artificial neural networks that outperform humans on some tasks. They are still not general enough, which means that they are good at doing one thing, like Deep Blue is good at chess, but not really useful for anything else. However, the algorithms are getting more and more general, and the number of neurons that can be simulated on a graphics card in your computer is doubling every few years. They will soon be able to simulate many more connections than we have, and I feel that it should be possible in the future to create an artificial superintelligent being so potent that it makes a universe simulation pale in comparison. What could such a thing be capable of? This is already getting too long; I just can't help myself. You know what? Let's discuss it in a future Two Minute Papers episode. I'd love to hear what you fellow scholars think about these things. If you feel like it, please leave your thoughts in the comments section below. I'd love to read them. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 9.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir, and it is time for some minds to be blown."}, {"start": 9.0, "end": 12.0, "text": " We're going to talk about a philosophy paper."}, {"start": 12.0, "end": 23.0, "text": " Before we start, a quick definition, an ancestor simulation is a hypothetical computer simulation that is detailed enough that the entities living within are conscious."}, {"start": 23.0, "end": 33.0, "text": " Imagine a computer game that you play that doesn't contain mere digital characters, but fully conscious beings with feelings, aspirations, and memories."}, {"start": 33.0, "end": 46.0, "text": " There are many interesting debates among philosophers on crazy, elusive topics like prove to me that I'm not in a dream, or I'm not just a brain in a bottle, somewhere that is being fed sensory inputs."}, {"start": 46.0, "end": 57.0, "text": " Well, good luck. In his paper, Nick Bostrom, philosopher, offers us a refreshing take on this topic and argues that at least one of these three propositions is true."}, {"start": 57.0, "end": 64.0, "text": " Almost all advanced civilizations go extinct before achieving technological maturity."}, {"start": 64.0, "end": 73.0, "text": " There's a strong convergence among technologically mature civilizations in that none of them are interested in creating ancestor simulations."}, {"start": 73.0, "end": 77.0, "text": " And here's the bomb. We are living in a simulation."}, {"start": 77.0, "end": 84.0, "text": " At least one of these propositions is true, so if you say no to the first two, then the third is automatically true."}, {"start": 84.0, "end": 90.0, "text": " You cannot categorically reject all three of these because if two are false, then the third follows."}, {"start": 90.0, "end": 94.0, "text": " Also, the theory doesn't tell which of the three is true."}, {"start": 94.0, "end": 103.0, "text": " Let's talk briefly about the first one. The argument is not that we go extinct before being technologically advanced enough to create such simulations."}, {"start": 103.0, "end": 115.0, "text": " It means that all civilizations do. This is a very sad case, and even though there is research on the fact that war is receding, there's a clear trend that we have less warfare than we've had hundreds of years ago."}, {"start": 115.0, "end": 118.0, "text": " I've linked a video on this here from Kurzgesagt."}, {"start": 118.0, "end": 124.0, "text": " It is still possible that humanity eradicates itself before reaching technological maturity."}, {"start": 124.0, "end": 131.0, "text": " We have an even more powerful argument that maybe all civilizations do. Such a crazy proposition."}, {"start": 131.0, "end": 138.0, "text": " Second point, all technologically mature civilizations categorically reject ancestor simulations."}, {"start": 138.0, "end": 153.0, "text": " Maybe they have laws against it because it's too cruel and unethical to play with sentient beings. But the fact that there is not one person in any civilization in any age who creates such a simulation, not one criminal mastermind anywhere ever."}, {"start": 153.0, "end": 164.0, "text": " This also sounds pretty crazy. 
And if none of these are true, then there is at least one civilization that can run a stupendously large number of ancestor simulations."}, {"start": 164.0, "end": 174.0, "text": " The future nerd guy just goes home, grabs a beer, starts his computer in the basement and fires up not a simple computer game, but a complete universe."}, {"start": 174.0, "end": 182.0, "text": " If so, then there are many more simulated universes than real ones, and then with a really large probability, we're one of the simulated ones."}, {"start": 182.0, "end": 193.0, "text": " Richard Dawkins says that if this is the case, we have a really disciplined nerd guy, because the laws of physics are not changing at a whim, we have no experience of everyone suddenly being able to fly."}, {"start": 193.0, "end": 206.0, "text": " And as the closing words of the paper states with graceful eloquence, in the dark forest of our current ignorance, it seems sensible to apportioned one's credence roughly evenly between 1, 2 and 3."}, {"start": 206.0, "end": 215.0, "text": " Please note that this discussion is a slightly simplified version of the manuscript, so it's definitely worth reading the paper if you're interested. Give it a go."}, {"start": 215.0, "end": 225.0, "text": " As always, I've put a link in the description box. There is no conclusion here, no one really knows what the answer is. This is open to debate, and this is what makes it super interesting."}, {"start": 225.0, "end": 235.0, "text": " And now, my personal opinion. It's just an opinion, it may not be true, it may not make sense, and may not even matter. Just my opinion."}, {"start": 235.0, "end": 238.0, "text": " I'd go with the second."}, {"start": 238.0, "end": 253.0, "text": " The reason for that is that we already have artificial neural networks that outperform humans on some tasks. They are still not general enough, which means that they are good at doing something like the deep blue is good at chess, but it's not really useful for anything else."}, {"start": 253.0, "end": 263.0, "text": " However, the algorithms are getting more and more general, and the number of neurons that are being simulated on a graphical card in your computer are doubling every few years."}, {"start": 263.0, "end": 277.0, "text": " They will soon be able to simulate so many more connections than we have, and I feel that creating an artificial super intelligent being should be possible in the future that is so potent that it makes a universe simulation pale in comparison."}, {"start": 277.0, "end": 280.0, "text": " What such a thing could be capable of?"}, {"start": 280.0, "end": 283.0, "text": " It's already getting too long, I just can't help myself."}, {"start": 283.0, "end": 294.0, "text": " You know what? Let's discuss it in a future 2 minute papers episode. I'd love to hear what you fellow scholars think about these things. If you feel like it, please leave your thoughts in the comments section below. I'd love to read it."}, {"start": 294.0, "end": 322.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
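The counting step at the heart of this episode's argument (many more simulated universes than real ones implies we are probably simulated) can be written in one line. The following is a minimal formalization of my own under a uniform indifference assumption; it is not the exact notation or parametrization used in Bostrom's paper:

\[
P(\text{we are simulated}) \;=\; \frac{N_{\mathrm{sim}}}{N_{\mathrm{sim}} + N_{\mathrm{real}}} \;\to\; 1 \quad \text{as } N_{\mathrm{sim}} \gg N_{\mathrm{real}},
\]

where \(N_{\mathrm{sim}}\) counts observers living in ancestor simulations and \(N_{\mathrm{real}}\) counts unsimulated observers. If even one technologically mature civilization runs a stupendously large number of ancestor simulations, \(N_{\mathrm{sim}}\) dwarfs \(N_{\mathrm{real}}\) and the fraction approaches one.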
Two Minute Papers
https://www.youtube.com/watch?v=Ih8EfvOzBOY
Google DeepMind's Deep Q-Learning & Superhuman Atari Gameplays | Two Minute Papers #27
Google DeepMind implemented an artificial intelligence program using deep reinforcement learning that plays Atari games and improves itself to a superhuman level. The technique is called deep Q-learning, it uses a combination of deep neural networks and reinforcement learning, and it is capable of playing many Atari games as good or better than humans. After presenting their initial results with the algorithm, Google almost immediately acquired the company for several hundred million dollars, hence the name Google DeepMind. I am sure that this is one of the biggest triumphs of deep learning, especially given the fact that now the first few successful experiments for 3D games are out there! ________________________ The Nature paper "Human-level control through deep reinforcement learning" is available here: http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html http://www.cs.swarthmore.edu/~meeden/cs63/s15/nature15b.pdf The code is available here: https://sites.google.com/a/deepmind.com/dqn/ Ilya Kuzovkin's fork with visualization: https://github.com/kuz/DeepMind-Atari-Deep-Q-Learner This configuration file will run Ilya Kuzovkin's version with less than 1GB of VRAM: http://cg.tuwien.ac.at/~zsolnai/wp/wp-content/uploads/2015/03/run_gpu Recommended for you: Artificial Neural Networks and Deep Learning - https://www.youtube.com/watch?v=rCWTOOgVXyE&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=13 Recurrent Neural Network Writes Sentences About Images - https://www.youtube.com/watch?v=e-WB4lfg30M&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=15 Deep Neural Network Learns Van Gogh's Art - https://www.youtube.com/watch?v=-R9bJGNHltQ&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=22 Terrain Traversal with Reinforcement Learning - https://www.youtube.com/watch?v=_yjHPu1aYCY&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=9 Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail was made by moparx - https://flic.kr/p/76foMV Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This one is going to be huge, certainly one of my favorites. This work is a combination of several techniques that we have talked about earlier. If you don't know some of these terms, it's perfectly okay. You can remedy this by clicking on the pop-ups or checking the description box, but you'll get the idea even watching only this episode. So first, we have a convolutional neural network. This helps with processing images and understanding what is depicted in an image. And a reinforcement learning algorithm. This helps with creating strategies, or, to be more exact, it decides what our next action should be, what buttons we push on the joystick. This technique mixes these two concepts together, we call it deep Q-learning, and it is able to learn to play games the same way a human would. It is not exposed to any additional information in the code. All it sees is the screen and the current score. When it starts learning to play an old game, Atari Breakout, at first the algorithm loses all of its lives without any signs of intelligent action. If we wait a bit, it becomes better at playing the game, roughly matching the skill level of an adept player. But here's the catch. If we wait longer, we get something absolutely spectacular. It finds out that the best way to win the game is digging a tunnel through the bricks and hitting them from behind. I really didn't know this, and this is an incredible moment. I can use my computer, this box next to me, to create new knowledge, to find out new things I haven't known before. This is completely absurd. Science fiction is not the future, it is already here. It also plays many other games. The percentages show the game scores relative to a human player: around 70% means it's great, and above 100% it's superhuman. As a follow-up work, scientists at DeepMind started experimenting with 3D games, and after a few days of training, it could learn to drive on ideal racing lines and pass others with ease. I've had my driving license for a while now, but I still don't always get the ideal racing lines right. Bravo. I have heard the complaint that this is not really intelligence because it doesn't know the concept of a ball or what it is exactly doing. Edsger Dijkstra once said that the question of whether machines can think is about as relevant as the question of whether submarines can swim. Beyond the fact that rigorously defining intelligence leans more into the domain of philosophy than science, I'd like to add that I am perfectly happy with effective algorithms. We use these techniques to accomplish different tasks, and they are really good problem solvers. In the Breakout game, you, as a person, learn the concept of a ball in order to use this knowledge as machinery to perform better. If this is not the case, then whoever knows a lot but can't use it to achieve anything useful is not an intelligent being but an encyclopedia. What about the future? There are two major unexplored directions. The algorithm doesn't have long-term memory, and even if it had, it wouldn't be able to generalize its knowledge to other similar tasks. Super exciting directions for future work. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.24, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejol and Ifehir."}, {"start": 5.24, "end": 9.16, "text": " This one is going to be huge, certainly one of my favorites."}, {"start": 9.16, "end": 13.24, "text": " This work is a combination of several techniques that we have talked about earlier."}, {"start": 13.24, "end": 16.6, "text": " If you don't know some of these terms, it's perfectly okay."}, {"start": 16.6, "end": 20.96, "text": " You can remedy this by clicking on the pop-ups or checking the description box, but you'll"}, {"start": 20.96, "end": 23.88, "text": " get the idea even watching only this episode."}, {"start": 23.88, "end": 27.080000000000002, "text": " So first, we have a convolutional neural network."}, {"start": 27.08, "end": 31.88, "text": " This helps processing images and understanding what is depicted on an image."}, {"start": 31.88, "end": 34.199999999999996, "text": " And a reinforcement learning algorithm."}, {"start": 34.199999999999996, "end": 39.16, "text": " This helps creating strategies or to be more exact, it decides what the next action we"}, {"start": 39.16, "end": 42.4, "text": " make should be, what buttons we push on a joystick."}, {"start": 42.4, "end": 47.56, "text": " So this technique mixes together these two concepts and we call it deep-queue learning,"}, {"start": 47.56, "end": 51.959999999999994, "text": " and it is able to learn to play games the same way as a human would."}, {"start": 51.959999999999994, "end": 55.32, "text": " It is not exposed to any additional information in the code."}, {"start": 55.32, "end": 58.44, "text": " All it sees is the screen and the current score."}, {"start": 58.44, "end": 64.12, "text": " When it starts learning to play an old game, Atari Breakout, at first, the algorithm loses"}, {"start": 64.12, "end": 75.68, "text": " all of its lives without any signs of intelligent action."}, {"start": 75.68, "end": 80.03999999999999, "text": " If we wait a bit, it becomes better at playing the game, roughly matching the skill level"}, {"start": 80.04, "end": 89.48, "text": " of an adapt player."}, {"start": 89.48, "end": 90.48, "text": " But here's the catch."}, {"start": 90.48, "end": 95.84, "text": " If we wait for longer, we get something absolutely spectacular."}, {"start": 95.84, "end": 100.32000000000001, "text": " It finds out that the best way to win the game is digging a tunnel through the bricks and"}, {"start": 100.32000000000001, "end": 101.76, "text": " hit them from behind."}, {"start": 101.76, "end": 105.64000000000001, "text": " I really didn't know this, and this is an incredible moment."}, {"start": 105.64, "end": 111.32, "text": " I can use my computer, this box next to me that is able to create new knowledge, find"}, {"start": 111.32, "end": 114.16, "text": " out new things I haven't known before."}, {"start": 114.16, "end": 116.08, "text": " This is completely absurd."}, {"start": 116.08, "end": 124.16, "text": " Science fiction is not the future, it is already here."}, {"start": 124.16, "end": 126.12, "text": " It also plays many other games."}, {"start": 126.12, "end": 130.88, "text": " The percentages show the relation of the gamescores compared to a human player."}, {"start": 130.88, "end": 137.0, "text": " Half 70% means it's great, and above 100% it's superhuman."}, {"start": 137.0, "end": 142.44, "text": " As a follow-up work, scientists at DeepMind started experimenting with 3D games, and after"}, {"start": 142.44, "end": 147.4, "text": " a 
few days of training, it could learn to drive on ideal racing lines and pass others"}, {"start": 147.4, "end": 148.4, "text": " with ease."}, {"start": 148.4, "end": 154.0, "text": " I've had my driving license for a while now, but I still don't always get the ideal racing"}, {"start": 154.0, "end": 155.0, "text": " lines right."}, {"start": 155.0, "end": 156.4, "text": " Bravo."}, {"start": 156.4, "end": 160.2, "text": " I have heard the complaint that this is not really intelligence because it doesn't know"}, {"start": 160.2, "end": 163.67999999999998, "text": " the concept of a ball or what it is exactly doing."}, {"start": 163.67999999999998, "end": 170.11999999999998, "text": " Edgar Dijkstra once said, the question of whether machines can think is about as relevant"}, {"start": 170.11999999999998, "end": 173.39999999999998, "text": " as the question of whether submarines can swim."}, {"start": 173.39999999999998, "end": 178.83999999999997, "text": " Beyond the fact that rigorously defining intelligence leans more into the domain of philosophy"}, {"start": 178.83999999999997, "end": 184.23999999999998, "text": " than science, I'd like to add that I am perfectly happy with effective algorithms."}, {"start": 184.23999999999998, "end": 189.32, "text": " We use these techniques to accomplish different tasks, and they are really good problem solvers."}, {"start": 189.32, "end": 194.44, "text": " In the breakout game, you, as a person, learn the concept of a ball in order to be able"}, {"start": 194.44, "end": 198.04, "text": " to use this knowledge as a machinery to perform better."}, {"start": 198.04, "end": 203.64, "text": " If this is not the case, whoever knows a lot, but can't use it to achieve anything useful,"}, {"start": 203.64, "end": 206.88, "text": " is not an intelligent being but an encyclopedia."}, {"start": 206.88, "end": 207.88, "text": " What about the future?"}, {"start": 207.88, "end": 210.92, "text": " There are two major unexplored directions."}, {"start": 210.92, "end": 215.51999999999998, "text": " The algorithm doesn't have long-term memory, and even if it had, it wouldn't be able"}, {"start": 215.51999999999998, "end": 219.0, "text": " to generalize its knowledge to other similar tasks."}, {"start": 219.0, "end": 221.4, "text": " Super exciting directions for future work."}, {"start": 221.4, "end": 250.96, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
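For the technically curious, here is a minimal sketch in Python (PyTorch) of the core of deep Q-learning as described in this episode: a convolutional network maps the raw screen to one value per joystick action, and the network is trained so that the value of the action taken matches the observed score change plus the discounted value of the next screen. This is a sketch under assumptions: the hyperparameters are arbitrary, and the Nature paper's frame preprocessing, replay memory, and full training loop are omitted.

import random
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    # Convolutional network: a stack of 4 grayscale 84x84 screen frames in,
    # one estimated Q-value per joystick action out (DeepMind-style layout).
    def __init__(self, n_actions, in_frames=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 7x7 assumes 84x84 inputs
            nn.Linear(512, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def epsilon_greedy(q_net, state, n_actions, epsilon):
    # Explore with probability epsilon, otherwise take the best-valued action.
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return q_net(state.unsqueeze(0)).argmax(dim=1).item()

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    # One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q_target(s', a').
    states, actions, rewards, next_states, dones = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        best_next = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * best_next
    loss = nn.functional.smooth_l1_loss(q_sa, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

A separate, periodically copied target network and a replay memory of past (state, action, reward, next state) transitions are what made training stable in the original work; here, dones marks game-over transitions so that no future value is bootstrapped past the end of a game.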
Two Minute Papers
https://www.youtube.com/watch?v=JtBTffVVa-c
Multiple-Scattering Microfacet BSDFs with the Smith Model
The paper "Multiple-Scattering Microfacet BSDFs with the Smith Model" is available here: https://eheitzresearch.wordpress.com/240-2/ Update: it is being added to Blender's Cycles! - https://developer.blender.org/D2002 Modeling multiple scattering in microfacet theory is considered an important open problem because a non-negligible portion of the energy leaving rough surfaces is due to paths that bounce multiple times. In this paper we derive the missing multiple-scattering components of the popular family of BSDFs based on the Smith microsurface model. Our derivations are based solely on the original assumptions of the Smith model. We validate our BSDFs using raytracing simulations of explicit random Beckmann surfaces. ______________________________ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
In this paper, we address an important open problem in material modeling. What happens when light scatters multiple times on rough material surfaces? In image synthesis, accurately describing light-matter interactions is important for materials to obtain a realistic look. However, the multiple light-matter interactions that we can see in this figure are absent from many surface appearance models. Here's an example of this problem. Rendering white glass should be simple, but we can see that the rougher the glass is, the darker its appearance becomes. Even though it should be simple, modeling the appearance of glass that is at the same time rough and white is almost impossible with current material models. Many material models, such as these rough dielectric plates, are called microfacet materials, because the underlying mathematical model assumes that their interfaces are made of microscopic imperfections that we call facets. Those facets are too small to be visible, but the way they are statistically oriented changes the way light interacts with the material, causing its rough appearance. Many rendering systems model only the contribution of the first bounce of light. The contribution of multiple bounces is unknown, and it is simply set to zero as if it were negligible. However, on very rough microsurfaces, the amount of light that scatters multiple times is significant and should not be neglected, to avoid energy loss and the noticeable darkening of the material appearance. In summary, modeling rough materials correctly with multiple scattering is a challenging problem. Our multiple scattering model presented in this paper opens up the possibility of modeling rough materials correctly in a practical manner. Beyond fixing the darkening problem, our goal is to derive a physically based model that is able to make accurate predictions compared to reference data. More specifically, we derive the multiple scattering component of a specific kind of microsurface, the Smith microsurface model. Because it is based on simple assumptions and makes accurate predictions for single scattering, it has received widespread industrial adoption and is considered the academic state of the art in computer graphics for modeling many materials. But can we extend this model to multiple scattering, and could it be practically incorporated into a classic BSDF plugin? These are the questions we are interested in. Our main insight is to transform this surface scattering problem into a volume scattering problem, which is easier to solve. To achieve that, we show that the Smith microsurface model can be derived as a special case of the microflake theory for volumes. We reformulate the Smith microsurface as a volume, with additional constraints to enforce the presence of a sharp interface. This volumetric analogy is very convenient because we know how to compute light scattering in volumes. It depends on two functions that we derive for this new kind of volume. The first one is the free-path distribution, which tells us how long a ray can travel in a medium before finding an intersection. On the microsurface, the equivalent question is: what is the height of the next intersection? Once an intersection is found, we need to know in which direction the light scatters again. This is given by a volumetric phase function, which depends on both the base material of the surface and the distribution of the microfacets. We derive the phase function for three different surface materials (diffuse, conductive, and dielectric) and common microfacet distributions such as Beckmann and GGX. Now that we know the free path and the phase function of this volumetric model, we know exactly how the light scatters in the medium. From the light propagated in this medium emerges a distribution that has all the expected properties of a classic surface BSDF. It is energy-conserving and reciprocal. Furthermore, it is exactly the classic single-scattering BSDF based on the Smith microsurface model, but with the addition of higher-order scattering. Now that we know that the model is mathematically correct, we are interested in its predictive power. How accurate is this new model? To answer this question, we need some reference data to compare the predictions of the model to. A common way to validate models is to compare their predictions to simulated data obtained by ray tracing triangulated surfaces. Contrary to real-world acquisition, the surface used in the simulation has known material and statistics, and the collected data are free of noise. There are thus no degrees of freedom left to match the parameters of the model to the simulation. This validation procedure is widely used in the field of optical physics, and therefore we chose it to validate our model. We generated random surfaces with known Beckmann statistics and ran the ray tracing simulation on them. By comparing the predictions of our multiple scattering model to the results of the ray tracing simulation, we found our BSDF model to accurately predict both the albedo and the angular distribution of the incident energy among the scattering orders, and this for a large variety of materials, roughnesses, anisotropies and inclinations. In our supplemental material, we provide an exhaustive set of such validation results. To make the model practical, we implement two procedures: evaluation and importance sampling. Since the BSDF is the expectation over all the paths that can be traced on the microsurface, importance sampling can be done straightforwardly by generating one path. We construct an unbiased stochastic estimate by tracing one path and evaluating the phase functions at each intersection with next event estimation, as in classical path tracing. With importance sampling and this stochastic evaluation, we have everything required to implement a classic BSDF plugin. Furthermore, our implementation is analytic and does not use per-BSDF precomputed data, which makes our BSDFs usable with textured albedos, roughness and anisotropy. In the supplemental materials, we provide a document describing a tutorial implementation for various materials and ready-to-use plugins for the Mitsuba physically based renderer. Now let's have a look at some results. This image shows a collection of bottles with microfacet materials. The energy loss is significant if multiple scattering is neglected, especially on dielectrics. Without multiple scattering, rough transmittance appears unnatural, which is hard to compensate for by tuning parameters. With our multiple scattering model, we simulate the expected appearance of rough glass and metals without tuning any parameters. Our model is robust and behaves as expected even with high roughness values. We can see that the model avoids the darkening effects and even produces interesting emergent effects like color saturation. This can be observed on this rough diffuse material. Since the absorption spectrum of the material is repeatedly multiplied after each bounce on the microsurface, the reflected color appears more saturated after multiple bounces. This emergent effect can also be seen on this gold conductor material. The unsaturated single-scattering gold conductor appears strangely dull. Thanks to our model, the introduction of multiple scattering restores the shiny appearance expected from gold. Note that since our model is parametric and does not depend on any precomputed data, we fully support textured input, which is important for creating visually rich images. As an example, this is a dielectric with textured roughness and anisotropy. Thanks for watching.
[{"start": 0.0, "end": 4.76, "text": " In this paper, we address an important open problem in material modeling."}, {"start": 4.76, "end": 9.24, "text": " What happens when light scatters multiple times on rough material surfaces?"}, {"start": 9.24, "end": 13.64, "text": " In image synthesis, accurately describing light matter interactions is important for"}, {"start": 13.64, "end": 16.12, "text": " materials to obtain a realistic look."}, {"start": 16.12, "end": 21.240000000000002, "text": " However, the multiple light matter interactions that we can see in this figure are absent"}, {"start": 21.240000000000002, "end": 23.8, "text": " from many surface appearance models."}, {"start": 23.8, "end": 25.6, "text": " Here's an example of this problem."}, {"start": 25.6, "end": 30.200000000000003, "text": " Rendering white glass should be simple, but we can see that the rougher the glass is,"}, {"start": 30.200000000000003, "end": 32.68, "text": " the darker its appearance becomes."}, {"start": 32.68, "end": 37.400000000000006, "text": " Even though it should be simple, modeling the appearance of glass that is at the same time"}, {"start": 37.400000000000006, "end": 42.24, "text": " rough and white is almost impossible with the current material models."}, {"start": 42.24, "end": 48.08, "text": " Many material models, such as those rough dielectric plates, are called micro-facet materials,"}, {"start": 48.08, "end": 53.040000000000006, "text": " because the underlying mathematical model assumes that their interfaces are made of microscopic"}, {"start": 53.04, "end": 55.8, "text": " imperfections that we call facets."}, {"start": 55.8, "end": 60.18, "text": " Those facets are too small to be visible, but the way they are statistically oriented"}, {"start": 60.18, "end": 64.64, "text": " changed the way light interacts with the material causing its rough appearance."}, {"start": 64.64, "end": 69.2, "text": " Many rendering systems model only the contribution of the first bounds of light."}, {"start": 69.2, "end": 74.2, "text": " The contribution of multiple bounces is unknown and it is simply set to zero as if it were"}, {"start": 74.2, "end": 75.2, "text": " neglectable."}, {"start": 75.2, "end": 80.48, "text": " However, on very rough microsurface, the amount of light that scatters multiple times is"}, {"start": 80.48, "end": 85.64, "text": " significant and should not be neglected to avoid energy loss and the noticeable darkening"}, {"start": 85.64, "end": 87.32000000000001, "text": " of the material appearance."}, {"start": 87.32000000000001, "end": 92.84, "text": " In summary, modeling rough materials correctly with multiple scattering is a challenging problem."}, {"start": 92.84, "end": 97.36, "text": " Our multiple scattering model presented in this paper opens up the possibility of modeling"}, {"start": 97.36, "end": 100.28, "text": " rough materials correctly in a practical manner."}, {"start": 100.28, "end": 104.84, "text": " Beyond fixing the darkening problem, our goal is to derive a physically based model that"}, {"start": 104.84, "end": 108.52000000000001, "text": " is able to make accurate predictions compared to reference data."}, {"start": 108.52, "end": 112.6, "text": " More specifically, we derive the multiple scattering component of a specific kind of"}, {"start": 112.6, "end": 118.11999999999999, "text": " microsurface, the Smith microsurface model, because it is based on simple assumptions and"}, {"start": 118.11999999999999, "end": 123.36, "text": " makes accurate 
predictions for single scattering, it has received widespread industrial adoption"}, {"start": 123.36, "end": 127.56, "text": " and is considered the academic state of the art in computer graphics for modeling many"}, {"start": 127.56, "end": 128.76, "text": " materials."}, {"start": 128.76, "end": 133.6, "text": " But can we extend this model for multiple scattering and could it be practically incorporated into"}, {"start": 133.6, "end": 135.96, "text": " a classic BSDF plugin?"}, {"start": 135.96, "end": 138.0, "text": " These are the questions we are interested in."}, {"start": 138.0, "end": 142.4, "text": " Our main insight is to transform this surface scattering problem into a volume scattering"}, {"start": 142.4, "end": 144.56, "text": " problem which is easier to solve."}, {"start": 144.56, "end": 149.32, "text": " To achieve that, we show that the Smith microsurface model can be derived as a special case of"}, {"start": 149.32, "end": 151.68, "text": " the microflake theory for volumes."}, {"start": 151.68, "end": 156.16, "text": " We deformulate the Smith microsurface as a volume, with additional constraints to enforce"}, {"start": 156.16, "end": 158.44, "text": " the presence of a sharp interface."}, {"start": 158.44, "end": 163.16, "text": " This volumetric analogy is very convenient because we know how to compute the light scattering"}, {"start": 163.16, "end": 164.4, "text": " in volumes."}, {"start": 164.4, "end": 168.20000000000002, "text": " It depends on two functions that we derive for this new kind of volume."}, {"start": 168.20000000000002, "end": 172.8, "text": " The first one is the free path distribution which tells us how long array can travel in"}, {"start": 172.8, "end": 175.68, "text": " a medium before finding an intersection."}, {"start": 175.68, "end": 181.0, "text": " On the microsurface, the equivalent question is what is the height of the next intersection?"}, {"start": 181.0, "end": 185.20000000000002, "text": " Once an intersection is found, we need to know in which direction the light is scattering"}, {"start": 185.20000000000002, "end": 186.20000000000002, "text": " again."}, {"start": 186.20000000000002, "end": 190.68, "text": " This is given by a volumetric phase function which depends on both the base material of"}, {"start": 190.68, "end": 194.0, "text": " the surface and the distribution of the microfacets."}, {"start": 194.0, "end": 199.16, "text": " We derive the phase function for three different surface materials, diffuse, conductive, and"}, {"start": 199.16, "end": 204.72, "text": " dielectric, and common microfacet distributions such as Beckman and GGX."}, {"start": 204.72, "end": 208.72, "text": " Now that we know the free path and the phase function of this volumetric model, we know"}, {"start": 208.72, "end": 211.88, "text": " exactly how the light scatters in the medium."}, {"start": 211.88, "end": 216.48, "text": " From the light propagated in this medium emerges a distribution that has all the expected"}, {"start": 216.48, "end": 219.52, "text": " properties of a classic surface BSDF."}, {"start": 219.52, "end": 222.0, "text": " It is energy-conserving and reciprocal."}, {"start": 222.0, "end": 227.2, "text": " Furthermore, it is exactly the classic single scattering BSDF based on the Smith-Microsoft"}, {"start": 227.2, "end": 230.44, "text": " surface model but with the addition of higher order scattering."}, {"start": 230.44, "end": 234.92, "text": " Now that we know that the model is mathematically correct, we are interested in its 
predictive"}, {"start": 234.92, "end": 235.92, "text": " power."}, {"start": 235.92, "end": 237.76, "text": " How accurate is this new model?"}, {"start": 237.76, "end": 241.88, "text": " To answer this question, we need some reference data to compare the predictions of the model"}, {"start": 241.88, "end": 242.88, "text": " to."}, {"start": 242.88, "end": 247.4, "text": " A common way to validate models is to compare their predictions to simulated data obtained"}, {"start": 247.4, "end": 250.12, "text": " by ray tracing triangulated surfaces."}, {"start": 250.12, "end": 253.88, "text": " On contrary to real-world acquisition, the surface used in the simulation has known"}, {"start": 253.88, "end": 258.0, "text": " material and statistics and the collected data are free of noise."}, {"start": 258.0, "end": 262.16, "text": " There is thus no degrees of freedom left to match the parameters of the model to the"}, {"start": 262.16, "end": 263.16, "text": " simulation."}, {"start": 263.16, "end": 267.68, "text": " This is why this validation procedure is widely used in the field of optical physics and"}, {"start": 267.68, "end": 270.16, "text": " therefore we chose this to validate our model."}, {"start": 270.16, "end": 275.04, "text": " We generated random surfaces with known Beckman statistics and did the ray tracing simulation"}, {"start": 275.04, "end": 276.04, "text": " on them."}, {"start": 276.04, "end": 279.92, "text": " By comparing the predictions of our multiple scattering model to the results of the ray"}, {"start": 279.92, "end": 285.32, "text": " tracing simulation, we found our BSDF model to accurately predict both the albedo and"}, {"start": 285.32, "end": 290.48, "text": " angular distribution of the accident energy among the scattering orders and this for a large"}, {"start": 290.48, "end": 295.36, "text": " variety of materials, roughnesses, ni-sortropy and inclinations."}, {"start": 295.36, "end": 300.24, "text": " In our supplemental material, we provide an exhaustive set of such validation results."}, {"start": 300.24, "end": 305.96000000000004, "text": " To make the model practical, we implement two procedures, evaluation and important sampling."}, {"start": 305.96, "end": 311.2, "text": " Since the BSDF is the expectation of all the paths that can be traced on the microsurface,"}, {"start": 311.2, "end": 315.12, "text": " important sampling can be done straightforwardly by generating one path."}, {"start": 315.12, "end": 320.08, "text": " We construct an unbiased, tohastic estimate by tracing one path and evaluating the face"}, {"start": 320.08, "end": 325.84, "text": " functions at each intersection with next event estimation as in classical path tracing."}, {"start": 325.84, "end": 330.67999999999995, "text": " With important sampling and this tohastic evaluation, we have everything required to implement"}, {"start": 330.67999999999995, "end": 332.96, "text": " a classic BSDF plugin."}, {"start": 332.96, "end": 338.96, "text": " Furthermore, our implementation is analytic and does not use per BSDF pre-computed data,"}, {"start": 338.96, "end": 344.56, "text": " which makes our BSDFs usable with textured orbitals, roughness and ni-sortropy."}, {"start": 344.56, "end": 349.0, "text": " In the supplemental materials, we provide a document describing a tutorial implementation"}, {"start": 349.0, "end": 354.2, "text": " for various materials and ready to use plugins for the Mitsuba Physically-Based Render."}, {"start": 354.2, "end": 356.08, "text": " Now let's have a look 
at some results."}, {"start": 356.08, "end": 359.88, "text": " This image shows a collection of bottles with micro-faceted materials."}, {"start": 359.88, "end": 365.6, "text": " The energy loss is significant if multiple scattering is neglected, especially on dielectrics."}, {"start": 365.6, "end": 370.48, "text": " Without multiple scattering, rough transmittance appears unnatural, which is hard to compensate"}, {"start": 370.48, "end": 372.56, "text": " for by tuning parameters."}, {"start": 372.56, "end": 376.84, "text": " With our multiple scattering model, we simulate the expected appearance of rough glass and"}, {"start": 376.84, "end": 379.68, "text": " metals without tuning any parameters."}, {"start": 379.68, "end": 384.48, "text": " Our model is robust and behaves as expected even with high roughness values."}, {"start": 384.48, "end": 389.24, "text": " We can see that the model avoids the darkening effects and even produces interesting emerging"}, {"start": 389.24, "end": 391.44, "text": " effects like color saturation."}, {"start": 391.44, "end": 395.04, "text": " This can be observed on this rough diffuse material."}, {"start": 395.04, "end": 399.44, "text": " Since the absorption spectrum of the material is repeatedly multiplied after each bounce"}, {"start": 399.44, "end": 405.44, "text": " on the microsurface, the reflected color appears more saturated after multiple bounces."}, {"start": 405.44, "end": 409.24, "text": " This emerging effect can also be seen on this gold conductor material."}, {"start": 409.24, "end": 413.68, "text": " The unsaturated single scattering gold conductor appears strangely dull."}, {"start": 413.68, "end": 418.08, "text": " Thanks to our model, the introduction of multiple scattering restores the shiny appearance"}, {"start": 418.08, "end": 419.8, "text": " expected from gold."}, {"start": 419.8, "end": 424.24, "text": " Note that since our model is parametric and does not depend on any pre-computed data,"}, {"start": 424.24, "end": 429.0, "text": " we fully support textured input, which is important for creating visually rich images."}, {"start": 429.0, "end": 435.76, "text": " As an example, this is a dielectric with textured roughness and anisotropy."}, {"start": 435.76, "end": 450.76, "text": " Thanks for watching."}]
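To give a feel for the stochastic evaluation described in this episode, here is a toy Python random walk on a rough mirror-like microsurface. It is only a qualitative sketch of why truncating at the first bounce loses energy: it samples Beckmann-distributed facet normals and mirror-reflects until the ray points upward, but it deliberately ignores the Smith height and shadowing statistics (the free-path distribution) that the actual paper derives, so it is not the paper's model.

import math
import random

def sample_beckmann_normal(alpha):
    # Sample a microfacet normal from an isotropic Beckmann distribution
    # (polar angle theta_m around the macroscopic surface normal +z).
    u1, u2 = random.random(), random.random()
    theta = math.atan(alpha * math.sqrt(-math.log(1.0 - u1)))
    phi = 2.0 * math.pi * u2
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, m):
    # Mirror-reflect direction d about facet normal m.
    k = 2.0 * dot(d, m)
    return tuple(di - k * mi for di, mi in zip(d, m))

def walk_energy(alpha, albedo=1.0, max_bounces=64, first_bounce_only=False):
    # Trace one toy path: bounce on random Beckmann facets until the
    # direction points away from the surface (z > 0). Truncating after one
    # bounce mimics a single-scattering BSDF and drops the remaining energy.
    d = (0.0, 0.0, -1.0)   # incident ray heading down into the surface
    energy = 1.0
    for _ in range(max_bounces):
        m = sample_beckmann_normal(alpha)
        if dot(d, m) >= 0.0:
            continue       # back-facing facet: resample (toy visibility)
        d = reflect(d, m)
        energy *= albedo   # absorption at each facet interaction
        if d[2] > 0.0:
            return energy  # escaped upward, leaves the microsurface
        if first_bounce_only:
            return 0.0     # single-scattering models drop this energy
    return 0.0             # treat very long paths as absorbed

n = 20000
multi = sum(walk_energy(0.8) for _ in range(n)) / n
single = sum(walk_energy(0.8, first_bounce_only=True) for _ in range(n)) / n
print(f"single scattering only: {single:.3f}  with multiple scattering: {multi:.3f}")

With roughness alpha = 0.8 and albedo 1, the single-bounce estimate leaves a noticeable fraction of the energy unaccounted for, which is exactly the darkening of very rough white materials mentioned at the start of the episode (here for a reflective, conductor-like toy surface rather than glass).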
Two Minute Papers
https://www.youtube.com/watch?v=_yjHPu1aYCY
Terrain Traversal with Reinforcement Learning | Two Minute Papers #26
Reinforcement learning is a technique that can learn how to play computer games, or any kind of activity that requires a sequence of actions. In this case, we would like a digital dog to run, and leap over and onto obstacles by choosing the optimal next action. It is quite difficult as there are a lot of body parts to control in harmony. And what is really amazing is that if it has learned everything properly, it will come up with exactly the same movements as we'd expect animals to do in real life! In this technique, dogs were used to demonstrate that reinforcement learning works well in this context, but it's worth noting that it also works with bipeds. _____________________________ The paper "Dynamic Terrain Traversal Skills Using Reinforcement Learning " is available here: http://www.cs.ubc.ca/~van/papers/2015-TOG-terrainRL/ Recommended for you: Digital Creatures Learn To Walk - https://www.youtube.com/watch?v=kQ2bqz3HPJE Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Thumbnail image by localpups (CC BY 2.0). It was slightly edited (flipped, color adjustments, content aware filling) - https://flic.kr/p/wXfFt1 Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is a technique that can learn how to play computer games, or any kind of activity that requires a sequence of actions. We are not interested in figuring out what we see on an image, because the answer is one thing. We are always interested in a sequence of actions. The input for reinforcement learning is a state that describes where we are and how the world looks around us, and the algorithm outputs the optimal next action to take. In this case, we would like a digital dog to run and leap over and onto obstacles by choosing the optimal next action. It is quite difficult, as there are a lot of body parts to control in harmony. The algorithm has to be able to decide how to control leg forces, spine curvature, and angles for the shoulder, elbow, hip and knees. And what is really amazing is that if it has learned everything properly, it will come up with exactly the same movements as we would expect animals to do in real life. So this is how reinforcement learning works: if you do well, you get a reward, and if you don't, you get some kind of punishment. These rewards and punishments are usually encoded in the score. If your score is increasing, you know you've done something right, and you try to self-reflect and analyze the last few actions to find out which of them were responsible for this positive change. The score would be, for instance, how far the dog could run on the map without falling, and at the same time it also makes sense to minimize the amount of effort to make it happen. So, reinforcement learning in a nutshell: it is very similar to how a real-world animal or even a human would learn. If you're not doing well, try something new, and if you're succeeding, remember what you did that led to your success and keep doing that. In this technique, dogs were used to demonstrate the concept, but it's worth noting that it also works with bipeds. Reinforcement learning is typically used in many control situations that are extremely difficult to solve otherwise, like controlling a quadrocopter properly. It's quite delightful to see such a cool piece of work, especially given that there are not so many uses of reinforcement learning in computer graphics yet. I wonder why that is. Is it that not so many graphical tasks require a sequence of actions? Or maybe we just need to shift our mindset and get used to the idea of formalizing problems in a different way, so we can use such powerful techniques to solve them. It is definitely worth the effort. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.18, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karojona Ifaher."}, {"start": 5.18, "end": 9.5, "text": " Reinforcement learning is a technique that can learn how to play computer games or any"}, {"start": 9.5, "end": 13.06, "text": " kind of activity that requires a sequence of actions."}, {"start": 13.06, "end": 17.62, "text": " We are not interested in figuring out what we see on an image because the answer is"}, {"start": 17.62, "end": 18.78, "text": " one thing."}, {"start": 18.78, "end": 22.26, "text": " We are always interested in a sequence of actions."}, {"start": 22.26, "end": 27.060000000000002, "text": " The input for reinforcement learning is a state that describes where we are and how the"}, {"start": 27.06, "end": 32.22, "text": " world looks around us and the algorithm outputs the optimal next action to take."}, {"start": 32.22, "end": 38.1, "text": " In this case, we would like a digital doctor run and leap over and onto obstacles by choosing"}, {"start": 38.1, "end": 39.78, "text": " the optimal next action."}, {"start": 39.78, "end": 44.019999999999996, "text": " It is quite difficult as there are a lot of body parts to control in harmony."}, {"start": 44.019999999999996, "end": 49.46, "text": " The algorithm has to be able to decide how to control leg forces, spine curvature,"}, {"start": 49.46, "end": 53.099999999999994, "text": " angles for the shoulder, elbow, hip and knees."}, {"start": 53.1, "end": 58.02, "text": " And what is really amazing is that if it has learned everything properly, it will come"}, {"start": 58.02, "end": 63.22, "text": " up with exactly the same movements as we would expect animals to do in real life."}, {"start": 63.22, "end": 65.42, "text": " So this is how reinforcement learning works."}, {"start": 65.42, "end": 70.34, "text": " If you do well, you get a reward and if you don't, you get some kind of punishment."}, {"start": 70.34, "end": 74.06, "text": " These rewards and punishments are usually encoded in the score."}, {"start": 74.06, "end": 78.82, "text": " If your score is increasing, you know you've done something right and you try to self-reflect"}, {"start": 78.82, "end": 83.69999999999999, "text": " and analyze the last few actions to find out which of them were responsible for this"}, {"start": 83.69999999999999, "end": 85.02, "text": " positive change."}, {"start": 85.02, "end": 89.82, "text": " The score would be, for instance, how far the dog could run on the map without falling"}, {"start": 89.82, "end": 95.02, "text": " and at the same time it also makes sense to minimize the amount of effort to make it happen."}, {"start": 95.02, "end": 100.17999999999999, "text": " So, reinforcement learning in a nutshell, it is very similar to how a real-world animal"}, {"start": 100.17999999999999, "end": 101.94, "text": " or even a human would learn."}, {"start": 101.94, "end": 105.97999999999999, "text": " If you're not doing well, try something new and if you're succeeding, remember what"}, {"start": 105.98, "end": 109.62, "text": " you did that led to your success and keep doing that."}, {"start": 109.62, "end": 114.42, "text": " In this technique, dogs were used to demonstrate the concept, but it's worth noting that it"}, {"start": 114.42, "end": 116.98, "text": " also works with bipeds."}, {"start": 116.98, "end": 121.58, "text": " Reinforcement learning is typically used in many control situations that are extremely difficult"}, {"start": 121.58, "end": 125.34, "text": " to solve 
otherwise, like controlling a quadrocopter properly."}, {"start": 125.34, "end": 130.34, "text": " It's quite delightful to see such a cool work, especially given that there are not so many"}, {"start": 130.34, "end": 133.7, "text": " uses of reinforcement learning in computer graphics yet."}, {"start": 133.7, "end": 135.5, "text": " I wonder why that is."}, {"start": 135.5, "end": 140.06, "text": " Is it that not so many graphical tasks require a sequence of actions?"}, {"start": 140.06, "end": 145.38, "text": " Or maybe we just need to shift our mindset and get used to the idea of formalizing problems"}, {"start": 145.38, "end": 149.62, "text": " in a different way so we can use such powerful techniques to solve them."}, {"start": 149.62, "end": 151.54, "text": " It is definitely worth the effort."}, {"start": 151.54, "end": 181.1, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
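To make the reward/punishment loop from the transcript above concrete, here is a minimal tabular Q-learning sketch in Python. The toy terrain, the two-action gait choice, and the reward values are all illustrative assumptions; the paper's controller learns continuous quantities like leg forces and joint angles, not a lookup table.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy terrain: 0 = flat ground, 1 = obstacle. The dog should 'leap' when an
# obstacle is next and 'run' otherwise -- a stand-in for the paper's much
# richer control problem (leg forces, spine curvature, joint angles).
terrain = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
actions = ["run", "leap"]

Q = defaultdict(float)             # Q[(position, action)] -> expected reward
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(2000):
    pos = 0
    while pos < len(terrain) - 1:
        # Epsilon-greedy: mostly exploit what worked, sometimes try something new.
        if random.random() < eps:
            act = random.choice(actions)
        else:
            act = max(actions, key=lambda a: Q[(pos, a)])
        # Reward for progress, punishment for the wrong gait on this terrain.
        correct = "leap" if terrain[pos + 1] == 1 else "run"
        reward = 1.0 if act == correct else -1.0
        nxt = pos + 1 if act == correct else pos  # wrong gait: stumble in place
        best_next = max(Q[(nxt, a)] for a in actions)
        Q[(pos, act)] += alpha * (reward + gamma * best_next - Q[(pos, act)])
        pos = nxt

print([max(actions, key=lambda a: Q[(p, a)]) for p in range(len(terrain) - 1)])
```

After training, the printed policy chooses "leap" exactly where the terrain ahead has an obstacle: the same "remember what led to success and keep doing that" loop the narration describes, just in its simplest possible form.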
Two Minute Papers
https://www.youtube.com/watch?v=Q-XKOPNIDAg
Cryptography, Perfect Secrecy and One Time Pads | Two Minute Papers #25
Cryptography helps us to communicate securely with someone in the presence of third parties. We use this when we do, for instance, online banking, or even for tasks as mundane as reading our Gmail. In this episode, we review some cipher techniques such as the Caesar cipher and ROT13, and as we find out how easy they are to break, we transition to the only known technique to yield perfect secrecy: one-time pads. Are they practical enough for everyday use? How do our findings relate to extraterrestrial communications? Both questions get answered in the video. Additional comment: "In modern certification cryptanalysis, if a cipher output can be distinguished from a PRF (pseudo random functions), it's enough to deem it broken." - Source: https://twitter.com/cryptoland/status/666721478675668993 ______________________ The paper "Cipher printing telegraph systems: For secret wire and radio telegraphic communications" is available here: http://math.boisestate.edu/~liljanab/Math509Spring10/vernam.pdf You can try encrypting your own messages on these websites: http://practicalcryptography.com/ciphers/caesar-cipher/ http://rot13.com/index.php http://www.braingle.com/brainteasers/codes/onetimepad.php Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background was created by Adam Foster (CC BY 2.0) - https://flic.kr/p/b99vsi Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Cryptography helps us communicate securely with someone in the presence of third parties. We use this when we do, for instance, online banking, or even tasks as mundane as reading our Gmail. One of the simplest ways of doing cryptography is using the Caesar cipher. We have a message, and we shift each letter by the same amount. Okay, wait. What does shifting mean? Shifting the letter A by 1 gives B, and shifting E by 1 gives F, and so on. The amount of shifting doesn't have to be exactly 1. It can be anything, as long as we shift all letters in the message by the same amount. If we run out of the alphabet, for instance by shifting the last letter Z by 1, we get A, the first letter, back. There's a special case of the Caesar cipher that we call ROT13, which has an interesting property. It means that we shift the entirety of the message by 13 letters. Let's encrypt a message with ROT13. We obtain some gibberish. Okay. Now let's pretend that this gibberish is again a message that we would like to encrypt. We get the original message back. Why is that? Since there are 26 letters in the basic Latin alphabet, we first shift by 13, then, doing it again, we shift by another 13 letters, which is a total of 26; therefore we went around the clock and ended up where we started. Mathematicians like to describe this concisely by saying that the inverse of the ROT13 function is itself. If you call it again, you end up with the same message. We know the statistical probabilities of different letters in the English language. For instance, we know that the letter E is relatively common and Z is pretty rare. If we shift our alphabet by a fixed amount, the probabilities will remain the same, only for different letters. Therefore this cipher is quite easy to break, even automatically, with a computer. This is anything but secure communication. The one-time pad encryption is one step beyond this, where we don't shift each letter by the same amount, but by different amounts. This list of numbers to use for shifting is called a pad, because it can be written on a pad of paper, and it has to be as long as the message itself. Why one time? Why paper? No worries, we're going to find out soon enough. If we use this technique, we'll enjoy a number of beneficial properties. For instance, take a look at this example with a one-time pad. We have two Vs in the encrypted output, but the first V corresponds to an H and the second V corresponds to a P. Therefore, if I see a V in the encrypted output, I have no idea which letter it was in the input. Computing statistical probabilities doesn't make any sense here, and we're powerless in breaking this. So even if you can intercept this message as a third party, you have no idea what it is about. It's very easy to prove mathematically that the probability of the message being "happy" is the very same probability as "hello", or A, B, C, D, E, or actually any gibberish. The one-time pad is the only known technique that has perfect secrecy, meaning that it is impossible to crack as long as it is used correctly. This is mathematically proven. It is not a surprise that it had seen plenty of use during the Second World War. So what does it mean to use it correctly? Several things. Pads need to be delivered separately from the message itself. For instance, you walk up to the recipient and give them the pad in person. The exchange of the pads is a huge problem if you are on the internet or at war. You must also make sure that the pad is not damaged: if you lose just one number, the remainder of your message is going to be completely garbled up. You're done. The key in the pad needs perfectly random numbers, no shortcuts. Getting perfectly random numbers is anything but a trivial task and is subject to lots of discussion. One-time pads have actually been broken because of this. There's an excellent episode on a well-known channel called Vsauce on what random really means. Make sure to check it out. The pad has to be destroyed upon use and should never be reused. So if you do all this, you're using it correctly. In the age of the internet, it is not really practical, because you cannot send a delivery guy with the secret pad next to every message you send on the internet. So in a nutshell: the one-time pad is great, but it is not practical for large-scale, real-time communication from afar. And as crazy as it sounds, if a civilization can find a method to do practical communication with perfect cryptography, their communication will look indistinguishable from noise. This is amazing. There are tons of ongoing debates on the fact that we're being exposed to tons of radio signals around the Earth. Why can we still not find any signs of extraterrestrial communication? Well, there you have the answer. And this is going to blow your mind. If practical, perfect cryptography is mathematically possible, the communication of any sufficiently advanced civilization is indistinguishable from noise. They may be transmitting their diabolical plans right through us this very moment, and all we would hear is white noise. Crazy, isn't it? Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejolenei Fehr."}, {"start": 4.6000000000000005, "end": 9.64, "text": " Cryptography helps us communicate securely with someone in the presence of third parties."}, {"start": 9.64, "end": 14.88, "text": " We use this one we do for instance online banking or even as mundane tasks as reading"}, {"start": 14.88, "end": 16.04, "text": " our Gmail."}, {"start": 16.04, "end": 20.240000000000002, "text": " One of the simplest ways of doing cryptography is using the Caesar Cypher."}, {"start": 20.240000000000002, "end": 24.04, "text": " We have a message and each letter we shift with the same amount."}, {"start": 24.04, "end": 25.6, "text": " Okay, wait."}, {"start": 25.6, "end": 27.400000000000002, "text": " What does shifting mean?"}, {"start": 27.4, "end": 34.68, "text": " Changing the letter A by 1 becomes B and shifting E by 1 becomes F and so on."}, {"start": 34.68, "end": 37.72, "text": " The amount of shifting doesn't have to be exactly 1."}, {"start": 37.72, "end": 42.72, "text": " It can be anything as long as we shift all letters in the message with the same amount."}, {"start": 42.72, "end": 48.480000000000004, "text": " If we would run out of the alphabet for instance by shifting the last letter Z by 1, we get"}, {"start": 48.480000000000004, "end": 50.84, "text": " A the first letter back."}, {"start": 50.84, "end": 55.84, "text": " There's a special case of Caesar Cypher's that we call Roth 13 that has an interesting"}, {"start": 55.84, "end": 56.84, "text": " property."}, {"start": 56.84, "end": 61.080000000000005, "text": " It means that we shift the entirety of the message by 13 letters."}, {"start": 61.080000000000005, "end": 63.68000000000001, "text": " Let's encrypt a message with Roth 13."}, {"start": 63.68000000000001, "end": 65.84, "text": " We obtain some gibberish."}, {"start": 65.84, "end": 66.84, "text": " Okay."}, {"start": 66.84, "end": 72.84, "text": " Now let's pretend that this gibberish is again a message that we would like to encrypt."}, {"start": 72.84, "end": 75.16, "text": " We get the original message back."}, {"start": 75.16, "end": 76.64, "text": " Why is that?"}, {"start": 76.64, "end": 82.60000000000001, "text": " Since there is 26 letters in the basic Latin alphabet, we first shift by 13, then doing"}, {"start": 82.6, "end": 88.28, "text": " it again, we shift by 13 letters, which is a total of 26, therefore we went around the"}, {"start": 88.28, "end": 91.11999999999999, "text": " clock and ended up where we started."}, {"start": 91.11999999999999, "end": 96.03999999999999, "text": " Metematicians like to describe this concisely by saying that the inverse of the Roth 13"}, {"start": 96.03999999999999, "end": 97.83999999999999, "text": " function is itself."}, {"start": 97.83999999999999, "end": 101.03999999999999, "text": " If you call it again, you end up with the same message."}, {"start": 101.03999999999999, "end": 105.36, "text": " We know the statistical probabilities of different letters in the English language."}, {"start": 105.36, "end": 110.72, "text": " For instance, we know that the letter E is relatively common and Z is pretty rare."}, {"start": 110.72, "end": 115.36, "text": " If we shift our alphabet by a fixed amount, the probabilities will remain the same only"}, {"start": 115.36, "end": 117.0, "text": " for different letters."}, {"start": 117.0, "end": 122.24, "text": " Therefore this cipher is quite easy to break, even 
automatically, with a computer."}, {"start": 122.24, "end": 124.84, "text": " This is anything but secure communication."}, {"start": 124.84, "end": 129.52, "text": " The one-time pad encryption is one step beyond this, where we don't shift each letter with"}, {"start": 129.52, "end": 133.12, "text": " the same amount, but with different amounts."}, {"start": 133.12, "end": 137.4, "text": " This list of numbers to use for shifting is called a pad, because it can be written on"}, {"start": 137.4, "end": 142.08, "text": " a pad of paper, and it has to be as long as the message itself."}, {"start": 142.08, "end": 143.28, "text": " Why one time?"}, {"start": 143.28, "end": 144.28, "text": " Why paper?"}, {"start": 144.28, "end": 147.16, "text": " No worries, we're going to find out soon enough."}, {"start": 147.16, "end": 151.76, "text": " If we use this technique, we'll enjoy a number of beneficial properties."}, {"start": 151.76, "end": 159.32, "text": " For instance, take a look at this example with a one-time pad."}, {"start": 159.32, "end": 164.56, "text": " We have two Vs in the encrypted output, but the first V corresponds to an H and the"}, {"start": 164.56, "end": 171.16, "text": " second V corresponds to a P. Therefore if I see a V in the encrypted output, I have no"}, {"start": 171.16, "end": 173.96, "text": " idea which letter it was in the input."}, {"start": 173.96, "end": 178.32, "text": " Computing statistical probabilities doesn't make any sense here, and we're powerless in"}, {"start": 178.32, "end": 179.64000000000001, "text": " breaking this."}, {"start": 179.64000000000001, "end": 184.24, "text": " So even if you can intercept this message as a third party, you have no idea what it is"}, {"start": 184.24, "end": 185.24, "text": " about."}, {"start": 185.24, "end": 189.88, "text": " It's very easy to prove mathematically that the probability of the message being happy"}, {"start": 189.88, "end": 197.24, "text": " is the very same probability as hello, or A, B, C, D, E, or actually any gibberish."}, {"start": 197.24, "end": 202.35999999999999, "text": " The one-time pad is the only known technique that has optimal perfect secrecy, meaning"}, {"start": 202.35999999999999, "end": 206.92, "text": " that it is impossible to crack as long as it is used correctly."}, {"start": 206.92, "end": 208.88, "text": " This is mathematically proven."}, {"start": 208.88, "end": 213.51999999999998, "text": " It is not a surprise that it had seen plenty of use during the Second World War."}, {"start": 213.51999999999998, "end": 216.6, "text": " So what does it mean to use it correctly?"}, {"start": 216.6, "end": 217.6, "text": " Several things."}, {"start": 217.6, "end": 222.07999999999998, "text": " Pads need to be delivered separately from the message itself."}, {"start": 222.07999999999998, "end": 226.4, "text": " For instance, you walk up to the recipient and give them the pad in person."}, {"start": 226.4, "end": 231.35999999999999, "text": " The exchange of the pads is a huge problem if you are on the internet or at war."}, {"start": 231.35999999999999, "end": 237.12, "text": " Now you must also be worried that the pad must not be damaged if you lose just one number."}, {"start": 237.12, "end": 240.79999999999998, "text": " The remainder of your message is going to be completely garbled up."}, {"start": 240.79999999999998, "end": 242.07999999999998, "text": " You're done."}, {"start": 242.07999999999998, "end": 246.95999999999998, "text": " The key in the pad needs perfectly random 
numbers, no shortcuts."}, {"start": 246.96, "end": 251.8, "text": " Getting perfectly random numbers is anything but a trivial task and is subject to lots of"}, {"start": 251.8, "end": 252.8, "text": " discussion."}, {"start": 252.8, "end": 255.64000000000001, "text": " One-time pads have actually been broken because of this."}, {"start": 255.64000000000001, "end": 260.40000000000003, "text": " There's an excellent episode on a well-known channel called V-SOS on what random really"}, {"start": 260.40000000000003, "end": 261.40000000000003, "text": " means."}, {"start": 261.40000000000003, "end": 263.0, "text": " Make sure to check it out."}, {"start": 263.0, "end": 267.6, "text": " The pad has to be destroyed upon use and should never be reused."}, {"start": 267.6, "end": 270.48, "text": " So if you do all this, you're using it correctly."}, {"start": 270.48, "end": 274.84000000000003, "text": " In the age of the internet, it is not really practical because you cannot send a delivery"}, {"start": 274.84, "end": 279.28, "text": " guy with the secret pad next to every message you send on the internet."}, {"start": 279.28, "end": 284.76, "text": " So in a nutshell, one-time pad is great, but it is not practical for large-scale real-time"}, {"start": 284.76, "end": 287.03999999999996, "text": " communication from afar."}, {"start": 287.03999999999996, "end": 292.47999999999996, "text": " And as crazy as it sounds, if a civilization can find a method to do practical communication"}, {"start": 292.47999999999996, "end": 298.28, "text": " with perfect cryptography, their communication will look indistinguishable from noise."}, {"start": 298.28, "end": 299.67999999999995, "text": " This is amazing."}, {"start": 299.67999999999995, "end": 304.34, "text": " There's tons of ongoing debates on the fact that we're being exposed to tons of radio"}, {"start": 304.34, "end": 306.03999999999996, "text": " signals around the earth."}, {"start": 306.03999999999996, "end": 310.79999999999995, "text": " Why can we still not find any signs of extraterrestrial communication?"}, {"start": 310.79999999999995, "end": 312.88, "text": " Well, there you have the answer."}, {"start": 312.88, "end": 314.91999999999996, "text": " And this is going to blow your mind."}, {"start": 314.91999999999996, "end": 320.96, "text": " If practical, perfect cryptography is mathematically possible, the communication of any sufficiently"}, {"start": 320.96, "end": 324.55999999999995, "text": " advanced civilization is indistinguishable from noise."}, {"start": 324.55999999999995, "end": 329.44, "text": " They may be transmitting their diabolical plans through us this very moment, and all"}, {"start": 329.44, "end": 334.68, "text": " we would hear is white noise."}, {"start": 334.68, "end": 336.2, "text": " Crazy isn't it?"}, {"start": 336.2, "end": 365.76, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
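Both ciphers from the transcript above fit in a few lines of Python. This sketch assumes an uppercase-only 26-letter alphabet, and uses os.urandom purely as a stand-in random source (with a slight modulo bias); as the video stresses, a genuine one-time pad needs truly random, never-reused key material.

```python
import os
import string

ALPHABET = string.ascii_uppercase  # the 26-letter basic Latin alphabet

def caesar(message, shift):
    """Shift every letter by the same amount, wrapping Z around to A."""
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] for c in message)

assert caesar(caesar("HELLO", 13), 13) == "HELLO"  # ROT13 is its own inverse

def one_time_pad(message, pad, decrypt=False):
    """Shift each letter by its own pad entry; the pad must be as long as
    the message, truly random, kept secret, and never reused."""
    sign = -1 if decrypt else 1
    return "".join(ALPHABET[(ALPHABET.index(c) + sign * k) % 26]
                   for c, k in zip(message, pad))

message = "HAPPY"
pad = [b % 26 for b in os.urandom(len(message))]  # stand-in RNG, see note above
ciphertext = one_time_pad(message, pad)
assert one_time_pad(ciphertext, pad, decrypt=True) == message
print(ciphertext)  # without the pad, any 5-letter plaintext is equally plausible
```

Note how the Caesar cipher applies one shift to the whole message, which is why letter statistics survive and the cipher falls to frequency analysis, while the one-time pad gives every position its own shift.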
Two Minute Papers
https://www.youtube.com/watch?v=He4t7Zekob0
How Does Deep Learning Work? | Two Minute Papers #24
Artificial neural networks provide us incredibly powerful tools in machine learning that are useful for a variety of tasks ranging from image classification to voice translation. So what is all the deep learning rage about? The media seems to be all over the newest neural network research of the DeepMind company that was recently acquired by Google. They used neural networks to create algorithms that are able to play Atari games, learn them like a human would, eventually achieving superhuman performance. Deep learning means that we use artificial neural networks with multiple layers, making them even more powerful for more difficult tasks. These machine learning techniques proved to be useful for many tasks beyond image recognition: they also excel at weather prediction, breast cancer cell mitosis detection, brain image segmentation and toxicity prediction, among many others. In this episode, an intuitive explanation is given to show the inner workings of deep learning algorithms. ________________________ Original blog post by Christopher Olah (source of many images): http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/ You can train your own deep neural networks on Andrej Karpathy's website: http://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html Images used in this video: Bunny by Tomi Tapio K (CC BY 2.0) - https://flic.kr/p/8EbcEk Train by B4bees (CC BY 2.0) - https://flic.kr/p/6RzHe4 Train with bunny by Alyssa L. Miller (CC BY 2.0) - https://flic.kr/p/5WPeRN The knot theory blackboard image was created by Clayton Shonkwiler (CC BY 2.0) https://flic.kr/p/64FYv The tangled knot image was created by Mikael Hvidtfeldt Christensen (CC BY 2.0) https://flic.kr/p/beYG9D The thumbnail image is a work of Duncan Hull (CC BY 2.0) - https://flic.kr/p/98qtJB Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A neural network is a very loose model of the human brain that we can program in a computer. Or, it's perhaps more appropriate to say that it is inspired by our knowledge of the inner workings of the human brain. Now, let's note that artificial neural networks have been studied for decades by experts, and the goal here is not to show all aspects, but one intuitive, graphical aspect that is really cool and easy to understand. Take a look at these curves on a plane. These curves are a collection of points, and these points you can imagine as images, sounds, or any kind of input data that we try to learn. The red and blue curves represent two different classes: the red can mean images of trains, and the blue, for instance, images of bunnies. Now, after we have trained the network from this limited data, which is basically a bunch of images of trains and bunnies, we will get new points on this plane, new images, and we would like to know whether this new image looks like a train or a bunny. This is what the algorithm has to find out. This we call a classification problem, to which a simple and bad solution would be simply cutting the plane in half with a line. Images belonging to the red region will be classified as the red class, and the blue region as the blue class. Now, as you can see, the red region cuts into the blue curve, which means that some trains will be misclassified as bunnies. It seems that if we look at the problem from this angle, we cannot really separate the two classes perfectly with a straight line. However, if we use a simple neural network, it will give us this result. Hey, but that's cheating. We were talking about straight lines, right? This is anything but a straight line. A key concept of neural networks is that they create an inner representation of the data and try to solve the problem in that space. What this intuitively means is that the algorithm will start transforming and warping these curves, where their shapes start changing, and it finds that if we do well with this warping step, we can actually draw a line to separate these two classes. After we undo this warping and transform the line back to the original problem, it will look like a curve. Really cool, isn't it? So these are actually lines, only in a different representation of the problem. Who said that the original representation is the best way to solve a problem? Take a look at this example with the entangled spirals. Can we separate these with a line? Not a chance. Or, more precisely: not a chance with this representation. But if one starts warping them correctly, there will be states where they can easily be separated. However, there are rules in this game. For instance, one cannot just rip out one of the spirals here and put it somewhere else. These transformations have to be homeomorphisms, which is a term that mathematicians like to use. It intuitively means that the warpings are not too crazy, meaning that we don't tear apart important structures, and as they remain intact, the warped solution is still meaningful with respect to the original problem. Now comes the deep learning part. Deep learning means that the neural network has multiple of these hidden layers, and can therefore create much more effective inner representations of the data. From an earlier episode, we've seen in an image recognition test that as we go further and further into the layers, first we see an edge detector, then, as a combination of edges, object parts emerge, and in the later layers, a combination of object parts creates object models. Let's take a look at this example. We have a bullseye here, if you will, and you can see that the network is trying to warp this to separate it with a line, but in vain. However, if we have a deep neural network, we have more degrees of freedom, more directions and possibilities to warp this data. And if you think intuitively, if this were a piece of paper, you could put your finger behind the red zone and push it in, making it possible to separate the two regions with a line. Let's take a look at the one-dimensional example to better see what's going on. This line is the 1D equivalent of the original problem, and you can see that the problem becomes quite trivial if we have the freedom to do this kind of transformation. We can easily encounter cases where the data is very severely tangled, and we don't know how good the best solution can be. There is a very heavily academic subfield of mathematics called knot theory, which is the study of tangling and untangling objects. It is subject to a lot of snarky comments for not being, well, too exciting or useful. What is really mind-blowing is that knot theory can actually help us study these kinds of problems, and it may ultimately end up being useful for recognizing traffic signs and designing self-driving cars. Now, it's time to get our hands dirty. Let's run a neural network on this dataset and see what happens. If we use a low number of neurons and one layer, you can see that it is trying ferociously, but we know that it is going to be a fruitless endeavor. Upon increasing the number of neurons, magic happens. And we know exactly why. Yeah! Thanks so much for watching and for your generous support. I feel really privileged to have supporters like you fellow scholars. Thank you, and I'll see you next time.
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Carlos Jean-Layfahier."}, {"start": 4.5600000000000005, "end": 9.16, "text": " A neural network is a very loose model of the human brain that we can program in a computer."}, {"start": 9.16, "end": 14.16, "text": " Or, it's perhaps more appropriate to say that it is inspired by our knowledge of the inner"}, {"start": 14.16, "end": 15.84, "text": " workings of a human brain."}, {"start": 15.84, "end": 20.96, "text": " Now, let's note that artificial neural networks have been studied for decades by experts."}, {"start": 20.96, "end": 25.88, "text": " And the goal here is not to show all aspects, but one intuitive, graphical aspect that is"}, {"start": 25.88, "end": 28.68, "text": " really cool and easy to understand."}, {"start": 28.68, "end": 30.6, "text": " Take a look at these curves on a plane."}, {"start": 30.6, "end": 36.2, "text": " These curves are a collection of points, and these points you can imagine as images, sounds,"}, {"start": 36.2, "end": 39.24, "text": " or any kind of input data that we try to learn."}, {"start": 39.24, "end": 42.28, "text": " The red and blue curves represent two different classes."}, {"start": 42.28, "end": 47.44, "text": " The red can mean images of trains, and the blue, for instance, images of bunnies."}, {"start": 47.44, "end": 52.2, "text": " Now, after we have trained the network from this limited data, which is basically a bunch"}, {"start": 52.2, "end": 57.28, "text": " of images of trains and bunnies, we will get new points on this plane, new images, and"}, {"start": 57.28, "end": 60.88, "text": " we would like to know whether this new image looks like a train or a bunny."}, {"start": 60.88, "end": 63.24, "text": " This is what the algorithm has to find out."}, {"start": 63.24, "end": 67.56, "text": " And this we call a classification problem, to which a simple and bad solution would be"}, {"start": 67.56, "end": 70.6, "text": " simply cutting the plane in half with a line."}, {"start": 70.6, "end": 74.52000000000001, "text": " Images belonging to the red regions will be classified as the red class, and the blue"}, {"start": 74.52000000000001, "end": 76.36, "text": " regions as the blue class."}, {"start": 76.36, "end": 81.2, "text": " Now, as you can see, the red region cuts into the blue curve, which means that some"}, {"start": 81.2, "end": 84.12, "text": " trains will be misclassified as bunnies."}, {"start": 84.12, "end": 88.48, "text": " It seems that if we look at the problem from this angle, we cannot really separate the"}, {"start": 88.48, "end": 90.88000000000001, "text": " two classes perfectly with a straight line."}, {"start": 90.88000000000001, "end": 95.2, "text": " However, if we use a simple neural network, it will give us this result."}, {"start": 95.2, "end": 96.88000000000001, "text": " Hey, but that's cheating."}, {"start": 96.88000000000001, "end": 99.24000000000001, "text": " We were talking about straight lines, right?"}, {"start": 99.24000000000001, "end": 101.28, "text": " This is anything but a straight line."}, {"start": 101.28, "end": 105.36000000000001, "text": " A key concept of neural networks is that they create an inner representation of the"}, {"start": 105.36000000000001, "end": 108.76, "text": " data model and try to solve the problem in that space."}, {"start": 108.76, "end": 113.48, "text": " What this intuitively means is that the algorithm will start transforming and warping"}, {"start": 113.48, 
"end": 118.76, "text": " these curves, where their shapes start changing, and it finds that if we do well with this warping"}, {"start": 118.76, "end": 123.84, "text": " step, we can actually draw a line to separate these two classes."}, {"start": 123.84, "end": 128.4, "text": " After we undo this warping and transform the line back to the original problem, it will"}, {"start": 128.4, "end": 129.88, "text": " look like a curve."}, {"start": 129.88, "end": 131.64000000000001, "text": " Really cool, isn't it?"}, {"start": 131.64000000000001, "end": 136.24, "text": " So these are actually lines only in a different representation of the problem, who said that"}, {"start": 136.24, "end": 140.16, "text": " the original representation is the best way to solve a problem."}, {"start": 140.16, "end": 143.0, "text": " Take a look at this example with the entangled spirals."}, {"start": 143.0, "end": 148.08, "text": " When we separate these with a line, not a chance, but the answer is not a chance with"}, {"start": 148.08, "end": 149.92, "text": " this representation."}, {"start": 149.92, "end": 153.84, "text": " But if one starts warping them correctly, there will be states where they can easily be"}, {"start": 153.84, "end": 154.84, "text": " separated."}, {"start": 154.84, "end": 157.24, "text": " However, there are rules in this game."}, {"start": 157.24, "end": 162.48, "text": " For instance, one cannot just rip out one of the spirals here and put it somewhere else."}, {"start": 162.48, "end": 167.56, "text": " These transformations have to be homeomorphisms, which is a term that mathematicians like"}, {"start": 167.56, "end": 168.72, "text": " to use."}, {"start": 168.72, "end": 172.84, "text": " It intuitively means that the warpings are not too crazy, meaning that we don't"}, {"start": 172.84, "end": 174.96, "text": " tear apart important structures."}, {"start": 174.96, "end": 179.64000000000001, "text": " And as they remain intact, the warped solution is still meaningful with respect to the original"}, {"start": 179.64000000000001, "end": 181.12, "text": " problem."}, {"start": 181.12, "end": 183.32, "text": " Now comes the deep learning part."}, {"start": 183.32, "end": 187.28, "text": " Deep learning means that the neural network has multiple of these hidden layers and can"}, {"start": 187.28, "end": 191.6, "text": " therefore create much more effective inner representations of the data."}, {"start": 191.6, "end": 195.76, "text": " From an earlier episode, we've seen in an image recognition test that as we go further"}, {"start": 195.76, "end": 200.48000000000002, "text": " and further into the layers, first we'll see an edge detector and there's a combination"}, {"start": 200.48, "end": 203.2, "text": " of edges, object parts emerge."}, {"start": 203.2, "end": 209.76, "text": " And in the later layers, a combination of object parts create object models."}, {"start": 209.76, "end": 211.28, "text": " Let's take a look at this example."}, {"start": 211.28, "end": 215.92, "text": " We have a bullseye here, if you will, and you can see that the network is trying to warp"}, {"start": 215.92, "end": 219.48, "text": " this to separate it with a line, but in vain."}, {"start": 219.48, "end": 224.79999999999998, "text": " However, if we have a deep neural network, we have more degrees of freedom, more directions"}, {"start": 224.79999999999998, "end": 227.44, "text": " and possibilities to warp this data."}, {"start": 227.44, "end": 231.57999999999998, "text": " And if you think intuitively, if 
this were a piece of paper, you could put your finger"}, {"start": 231.57999999999998, "end": 236.24, "text": " behind the red zone and push it in, making it possible to separate the two regions with"}, {"start": 236.24, "end": 239.2, "text": " a line."}, {"start": 239.2, "end": 243.16, "text": " Let's take a look at the one-dimensional example to better see what's going on."}, {"start": 243.16, "end": 247.28, "text": " This line is the one-de-equivalent of the original problem, and you can see that the problem"}, {"start": 247.28, "end": 255.12, "text": " becomes quite trivial if we have the freedom to do this kind of transformation."}, {"start": 255.12, "end": 259.76, "text": " We can easily encounter cases where the data is very severely tangled, and we don't know"}, {"start": 259.76, "end": 262.0, "text": " how good the best solution can be."}, {"start": 262.0, "end": 266.6, "text": " There is a very heavily academic subfield of mathematics called Noth Theory, which is"}, {"start": 266.6, "end": 269.72, "text": " the study of tangling and untangling objects."}, {"start": 269.72, "end": 275.88, "text": " It is subject to a lot of snarky comments for not being well, too exciting or useful."}, {"start": 275.88, "end": 280.96, "text": " What is really mind-blowing is that Noth Theory can actually help us study these kinds"}, {"start": 280.96, "end": 286.0, "text": " of problems, and it may ultimately end up being useful for recognizing traffic signs"}, {"start": 286.0, "end": 288.71999999999997, "text": " and designing self-driving cars."}, {"start": 288.71999999999997, "end": 293.24, "text": " Now, it's time to get our hands dirty."}, {"start": 293.24, "end": 296.59999999999997, "text": " Let's run a neural network on this dataset and see what happens."}, {"start": 296.59999999999997, "end": 301.96, "text": " If we use a low number of neurons and one layer, you can see that it is trying ferociously,"}, {"start": 301.96, "end": 304.91999999999996, "text": " but we know that it is going to be a fruitless endeavor."}, {"start": 304.91999999999996, "end": 310.91999999999996, "text": " Upon increasing the number of neurons, magic happens."}, {"start": 310.92, "end": 312.6, "text": " And we know exactly why."}, {"start": 312.6, "end": 314.28000000000003, "text": " Yeah!"}, {"start": 314.28000000000003, "end": 317.8, "text": " Thanks so much for watching and for your generous support."}, {"start": 317.8, "end": 321.96000000000004, "text": " I feel really privileged to have supporters like you fellow scholars."}, {"start": 321.96, "end": 350.47999999999996, "text": " Thank you and I'll see you next time."}]
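The bullseye experiment from the transcript above is easy to reproduce. Below is a small sketch, assuming scikit-learn is available, that compares a plain straight-line classifier against a network with one hidden layer on concentric circles; the extra layer gives the network the freedom to warp the plane, exactly as the narration describes.

```python
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# A "bullseye": one class sits inside the other, so no straight line separates them.
X, y = make_circles(n_samples=500, factor=0.4, noise=0.05, random_state=0)

line = LogisticRegression().fit(X, y)  # cutting the plane in half with a line
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000,
                    random_state=0).fit(X, y)  # one hidden layer can warp the plane

print(f"straight line accuracy: {line.score(X, y):.2f}")  # ~0.5, no better than guessing
print(f"hidden-layer accuracy:  {net.score(X, y):.2f}")   # ~1.0 after warping
```

Shrinking the hidden layer to one or two neurons reproduces the "trying ferociously, but in vain" behavior from the video; adding neurons back makes the magic happen.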
Two Minute Papers
https://www.youtube.com/watch?v=e-WB4lfg30M
Recurrent Neural Network Writes Sentences About Images | Two Minute Papers #23
This technique is a combination of two powerful machine learning algorithms: - convolutional neural networks are excellent at image classification, i.e., finding out what is seen on an input image, - recurrent neural networks are capable of processing a sequence of inputs and outputs, and can therefore create sentences describing what is seen on the image. Combining these two techniques makes it possible for a computer to describe in a sentence what is seen on an input image. _____________________ The paper "Deep Visual-Semantic Alignments for Generating Image Descriptions" is available here: http://cs.stanford.edu/people/karpathy/deepimagesent/ A gallery with more results with the same algorithm: http://cs.stanford.edu/people/karpathy/deepimagesent/generationdemo/ You can train your own convolutional neural network here: http://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html The source code for the project is now available here: https://github.com/karpathy/neuraltalk2 Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image background was made by Georgie Pauwels (CC BY 2.0) - https://flic.kr/p/qrRciQ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Neural networks can be used to learn a variety of things, for instance, to classify images, which means that we'd like to find out what breed the dog is that we see on the image. This work uses a combination of two techniques. The first is a neural network variant that is more adapted to the visual mechanisms of humans, and is therefore very suitable for processing and classifying images. This variant we call a convolutional neural network. Here's a great web application where you can interactively train your own network and see how it improves at recognizing different things. This is a dataset where the algorithm tries to guess which class these smudgy images are from. If trained for long enough, it can achieve a classification accuracy of around 80%. The current state of the art in research is about 90%, which is just 4% off of humans who have performed the same classification. This is already insanity. We could be done right here, but let's put this on steroids. As you remember from an earlier episode, sentences are not one thing; they are a sequence, a sequence of words. Therefore they can be created by recurrent neural networks. Now I hope you see where this is going. We have images as an input and sentences as an output. This means that we have an algorithm that is able to look at any image and summarize what is being seen on the image. Buckle up, because you're going to see some wicked results. It can not only recognize the construction worker, it knows that he's in a safety vest and is currently working on the road. It can also recognize that a man is in the act of throwing a ball. A black and white dog jumps over a bar. It is not at all trivial for an algorithm to know what over and under mean, because it is only looking at a 2D image that is a representation of the 3D world around us. And there are, of course, hilarious failure cases. Well, a baseball bat. Well, close enough. There is a very entertaining web demo with the algorithm and all kinds of goodies that are linked in the description box. Check them out. The bottom line is that what we thought was science fiction five years ago is now reality in machine learning research. And based on how fast this field is advancing, we know that we're still only scratching the surface. Thanks for watching and I'll see you next time. Oh, and before you go: you can now be a part of Two Minute Papers and support the series on Patreon. A video with more details is coming soon. Until then, just click on the link on the screen if you're interested. Thank you.
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Zsolnai-Fehir."}, {"start": 4.6000000000000005, "end": 10.040000000000001, "text": " Neural networks can be used to learn a variety of things, for instance, to classify images,"}, {"start": 10.040000000000001, "end": 14.68, "text": " which means that we'd like to find out what breed the dog is that we see on the image."}, {"start": 14.68, "end": 19.76, "text": " This work uses a combination of two techniques, a neural network variant that is more adapted"}, {"start": 19.76, "end": 24.6, "text": " to the visual mechanisms of humans and is therefore very suitable for processing and"}, {"start": 24.6, "end": 30.76, "text": " classifying images. This variant we call a convolutional neural network. Here's a great web"}, {"start": 30.76, "end": 35.52, "text": " application where you can interactively train your own network and see how it improves"}, {"start": 35.52, "end": 40.760000000000005, "text": " at recognizing different things. This is a dataset where the algorithm tries to guess"}, {"start": 40.760000000000005, "end": 45.28, "text": " which class these smudgy images are from. If trained for long enough, it can achieve"}, {"start": 45.28, "end": 51.040000000000006, "text": " a classification accuracy of around 80%. The current state of the art in research is about"}, {"start": 51.04, "end": 57.2, "text": " 90%, which is just 4% off of humans who have performed the same classification."}, {"start": 57.2, "end": 62.84, "text": " This is already insanity. We could be done right here, but let's put this on steroids."}, {"start": 62.84, "end": 68.4, "text": " As you remember from an earlier episode, sentences are not one thing, but they are a sequence,"}, {"start": 68.4, "end": 73.68, "text": " a sequence of words. Therefore they can be created by recurrent neural networks."}, {"start": 73.68, "end": 78.8, "text": " Now I hope you see where this is going. We have images as an input and sentences as an"}, {"start": 78.8, "end": 84.52, "text": " output. This means that we have an algorithm that is able to look at any image and summarize"}, {"start": 84.52, "end": 90.8, "text": " what is being seen on the image. Buckle up because you're going to see some wicked results."}, {"start": 90.8, "end": 95.28, "text": " It can not only recognize the construction worker, it knows that he's in a safety vest"}, {"start": 95.28, "end": 100.52, "text": " and is currently working on the road. It can also recognize that a man is in the act"}, {"start": 100.52, "end": 107.08, "text": " of throwing a ball. A black and white dog jumps over a bar. It is not at all trivial for"}, {"start": 107.08, "end": 113.08, "text": " an algorithm to know what over and under means, because it is only looking at a 2D image"}, {"start": 113.08, "end": 117.0, "text": " that is the representation of the 3D world around us."}, {"start": 117.0, "end": 125.0, "text": " And there are, of course, hilarious failure cases. Well, a baseball bat. Well, close enough."}, {"start": 125.0, "end": 129.52, "text": " There is a very entertaining web demo with the algorithm and all kinds of goodies that"}, {"start": 129.52, "end": 134.52, "text": " are linked in the description box. Check them out. The bottom line is that what we thought"}, {"start": 134.52, "end": 139.64000000000001, "text": " was science fiction five years ago is now reality in machine learning research. 
And based"}, {"start": 139.64000000000001, "end": 144.56, "text": " on how fast this field is advancing, we know that we're still only scratching the surface."}, {"start": 144.56, "end": 147.16000000000003, "text": " Thanks for watching and I'll see you next time."}, {"start": 147.16000000000003, "end": 151.92000000000002, "text": " Oh, and before you go, you can now be a part of two minute papers and support the series"}, {"start": 151.92000000000002, "end": 156.64000000000001, "text": " on Patreon. A video with more details is coming soon. Until then, just click on the link"}, {"start": 156.64, "end": 167.04, "text": " on the screen if you're interested. Thank you."}]
Two Minute Papers
https://www.youtube.com/watch?v=iuJwmM2-JWM
Be a Part of Two Minute Papers on Patreon!
You can now be a part of Two Minute Papers on Patreon! - https://www.patreon.com/TwoMinutePapers Two Minute Papers is a series where I explain the latest and greatest research in a way that is understandable and enjoyable to everyone. Research papers are for experts, but Two Minute Papers is for everyone. Creating each of these videos is a lot of work: I do almost everything on my own: researching these topics, audio recordings, audio engineering, and putting the videos together. And my wife, Felícia, designs beautiful thumbnails for each of them. If you decide to support the series on Patreon, I am tremendously grateful to you for keeping the series going! If you don't want to spend a dime, no worries, it's perfectly okay! Two Minute Papers will always remain free for everyone. I'm looking forward to greeting you in our growing club of fellow scholars! ____________________________ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
With the help of science, humans are capable of creating extraordinary things. Two Minute Papers is a series where I explain the latest and greatest research in a way that is understandable and enjoyable to everyone. We talk about really exciting topics like machine learning techniques to paint in the style of famous artists, light simulation programs to create photorealistic images on a computer, fluid and smoke simulations that are so high quality that they are used in the movie industry, animating the movement of digital creatures on a computer, building bridges with flying machines, and many more extremely exciting topics. Research papers are for experts, but Two Minute Papers is for everyone. Creating each of these videos is a lot of work. I do almost everything on my own: researching these topics, audio recordings, audio engineering, and putting the videos together. And my wife, Felícia, designs these beautiful thumbnails for each of them. And now you can become an active supporter of Two Minute Papers. If you help with only one dollar per month, you help more than a few thousand advertisement views on a video would. It's insanity, and it's tremendously helpful. And you also get really cool perks, like accessing upcoming episodes earlier or deciding the topic of the next Two Minute Papers video. Two Minute Papers is never going to be behind a paywall. It will always be free for everyone. I feel that it's just so honest: I create videos, and if you like them, you can say, hey, I like what you're doing, here's some help. That's really awesome. If you'd like to help, just click on the Patreon link at the end of this video or in the description box below. Or, if you're watching this on the Patreon website, click "Become a patron" and select an amount. And I am tremendously grateful for your support. Also, if you're already a supporter of the show and feel that you need this amount to make ends meet, no worries. You can just cancel the subscription at any time. And if you don't want to spend a dime, or you can't afford it, it's completely okay. I'm very happy to have you around. And please, stay with us and let's continue our journey of science together. Let's show the world how cool science and research really is. Thanks for watching, and I'm looking forward to greeting you in our growing club of fellow scholars. Hey.
[{"start": 0.0, "end": 5.6000000000000005, "text": " With the help of science, humans are capable of creating extraordinary things."}, {"start": 5.6000000000000005, "end": 10.08, "text": " Two-minute papers is a series where I explain the latest and greatest research in a way"}, {"start": 10.08, "end": 13.52, "text": " that is understandable and enjoyable to everyone."}, {"start": 13.52, "end": 18.28, "text": " We talk about really exciting topics like machine learning techniques to paint in the style"}, {"start": 18.28, "end": 25.68, "text": " of famous artists, light simulation programs to create photorealistic images on a computer,"}, {"start": 25.68, "end": 31.439999999999998, "text": " fluid and smoke simulations that are so high quality that they are used in the movie industry,"}, {"start": 31.439999999999998, "end": 37.68, "text": " animating the movement of digital creatures on a computer,"}, {"start": 37.68, "end": 43.239999999999995, "text": " building bridges with flying machines, and many more extremely exciting topics."}, {"start": 43.239999999999995, "end": 47.68, "text": " Research papers are for experts, but two-minute papers is for everyone."}, {"start": 47.68, "end": 50.4, "text": " Creating each of these videos is a lot of work."}, {"start": 50.4, "end": 52.8, "text": " I do almost everything on my own."}, {"start": 52.8, "end": 65.39999999999999, "text": " Creating these topics, audio recordings, audio engineering, and putting the videos together."}, {"start": 65.39999999999999, "end": 69.88, "text": " And my wife, Felicia, designs these beautiful thumbnails for each of them."}, {"start": 69.88, "end": 73.72, "text": " And now you can become an active supporter of two-minute papers."}, {"start": 73.72, "end": 78.47999999999999, "text": " If you help with only one dollar per month, you help more than a few thousand advertisement"}, {"start": 78.47999999999999, "end": 79.47999999999999, "text": " views on a video."}, {"start": 79.48, "end": 84.16, "text": " It's insanity and it's tremendously helpful."}, {"start": 84.16, "end": 89.60000000000001, "text": " And you also get really cool perks like accessing upcoming episodes earlier or deciding the"}, {"start": 89.60000000000001, "end": 92.16, "text": " topic of the next two-minute papers video."}, {"start": 92.16, "end": 95.12, "text": " Two-minute papers is never going to be behind the paywall."}, {"start": 95.12, "end": 97.16, "text": " It will always be free for everyone."}, {"start": 97.16, "end": 99.44, "text": " I feel that it's just so honest."}, {"start": 99.44, "end": 104.56, "text": " I create videos and if you like them, you can say, hey, I like what you're doing."}, {"start": 104.56, "end": 105.56, "text": " Here's some help."}, {"start": 105.56, "end": 107.08000000000001, "text": " That's really awesome."}, {"start": 107.08, "end": 111.4, "text": " If you'd like to help, just click on the Patreon link at the end of this video or in the description"}, {"start": 111.4, "end": 112.4, "text": " box below."}, {"start": 112.4, "end": 116.96, "text": " Or if you're watching this on the Patreon website, click become a patron and select an"}, {"start": 116.96, "end": 117.96, "text": " amount."}, {"start": 117.96, "end": 120.75999999999999, "text": " And I am tremendously grateful for your support."}, {"start": 120.75999999999999, "end": 125.32, "text": " Also, if you're already a supporter of the show and feel that you need this amount to make"}, {"start": 125.32, "end": 127.0, "text": " ends meet, no worries."}, 
{"start": 127.0, "end": 129.84, "text": " You can just cancel the subscription at any time."}, {"start": 129.84, "end": 134.12, "text": " And if you don't want to spend a dime or you can't afford it, it's completely okay."}, {"start": 134.12, "end": 135.96, "text": " I'm very happy to have you around."}, {"start": 135.96, "end": 140.32000000000002, "text": " And please, stay with us and let's continue our journey of science together."}, {"start": 140.32000000000002, "end": 143.76000000000002, "text": " Let's show the world how cool science and research really is."}, {"start": 143.76000000000002, "end": 148.16, "text": " Thanks for watching and I'm looking forward to greeting you in our growing club of fellow"}, {"start": 148.16, "end": 149.16, "text": " scholars."}, {"start": 149.16, "end": 167.06, "text": " Hey."}]
Two Minute Papers
https://www.youtube.com/watch?v=8xjTtE3JCDw
Automatic Lecture Notes From Videos | Two Minute Papers #22
Blackboard-style teaching videos are quite popular nowadays on YouTube, they are excellent materials to study a variety of topics ranging from history to mathematics. These videos augment textbooks quite well, but they have a drawback: it is not possible to search them easily. This piece of work creates an interactive textbook from a blackboard-style input video, where images and text are interleaved to offer an easily digestible lecture note for students. ____________________________ The paper "Visual Transcripts: Lecture Notes from Blackboard-Style Lecture Videos" is available here: http://web.mit.edu/hishin/www/paper.pdf http://graphics.csail.mit.edu/publications/2015 Khan Academy: https://www.khanacademy.org/ This image was used for the thumbnail by Mississippi Mike (public domain): https://flic.kr/p/votVC2 Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér, where we expand our knowledge in science and research. Blackboard-style lecture videos are really popular on YouTube nowadays. Khan Academy is an excellent example of that, where you get the feeling that someone is sitting next to you and teaching you, not addressing you formally from a podium. Without question, these kinds of videos can augment textbooks quite well. However, they are often not easily searchable. This piece of work takes them one step further. The input is a video and a transcript, and the output of the algorithm is an interactive lecture note, where you can not only see the most important points of the lecture, but also click on some of them to see the full derivations of the expressions. Let's outline the features one would like to see in a usable outlining product. It has to be able to find the milestones at the end of each derivation and present them to the user. If you have studied mathematics, you know how mathematical derivations go: following the train of thought of the teacher is not always trivial. It is also important to find meaningful groupings for a derivation. This involves finding similarities between drawings, identifying the individual steps, and doing a segmentation to get a series of images out of the video. And finally, the technique has to be good at interleaving drawings and formulae with written text in an appealing and digestible way. It is very easy to mess up this step, as the text has to describe the visuals. Even though I wish tools like this had existed when I was an undergraduate student, it is still important to study, study, and study, and expand one's knowledge. If textbooks like this start to appear, I'll be the first in line, and I'll not be reading them, I'll be devouring them. Also, think about how smart the next generation will be with awesome studying materials like these. Thanks for watching and I'll see you next time.
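To make the segmentation step above more concrete, here is a minimal sketch, not the method from the paper, that cuts a blackboard lecture into still images by measuring how much new "ink" appears per frame and splitting wherever drawing activity pauses. The frame format, thresholds, and function names are assumptions made up for this illustration.

    import numpy as np

    def segment_lecture(frames, ink_threshold=0.5, pause_length=30):
        """Split grayscale frames (2D uint8 numpy arrays) into segments
        wherever drawing activity pauses for pause_length frames."""
        # Per-frame drawing activity: pixels that got darker (new ink).
        activity = [np.clip(p.astype(int) - c.astype(int), 0, 255).mean()
                    for p, c in zip(frames, frames[1:])]
        segments, start, quiet = [], 0, 0
        for i, a in enumerate(activity):
            quiet = quiet + 1 if a < ink_threshold else 0
            if quiet == pause_length:      # a long pause closes a segment
                segments.append((start, i))
                start, quiet = i + 1, 0
        segments.append((start, len(frames) - 1))
        return segments                    # (first_frame, last_frame) pairs

Each returned pair could then be rendered as one still image of the lecture note; grouping related derivation steps and aligning them with the transcript, as the paper describes, would come on top of this.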
[{"start": 0.0, "end": 5.36, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir, where we expand"}, {"start": 5.36, "end": 7.6000000000000005, "text": " our knowledge in science and research."}, {"start": 7.6000000000000005, "end": 11.700000000000001, "text": " Blackboard style lecture videos are really popular on YouTube nowadays."}, {"start": 11.700000000000001, "end": 15.92, "text": " Khan Academy is an excellent example of that, where you get the feeling that someone is"}, {"start": 15.92, "end": 20.400000000000002, "text": " sitting next to you and teaching you, not like someone who is addressing you formally from"}, {"start": 20.400000000000002, "end": 21.400000000000002, "text": " the podium."}, {"start": 21.400000000000002, "end": 25.8, "text": " Without question, these kinds of videos can augment textbooks quite well."}, {"start": 25.8, "end": 29.080000000000002, "text": " However, they are often not easily searchable."}, {"start": 29.08, "end": 32.04, "text": " This piece of work tries to take this one step beyond."}, {"start": 32.04, "end": 36.68, "text": " The input is a video and a transcript, and the output of the algorithm is an interactive"}, {"start": 36.68, "end": 41.4, "text": " lecture note, where you can not only see the most important points during the lecture,"}, {"start": 41.4, "end": 45.56, "text": " but you can also click on some of them to see full derivations of the expressions."}, {"start": 45.56, "end": 50.76, "text": " Let's outline the features that one would like to see in a usable outlining product."}, {"start": 50.76, "end": 55.879999999999995, "text": " It has to be able to find milestones that are at the end of each derivation to present them"}, {"start": 55.879999999999995, "end": 56.879999999999995, "text": " to the user."}, {"start": 56.88, "end": 61.24, "text": " If you study it mathematics, you know how mathematical derivations go."}, {"start": 61.24, "end": 64.60000000000001, "text": " Following the train of thought of the teacher is not always trivial."}, {"start": 64.60000000000001, "end": 68.28, "text": " It's also important to find meaningful groupings for a derivation."}, {"start": 68.28, "end": 73.52000000000001, "text": " This involves finding similarities between drawings, trying to find out the individual steps"}, {"start": 73.52000000000001, "end": 77.32000000000001, "text": " and doing a segmentation to get a series of images out of it."}, {"start": 77.32000000000001, "end": 81.44, "text": " And finally, the technique has to be good at interliving drawings and formulae with"}, {"start": 81.44, "end": 85.08, "text": " written text in an appealing and digestible way."}, {"start": 85.08, "end": 90.36, "text": " It is very easy to mess up with this step as the text has to describe the visuals."}, {"start": 90.36, "end": 94.67999999999999, "text": " Even though I wish tools like this existed when I was an undergrad student, it is still"}, {"start": 94.67999999999999, "end": 98.92, "text": " important to just study, study and study and expand one's knowledge."}, {"start": 98.92, "end": 104.03999999999999, "text": " If textbooks like this start to appear, I'll be the first in line and I'll not be reading,"}, {"start": 104.03999999999999, "end": 105.8, "text": " I'll be devouring them."}, {"start": 105.8, "end": 110.8, "text": " Also, think about how smart the next generation will be with awesome studying materials like"}, {"start": 110.8, "end": 111.8, "text": " these."}, {"start": 111.8, "end": 114.96, "text": " 
Thanks for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=mkI6qfpEJmI
Real-Time Facial Expression Transfer | Two Minute Papers #21
In computer animation, animating human faces is an art itself, but transferring expressions from one human to someone else is an even more complex task. One has to take into consideration the geometry, the reflectance properties, pose, and the illumination of both faces, and make sure that mouth movements and wrinkles are transferred properly. The fact that the human eye is very keen on catching artificial changes makes the problem even more difficult. This paper describes a real-time solution to this animation problem. ______________________________________ The paper "Real-time Expression Transfer for Facial Reenactment" is available here: http://graphics.stanford.edu/~niessner/thies2015realtime.html Recommended for you: ALL Two Minute Papers episodes! :) - https://www.youtube.com/playlist?list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image was created by tommerton2010 (CC BY 2.0) - Changes have been made to it (eye color, flip, background). https://flic.kr/p/9d8ApH Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today we are going to talk about a great algorithm that takes the facial expression of one human and transfers it onto someone else. First, there is a calibration step where the algorithm tries to capture the geometry and the reflectance properties of both faces. The expression transfer comes after this, and it is fraught with difficulties. It has to be able to deal with changes in the geometry, the reflectance properties of the face, the illumination in the room, and finally changes in pose and expressions. All of this at the same time, and with a negligible time delay. The difficulty of the problem is further magnified by the fact that we humans know really well how a human face is meant to move, therefore even the slightest inaccuracies are very easily caught by our eyes. Add to this the fact that one has to transfer details like additional wrinkles to a foreign face correctly, and it's easy to see that this is an incredibly challenging problem. The resulting technique not only does the expression transfer quite well, but is also robust to lighting changes. However, it is not robust to occlusions, meaning that errors should be expected when something gets in the way. Problems also arise if the face is turned away from the camera, but the algorithm recovers from these erroneous states rapidly. What's even better, if you use this technique, you can also cut back on your plastic surgery and hair transplantation costs. How cool is that? This new technique promises tons of new possibilities. Beyond the obvious impersonation and reenactment fun for the motion picture industry, the authors propose the following in the paper: imagine another setting in which you could reenact a professionally captured video of somebody in business attire with a new real-time face capture of yourself sitting in casual clothing on your sofa. Hell yeah! Thanks for watching and I'll see you next time.
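To make the transfer step more concrete, here is a minimal sketch of expression transfer with a blendshape face model, one common way to parametrize facial expressions; the paper's actual pipeline additionally estimates geometry, reflectance, and illumination, all of which is omitted here. All names, shapes, and dimensions are assumptions made up for this illustration.

    import numpy as np

    def fit_expression(neutral, blendshapes, observed):
        """Least-squares expression weights w such that
        neutral + sum_k w[k] * blendshapes[k] matches the observed mesh.
        neutral, observed: (V, 3) vertex arrays; blendshapes: (K, V, 3)."""
        B = blendshapes.reshape(blendshapes.shape[0], -1).T   # (3V, K)
        d = (observed - neutral).reshape(-1)                  # (3V,)
        w, *_ = np.linalg.lstsq(B, d, rcond=None)
        return w

    def reenact(src_neutral, src_blendshapes, src_frame,
                tgt_neutral, tgt_blendshapes):
        """Estimate the source actor's expression weights and replay
        them on the target actor's face."""
        w = fit_expression(src_neutral, src_blendshapes, src_frame)
        return tgt_neutral + np.tensordot(w, tgt_blendshapes, axes=1)

The point this sketch shares with the paper is that expressions live in a low-dimensional weight space shared between the two identities, so transferring an expression amounts to transferring the weights.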
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Karoizsol Naifahir."}, {"start": 4.8, "end": 9.68, "text": " Today we are going to talk about a great algorithm that takes the facial expression of one human"}, {"start": 9.68, "end": 12.0, "text": " and transfers it onto someone else."}, {"start": 12.0, "end": 16.740000000000002, "text": " First, there is a calibration step where the algorithm tries to capture the geometry and"}, {"start": 16.740000000000002, "end": 19.8, "text": " the reflectance properties of both faces."}, {"start": 19.8, "end": 23.8, "text": " The expression transfer comes after this, which is fraught with difficulties."}, {"start": 23.8, "end": 27.12, "text": " It has to be able to deal with changes in the geometry."}, {"start": 27.12, "end": 32.46, "text": " The reflectance properties of the face, the illumination in the room, and finally changes"}, {"start": 32.46, "end": 34.78, "text": " in pose and expressions."}, {"start": 34.78, "end": 38.9, "text": " All of this at the same time and with an negligible time delay."}, {"start": 38.9, "end": 43.92, "text": " The difficulty of the problem is further magnified by the fact that we humans know really well"}, {"start": 43.92, "end": 49.32, "text": " how a human face is meant to move, therefore even the slightest inaccuracies are very easily"}, {"start": 49.32, "end": 50.92, "text": " caught by our eyes."}, {"start": 50.92, "end": 55.08, "text": " Add this to the fact that one has to move details like additional wrinkles to a foreign"}, {"start": 55.08, "end": 60.0, "text": " face correctly, and it's easy to see that this is an incredibly challenging problem."}, {"start": 60.0, "end": 64.52, "text": " And the resulting technique not only does the expression transfer quite well, but is also"}, {"start": 64.52, "end": 67.64, "text": " robust for lighting changes."}, {"start": 67.64, "end": 74.72, "text": " However, it is not robust for occlusions, meaning that errors should be expected when"}, {"start": 74.72, "end": 77.08, "text": " something gets in the way."}, {"start": 77.08, "end": 81.52, "text": " Problems also arise if the face is turned away from the camera, but the algorithm recovers"}, {"start": 81.52, "end": 85.32, "text": " from these erroneous states rapidly."}, {"start": 85.32, "end": 90.08, "text": " What's even better, if you use this technique you can also cut back on your plastic surgery"}, {"start": 90.08, "end": 92.36, "text": " and hair plantation costs."}, {"start": 92.36, "end": 94.03999999999999, "text": " How cool is that?"}, {"start": 94.03999999999999, "end": 97.44, "text": " This new technique promises tons of new possibilities."}, {"start": 97.44, "end": 102.12, "text": " Beyond the obvious impersonation and reenactment fund for the motion picture industry, the authors"}, {"start": 102.12, "end": 104.52, "text": " propose the following in the paper."}, {"start": 104.52, "end": 109.19999999999999, "text": " Imagine another setting in which you could reenact a professionally captured video of somebody"}, {"start": 109.2, "end": 115.64, "text": " in business attire with a new real-time face capture of yourself sitting in casual clothing"}, {"start": 115.64, "end": 117.16, "text": " on your sofa."}, {"start": 117.16, "end": 118.16, "text": " Hell yeah!"}, {"start": 118.16, "end": 147.72, "text": " Thanks for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=sSnDTPjfBYU
Gradients, Poisson's Equation and Light Transport | Two Minute Papers #20
Photorealistic rendering (also called global illumination) enables us to see how digital objects would look like in real life. It is an amazingly powerful tool in the hands of a professional artist, who can create breathtaking images or animations with. However, images created with these technique contain a substantial amount of noise until a large number of light rays are computed. Today, we're going to talk about how to use gradients and Poisson's equation to speed up this process substantially. ________________________ The paper "Gradient-Domain Path Tracing" is available here: https://mediatech.aalto.fi/publications/graphics/GPT/ The paper "Gradient-Domain Metropolis Light Transport" is available here: https://mediatech.aalto.fi/publications/graphics/GMLT/ I held a course on photorealistic rendering at the Technical University of Vienna. Here you can learn how the physics of light works and to write programs like this: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi Recommended for you: Metropolis Light Transport - https://www.youtube.com/watch?v=f0Uzit_-h3M&index=9&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e Manipulating Photorealistic Renderings - https://www.youtube.com/watch?v=L7MOeQw47BM&index=6&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e A talk on ray tracing - https://www.youtube.com/watch?v=qyDUvatu5M8 The Moon's elevation map is provided by NASA and is available here (license: CC BY 2.0) - https://flic.kr/p/aFqE3n Music: "Infinite Perspective" by Kevin MacLeod is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Source: http://incompetech.com/music/royalty-free/index.html?isrc=USUAN1500024 Artist: http://incompetech.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is going to be a more in-depth episode of the series. Photorealistic rendering means that we create a 3D model of a scene on a computer and we run a light simulation program that shows how it would look in reality. These programs simulate rays of light that connect the camera to the light sources in the scene, and compute the flow of energy between them. If you have missed our earlier episode on Metropolis Light Transport and you're interested, make sure to watch it first; I've put a link in the description box. This time, let's go one step beyond classical light transport algorithms and talk about a gradient-domain rendering technique and how we can use it to create photorealistic images quicker. First of all, what is a gradient? The gradient is a mathematical concept. Let's imagine an elevation map of a country where there are many hills and many flat regions. And imagine that you are an ambitious hill climber who is looking for a challenge, therefore you would always like to go in the direction of the highest elevation increase, the biggest rock that you can climb nearby. The gradient is a bunch of arrows that always point in the direction of the largest increase on the map. Here, with blue, you can see the elevation map with the mountains, and below it, with red, the gradient of this elevation map. This is where you should be going if you are looking for a challenge. It is essentially a guidebook for aspiring hill climbers. One more example, with a heat map: the bluish colors denote the colder, the reddish colors the warmer regions. If you are freezing, the gradients will show you where you should go to warm up. So, if we have the elevation map, it is really easy to create the gradients out of it. But what if we have it the other way around? This would mean that we only have the guidebook, the red arrows, and from that we would like to guess what the blue elevation map looks like. It's like a crossword puzzle, only way cooler. In mathematics, we call this procedure solving the Poisson equation. So let's try to solve it by hand. I look at the middle, where there are no arrows pointing towards this region, only ones that point out of it, meaning that there is an increase outwards; therefore, this has to be a huge hole. If I look at the corners, I don't see very long arrows, meaning that there is no real change in these parts; therefore, it must be a flat region. So we can solve this Poisson equation and recreate the map from the guidebook. To see what this is good for, let's jump right into the gradient-domain renderer. Imagine that we have this simple scene with a light source, an object that occludes the light source, and the camera looking down on this shadow edge. Let's rip out this region and create a close-up of it. Imagine that the lit regions are large hills on the elevation map, and the shadow edge is the ground level below them. Previous algorithms were looking to shoot as many rays as possible towards the brighter regions, but not this one. The gradient-domain algorithm is looking for gradients: abrupt changes in the illumination, if you will. You can see these white and red pairs next to each other; these are the places where the algorithm concentrates. If we compute the difference between them, we get the gradients of our elevation map. In these regions, the difference is zero, therefore we have infinitely small arrows, and as in the previous example, we solve the Poisson equation to get the blue map back from the red arrows. The small arrows mean that we have a completely flat region, so we can recognize that we have a white wall in the background by looking at just a few places; we don't need to explore every inch of it like previous algorithms do. And as you can see, at the shadow edge, the algorithm is quite interested in this change. In our gradients, there will be a large red arrow pointing from the white to the red dot, because we are going from the darkness to a lit region. After solving the Poisson equation, we recognize that there should be a huge jump here. So in the end, with this technique, we can often get a much better idea of the illumination in the scene than we did with previous methods that just try to explore every single inch of it. The result is improved output images with much less noise, even though the gradient-domain renderer computed far fewer rays than the previous randomized algorithm. Excellent piece of work. Bravo! Now that we understand what gradients and Poisson's equation are, let's play a quick game together and try to learn these mathematical concepts from the internet like an undergrad student would. And before you run away in terror: this is not supposed to be pleasant. I'll try to make a point after reading this. "In mathematics, the gradient is a generalization of the usual concept of derivative of a function in one dimension to a function in several dimensions. If f of x1 to xn is a differentiable scalar-valued function of standard Cartesian coordinates in Euclidean space, its gradient is the vector whose components are the n partial derivatives of f. It is thus a vector-valued function." Now let's proceed to Poisson's equation. "In mathematics, Poisson's equation is a partial differential equation of elliptic type with broad utility in electrostatics, mechanical engineering and theoretical physics. It is used, for instance, to describe the potential energy field caused by a given charge or mass density distribution." This piece of text is one of the reasons why I started Two Minute Papers. I try to pull back the curtains and show that difficult mathematical and scientific concepts often conceal very simple and intuitive ideas that anyone can understand. And I am delighted to have you by my side on this journey. This was anything but two minutes; I incorporated a bit more detail for you to have a deeper understanding of this incredible work. I hope you don't mind. Let me know if you liked it in the comments section below. Thanks for watching and I'll see you next time.
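In plain notation, the two quoted definitions come down to this: the gradient collects the partial derivatives of a function, and Poisson's equation relates an unknown function to a known source term through its second derivatives, which is exactly what lets us recover the elevation map h from its arrows g:

    \nabla f = \left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right),
    \qquad
    \nabla^2 h = \frac{\partial^2 h}{\partial x^2} + \frac{\partial^2 h}{\partial y^2} = \operatorname{div}\, g.

And here is a minimal, dependency-free sketch of the "guidebook back to elevation map" idea: take a 2D height map, keep only its finite-difference gradients, and reconstruct the map by Jacobi-iterating the Poisson equation. This only illustrates the mathematical principle; it is not the renderer from the paper, and all names and sizes are made up for the example.

    import numpy as np

    def reconstruct_from_gradients(gx, gy, iterations=5000):
        """Recover a height map (up to an additive constant) from its
        finite-difference gradients by Jacobi iterations on the Poisson
        equation laplace(h) = div(g), with Neumann boundaries."""
        div = np.diff(gx, axis=1, prepend=0.0) + np.diff(gy, axis=0, prepend=0.0)
        h = np.zeros_like(div)
        for _ in range(iterations):
            p = np.pad(h, 1, mode="edge")    # replicated edges = Neumann
            neighbors = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            h = (neighbors - div) / 4.0      # one Jacobi relaxation step
        return h - h.mean()

    # Tiny demo: a smooth hill, its gradients, and the reconstruction.
    y, x = np.mgrid[0:32, 0:32]
    height = np.exp(-((x - 16.0)**2 + (y - 16.0)**2) / 40.0)
    gx = np.diff(height, axis=1, append=height[:, -1:])   # forward differences,
    gy = np.diff(height, axis=0, append=height[-1:, :])   # zero at the far edge
    rec = reconstruct_from_gradients(gx, gy)
    print(np.abs(rec - (height - height.mean())).max())   # should be tiny

Note that the reconstruction is only defined up to a constant, which is why both maps are mean-centered before comparing; the gradient-domain renderers in the papers above solve the same kind of system, in practice a screened or weighted variant, at much larger scale.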
[{"start": 0.0, "end": 5.4, "text": " Dear Fellow Scholars, this is 2 Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 5.4, "end": 8.6, "text": " This is going to be a more in-depth episode of the series."}, {"start": 8.6, "end": 13.280000000000001, "text": " Photo-realistic rendering means that we create a 3D model of a scene on a computer and"}, {"start": 13.280000000000001, "end": 17.6, "text": " we run a light simulation program that shows how it would look like in reality."}, {"start": 17.6, "end": 21.68, "text": " These programs simulate rays of light that connect the camera to the light sources and"}, {"start": 21.68, "end": 24.84, "text": " the scene and compute the flow of energy between them."}, {"start": 24.84, "end": 29.16, "text": " If you have missed our earlier episode on Metropolis Light Transport and if you're interested, make"}, {"start": 29.16, "end": 32.76, "text": " sure to watch it first, I've put a link in the description box."}, {"start": 32.76, "end": 37.519999999999996, "text": " This time, let's go one step beyond classical light transport algorithms and talk about"}, {"start": 37.519999999999996, "end": 42.120000000000005, "text": " a gradient domain rendering technique and how we can use it to create photo-realistic images"}, {"start": 42.120000000000005, "end": 43.120000000000005, "text": " quicker."}, {"start": 43.120000000000005, "end": 46.08, "text": " First of all, what is a gradient?"}, {"start": 46.08, "end": 48.28, "text": " The gradient is a mathematical concept."}, {"start": 48.28, "end": 54.04, "text": " Let's imagine an elevation map of a country where there are many hills and many flat regions."}, {"start": 54.04, "end": 58.2, "text": " And imagine that you are an ambitious hill climber who is looking for a challenge, therefore"}, {"start": 58.2, "end": 63.2, "text": " you would always like to go in a direction that seems to be the highest elevation increase."}, {"start": 63.2, "end": 65.76, "text": " The biggest rock that you can climb nearby."}, {"start": 65.76, "end": 70.32000000000001, "text": " The gradient is a bunch of arrows that always point in the direction of the largest increase"}, {"start": 70.32000000000001, "end": 71.64, "text": " on the map."}, {"start": 71.64, "end": 76.72, "text": " Here with blue, you can see the elevation map with the mountains and below it with red,"}, {"start": 76.72, "end": 78.68, "text": " the gradient of this elevation map."}, {"start": 78.68, "end": 81.44, "text": " This is where you should be going if you are looking for a challenge."}, {"start": 81.44, "end": 86.0, "text": " It is essentially a guidebook for aspiring hill climbers."}, {"start": 86.0, "end": 87.68, "text": " One more example with a heat map."}, {"start": 87.68, "end": 92.2, "text": " The blue or colors denote colder, the reddish colors show the warmer regions."}, {"start": 92.2, "end": 96.36000000000001, "text": " If you are freezing, the gradients will show you where you should go to warm up."}, {"start": 96.36000000000001, "end": 100.60000000000001, "text": " So if you have the elevation map, it is really easy to create the gradients out of it."}, {"start": 100.60000000000001, "end": 103.0, "text": " But what if we have it the other way around?"}, {"start": 103.0, "end": 108.12, "text": " This would mean that we only have the guidebook, the red arrows, and from that we would like"}, {"start": 108.12, "end": 111.24000000000001, "text": " to guess what the blue elevation map looks like."}, {"start": 111.24000000000001, 
"end": 114.60000000000001, "text": " It's like a crossword puzzle, only way cooler."}, {"start": 114.6, "end": 118.83999999999999, "text": " In mathematics, we call this procedure solving the Poisson equation."}, {"start": 118.83999999999999, "end": 120.55999999999999, "text": " So let's try to solve it by hand."}, {"start": 120.55999999999999, "end": 124.83999999999999, "text": " I look at the middle where there are no arrows pointing in this direction, only once that"}, {"start": 124.83999999999999, "end": 126.64, "text": " point out of here."}, {"start": 126.64, "end": 131.95999999999998, "text": " Meaning that there is an increase outwards, therefore this has to be a huge hole."}, {"start": 131.95999999999998, "end": 136.24, "text": " If I look at the corners, I don't see very long arrows, meaning that there is no real"}, {"start": 136.24, "end": 140.0, "text": " change in these parts, therefore it must be a flat region."}, {"start": 140.0, "end": 144.96, "text": " So we can solve this Poisson equation and recreate the map from the guidebook."}, {"start": 144.96, "end": 149.44, "text": " To see what this is good for, let's jump right into the gradient domain render."}, {"start": 149.44, "end": 153.32, "text": " Imagine that we have this simple scene with a light source, an object that occludes the"}, {"start": 153.32, "end": 157.32, "text": " light source, and the camera looking down on this shadow edge."}, {"start": 157.32, "end": 160.8, "text": " Let's rip out this region and create a close-up of it."}, {"start": 160.8, "end": 165.0, "text": " Imagine that the light regions are large hills on the elevation map, and the shadow edge"}, {"start": 165.0, "end": 167.32, "text": " is the ground level below those."}, {"start": 167.32, "end": 172.44, "text": " These algorithms were looking to shoot as many rays as possible towards the brighter regions,"}, {"start": 172.44, "end": 173.51999999999998, "text": " but not this one."}, {"start": 173.51999999999998, "end": 178.35999999999999, "text": " The gradient domain algorithm is looking for gradients, abrupt changes in the illumination,"}, {"start": 178.35999999999999, "end": 179.35999999999999, "text": " if you will."}, {"start": 179.35999999999999, "end": 182.51999999999998, "text": " You can see these wide red pairs next to each other."}, {"start": 182.51999999999998, "end": 185.16, "text": " These are the places where the algorithm concentrates."}, {"start": 185.16, "end": 189.79999999999998, "text": " If we compute the difference between them, we get the gradients of our elevation map."}, {"start": 189.79999999999998, "end": 194.48, "text": " In these regions, the difference is zero, therefore we would have infinitely small arrows,"}, {"start": 194.48, "end": 198.92, "text": " and from the previous examples, we solve the Poisson equation to get the blue map back"}, {"start": 198.92, "end": 200.76, "text": " from the red arrows."}, {"start": 200.76, "end": 205.6, "text": " The small arrows mean that we have a completely flat region, so we can recognize that we have"}, {"start": 205.6, "end": 210.32, "text": " a wide wall in the background by just looking at a few places, we don't need to explore"}, {"start": 210.32, "end": 215.04, "text": " every inch of it, like previous algorithms do."}, {"start": 215.04, "end": 219.32, "text": " And as you can see at the shadow edge, the algorithm is quite interested in this change."}, {"start": 219.32, "end": 224.79999999999998, "text": " In our gradients, there will be a large red arrow pointing from 
the white to the red dot,"}, {"start": 224.79999999999998, "end": 227.84, "text": " because we are going from the darkness to a light region."}, {"start": 227.84, "end": 232.88, "text": " After solving the Poisson equation, we recognize that there should be a huge jump here."}, {"start": 232.88, "end": 237.48, "text": " So in the end, with this technique, we can often get a much better idea of the illumination"}, {"start": 237.48, "end": 242.35999999999999, "text": " in the scene than we did with previous methods that just try to explore every single inch"}, {"start": 242.35999999999999, "end": 243.35999999999999, "text": " of it."}, {"start": 243.35999999999999, "end": 248.2, "text": " The result is improved output images with much less noise, even though the gradient domain"}, {"start": 248.2, "end": 252.92, "text": " renderer computed much less raised than the previous random algorithm."}, {"start": 252.92, "end": 255.16, "text": " Excellent piece of work, bravo!"}, {"start": 255.16, "end": 260.44, "text": " Now that we understand what gradients and Poisson's equation is, let's play a quick game together"}, {"start": 260.44, "end": 265.12, "text": " and try to learn these mathematical concepts from the internet like an undergrad student"}, {"start": 265.12, "end": 266.12, "text": " would do."}, {"start": 266.12, "end": 269.8, "text": " And before you run away and terror, this is not supposed to be pleasant."}, {"start": 269.8, "end": 272.28, "text": " I'll try to make a point after reading this."}, {"start": 272.28, "end": 277.48, "text": " In mathematics, the gradient is a generalization of the usual concept of derivative of a function"}, {"start": 277.48, "end": 281.0, "text": " in one dimension to a function in several dimensions."}, {"start": 281.0, "end": 288.24, "text": " If f of x1 to xn is a differentiable scalar valued function of standard Cartesian coordinates"}, {"start": 288.24, "end": 293.16, "text": " in Euclidean space, its gradient is the vector whose components are the n partial derivatives"}, {"start": 293.16, "end": 294.16, "text": " of f."}, {"start": 294.16, "end": 297.28000000000003, "text": " It is thus a vector valued function."}, {"start": 297.28000000000003, "end": 299.52000000000004, "text": " Now let's proceed to Poisson's equation."}, {"start": 299.52000000000004, "end": 304.64000000000004, "text": " In mathematics, Poisson's equation is a partial differential equation of elliptic type"}, {"start": 304.64, "end": 309.96, "text": " with broad utility in electrostatics, mechanical engineering and theoretical physics."}, {"start": 309.96, "end": 314.71999999999997, "text": " It is used, for instance, to describe the potential energy field caused by a given charge or"}, {"start": 314.71999999999997, "end": 316.91999999999996, "text": " mass density distribution."}, {"start": 316.91999999999996, "end": 321.2, "text": " This piece of text is one of the reasons why I started two-minute papers."}, {"start": 321.2, "end": 326.36, "text": " I try to pull the curtains and show that difficult mathematical and scientific concepts"}, {"start": 326.36, "end": 331.28, "text": " often conceal very simple and intuitive ideas that anyone can understand."}, {"start": 331.28, "end": 334.59999999999997, "text": " And I am delighted to have you by my side on this journey."}, {"start": 334.6, "end": 336.52000000000004, "text": " This was anything but two minutes."}, {"start": 336.52000000000004, "end": 341.36, "text": " I incorporated a bit more details for you to have a deeper 
understanding of this incredible"}, {"start": 341.36, "end": 342.36, "text": " work."}, {"start": 342.36, "end": 343.36, "text": " I hope you don't mind."}, {"start": 343.36, "end": 345.64000000000004, "text": " Let me know if you liked it in the comments section below."}, {"start": 345.64, "end": 372.4, "text": " Thanks for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Jkkjy7dVdaY
Recurrent Neural Network Writes Music and Shakespeare Novels | Two Minute Papers #19
Artificial neural networks are powerful machine learning techniques that can learn to recognize images or paint in the style of Van Gogh. Recurrent neural networks offer a more general model that can learn input sequences and create output sequences. The resulting technique (Long Short-Term Memory in these examples) can write novels in the style of Tolstoy, Shakespeare, or write their own music. ________________________ Andrej Karpathy's original article is available here: http://karpathy.github.io/2015/05/21/rnn-effectiveness/ Source code: https://github.com/karpathy/char-rnn The paper "Long Short-Term Memory" by Sepp Hochreiter and Jürgen Schmidhuber is available here: http://www.bioinf.jku.at/publications/older/2604.pdf http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf Continuing "Let It Go" from Disney with a recurrent neural network: https://ericye16.com/music-rnn/ Recommended for you: Artificial Neural Networks and Deep Learning - https://www.youtube.com/watch?v=rCWTOOgVXyE Deep Neural Network Learns Van Gogh's Art - https://www.youtube.com/watch?v=-R9bJGNHltQ Creating Photographs Using Deep Learning - https://www.youtube.com/watch?v=HOLoPgTzV6g A great write-up on how LSTMs work: http://colah.github.io/posts/2015-08-Understanding-LSTMs/ More applications of Long Short-Term Memory: http://googleresearch.blogspot.co.uk/2015/09/google-voice-search-faster-and-more.html http://googleresearch.blogspot.co.at/2015/08/the-neural-networks-behind-google-voice.html Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image background was created by Brandon Giesbrecht (license: CC BY 2.0). Slight changes were made for better blending. - https://www.flickr.com/photos/naturegeak/5819184201/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Music: "Gymnopedie no1" by Satie. Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Artificial neural networks are very useful tools that are able to learn and recognize objects in images, or learn the style of Van Gogh and paint new pictures in his style. Today we're going to talk about recurrent neural networks. So what does the recurrent part mean? With an artificial neural network, we usually have a one-to-one relation between the input and the output. This means that one image comes in, and one classification result comes out: whether the image depicts a human face or a train. With recurrent neural networks, we can have a one-to-many relation between the input and the output. The input would still be an image, but the output would not be one word; it would be a sequence of words, a sentence that describes what we see in the image. For a many-to-one relation, a good example is sentiment analysis. This means that a sequence of inputs, for instance a sentence, is classified as either negative or positive. This is very useful for processing movie reviews, where we'd like to know whether the user liked or hated the movie without reading pages and pages of discussion. And finally, recurrent neural networks can also deal with many-to-many relations, translating an input sequence into an output sequence. An example of this is machine translation, which takes an input sentence and translates it to an output sentence in a different language. For another example of a many-to-many relation, let's see what the algorithm learned after reading Tolstoy's War and Peace by asking it to write exactly in that style. It should be noted that generating a new novel happens letter by letter, so the algorithm is not allowed to memorize words. Let's look at the results at different stages of the training process. The initial results are, well, gibberish. But the algorithm seems to recognize immediately that words are basically a big bunch of letters that are separated by spaces. If we wait a bit more, we see that it starts to get a very rudimentary understanding of structures. For instance, a quotation mark that has been opened must be closed, and a sentence can be closed by a period, which is followed by an uppercase letter. Later, it starts to learn shorter and more common words, such as fall, debt, the, for, me. If we wait for longer, we see that it already gets a grasp of longer words, and smaller parts of sentences actually start to make sense. Here's a piece of Shakespeare that was written by the algorithm after reading all of his works. You see names that make sense, and you really have to check the text thoroughly to conclude that it's indeed not the real deal. It can also try to write math papers. I had to look for quite a bit until I realized that something is fishy here. It is not unreasonable to think that it can very easily deceive a non-expert reader. Can you believe this? This is insanity. It is also capable of learning the source code of the Linux operating system and generating new code that looks quite sensible. It can also try to continue the song Let It Go from the famous Disney movie Frozen. So recurrent neural networks are amazing tools that open up completely new horizons for solving problems where either the inputs or the outputs are not one thing, but a sequence of things. And now, signing off with a piece of recurrent neural network wisdom: "Well, your wit is in the care of side and death." Bear this in mind wherever you go. Thanks for watching and I'll see you next time.
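To make the "letter by letter" generation concrete, here is a minimal sketch of the sampling loop of a character-level recurrent network, in the spirit of the char-rnn experiments discussed above. The weights below are random and untrained, so the output is gibberish, exactly like the earliest training stage shown in the video; the vocabulary, sizes, and names are assumptions made up for this illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = list("abcdefghijklmnopqrstuvwxyz .")
    V, H = len(vocab), 64                        # vocabulary and hidden sizes

    # Randomly initialized weights; training would tune these.
    Wxh = rng.normal(0.0, 0.01, (H, V))
    Whh = rng.normal(0.0, 0.01, (H, H))
    Why = rng.normal(0.0, 0.01, (V, H))
    bh, by = np.zeros(H), np.zeros(V)

    def sample(first_char, n):
        """Generate n characters, feeding each output back in as the
        next input; the hidden state h carries the network's memory."""
        h = np.zeros(H)
        idx = vocab.index(first_char)
        out = [first_char]
        for _ in range(n):
            x = np.zeros(V); x[idx] = 1.0            # one-hot input
            h = np.tanh(Wxh @ x + Whh @ h + bh)      # recurrent update
            logits = Why @ h + by
            p = np.exp(logits - logits.max()); p /= p.sum()
            idx = rng.choice(V, p=p)                 # sample next character
            out.append(vocab[idx])
        return "".join(out)

    print(sample("t", 80))

The one-to-many behavior described above is exactly this feedback loop: a single seed character unrolls into an arbitrarily long output sequence, and the quoted Shakespeare and War and Peace samples come from the same loop with trained weights (and LSTM units instead of this plain tanh cell).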
[{"start": 0.0, "end": 5.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 5.6000000000000005, "end": 9.96, "text": " Artificial neural networks are very useful tools that are able to learn and recognize objects"}, {"start": 9.96, "end": 14.88, "text": " on images or learn the style of Van Gogh and paint new pictures in his style."}, {"start": 14.88, "end": 18.240000000000002, "text": " Today we're going to talk about recurrent neural networks."}, {"start": 18.240000000000002, "end": 20.240000000000002, "text": " So what does the recurrent part mean?"}, {"start": 20.240000000000002, "end": 24.64, "text": " With an artificial neural network, we usually have a one-to-one relation between the input"}, {"start": 24.64, "end": 25.64, "text": " and the output."}, {"start": 25.64, "end": 30.080000000000002, "text": " This means that one image comes in and one classification result comes out whether"}, {"start": 30.080000000000002, "end": 32.76, "text": " the image depicts a human face or a train."}, {"start": 32.76, "end": 36.92, "text": " With recurrent neural networks, we can have a one-to-many relation between the input and"}, {"start": 36.92, "end": 37.92, "text": " the output."}, {"start": 37.92, "end": 42.480000000000004, "text": " The input would still be an image, but the output would not be a word, but a sequence"}, {"start": 42.480000000000004, "end": 46.88, "text": " of words, a sentence that describes what we see on the image."}, {"start": 46.88, "end": 51.16, "text": " For a many-to-one relation, a good example is sentiment analysis."}, {"start": 51.16, "end": 56.4, "text": " This means that a sequence of inputs, for instance, a sentence, is classified as either negative"}, {"start": 56.4, "end": 57.4, "text": " or positive."}, {"start": 57.4, "end": 61.67999999999999, "text": " This is very useful for processing movie reviews, where we'd like to know whether the"}, {"start": 61.67999999999999, "end": 66.52, "text": " user liked or hated the movie without reading pages and pages of discussion."}, {"start": 66.52, "end": 71.19999999999999, "text": " And finally, recurrent neural networks can also deal with many-to-many relations, translating"}, {"start": 71.19999999999999, "end": 74.28, "text": " an input sequence into an output sequence."}, {"start": 74.28, "end": 78.64, "text": " Examples of this can be machine translations that take an input sentence and translate it"}, {"start": 78.64, "end": 81.6, "text": " to an output sentence in a different language."}, {"start": 81.6, "end": 86.0, "text": " For another example of a many-to-many relation, let's see what the algorithm learned after"}, {"start": 86.0, "end": 91.4, "text": " reading Tolstoy's War and Peace novel by asking it to write exactly in that style."}, {"start": 91.4, "end": 95.96000000000001, "text": " It should be noted that generating a new novel happens letter by letter, so the algorithm"}, {"start": 95.96000000000001, "end": 98.36, "text": " is not allowed to memorize words."}, {"start": 98.36, "end": 101.76, "text": " Let's look at the results at different stages of the training process."}, {"start": 101.76, "end": 105.68, "text": " The initial results are, well, gibberish."}, {"start": 105.68, "end": 110.60000000000001, "text": " But the algorithm seems to recognize immediately that words are basically a big bunch of letters"}, {"start": 110.60000000000001, "end": 113.0, "text": " that are separated by spaces."}, {"start": 113.0, "end": 
117.16000000000001, "text": " If we wait a bit more, we see that it starts to get a very rudimentary understanding of"}, {"start": 117.16000000000001, "end": 118.48, "text": " structures."}, {"start": 118.48, "end": 122.44000000000001, "text": " For instance, a quotation mark that you have opened must be closed, and a sentence can"}, {"start": 122.44000000000001, "end": 126.92000000000002, "text": " be closed by a period, and it is followed by an uppercase letter."}, {"start": 126.92000000000002, "end": 134.96, "text": " Later, it starts to learn shorter and more common words, such as fall, debt, the, for,"}, {"start": 134.96, "end": 135.96, "text": " me."}, {"start": 135.96, "end": 140.84, "text": " If we wait for longer, we see that it already gets a grasp of longer words, and smaller"}, {"start": 140.84, "end": 145.8, "text": " parts of sentences actually start to make sense."}, {"start": 145.8, "end": 149.72, "text": " Here's a piece of Shakespeare that was written by the algorithm after reading all of his"}, {"start": 149.72, "end": 150.72, "text": " works."}, {"start": 150.72, "end": 155.0, "text": " You see names that make sense, and you really have to check the text thoroughly to conclude"}, {"start": 155.0, "end": 157.48000000000002, "text": " that it's indeed not the real deal."}, {"start": 157.48000000000002, "end": 159.92000000000002, "text": " It can also try to write math papers."}, {"start": 159.92000000000002, "end": 164.16, "text": " I had to look for quite a bit until I realized that something is fishy here."}, {"start": 164.16, "end": 168.72, "text": " It is not unreasonable to think that it can very easily deceive a non-expert reader."}, {"start": 168.72, "end": 169.88, "text": " Can you believe this?"}, {"start": 169.88, "end": 171.6, "text": " This is insanity."}, {"start": 171.6, "end": 175.8, "text": " It is also capable of learning the source code of the Linux operating system and generate"}, {"start": 175.8, "end": 180.6, "text": " new code that looks quite sensible."}, {"start": 180.6, "end": 201.92, "text": " This can also try to continue the song Let It Go from the famous Disney movie Frozen."}, {"start": 201.92, "end": 207.1, "text": " So recurrent neural networks are amazing tools that open up completely new horizons for"}, {"start": 207.1, "end": 212.2, "text": " solving problems where either the inputs or the outputs are not one thing, but a sequence"}, {"start": 212.2, "end": 213.2, "text": " of things."}, {"start": 213.2, "end": 217.35999999999999, "text": " And now, signing off with a piece of recurrent neural network wisdom."}, {"start": 217.35999999999999, "end": 223.28, "text": " Well, your wit is in the care of side and death."}, {"start": 223.28, "end": 225.4, "text": " Bear this in mind wherever you go."}, {"start": 225.4, "end": 252.12, "text": " Thanks for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=uj8b5mu0P7Y
Modeling Colliding and Merging Fluids | Two Minute Papers #18
In Two Minute Papers, we have talked about different fluid simulation techniques. This time, we are going to talk about surface tracking. Surface tracking is required to account for topological changes when different fluid interfaces collide. This work also takes into consideration the possibility of colliding fluids that are made of different materials. The resulting surface tracking algorithm is very robust, which means that it can deal with a large number of materials and topological changes at the same time. _________________________________ The paper "Multimaterial Mesh-Based Surface Tracking" is available here: http://www.cs.columbia.edu/cg/multitracker/ Recommended for you: Adaptive Fluid Simulations - https://www.youtube.com/watch?v=dH1s49-lrBk&index=1&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e Fluid Simulations with Blender and Wavelet Turbulence - https://www.youtube.com/watch?v=5xLSbj5SsSE&index=15&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e Social media graph image: Marc Smith - https://flic.kr/p/836Ttv Attribution 2.0 Generic (CC BY 2.0) - https://creativecommons.org/licenses/by/2.0/ Music: "Dixie Outlandish" by John Deley and the 41 Players Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background is taken from the mentioned paper. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Simulating the behavior of water and other fluids is something we have been talking about in the series. However, we are now interested in modeling the interactions between two fluid interfaces that are potentially made of different materials. During these collisions, deformations and topology changes happen that are very far from trivial to simulate properly. The interesting part about this technique is that it uses graph theory to model these interface changes. Graph theory is a mathematical field that studies relations between, well, different things. Graphs are defined by vertices and edges, where the vertices can represent people on your favorite social network, and any pair of these people who know each other can be connected by an edge. Graphs are mostly used to study and represent discrete structures. This means that you either know someone or you don't; there is nothing in between. For instance, the number of people that inhabit the Earth is an integer; it is also a discrete quantity. However, the surface of different fluid interfaces is a continuum. It is not really meant to be described by discrete mathematical tools such as graphs. And, well, that's exactly what happened here. Even though the surface of a fluid is a continuum, when dealing with topological changes, an important thing we'd like to know is the number of regions inside and around the fluid. The number of these regions can increase or decrease over time, depending on whether multiple materials split or merge. And surprisingly, graph theory has proved to be very useful in describing this kind of behavior. The resulting algorithm is extremely robust, meaning that it can successfully deal with a large number of different materials. These include merging and wobbling droplets, piling plastic bunnies, and swirling spheres of glue. Beautiful results! If you liked this episode, please don't forget to subscribe and become a member of our growing club of Fellow Scholars. Please come along and join us on our journey, and let's show the world how cool research really is. Thanks so much for watching and I'll see you next time!
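As a toy illustration of why graphs fit this bookkeeping, here is a dependency-free sketch that represents material regions as vertices and uses a union-find structure to keep the region count consistent as merge events arrive. This conveys only the flavor of the idea; the actual multimaterial surface tracker in the paper operates on mesh topology, and everything below is made up for the example.

    class Regions:
        """Union-find over region ids: merging two touching regions of
        the same material collapses them into one."""
        def __init__(self, n):
            self.parent = list(range(n))
        def find(self, a):
            while self.parent[a] != a:
                self.parent[a] = self.parent[self.parent[a]]  # path halving
                a = self.parent[a]
            return a
        def merge(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra != rb:
                self.parent[ra] = rb
        def count(self):
            return len({self.find(a) for a in range(len(self.parent))})

    # Two water droplets (0, 1), a glue blob (2), the surrounding air (3).
    regions = Regions(4)
    print(regions.count())   # 4 distinct regions
    regions.merge(0, 1)      # the droplets collide and coalesce
    print(regions.count())   # 3 regions remain

A split event would go the other way, allocating a fresh vertex; what the real algorithm adds on top is deciding, from the colliding meshes themselves, which merges and splits are geometrically and topologically valid.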
[{"start": 0.0, "end": 6.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Ifehir."}, {"start": 6.0, "end": 9.88, "text": " Simulating the behavior of water and other fluids is something we have been talking about"}, {"start": 9.88, "end": 10.88, "text": " in the series."}, {"start": 10.88, "end": 15.76, "text": " However, we are now interested in modeling the interactions between two fluid interfaces"}, {"start": 15.76, "end": 19.04, "text": " that are potentially made of different materials."}, {"start": 19.04, "end": 24.240000000000002, "text": " During these collisions, deformations and topology changes happen that are very far from trivial"}, {"start": 24.240000000000002, "end": 25.64, "text": " to simulate properly."}, {"start": 25.64, "end": 30.560000000000002, "text": " The interesting part about this technique is that it uses graph theory to model these interface"}, {"start": 30.560000000000002, "end": 31.560000000000002, "text": " changes."}, {"start": 31.560000000000002, "end": 38.44, "text": " Graph theory is a mathematical field that studies relations between, well, different things."}, {"start": 38.44, "end": 42.64, "text": " Graphs are defined by vertices and edges where the vertices can represent people on your"}, {"start": 42.64, "end": 47.16, "text": " favorite social network and any pair of these people who know each other can be connected"}, {"start": 47.16, "end": 48.68, "text": " by edges."}, {"start": 48.68, "end": 52.519999999999996, "text": " Graphs are mostly used to study and represent discrete structures."}, {"start": 52.52, "end": 56.6, "text": " This means that you either know someone or you don't, there is nothing in between."}, {"start": 56.6, "end": 60.32, "text": " For instance, the number of people that inhabit the Earth is an integer."}, {"start": 60.32, "end": 62.400000000000006, "text": " It is also a discrete quantity."}, {"start": 62.400000000000006, "end": 66.12, "text": " However, the surface of different fluid interfaces is a continuum."}, {"start": 66.12, "end": 71.48, "text": " It is not really meant to be described by discrete mathematical tools such as graphs."}, {"start": 71.48, "end": 74.80000000000001, "text": " And, well, that's exactly what happened here."}, {"start": 74.80000000000001, "end": 79.96000000000001, "text": " Even though the surface of a fluid is a continuum, when dealing with topological changes, an important"}, {"start": 79.96, "end": 84.36, "text": " thing we'd like to know is the number of regions inside and around the fluid."}, {"start": 84.36, "end": 88.63999999999999, "text": " The number of these regions can increase or decrease over time, depending on whether"}, {"start": 88.63999999999999, "end": 91.19999999999999, "text": " multiple materials split or merge."}, {"start": 91.19999999999999, "end": 95.55999999999999, "text": " And surprisingly, graph theory has proved to be very useful in describing this kind of"}, {"start": 95.55999999999999, "end": 97.83999999999999, "text": " behavior."}, {"start": 97.83999999999999, "end": 102.16, "text": " The resulting algorithm is extremely robust, meaning that it can successfully deal with"}, {"start": 102.16, "end": 108.83999999999999, "text": " a large number of different materials."}, {"start": 108.84, "end": 129.2, "text": " These include merging and wobbling droplets, piling plastic bunnies, and swirling spheres"}, {"start": 129.2, "end": 134.24, "text": " of glue."}, {"start": 134.24, "end": 136.24, "text": " Beautiful results!"}, {"start": 
136.24, "end": 139.92000000000002, "text": " If you liked this episode, please don't forget to subscribe and become a member of our"}, {"start": 139.92000000000002, "end": 141.8, "text": " growing club of fellow scholars."}, {"start": 141.8, "end": 146.20000000000002, "text": " Please come along and join us on our journey and let show the world how cool research"}, {"start": 146.20000000000002, "end": 147.20000000000002, "text": " really is."}, {"start": 147.2, "end": 175.64, "text": " Thanks so much for watching and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=2kOCTf8jIik
3D Printing a Glockenspiel | Two Minute Papers #17
Researchers at Harvard, Columbia University and MIT got interested in exploring the sounds that different metal objects emit when struck, opening up the possibility of computationally designing musical instruments such as a glockenspiel. The output of the algorithm is the blueprint of the instrument that can be 3D printed. The sound quality of these instruments is remarkably close to professionally manufactured instruments. _______________________________ The paper "Computational Design of Metallophone Contact Sounds" is available below. It also contains a comparison to a professionally manufactured instrument. http://people.seas.harvard.edu/~gaurav/papers/cdmcs_sa_2015/ Recommended for you: Hydrographic 3D printing - https://www.youtube.com/watch?v=kLnG073NYtw&index=12&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background is taken from the original paper. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A glockenspiel is a percussion instrument that consists of small pieces of metal that are tuned to emit a given musical note when they are struck. In order to achieve these sounds, this instrument is usually manufactured as a set of metal bars. Researchers at Harvard, Columbia University and MIT became interested in designing a computer algorithm to obtain different shapes that lead to the same sounds. And if that's possible, then one should be able to mill or 3D print these shapes and see whether the computational results are in line with reality. The algorithm takes as input a material, a target shape, a location where we'd like to strike it, and a frequency spectrum that describes the characteristics of the sound we are looking for. Furthermore, the algorithm also has to optimize what exactly the stand of the piece looks like, to make sure that no valuable frequencies are dampened. Here's an example that shows how impactful the design of this stand is, and how beautifully the sound sustains when it is well optimized. You'll see a set of input shapes specified by the user that are tuned to standard musical notes, and below them, the optimized shapes that are as similar as possible, but with the constraint of emitting the correct sound. The question is how the technique should change your target shape to match the sound that you specified. One can also specify what overtones the sound should have. An overtone means that besides the fundamental tone that we play, for instance on a guitar, higher-frequency sounds are also emitted, producing a richer and more harmonious sound. In this example, the metal piece will emit higher octaves of the same note. If you have a keen ear for music, you will hear and appreciate the difference in the sounds. In summary, with this technique, one can inexpensively create awesome, custom-made glockenspiels that have a sound quality comparable to professionally manufactured instruments. Staggering results! It seems that we are starting to appear in the news. It is really cool to see that there is a hunger for knowing more about science and research. If you like this episode, please help us reach more and more people and share the series with your friends, especially with people who have nothing to do with science. Thanks so much for watching and I'll see you next time.
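For a feel of the physics involved, here is a back-of-the-envelope sketch using classical Euler-Bernoulli beam theory for a free-free rectangular bar, which is roughly how plain glockenspiel bars behave. To be clear, this is not the paper's method; the paper optimizes arbitrary shapes with finite element analysis precisely because simple bars cannot place their overtones freely. The material constants below are textbook values for aluminum, and the dimensions are made up for the example.

    import math

    def bar_frequencies(length, width, thickness,
                        youngs_modulus=69e9, density=2700.0):
        """First three bending-mode frequencies (Hz) of a free-free
        rectangular bar, from Euler-Bernoulli beam theory."""
        area = width * thickness
        inertia = width * thickness**3 / 12.0      # second moment of area
        c = math.sqrt(youngs_modulus * inertia / (density * area))
        lambdas = [4.730, 7.853, 10.996]           # free-free mode constants
        return [(l / length)**2 * c / (2.0 * math.pi) for l in lambdas]

    f1, f2, f3 = bar_frequencies(0.20, 0.03, 0.008)  # a 20 cm aluminum bar
    print(round(f1), round(f2), round(f3))           # fundamental and overtones
    print(f2 / f1)                                   # about 2.76

Note the inharmonic ratio of about 2.76 between the first two modes: a plain bar's overtones are not octaves, which is exactly why tuning the overtones, as in the octave example above, requires optimizing the shape rather than just the length.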
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fahir."}, {"start": 4.8, "end": 12.44, "text": " A Glockenspiel is a percussion instrument that consists of small pieces of metal that are tuned to emit a given musical note when they are struck."}, {"start": 12.44, "end": 17.92, "text": " In order to achieve these sounds, this instrument is usually manufactured as a set of metal bars."}, {"start": 17.92, "end": 27.76, "text": " Researchers at Harvard, Columbia University and MIT became interested in designing a computer algorithm to obtain different shapes that lead to the same sounds."}, {"start": 27.76, "end": 36.0, "text": " And if it's possible, then one should be able to mill or 3D print these shapes and see whether the computation results are in line with reality."}, {"start": 38.160000000000004, "end": 43.6, "text": " The algorithm takes an input material, a target shape and a location where we'd like to strike it,"}, {"start": 43.6, "end": 48.400000000000006, "text": " and a frequency spectrum that describes the characteristics of the sound we are looking for."}, {"start": 48.400000000000006, "end": 57.28, "text": " Furthermore, the algorithm also has to optimize how exactly the stand of the piece looks like to make sure that no valuable frequencies are dampened."}, {"start": 57.28, "end": 65.12, "text": " Here's an example to show how impactful the design of this stand is and how beautiful sustain the sound is if it is well optimized."}, {"start": 69.28, "end": 76.08, "text": " You'll see a set of input shapes specified by the user that are tuned to standard musical notes and below them,"}, {"start": 76.08, "end": 82.24000000000001, "text": " the optimized shapes that are as similar as possible, but with the constraint of emitting the correct sound."}, {"start": 82.24, "end": 87.67999999999999, "text": " The question is how should the technique change your target shape to match the sound that you specified?"}, {"start": 87.68, "end": 113.12, "text": " One can also specify what overtones the sound should have."}, {"start": 113.12, "end": 123.60000000000001, "text": " An overtone means that besides the fundamental tone that we play, for instance on a guitar, higher frequency sounds are also emitted producing a richer and more harmonious sound."}, {"start": 123.60000000000001, "end": 127.92, "text": " In this example, the metal piece will emit higher octaves of the same note."}, {"start": 127.92, "end": 143.68, "text": " If you have a keen ear for music, you will hear and appreciate the difference in the sounds."}, {"start": 143.68, "end": 156.32, "text": " In summary, with this technique, one can inexpensively create awesome, custom-made lock-and-spills that have a comparable sound quality to professionally manufactured instruments, staggering results."}, {"start": 156.32, "end": 163.35999999999999, "text": " It seems that we are starting to appear in the news. It is really cool to see that there is a hunger for knowing more about science and research."}, {"start": 163.35999999999999, "end": 172.0, "text": " If you like this episode, please help us reaching more and more people and share the series with your friends, especially with people who have nothing to do with science."}, {"start": 172.0, "end": 186.24, "text": " Thanks so much for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=f0Uzit_-h3M
Metropolis Light Transport | Two Minute Papers #16
Metropolis light transport is an advanced photorealistic rendering technique that is remarkably effective at finding the brighter regions of a scene and building many light paths that target these regions. The resulting algorithm is more efficient than traditional random path building algorithms, such as path tracing. _______________________ The paper "Metropolis Light Transport" by Veach and Guibas is available here: https://graphics.stanford.edu/papers/metro/ I held a course on photorealistic rendering at the Technical University of Vienna. Here you can learn how the physics of light works and to write programs like this: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi Recommended for you: Manipulating Photorealistic Renderings - https://www.youtube.com/watch?v=L7MOeQw47BM&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=7 Ray Tracing, Subsurface Scattering @ Function 2015 - https://www.youtube.com/watch?v=qyDUvatu5M8 A more elaborate discussion on Metropolis Light Transport - https://www.youtube.com/watch?v=Zl36H9pwsHE&list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi&index=33 Eric Veach's Sci-tech award speech: https://www.youtube.com/watch?v=e3ss_Ozb9Yg Scene credits: Italian Still Life - Bhavin Solanki - http://www.blendswap.com/blends/view/67815 Spheres - Vlad Miller (SATtva) - http://www.luxrender.net/wiki/Show-off_pack Music: "Bet On It" by Silent Partner. A higher resolution version of the sphere scene comparison is available here: https://cg.tuwien.ac.at/~zsolnai/gfx/adaptive_metropolis/ The image from fxguide is available here: http://www.fxguide.com/featured/the-state-of-rendering-part-2/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we would like to see how digitally modeled objects would look in real life, we create a 3D model of the desired scene, assign material models to the objects within, and use a photorealistic rendering algorithm to finish the job. It simulates rays of light that connect the camera to the light sources in the scene and computes the flow of energy between them. Initially, after a few rays, we'll only have a rough idea of how the image should look, therefore our initial results will contain a substantial amount of noise. We can get rid of this by simulating the paths of millions and millions of rays that will eventually clean up our image. This process where a noisy image gets clearer and clearer we call convergence, and the problem is that it can take excruciatingly long, even up to hours, to get a perfectly clear image. With the simpler algorithms out there, we generate these light paths randomly. This technique we call path tracing. However, in the scene that you see here, most random paths cannot connect the camera and the light source, because this wall is in the way, obstructing many of them. Light paths like these don't contribute anything to our calculations and are ultimately a waste of time and precious resources. After generating hundreds of random light paths, we have found a path that finally connects the camera with the light source without any obstructions. In generating the next path, it would be a crime not to use this knowledge to our advantage. A technique called Metropolis light transport makes sure to use this valuable knowledge: upon finding a bright light path, it explores other paths that are nearby, to have the best shot at creating valid, unobstructed connections. If we have a difficult scene at hand, Metropolis light transport gives us way better results than traditional, completely random path sampling techniques such as path tracing. Here are some equal-time comparisons against path tracing to show how big of a difference this technique makes. An equal-time comparison means that we save the output of two algorithms that we ran for the same amount of time on the same scene and see which one performs better. This scene is extremely difficult in the sense that the only source of light is coming from the upper left, and after the light goes through multiple glass spheres, most of the light paths that we would generate will be invalid. As you can see, random path tracing yields really dreadful results. Well, if you can call a black image a result, that is. And as you can see, Metropolis light transport is extremely useful in these cases. And here's the beautiful, completely cleaned up, converged result. The lead author of this technique, Eric Veach, won a technical Oscar award for his works, one of which was Metropolis light transport. If you like this series, please click on that subscribe button to become a Fellow Scholar. Thanks for watching; there are millions of videos out there and you decided to take your time with this one. That is amazing. Thank you and I'll see you next time.
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zonai Fahir."}, {"start": 4.6000000000000005, "end": 8.68, "text": " If we would like to see how digitally modeled objects would look like in real life, we would"}, {"start": 8.68, "end": 14.6, "text": " create a 3D model of the desired scene, assign material models to the objects within, and"}, {"start": 14.6, "end": 17.8, "text": " use a photorealistic rendering algorithm to finish the job."}, {"start": 17.8, "end": 22.400000000000002, "text": " It simulates rays of light that connect the camera to the light sources in the scene and"}, {"start": 22.400000000000002, "end": 24.48, "text": " compute the flow of energy between them."}, {"start": 24.48, "end": 28.8, "text": " Initially, after a few rays, we'll only have a rough idea on how the image should look"}, {"start": 28.8, "end": 33.88, "text": " like, therefore our initial results will contain the substantial amount of noise."}, {"start": 33.88, "end": 38.2, "text": " We can get rid of this by simulating the path of millions and millions of rays that will"}, {"start": 38.2, "end": 40.4, "text": " eventually clean up our image."}, {"start": 40.4, "end": 44.760000000000005, "text": " This process where a noisy image gets clearer and clearer, we call convergence, and the"}, {"start": 44.760000000000005, "end": 49.88, "text": " problem is that this can take excruciatingly long, even up to hours to get a perfectly"}, {"start": 49.88, "end": 52.8, "text": " clear image."}, {"start": 52.8, "end": 56.6, "text": " With the simpler algorithms out there, we generate these light paths randomly."}, {"start": 56.6, "end": 59.0, "text": " This technique we call path tracing."}, {"start": 59.0, "end": 63.32, "text": " However, in the scene that you see here, most random paths can connect the camera and"}, {"start": 63.32, "end": 67.4, "text": " the light source because this wall is in the way obstructing many of them."}, {"start": 67.4, "end": 72.04, "text": " Light paths like these don't contribute anything to our calculations and are ultimately a waste"}, {"start": 72.04, "end": 74.88, "text": " of time and precious resources."}, {"start": 74.88, "end": 79.96000000000001, "text": " After generating hundreds of random light paths, we have found a path that finally connects"}, {"start": 79.96000000000001, "end": 83.16, "text": " the camera with the light source without any obstructions."}, {"start": 83.16, "end": 88.32, "text": " In generating the next path, it will be a crime not to use this knowledge to our advantage."}, {"start": 88.32, "end": 92.96, "text": " A technique called metropolis light transport will make sure to use this valuable knowledge"}, {"start": 92.96, "end": 97.84, "text": " and upon finding a bright light path, it will explore other paths that are nearby to have"}, {"start": 97.84, "end": 101.56, "text": " the best shot at creating valid, unobstructed connections."}, {"start": 101.56, "end": 106.16, "text": " If we have a difficult scene at hand, metropolis light transport gives us way better results"}, {"start": 106.16, "end": 110.8, "text": " than traditional, completely random paths sampling techniques such as path tracing."}, {"start": 110.8, "end": 114.92, "text": " There are some equal time comparisons against path tracing to show how big of a difference"}, {"start": 114.92, "end": 116.39999999999999, "text": " this technique makes."}, {"start": 116.39999999999999, "end": 120.67999999999999, "text": " An equal 
time comparison means that we save the output of two algorithms that we ran"}, {"start": 120.67999999999999, "end": 127.56, "text": " for the same amount of time on the same scene and see which one performs better."}, {"start": 127.56, "end": 131.8, "text": " This scene is extremely difficult in a sense that the only source of light is coming from"}, {"start": 131.8, "end": 136.36, "text": " the upper left and after the light goes through multiple glass spheres, most of the light"}, {"start": 136.36, "end": 138.56, "text": " paths that would generate will be invalid."}, {"start": 138.56, "end": 142.88, "text": " As you can see, the random path tracing yields really dreadful results."}, {"start": 142.88, "end": 146.04, "text": " Well, if you can call a black image a result that is."}, {"start": 146.04, "end": 151.72, "text": " And as you can see, metropolis light transport is extremely useful in these cases."}, {"start": 151.72, "end": 155.72, "text": " And here's the beautiful, completely cleaned up, converged result."}, {"start": 155.72, "end": 160.32, "text": " The lead author of this technique, Eric Vich, won a technical Oscar award for his work,"}, {"start": 160.32, "end": 163.28, "text": " one of which was metropolis light transport."}, {"start": 163.28, "end": 167.96, "text": " If you like this series, please click on that subscribe button to become a fellow scholar."}, {"start": 167.96, "end": 172.28, "text": " Thanks for watching, there are millions of videos out there and you decided to take your"}, {"start": 172.28, "end": 173.56, "text": " time with this one."}, {"start": 173.56, "end": 174.56, "text": " That is amazing."}, {"start": 174.56, "end": 201.32, "text": " Thank you and I'll see you next time."}]
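The acceptance rule at the heart of Metropolis light transport comes from the Metropolis-Hastings algorithm: mutate the current light path, and accept the mutation with a probability given by the brightness ratio, so that samples concentrate in bright regions. The sketch below illustrates only this rule on a one-dimensional stand-in for path space; the brightness function and mutation size are made up, and this is not Veach and Guibas' actual set of path mutations.

```python
# Toy Metropolis sampling: "path space" is a single number in [0, 1], and
# f(x) is the brightness a path contributes. Mutating accepted samples keeps
# the sampler near bright regions instead of wasting time on dark ones.
import random

def f(x):
    """Toy brightness: almost everywhere dark, bright near x = 0.7."""
    return 1.0 if abs(x - 0.7) < 0.02 else 0.001

def metropolis(n_samples, mutation_size=0.05, seed=1):
    random.seed(seed)
    x = random.random()            # seed path, as if found by random sampling
    fx = f(x)
    samples = []
    for _ in range(n_samples):
        # small mutation of the current path, wrapped back into [0, 1]
        y = (x + random.uniform(-mutation_size, mutation_size)) % 1.0
        fy = f(y)
        if random.random() < min(1.0, fy / fx):   # Metropolis acceptance rule
            x, fx = y, fy
        samples.append(x)
    return samples

samples = metropolis(100_000)
bright = sum(1 for x in samples if abs(x - 0.7) < 0.02)
print(f"{bright / len(samples):.1%} of samples landed in the bright region")
```

In the long run the samples are distributed in proportion to f, which is exactly why the method spends almost all of its effort on the bright 4% of this toy path space.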
Two Minute Papers
https://www.youtube.com/watch?v=rskdLEl05KI
Synthesizing Sound From Collisions | Two Minute Papers #15
Simulating colliding bodies is an essential part of creating photorealistic video footage on a computer. However, even though we know what these collisions look like, we don't yet know what they sound like. In this piece of work, a technique is described that is capable of simulating the sound emitted by smashing deformable bodies together. The results match real-world experiments remarkably well. ___________________________________________ The paper "Toward High-Quality Modal Contact Sound" is available here: http://www.cs.cornell.edu/projects/Sound/mc/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail image was taken from the paper mentioned above. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. So far we have seen excellent works on how to simulate the motion and the collision of bodies, but we have completely neglected an aspect of videos that is just as important as the visuals, and that aspect is none other than sound. What if you have footage of objects colliding but no access to the sound of the encounter? You obviously have to recreate the situation that you see on the screen, and even for the easiest cases you have to sit in a studio with a small hammer and a mug, which is difficult and often a very laborious process. If we can simulate the forces that arise when bodies collide, what if we could also simulate the sound of such encounters? If you would like a great solution for this, this is the work you should be looking at. Most techniques in the field treat objects as rigid bodies. In this work, the authors extend the simulation to deformable bodies, therefore making it possible to create rich clanging sound effects. Now, the mandatory question arises: how do we evaluate such a technique? Evaluation means that we would like to find out how accurate it really is. And obviously, the ideal case is if we compare the sounds created by the algorithm to what we would experience in the real world and see how close they are. Well, pretty damn close. I love these simulation works the most when they are not only beautiful, but they somehow relate to reality, and this technique is a great example of that. It feels quite empowering that we have these really smart people who can solve problems that sound inconceivably difficult. Thank you so much for checking the series out. If you would like to be notified quickly when a new episode of Two Minute Papers pops up, consider following me on Twitter. I announce every upload right away. I've put a link for this in the description box. Thanks for watching and I'll see you next time.
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Jolenefa here."}, {"start": 4.88, "end": 9.200000000000001, "text": " So far we have seen excellent works on how to simulate the motion and the collision of"}, {"start": 9.200000000000001, "end": 14.32, "text": " bodies, but we have completely neglected some aspect of videos that is just as important"}, {"start": 14.32, "end": 18.88, "text": " as visuals, and that aspect is none other than sound."}, {"start": 18.88, "end": 23.64, "text": " What if you have footage of objects colliding but no access to the sound of the encounter?"}, {"start": 23.64, "end": 27.84, "text": " You obviously have to recreate the situation that you see on the screen and even for the"}, {"start": 27.84, "end": 33.04, "text": " easiest cases you have to sit in the studio with a small hammer and a mug, which is difficult"}, {"start": 33.04, "end": 35.519999999999996, "text": " and often a very laborious process."}, {"start": 35.519999999999996, "end": 40.04, "text": " If we can simulate the forces that arise when bodies collide, what if we could also simulate"}, {"start": 40.04, "end": 42.64, "text": " the sound of such encounters?"}, {"start": 42.64, "end": 46.84, "text": " If you would like a great solution for this, this is the work you should be looking at."}, {"start": 46.84, "end": 51.24, "text": " Most techniques in the fields take objects into consideration as rigid bodies."}, {"start": 51.24, "end": 56.44, "text": " In this work, the authors extend the simulation to deformable bodies, therefore making it possible"}, {"start": 56.44, "end": 60.44, "text": " to create rich clanging sound effects."}, {"start": 60.44, "end": 83.08, "text": " Now, the mandatory question arises, how do evaluate such a technique?"}, {"start": 83.08, "end": 87.08, "text": " Something means that we would like to find out how accurate it really is."}, {"start": 87.08, "end": 91.67999999999999, "text": " And obviously, the ideal cases if we compare the sounds created by the algorithm to what"}, {"start": 91.68, "end": 114.08000000000001, "text": " we would experience in the real world and see how close they are."}, {"start": 114.08000000000001, "end": 115.56, "text": " Well pretty damn close."}, {"start": 115.56, "end": 120.04, "text": " I love these simulation software works the most when they are not only beautiful, but they"}, {"start": 120.04, "end": 124.56, "text": " somehow relate to reality, and this technique is a great example of that."}, {"start": 124.56, "end": 128.8, "text": " It feels quite empowering that we have these really smart people who can solve problems"}, {"start": 128.8, "end": 131.6, "text": " that sound inconceivably difficult."}, {"start": 131.6, "end": 134.04000000000002, "text": " Thank you so much for checking the series out."}, {"start": 134.04000000000002, "end": 138.20000000000002, "text": " If you would like to be notified quickly when a new episode of 2 Minute Papers pops up,"}, {"start": 138.20000000000002, "end": 139.88, "text": " consider following me on Twitter."}, {"start": 139.88, "end": 142.08, "text": " I announce every upload right away."}, {"start": 142.08, "end": 144.28, "text": " I've put a link for this in the description box."}, {"start": 144.28, "end": 151.28, "text": " Thanks for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=LU3pdWTD4Rw
Adaptive Cloth Simulations | Two Minute Papers #14
This time, we are going to set foot in cloth simulations that are widely used in the motion picture industry. Adaptive algorithms are a class of techniques that try to adapt to the problem that we have at hand. This adaptive method focuses computational resources on regions which are likely to have fine details (wrinkles) and coarsens the simulation quality in regions that are at rest. This substantially reduces the computation time we need for the cloth simulation step. ________________________ The paper "Adaptive Anisotropic Remeshing for Cloth Simulation" by Narain et al. is available here: http://graphics.berkeley.edu/papers/Narain-AAR-2012-11/ Recommended for you - Adaptive Fluid Simulations: https://www.youtube.com/watch?v=dH1s49-lrBk&index=1&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e The YouTube channel of Sardi Pax with lots of useful Blender tutorials is available here: https://www.youtube.com/user/srf123 Here are some Blender (and cloth simulation) tutorials to get you started: https://www.youtube.com/watch?v=lZe3tGWSy6s https://www.youtube.com/watch?v=k4czh0x31xk https://www.youtube.com/watch?v=gARJxEDzg6k http://blender.org/ The background of the thumbnail image is the work of Theresa Thompson: https://flic.kr/p/5khSsE It has gone through slight modifications (rotation and a monochrome transform). Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Let's talk about the behavior of cloth in animations. In Disney movies, you often see characters wandering around in extremely realistically behaving apparel. It sounds like something that would be extremely laborious to create by hand. Do animators have to create all of this movement by hand? Not a chance. We use computer programs to simulate the forces that act on the fabric, which starts bending and stretching in a number of different directions. The more detailed simulations we are looking for, the more computational time we have to invest and the more we have to wait. The computations can take up to a minute for every image, but if we have lots of movement and different fabrics in the scene, it can take even more. Is there a solution for this? Can we get really high quality simulations in a reasonable amount of time? Of course we can. The name of the game is adaptive simulation again. We have talked about adaptive fluid simulations before. Adaptive means that the technique tries to adapt to the problem that we have at hand. Here in the world of cloth simulations, it means that the algorithm tries to invest more resources in computing regions that are likely to have high fidelity details such as wrinkles. These regions are marked with red to show that wrinkles are likely to form here. The blue and yellow denote regions where there is not so much going on, therefore we don't have to do too many calculations there. These are the places where we can save a lot of computational resources. This example illustrates the concept at an extreme level. Take a look. While the fabric is at rest, it's mostly blue and yellow, but as forces are exerted on it, wrinkles appear and the algorithm recognizes that these are the regions that we really need to focus on. With this adaptive technique, the simulation time for every picture that we create is reduced substantially. Luckily, some cloth simulation routines are implemented in Blender, which is an amazing free software package that is definitely worth checking out. I've put some links in the description box to get you started. Thanks for watching and I'll see you next time.
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is Two Minute Papers with Carlos Rona Ifehir."}, {"start": 4.48, "end": 7.38, "text": " Let's talk about the behavior of cloth in animations."}, {"start": 7.38, "end": 12.48, "text": " In Disney movies, you often see characters wandering around in extremely realistically behaving"}, {"start": 12.48, "end": 13.48, "text": " a parallel."}, {"start": 13.48, "end": 17.72, "text": " It sounds like something that would be extremely laborious to create by hand."}, {"start": 17.72, "end": 21.36, "text": " Do animators have to create all of this movement by hand?"}, {"start": 21.36, "end": 22.44, "text": " Not a chance."}, {"start": 22.44, "end": 26.88, "text": " We use computer programs to simulate the forces that act on the fabric, which start spending"}, {"start": 26.88, "end": 29.8, "text": " and stretching in a number of different directions."}, {"start": 29.8, "end": 33.68, "text": " The more detailed simulations we are looking for, the more computational time we have to"}, {"start": 33.68, "end": 35.88, "text": " invest and the more we have to wait."}, {"start": 35.88, "end": 40.44, "text": " The computations can take up to a minute for every image, but if we have lots of movement"}, {"start": 40.44, "end": 43.8, "text": " and different fabrics in the scene, it can take even more."}, {"start": 43.8, "end": 45.52, "text": " Is there a solution for this?"}, {"start": 45.52, "end": 49.72, "text": " Can we get really high quality simulations in a reasonable amount of time?"}, {"start": 49.72, "end": 50.8, "text": " Of course we can."}, {"start": 50.8, "end": 53.6, "text": " The name of the game is Adaptive Simulation again."}, {"start": 53.6, "end": 57.040000000000006, "text": " We have talked about Adaptive Fluid Simulations before."}, {"start": 57.04, "end": 61.36, "text": " Adaptive means that the technique tries to adapt to the problem that we have at hand."}, {"start": 61.36, "end": 65.84, "text": " Here in the world of cloth simulations, it means that the algorithm tries to invest more"}, {"start": 65.84, "end": 71.36, "text": " resources in computing regions that are likely to have high fidelity details such as wrinkles."}, {"start": 71.36, "end": 75.64, "text": " These regions are marked with red to show that wrinkles are likely to form here."}, {"start": 75.64, "end": 79.64, "text": " The blue and yellow denotes regions where there is not so much going on, therefore we"}, {"start": 79.64, "end": 82.32, "text": " don't have to do too many calculations there."}, {"start": 82.32, "end": 93.39999999999999, "text": " These are the places where we can save a lot of computational resources."}, {"start": 93.39999999999999, "end": 96.24, "text": " This example illustrates the concept extreme level."}, {"start": 96.24, "end": 97.55999999999999, "text": " Take a look."}, {"start": 97.55999999999999, "end": 101.63999999999999, "text": " While the fabric is at rest, it's mostly blue and yellow, but as forces are exerted"}, {"start": 101.63999999999999, "end": 106.6, "text": " on it wrinkles appear and the algorithm recognizes that these are the regions that we really"}, {"start": 106.6, "end": 108.28, "text": " need to focus on."}, {"start": 108.28, "end": 112.6, "text": " With this Adaptive technique, the simulation time for every picture that we create is reduced"}, {"start": 112.6, "end": 116.4, "text": " substantially."}, {"start": 116.4, "end": 123.44, "text": " Luckily, some cloth simulation routines are implemented in Blender, 
which is an amazing"}, {"start": 123.44, "end": 126.36, "text": " free software package that is definitely worth checking out."}, {"start": 126.36, "end": 129.08, "text": " I've put some links in the description box to get you started."}, {"start": 129.08, "end": 155.84, "text": " Thanks for watching and I'll see you next time."}]
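The decisive ingredient in such a method is the refinement criterion that chooses where the fine mesh goes. A minimal sketch of one plausible criterion follows: flag regions of high curvature on a one-dimensional height profile. This is a drastic simplification of the paper's anisotropic remeshing of triangle meshes, and the threshold value is arbitrary.

```python
# Toy adaptive refinement: estimate where the "cloth" is curved (wrinkling)
# and mark only those cells for a finer mesh, leaving flat regions coarse.
import numpy as np

def refinement_mask(heights, dx, threshold):
    """Mark grid cells whose discrete curvature |h''| exceeds a threshold."""
    curvature = np.abs(np.gradient(np.gradient(heights, dx), dx))
    return curvature > threshold

x = np.linspace(0.0, 1.0, 200)
# a flat sheet with a wrinkled patch around x ~ 0.5 (made-up geometry)
heights = np.where(np.abs(x - 0.5) < 0.1, 0.01 * np.sin(80 * np.pi * x), 0.0)
mask = refinement_mask(heights, dx=x[1] - x[0], threshold=5.0)
print(f"refine {mask.sum()} of {mask.size} cells "
      f"({mask.mean():.0%} of the sheet gets the fine mesh)")
```

Only the wrinkled patch (and its boundary) trips the criterion, so the bulk of the sheet keeps its cheap coarse resolution, which is exactly the saving the episode describes.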
Two Minute Papers
https://www.youtube.com/watch?v=HOLoPgTzV6g
Creating Photographs Using Deep Learning | Two Minute Papers #13
Machine learning techniques such as deep learning artificial neural networks have proven to be extremely useful for a variety of tasks that were previously deemed very difficult, or even impossible, to solve. In this work, a deep learning technique is used to learn how different light source positions affect a scene and create ("guess") new photographs with unknown light source positions. The results are absolutely stunning. The promised links for artificial neural networks follow below. __________________________________ The paper "Image Based Relighting Using Neural Networks" is available here: http://research.microsoft.com/en-us/um/people/yuedong/project/neuralibr/neuralibr.htm Disclaimer: I was not part of this research project, I am merely providing commentary on this work. Recommended for you: Artificial Neural Networks and Deep Learning - https://www.youtube.com/watch?v=rCWTOOgVXyE Deep Neural Network Learns Van Gogh's Art - https://www.youtube.com/watch?v=-R9bJGNHltQ Music: "The Place Inside" by Silent Partner Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu The thumbnail background was taken from the paper "Image Based Relighting Using Neural Networks". Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this work, we place a small light source at a chosen point in a scene and record a photograph of how things look with the given placement. Then we place the light source at a new position and record an image again. We repeat this process several times. Then, after we have done that, we have the question: what would the photograph look like if I put the light source in places I haven't seen yet? This process we call image relighting. This work uses neural networks to do relighting by learning how different light source placements behave. If you haven't heard about neural networks before, make sure to check out our previous episodes on the topic. I have put links for you in the description box. After the training, this technique guesses how completely unknown light source setups would look in reality. We give the algorithm a light source position it hasn't seen yet, and it will generate us a photograph of how it would look in reality. The first question is: okay, but how well does it do the job? I am not sure if you are going to believe this one, as you will be witnessing some magnificent results. On the left you will see real photographs and on the right, reconstructions that are basically the guesses of the algorithm. Note that it doesn't know how the photograph would look. It has to generate new photographs based on the knowledge that it has from seeing other photos. It is completely indistinguishable from reality. This is especially difficult in the presence of the so-called high frequency lighting effects. The high frequency part means that if we change the light source just a bit, there may be very large changes in the output image. Such a thing can happen when a light source is moved very slightly but is suddenly hidden behind an object, therefore our photograph changes drastically. The proposed technique uses ensembles, which means that multiple neural networks are trained and their guesses are averaged to get better results. What do you do if you go to the doctor and he says you have a very severe and very unlikely condition? Well, you go and ask multiple doctors and see if they say the same thing. It is reasonable to expect that the more doctors you ask, the clearer you will see, and this is exactly what the algorithm does. Now look at this. On the left side there is a real photo and on the right, the guess of the algorithm after training. Can you believe it? You can barely see the difference, and this is a failure case. The success story scenarios for many techniques are not as good as the failure cases here. These results are absolutely stunning. The algorithm can also deal with multiple light sources of different colors. As you can see, machine learning techniques such as deep neural networks have opened so many doors in research lately. We are starting to solve problems that everyone agreed were absolutely impossible before. We are currently over 2000 subscribers; our club of scholars is growing at a really rapid pace. Please share the series so we can reach people that don't know about us yet. Let's draw them in and show them how cool research really is. Thanks for watching and I'll see you next time.
[{"start": 0.0, "end": 5.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 5.4, "end": 10.28, "text": " In this work, we place a small light source to a chosen point in a scene and record a photograph"}, {"start": 10.28, "end": 12.8, "text": " of how things look like with the given placement."}, {"start": 12.8, "end": 16.48, "text": " Then we place the light source to a new position and record an image again."}, {"start": 16.48, "end": 18.88, "text": " We repeat this process several times."}, {"start": 18.88, "end": 23.080000000000002, "text": " Then, after we have done that, we have the question, what would the photograph look like if"}, {"start": 23.080000000000002, "end": 26.12, "text": " I put the light source to places I haven't seen yet?"}, {"start": 26.12, "end": 29.240000000000002, "text": " This process we call image-relighting."}, {"start": 29.24, "end": 33.4, "text": " This work uses neural networks to do relighting by learning how different light source"}, {"start": 33.4, "end": 34.92, "text": " placements behave."}, {"start": 34.92, "end": 39.12, "text": " If you haven't heard about neural networks before, make sure to check out our previous episodes"}, {"start": 39.12, "end": 40.12, "text": " on the topic."}, {"start": 40.12, "end": 42.96, "text": " I have put links for you in the description box."}, {"start": 42.96, "end": 47.56, "text": " After the training, this technique guesses how completely unknown light source setups"}, {"start": 47.56, "end": 49.239999999999995, "text": " would look like in reality."}, {"start": 49.239999999999995, "end": 53.2, "text": " We give the algorithm a light source position we haven't seen yet and it will generate"}, {"start": 53.2, "end": 57.0, "text": " us a photograph of how it would look like in reality."}, {"start": 57.0, "end": 60.68, "text": " The first question is ok, but how well does it do the job?"}, {"start": 60.68, "end": 65.68, "text": " I am not sure if you are going to believe this one as you will be witnessing some magnificent"}, {"start": 65.68, "end": 66.68, "text": " results."}, {"start": 66.68, "end": 70.8, "text": " On the left you will see real photographs and on the right, reconstructions that are"}, {"start": 70.8, "end": 73.24000000000001, "text": " basically the guesses of the algorithm."}, {"start": 73.24000000000001, "end": 76.24000000000001, "text": " Note that it doesn't know how the photograph would look like."}, {"start": 76.24000000000001, "end": 80.44, "text": " It has to generate new photographs based on the knowledge that it has from seeing other"}, {"start": 80.44, "end": 81.44, "text": " photos."}, {"start": 81.44, "end": 87.64, "text": " It is completely indistinguishable from reality."}, {"start": 87.64, "end": 92.67999999999999, "text": " This is especially difficult in the presence of the so-called high frequency lighting effects."}, {"start": 92.67999999999999, "end": 97.03999999999999, "text": " The high frequency part means that if we change the light source just a bit, there may be"}, {"start": 97.03999999999999, "end": 99.52, "text": " very large changes in the output image."}, {"start": 99.52, "end": 103.6, "text": " Such a thing can happen when a light source is moved very slightly but is suddenly hidden"}, {"start": 103.6, "end": 108.0, "text": " behind an object, therefore our photograph changes drastically."}, {"start": 108.0, "end": 113.2, "text": " The proposed technique uses ensembles, it means that multiple neural networks 
are trained"}, {"start": 113.2, "end": 116.12, "text": " and their guesses are average to get better results."}, {"start": 116.12, "end": 120.6, "text": " What do you do if you go to the doctor and he says you have a very severe and very unlikely"}, {"start": 120.6, "end": 121.6, "text": " condition?"}, {"start": 121.6, "end": 126.2, "text": " Well, you go and ask multiple doctors and see if they say the same thing."}, {"start": 126.2, "end": 130.6, "text": " It is reasonable to expect that the more doctors you ask, the clearer you will see and this"}, {"start": 130.6, "end": 133.08, "text": " is exactly what the algorithm does."}, {"start": 133.08, "end": 134.2, "text": " Now look at this."}, {"start": 134.2, "end": 138.48, "text": " On the left side there is a real photo and on the right the guess of the algorithm after"}, {"start": 138.48, "end": 139.48, "text": " training."}, {"start": 139.48, "end": 140.92, "text": " Can you believe it?"}, {"start": 140.92, "end": 145.0, "text": " You can barely see the difference and this is a failure case."}, {"start": 145.0, "end": 150.07999999999998, "text": " The success story scenarios for many techniques are not as good as the failure cases here."}, {"start": 150.07999999999998, "end": 153.16, "text": " These results are absolutely stunning."}, {"start": 153.16, "end": 167.2, "text": " The algorithm can also deal with multiple light sources of different colors."}, {"start": 167.2, "end": 170.96, "text": " As you can see, machine learning techniques such as deep neural networks have opened"}, {"start": 170.96, "end": 173.2, "text": " so many doors in research lately."}, {"start": 173.2, "end": 178.68, "text": " We are starting to solve problems that everyone agreed were absolutely impossible before."}, {"start": 178.68, "end": 183.56, "text": " We are currently over 2000 subscribers, our club of scholars is growing at a really rapid"}, {"start": 183.56, "end": 184.56, "text": " phase."}, {"start": 184.56, "end": 187.76000000000002, "text": " Please share the series so we can reach people that don't know about us yet."}, {"start": 187.76000000000002, "end": 191.12, "text": " Let's draw them in and show them how cool research really is."}, {"start": 191.12, "end": 217.88, "text": " Thanks for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=2i1hrywDwPo
Reconstructing Sound From Vibrations | Two Minute Papers #12
When exposed to sound waves, the surfaces of objects in a room start vibrating. One could wonder whether, given sufficiently advanced technology, the sound itself could be reconstructed by looking only at the vibrations of these objects. A great technique they call the "visual microphone" has recently been proposed that executes this idea with breathtaking results. _____________________ The paper "The Visual Microphone: Passive Recovery of Sound from Video" is available here: http://people.csail.mit.edu/mrub/VisualMic/ Disclaimer: I was not part of this research project, I am merely providing commentary on this work. The mentioned TED talk from Abe Davis: https://www.youtube.com/watch?v=npNYP2vzaPo Reddit AMA with the main author, Abe Davis: https://www.reddit.com/r/IAmA/comments/357b4o/im_an_mit_computer_scientist_and_recent_ted/ Material properties from vibrations: http://www.visualvibrometry.com/ The background image for the thumbnail was created by Corey Leopold: https://flic.kr/p/54UZdL Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The work we are going to discuss today is about visual microphones. What does this mean exactly? The influence of sound creates surface vibrations in many objects such as plants or a bag of chips, foil containers, water, and even bricks. They thereby work as visual microphones. Now hold onto your chairs, because this algorithm can reconstruct audio data from video footage of vibrations. What this means is that if someone outside of your house pointed a high speed camera at a bag of chips when you start talking in your room, he would be able to guess what you said by only seeing the vibrations of the bag of chips. In the following example, you will see recorded footage of the bag, but the movement is so subtle that your naked eye won't see any of it. First you'll hear the speech of a person recorded in the house, then the reconstruction from only the visual footage of the bag. "Mary had a little lamb whose fleece was white as snow, and everywhere that Mary went, the lamb was sure to go." And this is what we were able to recover from high speed video filmed from outside, behind soundproof glass. This is just unbelievable. Here is another example with a plastic bag where you can see the movement caused by the sound waves. The paper is very detailed and rigorous; this is definitely one of the best research works I've seen in a while. The most awesome part of this is that this is not only an excellent piece of research, it is also a great product. And note that this problem is even harder than one would think, since the frequency response of various objects can be quite different, which means that every single object vibrates a bit differently when hit by the same sound waves. You can see a great example of these responses from bricks, water, and many others here. What it will be used for is shrouded in mystery for the moment. Even though I think this work provides fertile ground for new conspiracy theories, the authors don't believe it is suitable for use in surveillance. Someone argued that it may be useful for early earthquake detection, which is an awesome idea. Also, maybe it could be used to detect video redubbing and to recover bleeped-out speech from videos, and I'm sure there will be many other applications. The authors also have a follow-up paper on estimating material properties by looking at how objects vibrate. Awesome. Do you have any potential applications in mind? Let me know in the comments section. And there's also a fantastic TED Talk and paper video on the topic that you can find in the description box, alongside a great Reddit discussion link. I urge you to check all of these out. The videos are narrated by Abe Davis, who did a great job at explaining their concepts. Thanks for watching and I'll see you next time.
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.6000000000000005, "end": 9.0, "text": " The work we are going to discuss today is about visual microphones."}, {"start": 9.0, "end": 10.6, "text": " What does this mean exactly?"}, {"start": 10.6, "end": 16.18, "text": " The influence of sound creates surface vibrations in many objects such as plants or a bag of"}, {"start": 16.18, "end": 19.96, "text": " chips, foil containers, water, and even bricks."}, {"start": 19.96, "end": 22.56, "text": " They thereby work as visual microphones."}, {"start": 22.56, "end": 28.16, "text": " Now hold onto your chairs because this algorithm can reconstruct audio data from video footage"}, {"start": 28.16, "end": 29.16, "text": " of vibrations."}, {"start": 29.16, "end": 34.4, "text": " What this means is that if someone outside of your house pointed a high speed camera at"}, {"start": 34.4, "end": 38.56, "text": " a bag of chips when you start talking in your room, he will be able to guess what you"}, {"start": 38.56, "end": 42.519999999999996, "text": " said by only seeing the vibrations of the bag of chips."}, {"start": 42.519999999999996, "end": 46.68, "text": " In the following example, you will see a recorded footage of the bag, but the movement"}, {"start": 46.68, "end": 50.04, "text": " is so subtle that your naked eye won't see any of it."}, {"start": 50.04, "end": 54.16, "text": " First you'll hear the speech of a person recorded in the house, then the reconstruction"}, {"start": 54.16, "end": 57.68, "text": " from only the visual footage of the bag."}, {"start": 57.68, "end": 65.56, "text": " Mary had a little lamb whose speech was lightest snow, and everywhere that Mary went, that"}, {"start": 65.56, "end": 68.36, "text": " lamb was stored to go."}, {"start": 68.36, "end": 73.56, "text": " And this is what we were able to recover from high speed video filmed from outside behind"}, {"start": 73.56, "end": 86.72, "text": " soundproof glass."}, {"start": 86.72, "end": 88.84, "text": " This is just unbelievable."}, {"start": 88.84, "end": 92.88, "text": " Here is another example with a plastic bag where you can see the movement caused by the"}, {"start": 92.88, "end": 122.84, "text": " sound waves."}, {"start": 122.88, "end": 132.28, "text": " The paper is very detailed and rigorous, this is definitely one of the best research works"}, {"start": 132.28, "end": 133.28, "text": " I've seen in a while."}, {"start": 133.28, "end": 138.44, "text": " The most awesome part of this is that this is not only an excellent piece of research, it"}, {"start": 138.44, "end": 140.44, "text": " is also a great product."}, {"start": 140.44, "end": 145.24, "text": " And note that this problem is even harder than one would think since the frequency response"}, {"start": 145.24, "end": 149.72, "text": " of various objects can be quite different, which means that every single object vibrates"}, {"start": 149.72, "end": 152.44, "text": " a bit differently when hit by the same sound waves."}, {"start": 152.44, "end": 158.76, "text": " You can see a great example of these responses from bricks, water, and many others here."}, {"start": 158.76, "end": 162.32, "text": " What it will be used for is shrouded in mystery for the moment."}, {"start": 162.32, "end": 166.84, "text": " Even though I think this work provides fertile grounds for new conspiracy theories, the authors"}, {"start": 166.84, "end": 170.8, "text": " don't 
believe it is suitable to use for surveillance."}, {"start": 170.8, "end": 176.0, "text": " Someone argued that it may be useful for early earthquake detection, which is an awesome idea."}, {"start": 176.0, "end": 180.96, "text": " Also, maybe it could be used to detect video redubbing and recovering beaped out speech"}, {"start": 180.96, "end": 184.6, "text": " from videos and I'm sure there will be many other applications."}, {"start": 184.6, "end": 190.0, "text": " The authors also have a follow-up paper on estimating material properties by looking at how objects"}, {"start": 190.0, "end": 191.0, "text": " vibrate."}, {"start": 191.0, "end": 192.0, "text": " Awesome."}, {"start": 192.0, "end": 195.44, "text": " Do you have any potential applications in mind?"}, {"start": 195.44, "end": 197.0, "text": " Let me know in the comments section."}, {"start": 197.0, "end": 201.60000000000002, "text": " And there's also a fantastic TED Talk and paper video on the topic that you can find"}, {"start": 201.60000000000002, "end": 205.56, "text": " in the description box alongside with a great ready discussion link."}, {"start": 205.56, "end": 207.8, "text": " I urge you to check all of these out."}, {"start": 207.8, "end": 212.44, "text": " The videos are narrated by Abe Davis with it a great job at explaining their concepts."}, {"start": 212.44, "end": 239.2, "text": " Thanks for watching and I'll see you next time."}]
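A minimal sketch of the recovery idea follows, far simpler than the paper's phase-based pipeline: if a feature in the frame is displaced by a small fraction of a pixel in proportion to the sound pressure, tracking its subpixel centroid across the frames of a high speed video recovers the waveform. The synthetic one-line "video", the vibration amplitude and the noise level are all made-up values.

```python
# Toy visual microphone: a bright blob on a scanline vibrates with the sound;
# its intensity-weighted centroid, tracked per frame, recovers the waveform.
import numpy as np

FPS = 2000                                       # high speed camera frame rate
t = np.arange(FPS) / FPS                         # one second of "video"
sound = np.sin(2 * np.pi * 220 * t)              # the tone playing in the room

xs = np.arange(64, dtype=float)                  # a 64-pixel scanline
rng = np.random.default_rng(0)
frames = []
for s in sound:
    center = 32.0 + 0.05 * s                     # 1/20th-of-a-pixel vibration
    blob = np.exp(-0.5 * ((xs - center) / 3.0) ** 2)   # blurred bright feature
    frames.append(blob + rng.normal(0.0, 0.001, xs.size))  # faint sensor noise

def subpixel_centroid(frame):
    """Intensity-weighted position of the bright feature in one frame."""
    w = np.clip(frame, 0.0, None)
    return float((xs * w).sum() / w.sum())

recovered = np.array([subpixel_centroid(f) for f in frames])
recovered -= recovered.mean()                    # keep only the vibration
corr = np.corrcoef(recovered, sound)[0, 1]
print(f"correlation between recovered motion and the true sound: {corr:.3f}")
```

Even this crude tracker reports a strong correlation with the driving tone, and it hints at why the real system needs both a high frame rate (to resolve audio frequencies) and far more robust subpixel motion estimation than a centroid.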
Two Minute Papers
https://www.youtube.com/watch?v=SmyiKmfnbhc
Building Bridges With Flying Machines | Two Minute Papers #11
Building architectural elements and buildings with flying machines is a hot research topic. It is a remarkably difficult task, as many of these flying machines not only have to be controlled safely, but they also have to collaborate efficiently to succeed in building complex structures. Even something as mundane as deploying the rope has its own science. In this work from the ETH Zürich, these flying machines build quite reliable rope bridges that humans can use for traversal. _______________________ The original video can be found here: http://www.idsc.ethz.ch/research-dandrea/research-projects/aerial-construction.html The paper "Building Tensile Structures with Flying Machines" is available here: http://flyingmachinearena.org/wp-content/publications/2013/augIROS13.pdf Disclaimer: I was not part of this research project, I am merely providing commentary on this work. Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu The thumbnail image was Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Anyone who has tried building bridges over a huge chasm has realized that it is possibly one of the most difficult and dangerous things you could do on a family vacation. The basic construction elements for such a bridge can be ropes, cables and wires. And this kind of task is fundamentally different from classical architectural building problems. Here you don't need to have any kind of scaffolding, or to carry building blocks that weigh a lot. However, you have to know how to tie knots. Therefore this is the kind of problem you need flying machines for. They can fly anywhere, they are nimble, and their disadvantage, that they have a very limited payload, does not play a big role here. In this piece of work at ETH Zurich, these machines can create some crazy knots: from single to multi-round turn hitches, knots, elbows, round turns and multiple rope knots. And these you have to be able to create in a collaborative manner, because each individual flying machine will hold one rope; therefore they have to pass through given control points at a strictly given time and with a target velocity. These little guys also have to know the exact amount of force they need to exert on the structure to move it into a desirable target position. Even deploying the rope is not that trivial. The machine is equipped with a roller to do so, but the friction of this roller can be changed at any time, according to the rope releasing direction, to unroll it properly. It also has to face the correct direction as well. And these structures are not just toys: the resulting bridges are resilient enough for humans to use. This work is a great example to show that the technology of today is improving at an incredible pace. If we can solve difficult, collaborative control problems such as this one, just think about the possibilities. What an exciting time it is to be alive. We have gotten lots of shares for the series on social media. I'm trying to send a short thank you message to every single one of you. I'm trying my best, and don't forget, every single share helps spread the word for the series immensely. Thanks for watching and I'll see you next time.
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Zsolnai-Fahir."}, {"start": 5.0, "end": 10.0, "text": " Anyone who has tried building bridges over a huge chasm realized that it is possibly"}, {"start": 10.0, "end": 14.68, "text": " one of the most difficult and dangerous things you could do on a family vacation."}, {"start": 14.68, "end": 19.72, "text": " The basic construction elements for such a bridge can be ropes, cables and wires."}, {"start": 19.72, "end": 24.68, "text": " And this kind of task is fundamentally different from classical architectural building problems."}, {"start": 24.68, "end": 30.12, "text": " Here you don't need to have any kind of scaffolding or to carry building blocks that way a lot."}, {"start": 30.12, "end": 32.68, "text": " However, you have to know how to tie knots."}, {"start": 32.68, "end": 36.12, "text": " Therefore this is the kind of problem you need flying machines for."}, {"start": 36.12, "end": 40.04, "text": " They can fly anywhere, they are nimble, and they are disadvantaged that they have a very"}, {"start": 40.04, "end": 43.2, "text": " limited payload does not play a big role here."}, {"start": 43.2, "end": 48.16, "text": " In this piece of work at the ETH Zurich, these machines can create some crazy knots."}, {"start": 48.16, "end": 54.28, "text": " From single to multi-round turn hitches, knobs, elbows, round turns and multiple rope knobs."}, {"start": 54.28, "end": 58.84, "text": " And these you have to be able to create in a collaborative manner, because each individual"}, {"start": 58.84, "end": 63.6, "text": " flying machine will hold one rope, therefore they have to pass through given control points"}, {"start": 63.6, "end": 67.84, "text": " at a strictly given time and a target velocity."}, {"start": 67.84, "end": 72.04, "text": " These little guys also have to know the exact amount of force they need to exert on the"}, {"start": 72.04, "end": 75.8, "text": " structure to move into a desirable target position."}, {"start": 75.8, "end": 78.2, "text": " Even deploying the rope is not that trivial."}, {"start": 78.2, "end": 82.84, "text": " The machine is equipped with a roller to do so, but the friction of this roller can be changed"}, {"start": 82.84, "end": 87.72, "text": " at any time according to the rope releasing direction to unroll it properly."}, {"start": 87.72, "end": 90.4, "text": " You also have to face the correct direction as well."}, {"start": 90.4, "end": 94.44, "text": " And these structures are not just toys, the resulting bridges are resilient enough for"}, {"start": 94.44, "end": 96.44, "text": " humans to use."}, {"start": 96.44, "end": 101.56, "text": " This work is a great example to show that the technology of today is improving at an incredible"}, {"start": 101.56, "end": 102.56, "text": " pace."}, {"start": 102.56, "end": 106.88, "text": " If we can solve difficult, collaborative control problems such as this one, just think"}, {"start": 106.88, "end": 108.76, "text": " about the possibilities."}, {"start": 108.76, "end": 111.64, "text": " What an exciting time it is to be alive."}, {"start": 111.64, "end": 114.96000000000001, "text": " We have gotten lots of shares for the series on social media."}, {"start": 114.96000000000001, "end": 118.72, "text": " I'm trying to send a short thank you message for every single one of you."}, {"start": 118.72, "end": 123.2, "text": " I'm trying my best and don't forget, every single share helps spreading the word for"}, {"start": 
123.2, "end": 124.6, "text": " the series immensely."}, {"start": 124.6, "end": 151.35999999999999, "text": " Thanks for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=dH1s49-lrBk
Adaptive Fluid Simulations | Two Minute Papers #10
There are computer programs that can simulate the behavior of fluids, such as water, milk, honey and many others. However, creating detailed simulations takes a really long time, up to days even for a few seconds of video footage. Adaptive algorithms are a class of techniques that try to adapt to the problem that we have at hand. This adaptive method focuses computational resources on regions which are visible and have many fine details, and coarsens the simulation quality in regions that are not visible (or interesting). The resulting algorithm is much more efficient at simulating small scale turbulent details. __________________ Recommended for you - Wavelet Turbulence: https://www.youtube.com/watch?v=5xLSbj5SsSE&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e&index=7 The paper "Highly Adaptive Liquid Simulations on Tetrahedral Meshes" is available here: http://pub.ist.ac.at/group_wojtan/projects/2013_Ando_HALSoTM/index.html Disclaimer: I was not part of this research project, I am merely providing commentary on this work. Music: "Awakening" by Silent Partner Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. As we discussed before, simulating the motion of fluids and smoke with a computer program is a very expensive process. We have to compute quantities like the velocity and the pressure of a piece of fluid at every given point in space. Since we cannot compute them everywhere, we can place a 3D grid, compute these quantities at the grid points, and use mathematical techniques to find out what exactly is happening between these grid points. But still, even if we do this, we have to wait up to days, even for a few seconds of video footage. One possible way to alleviate this would be to write an adaptive simulation program. Adaptive means that the simulator tries to adapt to the problem at hand. Here it means that it recognizes the regions where it needs to focus a lot of computational resources, and at the same time it also tries to find regions where it can get away with using less computation. Here you can see spheres of different sizes; in regions where there is a lot going on, you will see smaller spheres. This means that we have a finer grid in this region, therefore we know more about what exactly is happening here. In other places you also see larger spheres, meaning that the resolution of our grid is coarser in these regions. We can get away with this only because there is not much happening there. Essentially, we focus our resources on regions that really require it. For instance, where there are lots of small-scale details. The spheres are only used for the sake of visualization; the actual output of the simulator looks like this. It also takes into consideration which regions we are currently looking at. Here we are watching one side of the corridor, and the simulator will take this into consideration and create a highly detailed simulation at the cost of sacrificing details on the other side of the corridor, but that's fine because we don't see any of that. However, there may be some objects the fluid needs to interact with. Here, the algorithm makes sure to increase the resolution so that the particles can correctly flow through the holes of this object. The authors have also published the source code of their technique, so anyone with a bit of programming knowledge can start playing with this amazing piece of work. The world of research is incredibly fast-moving. When you are done with something, you immediately need to jump onto the next project. Two Minute Papers is a series where we slow down a bit and celebrate these wonderful works. We're also trying to show that research is not only for experts, it is for everyone. If you like this series, please make sure to help me spread the word and share the series with your friends so we can all marvel at these beautiful works. Thanks for watching and I'll see you next time.
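To make the adaptivity idea concrete, here is a minimal Python sketch of the decision the simulator keeps making: is enough happening in a cell to justify a finer grid there? This is a simplified 2D uniform-grid version, not the paper's tetrahedral-mesh method, and the threshold and the vortex test field are made up for illustration.

```python
# A minimal sketch of grid adaptivity (not the paper's tetrahedral method):
# refine a 2D grid where the velocity field varies quickly, coarsen elsewhere.
import numpy as np

def refinement_mask(velocity, threshold=0.5):
    """Mark cells whose local velocity variation is large enough to warrant
    a finer grid. `velocity` is an (H, W, 2) array of per-cell velocities."""
    # Gradient magnitude of each velocity component, summed as a rough
    # "how much is happening here" indicator.
    gx = np.gradient(velocity[..., 0])
    gy = np.gradient(velocity[..., 1])
    activity = np.hypot(*gx) + np.hypot(*gy)
    return activity > threshold  # True -> subdivide, False -> keep coarse

# Toy usage: a vortex-like field gets flagged for refinement near its center.
h = w = 64
ys, xs = np.mgrid[0:h, 0:w]
r = np.hypot(xs - w / 2, ys - h / 2).clip(1)[..., None]
v = np.dstack([-(ys - h / 2), xs - w / 2]) / r
mask = refinement_mask(v)
print(f"{mask.mean():.0%} of cells flagged for refinement")
```

In a real adaptive solver this criterion would also account for visibility and distance to obstacles, as the episode describes; the point of the sketch is only the refine-where-it-matters decision.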
[{"start": 0.0, "end": 5.0600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Carlos Jean-Eiffagher."}, {"start": 5.0600000000000005, "end": 9.5, "text": " As we discussed before, simulating the motion of fluids and smoke with a computer program"}, {"start": 9.5, "end": 11.3, "text": " is a very expensive process."}, {"start": 11.3, "end": 15.58, "text": " We have to compute quantities like the velocity and the pressure of a piece of fluid at every"}, {"start": 15.58, "end": 17.3, "text": " given point in space."}, {"start": 17.3, "end": 22.38, "text": " Even though we cannot compute them everywhere, we can place a 3D grid and compute these quantities"}, {"start": 22.38, "end": 27.3, "text": " in the grid points and use mathematical techniques to find out what is exactly happening between"}, {"start": 27.3, "end": 28.54, "text": " these grid points."}, {"start": 28.54, "end": 33.5, "text": " But still, even if we do this, we still have to wait up to days, even for a few seconds"}, {"start": 33.5, "end": 35.06, "text": " of video footage."}, {"start": 35.06, "end": 39.82, "text": " One possible way to alleviate this would be to write an adaptive simulation program."}, {"start": 39.82, "end": 43.46, "text": " Adaptive means that the simulator tries to adapt to the problem at hand."}, {"start": 43.46, "end": 48.099999999999994, "text": " Here it means that it recognizes the regions where it needs to focus a lot of computational"}, {"start": 48.099999999999994, "end": 53.14, "text": " resources on and at the same time it also tries to find regions where it can get away with"}, {"start": 53.14, "end": 55.14, "text": " using less computation."}, {"start": 55.14, "end": 59.46, "text": " Here you can see spheres of different sizes, in regions where there is a lot going on"}, {"start": 59.46, "end": 61.18, "text": " you will see smaller spheres."}, {"start": 61.18, "end": 65.3, "text": " This means that we have a finer grid in this region, therefore we know more about what"}, {"start": 65.3, "end": 66.94, "text": " is exactly happening here."}, {"start": 66.94, "end": 71.42, "text": " In other places you also see larger spheres, meaning that the resolution of our grid is"}, {"start": 71.42, "end": 73.38, "text": " more coarse in these regions."}, {"start": 73.38, "end": 76.94, "text": " This we can get away with only because there is not much happening there."}, {"start": 76.94, "end": 80.74000000000001, "text": " Essentially, we focus our resources to regions that really require it."}, {"start": 80.74000000000001, "end": 83.74000000000001, "text": " For instance, where there are lots of small scale details."}, {"start": 83.74, "end": 88.17999999999999, "text": " The spheres are only used for the sake of visualization, the actual output of the simulator"}, {"start": 88.17999999999999, "end": 93.58, "text": " looks like this."}, {"start": 93.58, "end": 97.58, "text": " It also takes into consideration which regions we are currently looking at."}, {"start": 97.58, "end": 103.17999999999999, "text": " Here we are watching one side of the corridor, where the simulator will take this into consideration"}, {"start": 103.17999999999999, "end": 107.97999999999999, "text": " and create a highly detailed simulation at the cost of sacrificing details on the other"}, {"start": 107.98, "end": 114.26, "text": " side of the corridor, but that's fine because we don't see any of that."}, {"start": 114.26, "end": 118.66, "text": " However, there may be some objects the fluid needs to 
interact with."}, {"start": 118.66, "end": 123.58, "text": " Here, the algorithm makes sure to increase the resolution so that the particles can correctly"}, {"start": 123.58, "end": 127.18, "text": " flow through the holes of this object."}, {"start": 127.18, "end": 131.06, "text": " The authors have also published the source code of their techniques, so anyone with a bit"}, {"start": 131.06, "end": 135.58, "text": " of programming knowledge can start playing with this amazing piece of work."}, {"start": 135.58, "end": 138.38000000000002, "text": " The word of research is incredibly fast moving."}, {"start": 138.38000000000002, "end": 142.46, "text": " When you are done with something, you immediately need to jump onto the next project."}, {"start": 142.46, "end": 146.98000000000002, "text": " Two minute papers is a series where we slow down a bit and celebrate these wonderful"}, {"start": 146.98000000000002, "end": 147.98000000000002, "text": " works."}, {"start": 147.98000000000002, "end": 152.06, "text": " We're also trying to show that research is not only for experts, it is for everyone."}, {"start": 152.06, "end": 156.14000000000001, "text": " If you like this series, please make sure to help me spread the word and share the series"}, {"start": 156.14000000000001, "end": 159.5, "text": " to your friends so we can all marvel at these beautiful works."}, {"start": 159.5, "end": 166.5, "text": " Thanks for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=L7MOeQw47BM
Manipulating Photorealistic Renderings | Two Minute Papers #9
Photorealistic rendering (also called global illumination) enables us to see how digital objects would look in real life. It is an amazingly powerful tool in the hands of a professional artist, who can create breathtaking images or animations with it. However, for the longest time, artists didn't use it in the movie industry because it did not offer great artistic freedom - after all, it works according to the laws of physics, which are exact. This piece of work enables us to apply artistic edits to photorealistic renderings easily and intuitively. I believe this one has the potential to single-handedly change the landscape of photorealistic rendering on a production scale. ______________________ VFX tricks with photorealistic rendering in Game of Thrones: https://www.youtube.com/watch?v=C56t6ieVxBs https://www.youtube.com/watch?v=YJDsl4Kl8G4 The paper "Path-Space Manipulation of Physically-Based Light Transport" is available here: https://cg.ivd.kit.edu/english/PSMPBLT.php Disclaimer: I was not part of this research project, I am merely providing commentary on this work. I held a course on photorealistic rendering at the Technical University of Vienna. Here you can learn how the physics of light works and to write programs like this: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi Lightrig: http://lightrig.de/ Function 2015 demoparty: http://2015.function.hu/ Scene credits: Last Light - J the Ninja (Jason Clarke) - also used as the thumbnail background Italian Style Still Life - Bhavin Solanki Interior scene - EnzoR Klein Bottle - BravoZulu Audi R8 - barryangus SL65 "Black edition" - zuzzi Music: "Do It Right" by Jingle Punks The thumbnail background was created by Jason Clarke. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Photorealistic rendering is a really exciting field in computer graphics. It works the following way. We use a piece of modeling software to create the geometry of objects, then we assign material models to them. After that, we would like to know how these objects would look in real life. To achieve this, we use computer programs that simulate the behavior of light. So this is how the scene would look with photorealistic rendering. If it is possible to create digital objects that look as if they were real, then artists have an extremely powerful tool they can create wonderful images and animations with. It is not a surprise that we see photorealistically rendered cities next to real actors in many feature-length movies nowadays. Game of Thrones is also a great example of this. I've linked two jaw-dropping examples in the description box below. Take a look. The automotive industry also has lots of ads where people don't even know that they are not looking at reality, but a computer simulation. But in the movie industry, the Pixar people were reluctant to use photorealistic rendering for the longest time, and it is because it constrained their artistic freedom. One classical example is when the artist says, I want those shadows to be brighter. Then the engineer says, okay, let's put brighter light sources in the scene. But then the artist goes, no, don't ruin the rest of the scene, just change those shadows. This is not possible: if you change something, everything else in the surroundings changes. This is how physics works, but artists did not want any of that. But now things are changing. With this piece of work, you can both use photorealistic rendering and manipulate the results according to your artistic vision. For instance, the reflection of the car in the mirror here doesn't look really great. In order to overcome this, we could rotate the mirror to have a better looking reflection, but we want it to stay where it is now. So we'll just pretend as if we rotated it, so the reflection looks different, but everything else remains the same. Or we can change the angle of the incoming sunlight, but we don't want to move the sun itself to a different place, because it would change the entire scene. The artist wants only this one effect to change, and she is now able to do that, which is spectacular. Removing the green splotch from the wall is now also not much of a problem. And also, if I don't like that only half of the reflection of the sphere is visible on the face of the bunny, I could move the entire sphere. But I don't want to. I just want to grab the reflection and move it without changing anything else in the scene. Great! It has a much better cinematic look now. This is an amazing piece of work, and what's even better, these guys didn't only publish the paper, but they went all the way and founded a startup on top of it. Way to go! The next episode of Two Minute Papers will be very slightly delayed, because I will be holding a one hour seminar at an event soon, and I'm trying to make it the best I can. My apologies for the delay. Hmm, this one got a bit longer; it's a bit more like three minute papers. But I really hope that you liked it. Thanks for watching, and if you liked this series, become a fellow scholar by hitting that subscribe button. I am looking forward to having you in our growing group of scholars. Thanks, and I'll see you next time.
[{"start": 0.0, "end": 5.68, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1doz Zsolnai-Feh\u00e9r."}, {"start": 5.68, "end": 10.36, "text": " Photorealistic rendering is a really exciting field in computer graphics."}, {"start": 10.36, "end": 11.76, "text": " It works the following way."}, {"start": 11.76, "end": 16.76, "text": " We use a piece of modeling software to create the geometry of objects, then we assign material"}, {"start": 16.76, "end": 17.96, "text": " models to them."}, {"start": 17.96, "end": 22.04, "text": " After that, we would like to know how these objects would look like in real life."}, {"start": 22.04, "end": 26.76, "text": " To achieve this, we use computer programs that simulate the behavior of light."}, {"start": 26.76, "end": 31.520000000000003, "text": " So this is how the scene would look like with photorealistic rendering."}, {"start": 31.520000000000003, "end": 36.0, "text": " If it is possible to create digital objects that look like if they were real, then artists"}, {"start": 36.0, "end": 41.480000000000004, "text": " have an extremely powerful tool they can create wonderful images and animations with."}, {"start": 41.480000000000004, "end": 46.36, "text": " It is not a surprise that we see photorealistic rendered cities next to real actors in many"}, {"start": 46.36, "end": 48.36, "text": " feature-length movies nowadays."}, {"start": 48.36, "end": 51.36, "text": " Game of Thrones is also a great example of this."}, {"start": 51.36, "end": 55.28, "text": " I've linked two jaw-dropping examples in the description box below."}, {"start": 55.28, "end": 56.52, "text": " Take a look."}, {"start": 56.52, "end": 61.080000000000005, "text": " The automotive industry also has lots of ads where people don't even know that they"}, {"start": 61.080000000000005, "end": 65.0, "text": " are not looking at reality, but a computer simulation."}, {"start": 65.0, "end": 69.48, "text": " But in the movie industry, the Pixar people were reluctant to use photorealistic rendering"}, {"start": 69.48, "end": 75.0, "text": " for the longest time, and it is because it constrained their artistic freedom."}, {"start": 75.0, "end": 79.96000000000001, "text": " One classical example is when the artist says that I want those shadows to be brighter."}, {"start": 79.96000000000001, "end": 84.68, "text": " Then the engineer says, okay, let's put brighter light sources in the scene."}, {"start": 84.68, "end": 90.24000000000001, "text": " But then the artist goes no, don't ruin the rest of the scene, just change those shadows."}, {"start": 90.24000000000001, "end": 95.16000000000001, "text": " It is not possible if you change something everything else in the surroundings changes."}, {"start": 95.16000000000001, "end": 99.80000000000001, "text": " This is how physics works, but artists did not want any of that."}, {"start": 99.80000000000001, "end": 102.36000000000001, "text": " But now things are changing."}, {"start": 102.36000000000001, "end": 108.16000000000001, "text": " With this piece of work, you can both use photorealistic rendering and manipulate the results according"}, {"start": 108.16000000000001, "end": 110.32000000000001, "text": " to your artist's vision."}, {"start": 110.32, "end": 115.0, "text": " For instance, the reflection of the car in the mirror here doesn't look really great."}, {"start": 115.0, "end": 120.0, "text": " In order to overcome this, we could rotate the mirror to have a better looking reflection,"}, {"start": 120.0, "end": 122.24, 
"text": " but we wanted to stay where it is now."}, {"start": 122.24, "end": 126.88, "text": " So we'll just pretend as if we rotated it so the reflection looks different, but everything"}, {"start": 126.88, "end": 132.16, "text": " else remains the same."}, {"start": 132.16, "end": 136.79999999999998, "text": " Or we can change the angle of the incoming sunlight, but we don't want to put the sun"}, {"start": 136.8, "end": 140.60000000000002, "text": " itself to a different place, because it would change the entire scene."}, {"start": 140.60000000000002, "end": 147.52, "text": " The artist wants only this one effect to change, and she is now able to do that, which is spectacular."}, {"start": 147.52, "end": 156.36, "text": " Removing the green splotch from the wall is now also not much of a problem."}, {"start": 156.36, "end": 160.76000000000002, "text": " And also, if I don't like that only half of the reflection of the sphere is visible on"}, {"start": 160.76000000000002, "end": 164.24, "text": " the face of the bunny, I could move the entire sphere."}, {"start": 164.24, "end": 165.48000000000002, "text": " But I don't want to."}, {"start": 165.48, "end": 169.95999999999998, "text": " I just want to grab the reflection and move it without changing anything else in the"}, {"start": 169.95999999999998, "end": 170.95999999999998, "text": " scene."}, {"start": 170.95999999999998, "end": 171.95999999999998, "text": " Great!"}, {"start": 171.95999999999998, "end": 174.83999999999997, "text": " It has a much better cinematic look now."}, {"start": 174.83999999999997, "end": 180.16, "text": " This is an amazing piece of work, and what's even better, these guys didn't only publish"}, {"start": 180.16, "end": 187.0, "text": " the paper, but they went all the way and found the startup on top of it."}, {"start": 187.0, "end": 188.0, "text": " Way to go!"}, {"start": 188.0, "end": 192.56, "text": " The next episode of Two Minute Papers will be very slightly delayed, because I will be holding"}, {"start": 192.56, "end": 197.6, "text": " a one hour seminar at an event soon, and I'm trying to make it the best I can."}, {"start": 197.6, "end": 199.84, "text": " My apologies for the delay."}, {"start": 199.84, "end": 204.76, "text": " Hmm, this one got a bit longer, it's a bit more like three minute papers."}, {"start": 204.76, "end": 206.72, "text": " But I really hope that you liked it."}, {"start": 206.72, "end": 211.16, "text": " Thanks for watching, and if you liked this series, become a fellow scholar by hitting that"}, {"start": 211.16, "end": 212.16, "text": " subscribe button."}, {"start": 212.16, "end": 215.48000000000002, "text": " I am looking forward to have you in our growing group of scholars."}, {"start": 215.48, "end": 222.48, "text": " Thanks, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=kQ2bqz3HPJE
Digital Creatures Learn To Walk | Two Minute Papers #8
In this episode, we are going to talk about computer animation, animating bipeds in particular. If we have the geometry of a creature, we need to specify the bones, the muscle routings and the muscle activations to make them able to walk. Depending on the body proportions and types, it may require quite a bit of trial and error to build muscle layouts so the creature doesn't collapse. Making them walk is even more difficult! This piece of work not only makes it happen for a variety of bipedal creatures, but the results are robust for a variety of target walking speeds, uneven terrain and other unpleasant difficulties. _________________________________ The paper "Flexible Muscle-Based Locomotion for Bipedal Creatures" is available here: http://www.goatstream.com/research/papers/SA2013/ Disclaimer: I was not part of this research project, I am merely providing commentary on this work. Music: "Daisy Dukes" by Silent Partner Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. First of all, thanks so much for watching Two Minute Papers. You Fellow Scholars have been an amazing and supportive audience. We just started, but the series already has a steady following, and I'm super excited to see that. It is also great that a helpful and respectful community has formed in the comment section. It's really cool to discuss these results and possibly come up with cool new ideas together. In this episode we're going to set foot in computer animation. Imagine that we have built bipedal creatures in a modeling program. We have the geometry down, but it is not nearly enough to animate them in a way that looks physically plausible. We have to go one step beyond and define the bones and the routing of muscles inside their bodies. If we want them to walk, we also need to specify how these muscles should be controlled during this process. This work presents a novel algorithm that takes many tries at building new muscle routings, progressively improving the results. It also deals with the control of all of these muscles. For instance, one quickly discovers that the neck muscles cannot move arbitrarily, or they will fail to support the head and the whole character will collapse in a very amusing manner. When talking about things like this, scientists often use the term degrees of freedom to define the number of independent ways a dynamic system can move. Building a system that is stable and uses a minimal amount of energy for locomotion is incredibly challenging. You can see that even the most minuscule change will collapse a system that previously worked perfectly. The fact that we can walk and move around unharmed can be attributed to the unbelievable efficiency of evolution. The difficulty of this problem is further magnified by the fact that many possible body compositions and setups exist, many of which are quite challenging to hold together while moving. And even if we solve this problem, walking at a given target speed is one thing. What about higher target speeds? In this work, the resulting muscle setups can deal with different target speeds, and even uneven terrain. And hmm, other unpleasant difficulties. Thanks for watching and I'll see you next time.
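The "takes many tries, progressively improving the results" loop can be sketched as a stochastic search over a vector of muscle parameters. The paper uses a far more capable evolutionary optimizer; this toy hill climber, with a hypothetical `simulate_walk` standing in for the full physics simulation, only illustrates the shape of that loop.

```python
# A toy sketch of the "try, evaluate, keep the best" loop the episode
# describes. `simulate_walk` is a hypothetical stand-in for a physics
# simulation scoring a muscle routing + activation parameter vector.
import numpy as np

rng = np.random.default_rng(0)

def simulate_walk(params):
    # Hypothetical fitness (e.g., distance walked before falling). Faked
    # here with a smooth function so the sketch runs on its own.
    return -np.sum((params - 0.3) ** 2)

best = rng.uniform(0, 1, size=20)          # initial muscle parameters
best_score = simulate_walk(best)
for _ in range(2000):                      # many tries, progressively improving
    candidate = best + rng.normal(0, 0.05, size=best.shape)
    score = simulate_walk(candidate)
    if score > best_score:                 # keep mutations that walk farther
        best, best_score = candidate, score
print(f"best score: {best_score:.4f}")
```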
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir."}, {"start": 5.0, "end": 8.4, "text": " First of all, thanks so much for watching Two Minute Papers."}, {"start": 8.4, "end": 11.8, "text": " You Fellow Scholars have been an amazing and supportive audience."}, {"start": 11.8, "end": 16.240000000000002, "text": " We just started, but the series already has a steady following and I'm super excited"}, {"start": 16.240000000000002, "end": 17.240000000000002, "text": " to see that."}, {"start": 17.240000000000002, "end": 21.92, "text": " It is also great that the helpful and respectful community has formed in the comment section."}, {"start": 21.92, "end": 27.240000000000002, "text": " It's really cool to discuss these results and possibly come up with cool new ideas together."}, {"start": 27.24, "end": 30.68, "text": " In this episode we're going to set foot in computer animation."}, {"start": 30.68, "end": 34.56, "text": " Imagine that we have built bipedal creatures in a modeling program."}, {"start": 34.56, "end": 38.92, "text": " We have the geometry down, but it is not nearly enough to animate them in a way that looks"}, {"start": 38.92, "end": 40.239999999999995, "text": " physically plausible."}, {"start": 40.239999999999995, "end": 44.68, "text": " We have to go one step beyond and define the bones and the rooting of muscles inside"}, {"start": 44.68, "end": 45.68, "text": " their bodies."}, {"start": 45.68, "end": 49.92, "text": " If we want them to walk, we also need to specify how these muscles should be controlled"}, {"start": 49.92, "end": 51.8, "text": " during this process."}, {"start": 51.8, "end": 56.32, "text": " This work presents a novel algorithm that takes many tries to build a new muscle rooting"}, {"start": 56.32, "end": 58.480000000000004, "text": " and progressively improving the results."}, {"start": 58.480000000000004, "end": 61.480000000000004, "text": " It also deals with the control of all of these muscles."}, {"start": 61.480000000000004, "end": 66.32, "text": " For instance, one quickly needs to discover that the neck muscles cannot move arbitrarily"}, {"start": 66.32, "end": 70.56, "text": " or they will fail to support the head and the whole character will collapse in a very"}, {"start": 70.56, "end": 71.8, "text": " amusing manner."}, {"start": 71.8, "end": 76.88, "text": " When talking about things like this, scientists often use the term decrease of freedom to"}, {"start": 76.88, "end": 81.0, "text": " define the number of independent ways a dynamic system can move."}, {"start": 81.0, "end": 85.28, "text": " Building a system that is stable and uses a minimal amount of energy for locomotion"}, {"start": 85.28, "end": 87.4, "text": " is incredibly challenging."}, {"start": 87.4, "end": 91.32000000000001, "text": " You can see that even the most miniscule change will collapse a system that previously"}, {"start": 91.32000000000001, "end": 92.96000000000001, "text": " worked perfectly."}, {"start": 92.96000000000001, "end": 97.84, "text": " The fact that we can walk and move around unharmed can be attributed to the unbelievable"}, {"start": 97.84, "end": 99.6, "text": " efficiency of evolution."}, {"start": 99.6, "end": 104.72, "text": " The difficulty of this problem is further magnified by the fact that many possible body compositions"}, {"start": 104.72, "end": 110.0, "text": " and setups exist, many of which are quite challenging to hold together while moving."}, {"start": 110.0, "end": 114.92, 
"text": " And even if we solve this problem, walking at a given target speed is one thing."}, {"start": 114.92, "end": 116.8, "text": " What about higher target speeds?"}, {"start": 116.8, "end": 130.84, "text": " In this work, the resulting muscle setups can deal with different target speeds,"}, {"start": 130.84, "end": 144.2, "text": " and even terrain."}, {"start": 144.2, "end": 152.24, "text": " And hmm, other unpleasant difficulties."}, {"start": 152.24, "end": 178.92000000000002, "text": " Thanks for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=r52zC2VpMng
Announcing LuxRender 1.5
Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz LuxRender is a completely free, open source, physically based renderer with many contributors led by Jean-Philippe Grimaldi. LuxRender version 1.5 introduces the following features: - microkernel architecture for significantly faster rendering, - biased path tracing engine (including tiled rendering, radiance clamping and many more), - adaptive rendering, - Intel Embree-based accelerator, - Laser light source type - arbitrary clipping planes, - pointiness option for materials, - volume priority system, - hair strand primitives, - up to 16 times faster exporting for meshes - new 3Ds max plugin, - volumetric emission system. A full feature list announcement is available here: http://www.luxrender.net/forum/viewtopic.php?f=12&t=12363 A video of my 7 favorite LuxRender features: https://www.youtube.com/watch?v=LD5YS8-0Rkg The mentioned scene repository is available here: http://www.luxrender.net/wiki/Show-off_pack Official website: http://www.luxrender.net/ Forums: http://www.luxrender.net/forum Gallery: http://www.luxrender.net/gallery Infographics: https://www.cg.tuwien.ac.at/~zsolnai/wp/wp-content/uploads/2014/01/luxrender-infographics.jpg Reddit: https://www.reddit.com/r/luxrender/ Scene credits: Splash Screen - HAL 9000 Floor, bowl and the fish models and textures: ColeHarris, plant models and textures: Bhavin Solanki, HDRI: Maxime Roz Mr. Snippy (microkernel image) - Simon Wendsche Cornell Box - Aaron Hill Interior - Walter Zatta Simple Morning - Chris Foster Jade Dragon - Peter Sandbacka and Ian Blew Laser Bear - J the Ninja Dojo - Vlad Miller Heterogeneous medium video - Simon Wendsche Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Music: "Pachabelly" by Huma-Huma. Also thanks to Hapoofesgeli for his help with some of the images! Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Ladies and gentlemen, this is Károly Zsolnai-Fehér, and I am very excited to show you the new features of LuxRender version 1.5. LuxRender is a physically based renderer that generates photorealistic images from scenes that are created in 3D modeler programs. Here you can see the geometry of a scene in the modeler program. After we have the geometry of objects, we assign material models to them. And here you can see what the photorealistic renderer does with it. At LuxRender we are privileged to have so many incredibly skilled artists using our system and creating stunning scenes such as these ones. So, about the new release: there is just so much meat in this one. Very excited to tell you all about it. Ready? Let's get started. First off, LuxRender now uses a microkernel-based architecture that can compile and render super high resolution images like this in about 5 minutes. The resolution of this image is higher than 4K. It is so detailed that even if we zoom in this much, it still looks remarkably crisp. The new biased path tracing engine has a variety of new features. Examples include tiled rendering, radiance clamping to reduce firefly noise, visibility for indirect rays and many others. In short, the new biased path engine allows fine control over the sampling process, giving you a more powerful and flexible algorithm. LuxRender now supports adaptive rendering, which means that it will automatically find and concentrate on noisy regions of the image. It won't waste your resources on regions of the image that are already done. The new Intel Embree-based accelerator is between 20 and 50% faster than the previous technique for building acceleration structures. This helps the renderer minimize the amount of time spent intersecting rays of light against geometry. LuxRender now natively supports a new light source type called laser light. No more hacking with tubes and IES profiles. You can now create unique artistic effects by slicing scenes in half with the new arbitrary clipping plane feature. The new pointiness feature allows using surface curvature information in materials and textures. This powerful mechanic can be used to create worn wooden edges, moss in rock crevices and many other sophisticated effects. With the new volume priority system, it is finally really easy to correctly and intuitively render overlapping volumes. Hair strand primitives are now supported. Look at these incredible examples. There's going to be lots of fun to be had with this one. Exporting meshes is now up to 16 times faster. We have a completely new LuxRender plugin for 3D Studio Max. It's still early in development, but it's definitely worth checking out. Let us know what you think about it. And the icing on the cake: we have a new volumetric emission system that supports fire and many other kinds of glowing volumes. Here in the video you see nothing less than a textured heterogeneous volume with animated colors. Love the demoscene look on this one. And please note that this is not everything. In fact, not even close to what LuxRender 1.5 has to offer. I have put the forum post with all the changes and the new features in the description box. Check it out. We invite you to come and give it a try. We also have a scene repository where you can download a collection of really tasty scenes to get you started. And if you're stuck, or if there's anything we can help you with, just jump on the forums and let us know. We'll be more than happy to have you as a member of our friendly community. Or if you have created something great with LuxRender, let us know so we can marvel at your work together. Thanks for watching and I'll see you in the forums. Cheers!
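Among the listed features, radiance clamping is the easiest to illustrate: cap each sample's contribution so that a single lucky, extremely bright light path (a "firefly") cannot dominate a pixel. The sketch below is only the general idea in a few lines of Python; the clamp value and sample data are illustrative, and this is not LuxRender's actual API.

```python
# A minimal sketch of radiance clamping, as used in biased path tracing
# engines: clamp outlier samples before averaging a pixel's estimate.
import numpy as np

def clamped_pixel(samples, max_radiance=10.0):
    """Average per-pixel radiance samples, clamping outliers first."""
    return np.minimum(samples, max_radiance).mean()

rng = np.random.default_rng(1)
samples = rng.exponential(0.5, size=256)      # typical path contributions
samples[3] = 5000.0                           # one firefly path
print("unclamped:", samples.mean())           # ruined by the outlier
print("clamped:  ", clamped_pixel(samples))   # biased, but far less noisy
```

Clamping discards some energy, so the result is no longer an unbiased estimate of the true radiance; that trade-off is exactly why this feature lives in the biased engine.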
[{"start": 0.0, "end": 8.0, "text": " Ladies and gentlemen, this is Karo Yuzhou and I Fahir and I am very excited to show you the new features of Laxrender version 1.5."}, {"start": 8.0, "end": 15.0, "text": " Laxrender is a physically based renderer program that generates photorealistic images from scenes that are created in 3D modeler programs."}, {"start": 15.0, "end": 23.0, "text": " Here you can see the geometry of a scene in the modeler program. After we have the geometry of objects, we assign material models to them."}, {"start": 23.0, "end": 27.0, "text": " And here you can see what the photorealistic renderer program does with it."}, {"start": 27.0, "end": 35.0, "text": " At Laxrender we are privileged to have so many incredibly skilled artists using our system and creating stunning scenes such as these ones."}, {"start": 35.0, "end": 40.0, "text": " So about the new release, there is just so much meat in this one."}, {"start": 40.0, "end": 45.0, "text": " Very excited to tell you all about it. Ready? Let's get started."}, {"start": 45.0, "end": 54.0, "text": " First off, Laxrender now uses a micro-coronal based architecture that can compile and render super high resolution images like this in about 5 minutes."}, {"start": 54.0, "end": 63.0, "text": " The resolution of this image is higher than 4K. It is so detailed that even if we zoom in this much, it still looks remarkably crisp."}, {"start": 63.0, "end": 75.0, "text": " The new BIOS Path Trace Rangin has a variety of new features. Examples include tile rendering, radiance clamping to reduce firefly noise, visibility for indirect rays and many others."}, {"start": 75.0, "end": 84.0, "text": " In short, the new BIOS Path Engine allows fine control over the sampling process, giving you a more powerful and flexible algorithm."}, {"start": 84.0, "end": 91.0, "text": " Laxrender now supports adaptive rendering, which means that it will automatically find and concentrate on noisy regions of the image."}, {"start": 91.0, "end": 95.0, "text": " It won't waste your resources on regions of the image that are already done."}, {"start": 95.0, "end": 102.0, "text": " The no-intel-embry-based accelerator is between 20-50% faster than the previous technique for building acceleration structures."}, {"start": 102.0, "end": 108.0, "text": " This helps the renderer to minimize the amount of time spent with intersecting rays of light against geometry."}, {"start": 108.0, "end": 116.0, "text": " Laxrender now natively supports a new light source type called laser light. No more hacking with tubes and IES profiles."}, {"start": 116.0, "end": 123.0, "text": " You can now create unique artistic effects by slicing scenes in half with the new arbitrary clipping plane feature."}, {"start": 123.0, "end": 136.0, "text": " The new point in a feature allows using surface curvature information in materials and textures. This powerful mechanic can be used to create worn wooden edges, moss in rock crevices and many others sophisticated effects."}, {"start": 136.0, "end": 143.0, "text": " With the new volume priority system, it is finally really easy to correctly and intuitively render overlapping volumes."}, {"start": 143.0, "end": 150.0, "text": " Hair strand primitives are now supported. Look at these incredible examples. 
There's going to be lots of fun to be had with this one."}, {"start": 150.0, "end": 154.0, "text": " Exporting meshes is now up to 16 times faster."}, {"start": 154.0, "end": 164.0, "text": " We have a completely new Laxrender plugin for 3D studio max. It's still early in development but it's definitely worth checking out. Let us know what you think about it."}, {"start": 164.0, "end": 171.0, "text": " And the icing on the cake, we have a new volumetric emission system that supports fire and many other kinds of glowing volumes."}, {"start": 171.0, "end": 179.0, "text": " Here in the video you see nothing less than a textured heterogeneous volume with animated colors. Love the demos in look on this one."}, {"start": 179.0, "end": 185.0, "text": " And please note that this is not everything. In fact, not even close to what Laxrender 1.5 has to offer."}, {"start": 185.0, "end": 192.0, "text": " I have put the forum post with all the changes and the new features in the description box. Check it out. We invite you to come and give it a try."}, {"start": 192.0, "end": 198.0, "text": " We also have a seemingly positive where you can download a collection of really tasty scenes to get you started."}, {"start": 198.0, "end": 203.0, "text": " And if you're stuck or if there's anything we can help you with, just jump on the forums and let us know."}, {"start": 203.0, "end": 212.0, "text": " We'll be more than happy to have you as a member of our friendly community. Or if you have created something great with Laxrender, let us know so we can marvel at your work together."}, {"start": 212.0, "end": 241.0, "text": " Thanks for watching and I'll see you in the forums. Cheers!"}]
Two Minute Papers
https://www.youtube.com/watch?v=kLnG073NYtw
Hydrographic Printing | Two Minute Papers #7
3D printing is a technique to create digital objects in real life. This technology is mostly focused on reproducing the digital geometry itself - colored patterns (textures) still remain a challenge, and we only have very rudimentary technology to do that. Hydrographic printing on 3D surfaces is a really simple technique: you place a film in water, use a chemical activator spray on it, and shove the object in the water. However, since these objects start stretching the film, the technique is not very accurate, and it only helps you put repetitive patterns on these objects. Computational Hydrographic Printing is a technique that simulates all of these physical forces that are exerted on the film when your desired object is immersed into the water. Then, it creates a new image map taking all of these distortions into account, and this image you can print with your home inkjet printer. The results will be really accurate, close to indistinguishable from the digitally designed object. The paper "Computational Hydrographic Printing" is available here: http://www.cs.columbia.edu/~cxz/publications/hydrographics.pdf Disclaimer: I was not part of this research project, I am merely providing commentary on this work. The splash screen background was taken from "Computational Hydrographic Printing" by Zheng et al. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Flickr: Wonderlane Link: https://flic.kr/p/atCLXr Music: Soul Groove by Audionautix - it is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. So many of you were sharing the previous episode that, for the first time, I just couldn't keep up and write a kind message to every single one of you. But I'm trying my best. It really means a lot, and again, thanks so much for sharing. I am so delighted to see that people are coming in, checking out the series and expressing that they liked it. The feedback has been absolutely insane. You Fellow Scholars seem to love the show quite a bit, and it really makes my day. It's also fantastic to see that there is a hunger out there for science. People want to know more about what is happening inside the labs. That's really amazing. Thank you, and let us continue together on our scholarly journey. 3D printing is a technique to create digital objects in real life. It has come a long way in the last few years. There has been excellent work done on designing deformable characters, mechanical characters, and characters of varying elasticity. You can even scan your teeth and print copies of them. And these are just a few examples of a multitude of things that you can do with 3D printing. However, this technology is mostly focused on the geometry itself. Colored patterns, which people call textures, still remain a challenge, and we only have very rudimentary technology to do that. So check this out. This is going to be an immersive experience. Hydrographic printing on 3D surfaces is a really simple technique. You place a film in water, use a chemical activator spray on it, and shove the object in the water. So far so good. However, since these objects start stretching the film, the technique is not very accurate. It only helps you put repetitive patterns on these objects. Computational hydrographic printing is a technique that simulates all of these physical forces that are exerted on the film when your desired object is immersed into the water. Then it creates a new image map taking all of these distortions into account, and this image you can print with your home inkjet printer. The results will be really accurate, close to indistinguishable from the digitally designed object. The technique also supports multiple immersions, which helps putting textures on a non-planar object with multiple sides to be colored. So as you can see, 3D printing is improving at a rapid pace; there's tons of great research going on in this field. It is a technology that is going to change the way we live our daily lives in ways that we cannot even imagine yet. And what would you print with this? Do you have any crazy ideas? Let me know in the comment section. That's it for now, thanks for watching and I'll see you next time.
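The key trick, pre-distorting the printed image so that the physics of dipping undoes the distortion, can be sketched as an inverse texture lookup: if the simulation tells us where each film point ends up, we print at each film point the color that should land there. The radial stretch field below is a made-up stand-in for the film deformation the paper actually simulates.

```python
# A toy sketch of pre-distortion for hydrographic printing. The forward
# distortion here (radial stretch as the object is dipped) is hypothetical;
# the paper computes the real one by simulating the film physics.
import numpy as np

def predistort(image, stretch=0.35):
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    # Film point p lands on the object at f(p); so the film must carry
    # image(f(p)). Sampling the source at the forward-mapped position
    # pre-inverts the distortion in the printed result.
    r = np.hypot(ys - cy, xs - cx) / max(cy, cx)
    scale = 1.0 + stretch * r ** 2
    sy = np.clip(cy + (ys - cy) * scale, 0, h - 1).astype(int)
    sx = np.clip(cx + (xs - cx) * scale, 0, w - 1).astype(int)
    return image[sy, sx]

# Usage: print this pre-warped checkerboard; dipping "undoes" the warp.
checker = (np.add.outer(np.arange(128) // 16, np.arange(128) // 16) % 2) * 255
printed_film = predistort(checker)
```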
[{"start": 0.0, "end": 4.84, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karoi Zsolnai-Fair."}, {"start": 4.84, "end": 8.92, "text": " So many of you were sharing the previous episode, for the first time I just couldn't keep"}, {"start": 8.92, "end": 12.16, "text": " up and write a kind message to every single one of you."}, {"start": 12.16, "end": 13.64, "text": " But I'm trying my best."}, {"start": 13.64, "end": 17.76, "text": " It really means a lot and again just thanks so much for sharing."}, {"start": 17.76, "end": 22.2, "text": " So delighted to see that people are coming in, checking out the series and expressing that"}, {"start": 22.2, "end": 23.2, "text": " they liked it."}, {"start": 23.2, "end": 26.28, "text": " The feedback has been absolutely insane."}, {"start": 26.28, "end": 30.400000000000002, "text": " Two Fellow Scholars seem to love the show quite a bit and it really makes my day."}, {"start": 30.400000000000002, "end": 34.760000000000005, "text": " It's also fantastic to see that there is a hunger out there for science."}, {"start": 34.760000000000005, "end": 38.0, "text": " People want to know more on what is happening inside the labs."}, {"start": 38.0, "end": 39.52, "text": " That's really amazing."}, {"start": 39.52, "end": 43.400000000000006, "text": " Thank you and let us continue together on our scholarly journey."}, {"start": 43.400000000000006, "end": 47.2, "text": " 3D printing is a technique to create digital objects in real life."}, {"start": 47.2, "end": 49.72, "text": " It has come a long way in the last few years."}, {"start": 49.72, "end": 56.24, "text": " There has been excellent work done on designing deformable characters, mechanical characters,"}, {"start": 56.24, "end": 59.68, "text": " and characters of varying elasticity."}, {"start": 59.68, "end": 62.6, "text": " You can even scan your teeth and print copies of them."}, {"start": 62.6, "end": 67.36, "text": " And these are just a few examples of a multitude of things that you can do with 3D printing."}, {"start": 67.36, "end": 71.8, "text": " However, this technology is mostly focused on the geometry itself."}, {"start": 71.8, "end": 76.32000000000001, "text": " Colored patterns that people call textures still remain so challenge and we only have"}, {"start": 76.32000000000001, "end": 78.88, "text": " very rudimentary technology to do that."}, {"start": 78.88, "end": 79.72, "text": " So check this out."}, {"start": 79.72, "end": 83.08, "text": " This is going to be an immersive experience."}, {"start": 83.08, "end": 86.44, "text": " Traffic printing on 3D surfaces is a really simple technique."}, {"start": 86.44, "end": 91.16, "text": " You place a film in water, use a chemical activator spray on it, and shove the object in the"}, {"start": 91.16, "end": 92.16, "text": " water."}, {"start": 92.16, "end": 93.16, "text": " So far so good."}, {"start": 93.16, "end": 98.64, "text": " However, since these objects start stretching the film, the technique is not very accurate."}, {"start": 98.64, "end": 107.8, "text": " It only helps you putting repetitive patterns on these objects."}, {"start": 107.8, "end": 112.12, "text": " Computational hydrographic printing is a technique that simulates all of these physical forces"}, {"start": 112.12, "end": 116.84, "text": " that are exerted on the film when your desired object is immersed into the water."}, {"start": 116.84, "end": 121.16000000000001, "text": " Then it creates a no-image map taking all of these distortions into account and 
this"}, {"start": 121.16000000000001, "end": 124.08000000000001, "text": " image you can print with your home inject filter."}, {"start": 124.08000000000001, "end": 128.88, "text": " The results will be really accurate, close to indistinguishable from the digitally designed"}, {"start": 128.88, "end": 136.88, "text": " object."}, {"start": 136.88, "end": 141.68, "text": " The technique also supports multiple emotions that helps putting textures on a non-planar"}, {"start": 141.68, "end": 145.20000000000002, "text": " object with multiple sides to be colored."}, {"start": 145.20000000000002, "end": 150.36, "text": " So as you can see 3D printing is improving at a rapid pace, there's tons of great research"}, {"start": 150.36, "end": 151.44, "text": " going on in this field."}, {"start": 151.44, "end": 156.08, "text": " It is a technology that is going to change the way we live our daily lives in ways that"}, {"start": 156.08, "end": 158.64000000000001, "text": " we cannot even imagine yet."}, {"start": 158.64000000000001, "end": 160.64000000000001, "text": " And what would you print with this?"}, {"start": 160.64000000000001, "end": 162.60000000000002, "text": " Do you have any crazy ideas?"}, {"start": 162.60000000000002, "end": 164.44, "text": " Let me know in the comment section."}, {"start": 164.44, "end": 191.12, "text": " Thank you for now, thanks for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=-R9bJGNHltQ
Deep Neural Network Learns Van Gogh's Art | Two Minute Papers #6
Artificial neural networks were inspired by the human brain and simulate how neurons behave when they are shown a sensory input (e.g., images, sounds, etc). They are known to be excellent tools for image recognition, and many other problems beyond that - they also excel at weather predictions, breast cancer cell mitosis detection, brain image segmentation and toxicity prediction among many others. Deep learning means that we use an artificial neural network with multiple layers, making it even more powerful for more difficult tasks. This time they have been shown to be apt at reproducing the artistic style of many famous painters, such as Vincent Van Gogh and Pablo Picasso among many others. All the user needs to do is provide an input photograph and a target image from which the artistic style will be learned. ______________________ I promised some links, so here they come! The paper "A Neural Algorithm of Artistic Style" is available here: http://arxiv.org/abs/1508.06576v1 Disclaimer: I was not part of this research project, I am merely providing commentary on this work. Recommended for you - Two Minute Papers episode on Artificial Neural Networks: https://www.youtube.com/watch?v=rCWTOOgVXyE&index=3&list=PLujxSBD-JXgnqDD1n-V30pKtp6Q886x7e Picasso meets Gandalf: http://mashable.com/2015/08/29/computer-photos/ A nice website with many results: https://deepart.io/ More examples with Picasso and some sketches: http://imgur.com/a/jeJB6 Google DeepMind's Deep Q-learning algorithm plays Atari games: https://www.youtube.com/watch?v=V1eYniJ0Rnk The first implementations / source code packages are now available: 1. http://gitxiv.com/posts/jG46ukGod8R7Rdtud/a-neural-algorithm-of-artistic-style 2. https://github.com/kaishengtai/neuralart 3. https://github.com/jcjohnson/neural-style A great read on Deep Dreaming Neural Networks: http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html Many of you have asked for the code. Some people were experimenting with it in the Machine Learning reddit. Check it out: https://www.reddit.com/r/MachineLearning/comments/3imx1m/a_neural_algorithm_of_artistic_style/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Music: Epilog - Ghostpocalypse by Kevin MacLeod is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Source: http://incompetech.com/music/royalty-free/index.html?isrc=USUAN1100666 Artist: http://incompetech.com/ ______________________ Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér, and this paper is as fresh as it gets. As of the making of this video, it has been out for only one day, and I got so excited about it that I wanted to show it to you Fellow Scholars as soon as humanly possible, because you've got to see this. Not so long ago we have been talking about deep neural networks, a technique that was inspired by the human visual system. It enables computers to learn things in a very similar way that a human would. There is a previous Two Minute Papers episode on this; just click on the link in the description box if you've missed it. Neural networks are by no means perfect, so do not worry, don't quit your job, you're good. But some applications are getting out of control. In Google DeepMind's case, it started by learning to play simple computer games and eventually showed us superhuman level plays in some cases. Someone has already run this piece of code and got some pretty sweet results that you can check out; there's a link to it in the description box as well. So, about this paper we have here today, what does this one do? You take photographs with your camera, and you can assign it any painting, and it will apply this painting's artistic style to it. You can add the artistic style of Vincent van Gogh's beautiful Starry Night and get some gorgeous results. Or, if you are looking for a bit more emotional, or may I say disturbed look, you can go for Edvard Munch's The Scream for some stunning results. And of course, the mandatory Picasso. So, as you can see, deep neural networks are capable of amazing things, and we expect even more revolutionary works in the very near future. Thanks for watching and I'll see you next time.
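The method behind these results optimizes an image against two terms: stay close to the photo's content features, and match the painting's style, where style is captured as the correlations (Gram matrices) of CNN feature maps. The sketch below computes that combined loss on random arrays standing in for real activations; an actual implementation would take the activations from several layers of a pretrained network such as VGG-19 and minimize this loss over the generated image's pixels by gradient descent. The weights alpha and beta are illustrative.

```python
# A minimal sketch of the two loss ingredients from "A Neural Algorithm of
# Artistic Style", using random feature maps in place of CNN activations.
import numpy as np

def gram(features):
    """Style is captured by feature correlations: the Gram matrix of a
    (channels, height*width) activation map."""
    f = features.reshape(features.shape[0], -1)
    return f @ f.T / f.shape[1]

def total_loss(gen, content, style, alpha=1.0, beta=1e3):
    content_loss = np.mean((gen - content) ** 2)           # keep the photo's content
    style_loss = np.mean((gram(gen) - gram(style)) ** 2)   # match the painting's texture
    return alpha * content_loss + beta * style_loss

rng = np.random.default_rng(2)
gen, content, style = (rng.normal(size=(64, 32, 32)) for _ in range(3))
print(f"loss: {total_loss(gen, content, style):.2f}")
```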
[{"start": 0.0, "end": 7.44, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fair and this paper is as fresh as it gets."}, {"start": 7.44, "end": 19.0, "text": " As of the making of this video, it has been out for only one day and I got so excited about it that I wanted to show it to you Fellow Scholars as soon as humanly possible because you've got to see this."}, {"start": 19.0, "end": 26.0, "text": " Not so long ago we have been talking about deep neural networks, a technique that was inspired by the human visual system."}, {"start": 26.0, "end": 30.0, "text": " It enables computers to learn things in a very similar way that a human would."}, {"start": 30.0, "end": 36.0, "text": " There is a previous two-minute papers episode on this, just click on the link in the description box if you've missed it."}, {"start": 36.0, "end": 42.0, "text": " Neural networks are by no means perfect, so do not worry, don't quit your job, you're good."}, {"start": 42.0, "end": 45.0, "text": " But some applications are getting out of control."}, {"start": 45.0, "end": 53.0, "text": " In Google DeepMind's case, it started to learn playing simple computer games and eventually showed us superhuman level plays in some cases."}, {"start": 53.0, "end": 60.0, "text": " So, if you've run this piece of code and got some pretty sweet results that you can check out, there's a link to it in the description box as well."}, {"start": 60.0, "end": 64.0, "text": " So, about this paper we have here today, what does this one do?"}, {"start": 64.0, "end": 72.0, "text": " You take photographs with your camera and you can assign it any painting and it will apply this painting's artistic style to it."}, {"start": 72.0, "end": 78.0, "text": " You can add the artistic style of Vincent van Gogh's beautiful story and get some gorgeous results."}, {"start": 78.0, "end": 87.0, "text": " Or, if you are looking for a bit more emotional or may I say disturbed look, you can go for Edward Monks' The Scream for some stunning results."}, {"start": 87.0, "end": 90.0, "text": " And of course, the mandatory Picasso."}, {"start": 90.0, "end": 100.0, "text": " So, as you can see, deep neural networks are capable of amazing things and we expect even more revolutionary works in the very near future."}, {"start": 100.0, "end": 108.0, "text": " Thanks for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=UePDRN94C8c
Time Lapse Videos From Community Photos | Two Minute Papers #5
Building time lapse videos from community photographs is an incredibly difficult and laborious task: these photos were taken at a different part of the year, from different times of the day, with different viewpoints and cameras. A good algorithm should try to equalize these images and bring them to a common denominator to get rid of the commonly seen flickering effect. Researchers at the University of Washington and Google nailed this regularization in their newest work that they showcased at SIGGRAPH 2015. Check out the video for the details! Photograph credits in the video: Flickr user dration, Zack Lee, Nadav Tobias, Juan Jesus Orío and Klaus Wißkirchen. In the original paper, photographs from the following Flickr users were reproduced under Creative Commons license: Aliento Más Allá, jirihnidek, mcxurxo, elka cz, Daikrieg, Free the image, Cebete and ToastyKen. The paper "Time-lapse Mining from Internet Photos" is available here: http://grail.cs.washington.edu/projects/timelapse/ Disclaimer: I was not part of this research project, I am merely providing commentary on this work. Thumbnail background by Davidw: https://www.flickr.com/photos/davidw/2297191644/ https://creativecommons.org/licenses/by/2.0/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Hey there fellow scholars, I am Károly Zsolnai-Fehér and this is Two Minute Papers, where we learn that research is not only for experts, it is for everyone. Is everything going fine? I hope you are all doing well and you are having a wonderful time. In this episode we are going to look at time lapse videos. Let's say we would like to build a beautiful time lapse of a Norwegian glacier. The solution sounds quite simple: let's find hundreds of photos from the internet and build a time lapse video from them. If we simply cut a video where we put them one after another, we will see a disturbing flickering effect. Why? Because the images were taken at different times of the day, so the illumination of the landscape looks very different in all of them. They were also taken at different times of the year and from different viewpoints. Moreover, since these images were taken by cameras, different regions of the image may be in or out of focus. The algorithm therefore has to somehow equalize all of the differences between these images and bring them to a common denominator. This process is called regularization, and it is a really difficult problem. On the left you can see the flickering effect in the output of a previous algorithm that was already pretty good at regularization, but still shows quite a bit of flickering. Here on the right you see the most recent results from the University of Washington and Google compared to this previous one. The new algorithm is also able to show us these beautiful, rhythmic seasonal changes on Lombard Street, San Francisco. It can also show us how sculptures change over the years, and I feel that this example really shows the possibilities of the algorithm. We can observe effects around us that we would normally not notice in our everyday life, simply because they happen too slowly. And now, here's the final time lapse for the glacier that we were looking for. So, building high quality time lapse videos from an arbitrary set of photographs is unbelievably difficult, and these guys have just nailed it. I'm loving this piece of work. And what do you think? Did you also like the results? Let me know in the comment section. Thanks for now, thanks for watching and I'll see you next time.
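To make the "common denominator" idea concrete, here is a minimal Python sketch of the same intuition, not the authors' actual SIGGRAPH 2015 method: equalize each photo's exposure against the whole stack, then smooth every pixel over time with a robust median. The `deflicker` helper, its window size, and the [0, 1] pixel range are illustrative assumptions.

```python
import numpy as np

def deflicker(frames, window=7):
    """Toy 'common denominator' regularization for a time lapse.

    frames: float array of shape (T, H, W, 3), roughly aligned photos in [0, 1].
    Not the paper's method -- just the basic idea: equalize exposure,
    then smooth each pixel over time to suppress flicker.
    """
    frames = frames.astype(np.float64)
    # Step 1: match every frame's global mean/std to the stack average,
    # which removes most of the exposure differences between photos.
    target_mean = frames.mean()
    target_std = frames.std()
    norm = np.empty_like(frames)
    for t, f in enumerate(frames):
        norm[t] = (f - f.mean()) / (f.std() + 1e-8) * target_std + target_mean
    # Step 2: sliding temporal median per pixel -- robust against the
    # remaining outliers (occluders, odd viewpoints, focus changes).
    half = window // 2
    out = np.empty_like(norm)
    for t in range(len(norm)):
        lo, hi = max(0, t - half), min(len(norm), t + half + 1)
        out[t] = np.median(norm[lo:hi], axis=0)
    return np.clip(out, 0.0, 1.0)
```

A real system would also align viewpoints and handle occluders; the paper's regularization is considerably more sophisticated than this two-step sketch.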
[{"start": 0.0, "end": 6.92, "text": " Hey there fellow scholars, I am Karoi Zonaifahir and this is two-minute papers where we"}, {"start": 6.92, "end": 10.84, "text": " learn that research is not only for experts, it is for everyone."}, {"start": 10.84, "end": 12.120000000000001, "text": " Is everything going fine?"}, {"start": 12.120000000000001, "end": 15.700000000000001, "text": " I hope you are all doing well and you are having a wonderful time."}, {"start": 15.700000000000001, "end": 19.48, "text": " In this episode we are going to look at time lapse videos."}, {"start": 19.48, "end": 23.92, "text": " Let's say we would like to build a beautiful time lapse of an Norwegian glacier."}, {"start": 23.92, "end": 26.04, "text": " The solution sounds quite simple."}, {"start": 26.04, "end": 30.619999999999997, "text": " Let's find hundreds of photos from the internet and build a time lapse video from them."}, {"start": 30.619999999999997, "end": 35.56, "text": " If we just cut a video from them where we put them one after each other we will see a disturbing"}, {"start": 35.56, "end": 36.96, "text": " flickering effect."}, {"start": 36.96, "end": 38.36, "text": " Why?"}, {"start": 38.36, "end": 42.44, "text": " Because the images were taken at a different time of the day so the illumination of the"}, {"start": 42.44, "end": 45.4, "text": " landscape is looking very different on all of them."}, {"start": 45.4, "end": 50.8, "text": " They are also taken at a different time of the year and from different viewpoints."}, {"start": 50.8, "end": 55.28, "text": " Moreover since these images are taken by cameras, different regions of the image may be"}, {"start": 55.28, "end": 57.92, "text": " in focus and out of focus."}, {"start": 57.92, "end": 62.36, "text": " The algorithm therefore would have to somehow equalize all of the differences between these"}, {"start": 62.36, "end": 65.68, "text": " images and bring them to a common denominator."}, {"start": 65.68, "end": 70.68, "text": " This process we call regularization and it is a really difficult problem."}, {"start": 70.68, "end": 74.88, "text": " On the left you can see the flickering effect from the output of a previous algorithm that"}, {"start": 74.88, "end": 80.72, "text": " was already pretty good at regularization but it still has quite a bit of flickering."}, {"start": 80.72, "end": 84.8, "text": " Here on the right you see the most recent results from the University of Washington and"}, {"start": 84.8, "end": 88.0, "text": " Google compared to this previous one."}, {"start": 88.0, "end": 92.6, "text": " The new algorithm is also able to show us these beautiful, rhythmical seasonal changes"}, {"start": 92.6, "end": 100.12, "text": " in Lombard Street, some Francisco."}, {"start": 100.12, "end": 105.0, "text": " It can also show us how sculptures change over the years and I feel that this example"}, {"start": 105.0, "end": 107.72, "text": " really shows the possibilities of the algorithm."}, {"start": 107.72, "end": 112.28, "text": " We can observe effects around us that we would normally not notice in our everyday life"}, {"start": 112.28, "end": 115.68, "text": " simply because of the reason that they happen too slowly."}, {"start": 115.68, "end": 120.88, "text": " And now here's the final time lapse for the glacier that we were looking for."}, {"start": 120.88, "end": 126.16, "text": " So building high quality time lapse videos from an arbitrary set of photographs is unbelievably"}, {"start": 126.16, "end": 129.16, "text": " difficult 
and these guys have just nailed it."}, {"start": 129.16, "end": 131.2, "text": " I'm loving this piece of work."}, {"start": 131.2, "end": 132.56, "text": " And what do you think?"}, {"start": 132.56, "end": 134.52, "text": " Did you also like the results?"}, {"start": 134.52, "end": 136.36, "text": " Let me know in the comment section."}, {"start": 136.36, "end": 145.04000000000002, "text": " Thanks for now, thanks for watching and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=A7Gut679I-o
Simulating Breaking Glass | Two Minute Papers #4
There is something inherently exciting about watching breaking glass and other objects. Researchers in computer graphics also like to have some fun and write simulation programs to smash together a variety of virtual objects in slow motion. However, despite being beautiful, they are physically not correct as many effects are neglected, such as simulating plasticity, bending stiffness, stretching energies and many others. Pfaff et al.'s paper "Adaptive Tearing and Cracking of Thin Sheets" addresses this issue by creating an adaptive simulator that uses more computational resources only around regions where cracks are likely to happen. This new technique enables the simulation of tearing for a variety of materials like cork, foils, metals, vinyl and it also yields physically correct results for glass. The algorithm also lets artists influence the outcome to be in line with their artistic visions. Pfaff et al.'s research paper "Adaptive Tearing and Cracking of Thin Sheets" is available here: http://graphics.berkeley.edu/papers/Pfaff-ATC-2014-07/ Disclaimer: I was not part of this research project, I am merely providing commentary on this work. In Two Minute Papers, I attempt to bring the most awesome research discoveries to everyone a couple minutes at a time. The shattered glass image from the thumbnail was created by Andrew Magill. Music: "Jolly Old St Nicholas" by E's Jammy Jams Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Greetings to all of you fellow scholars out there. This is Two Minute Papers, where I explain awesome research works a couple minutes at a time. You know, I wish someone explained to me in simple terms what's going on in genetics, biology and just about every field of scientific research. There are tons of wonderful works coming out every day that we don't know about. And I'm trying my best here to bring them to you the simplest way I possibly can. So you know, researchers are people, and physics research at the Hadron Collider basically means that people smash atoms together. Well, computer graphics people also like to have some fun and write simulation programs to smash together a variety of objects in slow motion. However, even though most of these simulations look pretty good, they are physically not correct, as many effects are neglected, such as simulating plasticity, bending stiffness, stretching energies and many others. And unfortunately, these are too expensive to compute in high resolution. Unless you have some tricks up your sleeve. Researchers at UC Berkeley have managed to crack this nut by creating an algorithm that uses more computational resources only around regions where cracks are likely to happen. This new technique enables the simulation of tearing for a variety of materials like cork, foils, metals, vinyl, and it also yields physically correct results for glass. Here's an example of a beaten up rubber sheet from their simulation program compared to a real world photograph. It's really awesome that you can do something on your computer in a virtual world that has something to do with reality. It is impossible to get used to this feeling. It's so amazing. And what's even better, since it is really difficult to know in advance how exactly the cracks will look, they have also enhanced the directability of the simulation, so artists can change things up a bit to achieve a desired artistic effect. In this example, they have managed to avoid tearing a duck in two by weakening the paths around it. Bravo! Thanks for watching and if you liked this series, just hit the like and subscribe buttons below the video to become a member of our growing club of scholars. Thanks and I'll see you next time.
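The key trick, spending computation only where cracks are likely, can be sketched in a few lines. This is a hypothetical 1D toy, not the paper's thin-sheet remeshing: intervals whose estimated stress exceeds a threshold get subdivided, everything else stays coarse. The function name and the stress threshold are made up for illustration.

```python
import numpy as np

def refine_where_cracks_likely(nodes, stress_fn, threshold, max_passes=4):
    """Toy adaptive refinement in 1D: split only the intervals whose
    estimated stress exceeds a threshold, so resolution is spent near
    likely cracks instead of uniformly everywhere.

    nodes: sorted 1D array of node positions.
    stress_fn: callable returning an estimated stress at given positions.
    """
    for _ in range(max_passes):
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        hot = stress_fn(mids) > threshold   # intervals likely to crack
        if not hot.any():
            break
        nodes = np.sort(np.concatenate([nodes, mids[hot]]))
    return nodes

# Example: stress concentrated around x = 0.3 yields a finer mesh there.
stress = lambda x: np.exp(-((x - 0.3) / 0.05) ** 2)
print(refine_where_cracks_likely(np.linspace(0.0, 1.0, 9), stress, 0.5))
```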
[{"start": 0.0, "end": 4.32, "text": " Greetings to all of you fellow scholars out there."}, {"start": 4.32, "end": 9.84, "text": " This is two-minute papers where I explain awesome research works a couple minutes at a time."}, {"start": 9.84, "end": 14.76, "text": " You know, I wish someone explained to me in simple terms what's going on in genetics,"}, {"start": 14.76, "end": 18.56, "text": " biology and just about every field of scientific research."}, {"start": 18.56, "end": 23.0, "text": " There are tons of wonderful works coming every day that we don't know about."}, {"start": 23.0, "end": 27.8, "text": " And I'm trying my best here to bring it to you the simplest way I possibly can."}, {"start": 27.8, "end": 32.56, "text": " So you know, researchers are people, and physics research at the Hadron Collider basically"}, {"start": 32.56, "end": 35.24, "text": " means that people smash atoms together."}, {"start": 35.24, "end": 39.24, "text": " Well computer graphics people also like to have some fun and write simulation programs"}, {"start": 39.24, "end": 43.120000000000005, "text": " to smash together a variety of objects in slow motion."}, {"start": 43.120000000000005, "end": 47.8, "text": " However, even though most of these simulations look pretty good, they are physically not"}, {"start": 47.8, "end": 53.28, "text": " correct as many effects are neglected, such as simulating plasticity, bending stiffness,"}, {"start": 53.28, "end": 55.96, "text": " stretching energies and many others."}, {"start": 55.96, "end": 60.160000000000004, "text": " And unfortunately, these are too expensive to compute in high resolution."}, {"start": 60.160000000000004, "end": 63.160000000000004, "text": " Unless you have some tricks up the sleeve."}, {"start": 63.160000000000004, "end": 66.88, "text": " Researchers at UC Berkeley have managed to correct this nut by creating an algorithm that"}, {"start": 66.88, "end": 72.68, "text": " uses more computational resources only around regions where cracks are likely to happen."}, {"start": 72.68, "end": 77.8, "text": " This new technique enables the simulation of tearing for a variety of materials like cork,"}, {"start": 77.8, "end": 83.84, "text": " foils, metals, vinyl, and it also yields physically correct results for glass."}, {"start": 83.84, "end": 88.04, "text": " Here's an example of a beaten up rubber sheet from their simulation program compared to"}, {"start": 88.04, "end": 89.64, "text": " a real world photograph."}, {"start": 89.64, "end": 94.80000000000001, "text": " It's really awesome that you can do something on your computer in a virtual world that has"}, {"start": 94.80000000000001, "end": 96.84, "text": " something to do with reality."}, {"start": 96.84, "end": 99.32000000000001, "text": " It is impossible to get used to this feeling."}, {"start": 99.32000000000001, "end": 101.44, "text": " It's so amazing."}, {"start": 101.44, "end": 105.80000000000001, "text": " And what's even better since it is really difficult to know in advance how the cracks would"}, {"start": 105.80000000000001, "end": 106.96000000000001, "text": " exactly look like."}, {"start": 106.96000000000001, "end": 112.24000000000001, "text": " They have also enhanced the direct ability of the simulation, so artists could change things"}, {"start": 112.24, "end": 115.8, "text": " up a bit to achieve a desired artistic effect."}, {"start": 115.8, "end": 120.6, "text": " In this example, they have managed to avoid tearing a duck in two by weakening the paths"}, {"start": 120.6, 
"end": 125.03999999999999, "text": " around them."}, {"start": 125.03999999999999, "end": 127.75999999999999, "text": " Bravo!"}, {"start": 127.75999999999999, "end": 132.6, "text": " Thanks for watching and if you liked this series, just hit the like and subscribe buttons below"}, {"start": 132.6, "end": 136.24, "text": " the video to become a member of our growing club of scholars."}, {"start": 136.24, "end": 146.24, "text": " Thanks and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Xf8JUM8g7Ks
250 Subscribers - Our Quest & A Thank You Message
We have reached 250 subscribers. Now is a good time to celebrate, to thank you for your support, and to talk a bit about our quest together! Music: "Runaways" by Silent Partner The mentioned lecture video series: https://www.youtube.com/watch?v=pjc1QAI6zS0&list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Dear Fellow Scholars, we have just reached 250 subscribers on the channel. So far, the reception of these videos has been overwhelmingly positive. I'll show you some of these comments on the screen in the meantime. This is probably not much compared to even mid-sized YouTubers, but it means so much to me. It means that there are 250 people somewhere around the world waiting for new videos to come up. This is insane. If you think about it, even one subscriber is insane. Even one click from somewhere is mind blowing. Imagine that someone who you have never met, somewhere on the face of Earth, perhaps in Peru, somewhere in the United States or maybe in the middle of Africa, is excited for your work and just waiting for you to say something. There are millions of other videos they could watch, but they devote their time to listening to you. And now, multiply this by 250. I am just sitting here in disbelief. As a computer engineer, I've been working with computers and network algorithms for a long time, but I still find this mind blowing. I can just record the lectures that I hold at the university and thousands of people can watch them at any time, even while I'm asleep at night. I can teach people while I am asleep at night. We have over a thousand views on my first lecture, which is possibly more people than I will ever reach through the university seminar rooms. So for all 250 of you, and everyone who has ever watched any of these videos, thank you very much for watching and subscribing. I have created Two Minute Papers to show you the best of what research can offer and what your hard-earned tax money is spent on. Because that's the thing. In every single country I've been to, researchers are complaining about the lack of funding. And rightfully so, because most of them can't secure the funds to continue their work. But let's try to turn the argument around. Funding comes from your tax money, and in 99.9% of the cases you have no idea what your money is spent on. There are lots of incredible works published every single day of the year, but people don't know anything about them. No one is stepping up to explain what your money is spent on. And I am sure that people would be happy to spend more on research if they knew what they were investing in. Two Minute Papers is here to celebrate the genius of the best and most beautiful research results. I will be trying my best to explain all of these works so that everyone is able to understand them. It's not only for experts, it's definitely for everyone. So thank you to all of you, thanks for hanging in there, and please, spread the word. Let your friends know about the show so even more of us can marvel at these beautiful works. And until then, I'll see you next time.
[{"start": 0.0, "end": 7.68, "text": " Dear Fellow Scholars, we have just reached 250 subscribers on the channel."}, {"start": 7.68, "end": 11.28, "text": " So far, the reception of these videos have been overwhelmingly positive."}, {"start": 11.28, "end": 15.280000000000001, "text": " I'll show you some of these comments on the screen in the meantime."}, {"start": 15.280000000000001, "end": 20.52, "text": " 250 Subscribers This is probably not much compared to even"}, {"start": 20.52, "end": 24.72, "text": " mid-sized YouTubers, but it means so much to me."}, {"start": 24.72, "end": 31.919999999999998, "text": " It means that there are 250 people somewhere around the world waiting for new videos to come up."}, {"start": 31.919999999999998, "end": 33.44, "text": " This is insane."}, {"start": 33.44, "end": 37.16, "text": " If you think about it, even one subscriber is insane."}, {"start": 37.16, "end": 41.04, "text": " Even one click from somewhere is mind blowing."}, {"start": 41.04, "end": 47.28, "text": " Imagine that someone who you have never met somewhere on the face of Earth, perhaps in Peru."}, {"start": 47.28, "end": 52.44, "text": " Somewhere in the United States or maybe in the middle of Africa, is excited for your work"}, {"start": 52.44, "end": 55.48, "text": " and just waiting for you to say something."}, {"start": 55.48, "end": 62.56, "text": " There are millions of other videos they could watch, but they devote their time to listening to you."}, {"start": 62.56, "end": 66.44, "text": " And now, multiply this by 250."}, {"start": 66.44, "end": 69.84, "text": " I am just sitting here in disbelief."}, {"start": 69.84, "end": 74.8, "text": " As a computer engineer, I've been working with computers and network algorithms for a long time,"}, {"start": 74.8, "end": 77.6, "text": " but I still find this mind blowing."}, {"start": 77.6, "end": 83.75999999999999, "text": " I can just record the lectures that I hold at the university and thousands of people can watch it at any time,"}, {"start": 83.75999999999999, "end": 86.36, "text": " even while I'm asleep at night."}, {"start": 86.36, "end": 90.39999999999999, "text": " I can teach people while I am asleep at night."}, {"start": 90.39999999999999, "end": 98.08, "text": " We have over a thousand views on my first lecture, which is possibly more people than I will ever reach through the university seminar rooms."}, {"start": 98.08, "end": 104.11999999999999, "text": " So for all 250 of you, and everyone who has ever watched any of these videos,"}, {"start": 104.11999999999999, "end": 106.88, "text": " thank you very much for watching and subscribing."}, {"start": 106.88, "end": 111.6, "text": " I have created two-minute papers to show you the best of what research can offer"}, {"start": 111.6, "end": 114.83999999999999, "text": " and what your hard-earned tax money is spent on."}, {"start": 114.83999999999999, "end": 116.19999999999999, "text": " Because that's the thing."}, {"start": 116.19999999999999, "end": 122.0, "text": " Every single country I've been to, researchers are complaining about the lack of funding."}, {"start": 122.0, "end": 127.67999999999999, "text": " And rightfully so, because most of them can't secure the funds to continue their work."}, {"start": 127.67999999999999, "end": 129.92, "text": " But let's try to turn the argument around."}, {"start": 129.92, "end": 138.48, "text": " Funding comes from your tax money, and 99.9% of the case you have no idea what your money is spent on."}, {"start": 138.48, "end": 
142.23999999999998, "text": " There are lots of incredible works published every single day of the year,"}, {"start": 142.23999999999998, "end": 145.16, "text": " but the people don't know anything about them."}, {"start": 145.16, "end": 148.79999999999998, "text": " No one is stepping up to explain what your money is spent on."}, {"start": 148.79999999999998, "end": 154.56, "text": " And I am sure that people would be happy to spend more on research if they know what they invest in."}, {"start": 154.56, "end": 159.76, "text": " Two-minute papers is here to celebrate the genius of the best and most beautiful research results."}, {"start": 159.76, "end": 165.12, "text": " I will be trying my best to explain all of these works so that everyone is able to understand them."}, {"start": 165.12, "end": 169.35999999999999, "text": " It's not only for experts, it's definitely for everyone."}, {"start": 169.35999999999999, "end": 174.48, "text": " So thank you for all of you, thanks for hanging in there, and please, spread the word."}, {"start": 174.48, "end": 179.76, "text": " Let your friends know about the show so even more of us can marvel at these beautiful works."}, {"start": 179.76, "end": 189.76, "text": " And until then, I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=LD5YS8-0Rkg
Blender Rendering - Top 7 LuxRender Features
LuxRender is a completely free, open source, physically based renderer with many contributors led by Jean-Philippe Grimaldi. I believe it is one of the best renderers out there for Blender, 3Ds Max and Maya. The author of the Hotel Lobby scene is Peter Sandbacka. Some of the main benefits of using LuxRender are the following: - it supports a multitude of material models, - with light groups, you can adjust the influence of light sources on your scene without needing to rerender your image, - a great thing about LuxRender is that it supports network rendering. With a neat trick, this is possible even without a network! - it has sophisticated rendering algorithms like Metropolis Light Transport to render notoriously difficult scenes, - different film brands and models have different color profiles, which means that they react to the same, for instance, red light differently. LuxRender is able to simulate this effect, - it supports GPU rendering, - it is cross-platform, it works on Windows, Linux and OSX as well, and it also works with a huge number of modeling software out there like Blender, 3Ds Max, Maya and many more. Official website: http://www.luxrender.net/ Official forums: http://www.luxrender.net/forum Official gallery: http://www.luxrender.net/gallery Infographics: https://www.cg.tuwien.ac.at/~zsolnai/wp/wp-content/uploads/2014/01/luxrender-infographics.jpg Reddit: https://www.reddit.com/r/luxrender/ LuxMerger: http://www.luxrender.net/wiki/LuxMerger An amazing slow motion shatter test video rendered on the GPU, it took 25 seconds per HD-frame: https://www.youtube.com/watch?v=FIPu9_OGFgc The splash screen scene was created by Janjy Giggins: http://www.luxrender.net/forum/gallery2.php?g2_itemId=12306 The shown scene pack is available here: https://cg.tuwien.ac.at/~zsolnai/gfx/luxrender/ http://www.luxrender.net/wiki/Show-off_pack Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Károly Zsolnai-Fehér's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Dear Fellow Scholars, there is a really fantastic photorealistic renderer program out there that not many of you know about. So let me give you a quick rundown of my top 7 LuxRender features, and note that this is not an official list or anything, just my personal favorite features. LuxRender is a completely free, open source, physically based renderer with many contributors, and it is led by Jean-Philippe Grimaldi. What does a renderer do exactly? Well, there are many modeling programs where artists can sculpt objects and assign materials to them, and the renderer will run a light simulation process and show an image of how this object would look in real life. You'll see in a second how cool these tools really are. So now that you know what LuxRender is, let's jump into the best features. Hold on to your pants, because this is going to be good. LuxRender supports a multitude of material models: matte and glossy materials, glass objects of different roughness, translucent materials, subsurface scattering, metals, car paint, velvet, and you can mix all of these together to obtain an even more complex appearance. That's so great. Love it. With light groups, you can adjust the influence of light sources on your scene without needing to rerender your image. That's the most interesting part. So you can, for instance, fiddle with the intensities of the sunlight, the light fixtures, and the TV in the scene. If you feel that any one of those is not useful for the artistic effect that you're trying to achieve, you can just turn it off instantly. And apart from intensities, you can also adjust the color temperature of these individual light sources. Such a gorgeous feature. I have played way too much with this. A great thing about LuxRender is that it supports network rendering. It means that you can use multiple machines that will work together if they are connected. However, what is even better is that this renderer offers you many unbiased algorithms, which means that you can do network rendering without using a network. Now this sounds flat out impossible. But take a look at this noisy image. Not really convincing, right? Now imagine that you have 10 computers running in parallel on the same scene. There's a tool called LuxMerger, which can combine many noisy images of the same scene into a better, smoother output. So after merging together 10 images that have roughly the same amount of noise, we get this. Note that this is without using a network. So these computers have never heard of each other. We have run LuxRender on them completely independently. LuxRender has sophisticated rendering algorithms like Metropolis Light Transport to render notoriously difficult scenes like this. Most renderers use path tracing or bidirectional path tracing, both of which struggle here. Here you can see the result of Metropolis Light Transport running for the same amount of time. It indeed makes a world of a difference. And this is the true final image. Different film brands and models have different color profiles, which means that they react to the same, for instance, red light differently. LuxRender is able to get you this look, which may bump up the realism of your rendered images, as they will have the color profiles that people are used to seeing in real world photographs. It also supports GPU rendering. How much of a difference does it make? Here's a test run after 60 seconds, one on the CPU and one on the GPU. I don't think I'm ever going back to CPU rendering.
And finally, LuxRender is cross-platform. It works on Windows, Linux and OSX as well. And it also works with a huge number of modeling software out there. Blender, 3D Studio Max, Maya, you name it. If you like these features, please come and be a member of the LuxRender community. There is a professional and quite welcoming bunch of people over at the LuxRender forums. If you have any questions or just want to show off your work, we'll be happy to have you there. We also have a nice scene repository with some truly spectacular scenes to get you started. There are also lots of goodies in the description box. Make sure to take a look. Hope you liked my quick rundown and I'll see you on the other side.
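Why light groups work without rerendering comes down to linearity: light transport is linear in the emitters, so a renderer can store one image buffer per light group and the final image is just a weighted sum of those buffers. Here is a minimal sketch of that composition step with hypothetical buffer names, not LuxRender's actual internals:

```python
import numpy as np

def compose_light_groups(buffers, gains):
    """Sketch of why light groups need no rerender: light transport is
    linear in the light sources, so the final image is a weighted sum
    of per-group render buffers (sun, fixtures, TV, ...).

    buffers: dict name -> float image of shape (H, W, 3), rendered once.
    gains: dict name -> scalar intensity chosen after rendering.
    """
    out = np.zeros_like(next(iter(buffers.values())))
    for name, img in buffers.items():
        out += gains.get(name, 1.0) * img
    return out

# Tweak lighting interactively on the stored buffers, no rerendering:
# compose_light_groups(bufs, {"sun": 0.2, "fixtures": 1.5, "tv": 0.0})
```

Color temperature changes work the same way, with a per-channel gain instead of a single scalar per group.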
[{"start": 0.0, "end": 6.08, "text": " Dear Fellow Scholars, there is a really fantastic photorealistic renderer program out there"}, {"start": 6.08, "end": 8.56, "text": " that not many of you know about."}, {"start": 8.56, "end": 14.0, "text": " So let me give you a quick rundown of my top 7 LuxRender features, and note that this"}, {"start": 14.0, "end": 18.72, "text": " is not an official list or anything, just my personal favorite features."}, {"start": 18.72, "end": 23.68, "text": " LuxRender is a completely free, open source physically based renderer with many contributors"}, {"start": 23.68, "end": 26.72, "text": " and it is led by Jean-Philippe Grimalti."}, {"start": 26.72, "end": 28.44, "text": " What does a renderer do exactly?"}, {"start": 28.44, "end": 33.96, "text": " Well, there are many modeling programs where artists can sculpt objects, assign materials"}, {"start": 33.96, "end": 39.400000000000006, "text": " to them, and the renderer will run a light simulation process and show an image of how"}, {"start": 39.400000000000006, "end": 42.2, "text": " this object would look like in real life."}, {"start": 42.2, "end": 45.52, "text": " You'll see in a second how cool these tools really are."}, {"start": 45.52, "end": 49.480000000000004, "text": " So now that you know what LuxRender is, let's jump into the best features."}, {"start": 49.480000000000004, "end": 53.28, "text": " Hold on to your pants, because this is going to be good."}, {"start": 53.28, "end": 59.24, "text": " LuxRender supports a multitude of material models, matte, glossy materials, less objects"}, {"start": 59.24, "end": 64.28, "text": " of different roughness, translucent materials, subsurface scattering, metals, car paint,"}, {"start": 64.28, "end": 69.52000000000001, "text": " velvet, and you can mix all of these together to obtain an even more complex appearance."}, {"start": 69.52000000000001, "end": 71.12, "text": " That's so great."}, {"start": 71.12, "end": 73.92, "text": " Love it."}, {"start": 73.92, "end": 78.0, "text": " With light groups, you can adjust the influence of light sources on your scene without"}, {"start": 78.0, "end": 80.36, "text": " needing to render your image."}, {"start": 80.36, "end": 82.24000000000001, "text": " That's the most interesting point."}, {"start": 82.24, "end": 87.72, "text": " So you can, for instance, fiddle with the intensities of the sunlight, the light fixtures,"}, {"start": 87.72, "end": 89.8, "text": " and the TV in the scene."}, {"start": 89.8, "end": 93.91999999999999, "text": " If you feel that any one of those are not useful for the artistic effect that you're trying"}, {"start": 93.91999999999999, "end": 97.24, "text": " to achieve, you can just turn them off instantly."}, {"start": 97.24, "end": 101.84, "text": " And apart from intensities, you can also adjust the color temperature of these individual"}, {"start": 101.84, "end": 103.96, "text": " light sources."}, {"start": 103.96, "end": 105.28, "text": " Such a gorgeous feature."}, {"start": 105.28, "end": 109.03999999999999, "text": " I have played way too much with this."}, {"start": 109.04, "end": 114.0, "text": " A great thing about LuxRender is that it supports network rendering."}, {"start": 114.0, "end": 119.32000000000001, "text": " It means that you can use multiple machines that will work together if they are connected."}, {"start": 119.32000000000001, "end": 124.76, "text": " However, what is even better is that this render offers you many unbiased algorithms,"}, {"start": 124.76, "end": 
130.08, "text": " which means that you can do network rendering without using a network."}, {"start": 130.08, "end": 133.16, "text": " Now this sounds flat out impossible."}, {"start": 133.16, "end": 135.56, "text": " But take a look at this noisy image."}, {"start": 135.56, "end": 137.72, "text": " Not really convincing, right?"}, {"start": 137.72, "end": 143.24, "text": " Now imagine that you have 10 computers running in parallel on the same scene."}, {"start": 143.24, "end": 149.0, "text": " There's a tool called LuxMurder, which can combine together many noisy images of the same"}, {"start": 149.0, "end": 152.07999999999998, "text": " scene together better, smoother output."}, {"start": 152.07999999999998, "end": 157.76, "text": " So after merging together 10 images that have roughly the same amount of noise we get,"}, {"start": 157.76, "end": 159.68, "text": " this."}, {"start": 159.68, "end": 162.28, "text": " Note that this is without using a network."}, {"start": 162.28, "end": 164.96, "text": " So these computers have never heard of each other."}, {"start": 164.96, "end": 170.88, "text": " We have RenderRender on them completely, independently."}, {"start": 170.88, "end": 176.60000000000002, "text": " LuxRender has sophisticated rendering algorithms like Metropolis Light Transport to render notoriously"}, {"start": 176.60000000000002, "end": 178.92000000000002, "text": " difficult scenes like this."}, {"start": 178.92000000000002, "end": 185.04000000000002, "text": " Most renders use path tracing or bidirectional path tracing, both of which struggle here."}, {"start": 185.04000000000002, "end": 189.0, "text": " Here you can see the result of Metropolis Light Transport running for the same amount"}, {"start": 189.0, "end": 190.0, "text": " of time."}, {"start": 190.0, "end": 193.88, "text": " It indeed makes a world of a difference."}, {"start": 193.88, "end": 198.4, "text": " And this is the true final image."}, {"start": 198.4, "end": 202.92, "text": " Different film brands and models have different color profiles, which means that they react"}, {"start": 202.92, "end": 206.24, "text": " to the same, for instance, red light differently."}, {"start": 206.24, "end": 210.35999999999999, "text": " LuxRender is able to get you this look, which may bump up the realism of your render"}, {"start": 210.35999999999999, "end": 218.51999999999998, "text": " images as they will have the color profiles that people are used to see in real world photographs."}, {"start": 218.51999999999998, "end": 221.04, "text": " It also supports GPU rendering."}, {"start": 221.04, "end": 223.28, "text": " How much of a difference does it make?"}, {"start": 223.28, "end": 228.24, "text": " Here's a test run after 60 seconds, one with the CPU and one on the GPU."}, {"start": 228.24, "end": 233.92000000000002, "text": " I don't think I'm ever going back to CPU rendering."}, {"start": 233.92000000000002, "end": 236.4, "text": " And finally, LuxRender is cross-platform."}, {"start": 236.4, "end": 239.72, "text": " It works on Windows, Linux and OSX as well."}, {"start": 239.72, "end": 243.28, "text": " And it also works with a huge number of modeling software out there."}, {"start": 243.28, "end": 247.24, "text": " Blender, 3D Studio Max, Maya, you name it."}, {"start": 247.24, "end": 250.96, "text": " If you like these features, please come and be a member of the LuxRender community."}, {"start": 250.96, "end": 255.92000000000002, "text": " There is a professional and quite welcoming bunch of people over at the 
LuxRender forums."}, {"start": 255.92000000000002, "end": 259.68, "text": " If you have any questions or just want to show off your work, we'll be happy to have you"}, {"start": 259.68, "end": 261.2, "text": " there."}, {"start": 261.2, "end": 267.08, "text": " We also have a nice scene repository with some truly spectacular scenes to get you started."}, {"start": 267.08, "end": 269.68, "text": " There are also lots of goodies in the description box."}, {"start": 269.68, "end": 271.2, "text": " Make sure to take a look."}, {"start": 271.2, "end": 298.48, "text": " Hope you liked my quick rundown and I'll see you on the other side."}]
Two Minute Papers
https://www.youtube.com/watch?v=rCWTOOgVXyE
Artificial Neural Networks and Deep Learning | Two Minute Papers #3
Artificial neural networks provide us incredibly powerful tools in machine learning that are useful for a variety of tasks ranging from image classification to voice translation. So what is all the deep learning rage about? The media seems to be all over the newest neural network research of the DeepMind company that was recently acquired by Google. They used neural networks to create algorithms that are able to play Atari games, learn them like a human would, eventually achieving superhuman performance. Deep learning means that we use artificial neural network with multiple layers, making it even more powerful for more difficult tasks. These machine learning techniques proved to be useful for many tasks beyond image recognition: they also excel at weather predictions, breast cancer cell mitosis detection, brain image segmentation and toxicity prediction among many others. If you would like to know more about neural networks and deep learning, make sure to check out these talks from Andrew Ng: https://www.youtube.com/watch?v=n1ViNeWhC24 https://www.youtube.com/watch?v=W15K9PegQt0 You can also check out this gorgeous application of neural networks and reinforcement learning from Google DeepMind: http://www.wired.co.uk/news/archive/2015-02/25/google-deepmind-atari Disclaimer: I was not part of this research project, I am merely providing commentary on this work. In Two Minute Papers, I attempt to bring the most awesome research discoveries to everyone a couple minutes at a time. Music: "Watercolors" by John Deley and the 41 Players Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Károly Zsolnai's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
I am Károly Zsolnai-Fehér and this is Two Minute Papers, where I explain awesome research in simple words. First of all, I am very happy to see that you liked the series. Also, thanks for sharing it on the social media sites and please, keep them coming. This episode is going to be about artificial neural networks. I will quickly explain what the huge deep learning rage is all about. This graph depicts a neural network that we build and simulate on a computer. It is a very crude approximation of the human brain. The leftmost layer denotes inputs, which can be, for instance, the pixels of an input image. The rightmost layer is the output, which can be, for instance, a decision on whether the image depicts a horse or not. After we have given many inputs to the neural network, in its hidden layers it will learn to figure out a way to recognize different classes of inputs, such as horses, people, or school buses. What is really surprising is that it's quite faithful to the way the brain represents objects on a lower level. It has a very similar edge detector. And it also works for audio. Here you can see the difference between the neurons in the hearing system of a cat versus a simulated neural network on the same audio signals. I mean, come on. This is amazing. What is the deep learning part all about? Well, it means that our neural network has multiple hidden layers on top of each other. The first layer for an image consists of edges, and as we go up, a combination of edges gives us object parts. A combination of object parts yields object models, and so on. This kind of hierarchy provides us very powerful capabilities. For instance, in this traffic sign recognition contest, the second place was taken by humans. But what's more interesting is that the first place was not taken by humans. It was taken by a neural network algorithm. Think about that. And if you find these topics interesting and feel you would like to hear about the newest research discoveries in an understandable way, please become a Fellow Scholar and hit that subscribe button. And for now, thanks for watching, and I'll see you next time.
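For concreteness, here is a minimal forward pass through such a layered network. The layer sizes are assumptions and the weights are untrained random numbers; in a trained network the hidden layers would come to act as the edge and object-part detectors described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """One network layer: a weighted sum of inputs, then a nonlinearity."""
    return np.maximum(0.0, x @ w + b)   # ReLU activation

# Hypothetical shapes: a 64x64 grayscale image in, "is it a horse?" out.
w1, b1 = 0.01 * rng.standard_normal((4096, 128)), np.zeros(128)
w2, b2 = 0.01 * rng.standard_normal((128, 32)), np.zeros(32)
w3, b3 = 0.01 * rng.standard_normal((32, 1)), np.zeros(1)

pixels = rng.random(4096)                  # leftmost layer: the input pixels
h1 = dense(pixels, w1, b1)                 # first hidden layer (edges, once trained)
h2 = dense(h1, w2, b2)                     # second hidden layer (object parts)
p_horse = 1.0 / (1.0 + np.exp(-(h2 @ w3 + b3)))   # rightmost layer: the decision
print(float(p_horse[0]))
```

Training, which this sketch omits, is what turns those random weights into the hierarchy of detectors.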
[{"start": 0.0, "end": 8.0, "text": " I am Karo Jolene Fahir and this is two-minute papers where I explain awesome research in simple words."}, {"start": 8.0, "end": 16.0, "text": " First of all, I am very happy to see that you liked the series. Also, thanks for sharing it on the social media sites and please, keep them coming."}, {"start": 16.0, "end": 24.0, "text": " This episode is going to be about artificial neural networks. I will quickly explain what the huge deep learning range is all about."}, {"start": 24.0, "end": 32.0, "text": " This graph depicts a neural network that we build and simulate on a computer. It is a very crude approximation of the human brain."}, {"start": 32.0, "end": 38.0, "text": " The leftmost layer denotes inputs, which can be, for instance, the pixels of an input image."}, {"start": 38.0, "end": 45.0, "text": " The rightmost layer is the output, which can be, for instance, a decision whether to image the picture horse or not."}, {"start": 45.0, "end": 57.0, "text": " After we have given many inputs to the neural network, in its hidden layers, it will learn to figure out a way to recognize different classes of inputs, such as horses, people, or school buses."}, {"start": 57.0, "end": 66.0, "text": " What is really surprising is that it's quite faithful to the way the brain does represent objects on a lower level. It has a very similar edge detector."}, {"start": 66.0, "end": 78.0, "text": " And it also works for audio. Here you can find the difference between the neurons in the hearing system of a cat versus a simulated neural network on the same audio signals."}, {"start": 78.0, "end": 81.0, "text": " I mean, come on. This is amazing."}, {"start": 81.0, "end": 88.0, "text": " What is the deep learning part all about? Well, it means that our neural network has multiple hidden layers on top of each other."}, {"start": 88.0, "end": 96.0, "text": " The first layer for an image consists of edges, and as we go up, a combination of edges gives us object parts."}, {"start": 96.0, "end": 100.0, "text": " A combination of object parts, eared object models, and so on."}, {"start": 100.0, "end": 110.0, "text": " This kind of hierarchy provides us very powerful capabilities. For instance, in this traffic sign recognition contest, the second place was taken by humans."}, {"start": 110.0, "end": 117.0, "text": " But what's more interesting is that the first place was not taken by humans. It was taken by a neural network algorithm."}, {"start": 117.0, "end": 128.0, "text": " Think about that. And if you find these topics interesting, you feel you would like to hear about the newest research discoveries in an understandable way."}, {"start": 128.0, "end": 155.0, "text": " Please become a Fellow Scholar and hit that subscribe button. And for now, thanks for watching, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=TRNUTN01SEg
Capturing Waves of Light With Femto-photography | Two Minute Papers #2
What is femto-photography? To be able to capture how waves of light propagate in space, one would need to build a camera that is able to take one trillion frames per second. At first, this sounds impossible, but researchers at MIT and the University of Zaragoza have managed to crack this nut: in their newest work they published to SIGGRAPH that they call femto-photography, we can observe how a mirror lights up with its image as light propagates from the light source to the camera. All this in slow motion! ________________________ The paper "Femto-Photography: Capturing and Visualizing the Propagation of Light" is available here: http://dspace.mit.edu/openaccess-disseminate/1721.1/82039 http://giga.cps.unizar.es/~diegog/ficheros/pdf_papers/femto.pdf Disclaimer: I was not part of this research project, I am merely providing commentary on this work. Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Károly Zsolnai's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
A movie that we watch on TV shows us from 25 to about 60 images per second. In computer graphics, these images are referred to as frames. A slow motion camera can capture up to the order of thousands of frames per second, providing breathtaking footage like this. One can quickly discover the beauty of even the most ordinary, mundane moments of nature. But if you think this is slow motion, then take a look at this. Computer graphics researchers have been working on a system that is able to capture 1 trillion frames per second. How much is that exactly? Well, it means that if every single person who lives on Earth were able to help us, then every single one of us would have to take about 140 photographs in one second. And we would then need to add all of these photographs up to obtain only one second of footage. What is all this good for? Well, for example, capturing light as an electromagnetic wave as it hits and travels along objects in space, like the wall that you see here. Physicists used to say that there is a really, really short instance of time when you stand in front of the mirror, you look at it, and there is no mirror image in it. It is completely black. What is this wizardry and how is this possible? Since Einstein, we know that the speed of light is finite; it is not instantaneous. It takes time for light to travel from the light source, hit the mirror, and end up hitting your eye for you to see your mirror reflection. Researchers at MIT and the University of Zaragoza have captured this very moment. Take a look, it is an enlightening experience. The paper is available in the description box and it's a really enjoyable read. A sizable portion of it is understandable for everyone, even without mathematical knowledge. All you need is just a little imagination. Thanks for watching and I'll see you next week.
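The back-of-the-envelope arithmetic behind that claim is easy to verify; the population figure below is an assumption of roughly the value around the time of the paper.

```python
FRAMES_PER_SECOND = 1_000_000_000_000   # one trillion frames per second
WORLD_POPULATION = 7_200_000_000        # assumed, roughly the 2013 figure

photos_per_person = FRAMES_PER_SECOND / WORLD_POPULATION
print(round(photos_per_person))         # ~139 photographs each, in one second
```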
[{"start": 0.0, "end": 7.04, "text": " A movie that we watch in the TV shows us from 25 to about 60 images per second."}, {"start": 7.04, "end": 11.0, "text": " In computer graphics, these images were referred to as frames."}, {"start": 11.0, "end": 16.44, "text": " A slow motion camera can capture up to the order of thousands of frames per second, providing"}, {"start": 16.44, "end": 19.080000000000002, "text": " breathtaking footage like this."}, {"start": 19.080000000000002, "end": 25.52, "text": " One can quickly discover the beauty of even the most ordinary, Monday moments of nature."}, {"start": 25.52, "end": 32.08, "text": " But if you think this is slow motion, then take a look at this."}, {"start": 32.08, "end": 36.76, "text": " Computer graphics researchers have been working on a system that is able to capture 1 trillion"}, {"start": 36.76, "end": 38.56, "text": " frames per second."}, {"start": 38.56, "end": 40.36, "text": " How much is that exactly?"}, {"start": 40.36, "end": 46.239999999999995, "text": " Well, it means that if every single person who lives on Earth would be able to help us,"}, {"start": 46.239999999999995, "end": 53.0, "text": " then every single one of us would have to take about 140 photographs in one second."}, {"start": 53.0, "end": 58.0, "text": " And we would then need to add all of these photographs up to obtain only one second of"}, {"start": 58.0, "end": 59.68, "text": " footage."}, {"start": 59.68, "end": 61.32, "text": " What is all this good for?"}, {"start": 61.32, "end": 68.0, "text": " Well, for example, capturing light as an electromagnetic wave as it hits and travels"}, {"start": 68.0, "end": 72.88, "text": " along objects in space like the wall that you see here."}, {"start": 72.88, "end": 78.08, "text": " Physicists used to say that there is a really, really short instance of time when you stand"}, {"start": 78.08, "end": 83.8, "text": " in front of the mirror, you look at it and there is no mirror image in it."}, {"start": 83.8, "end": 86.44, "text": " It is completely black."}, {"start": 86.44, "end": 90.28, "text": " What is this wizardry and how is this possible?"}, {"start": 90.28, "end": 95.72, "text": " Since Einstein, we know that the speed of light is finite, it is not instantaneous."}, {"start": 95.72, "end": 101.32, "text": " It takes time to travel from the light source, hit the mirror and end up hitting your eye"}, {"start": 101.32, "end": 104.16, "text": " for you to see your mirror reflection."}, {"start": 104.16, "end": 109.75999999999999, "text": " Pictures at MIT and the University of Saragosa have captured this very moment."}, {"start": 109.75999999999999, "end": 115.0, "text": " Take a look, it is an enlightening experience."}, {"start": 115.0, "end": 119.6, "text": " The paper is available in the description box and it's a really enjoyable read."}, {"start": 119.6, "end": 125.72, "text": " A sizable portion of it is understandable for everyone even without mathematical knowledge."}, {"start": 125.72, "end": 128.88, "text": " All you need is just a little imagination."}, {"start": 128.88, "end": 137.12, "text": " Thanks for watching and I'll see you next week."}]
Two Minute Papers
https://www.youtube.com/watch?v=5xLSbj5SsSE
Fluid Simulations with Blender and Wavelet Turbulence | Two Minute Papers #1
Creating detailed fluid and smoke simulations in Blender and other modeling software is a slow and laborious process that requires a ton of time and resources. Wavelet Turbulence is a technique that helps achieving similar effects orders of magnitude faster. It is also much lighter on memory and is now widely used in the industry, so it's definitely not an accident that Theodore Kim won an Academy Award (a technical Oscar, if you will) for this SIGGRAPH publication. It is implemented in Blender and is available for everyone free of charge, so make sure to try it out! In Two Minute Papers, I attempt to bring the most awesome research discoveries to everyone a couple minutes at a time. Here is a tutorial and a Blender download link to get you started: http://blender.org/ https://www.youtube.com/watch?v=iV43xNQDOFs Kim et al.'s Wavelet Turbulence paper is available here: http://www.cs.cornell.edu/~tedkim/wturb/ Disclaimer: I was not part of this research project, I am merely providing commentary on this work. Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Károly Zsolnai's links: Patreon → https://www.patreon.com/TwoMinutePapers Facebook → https://www.facebook.com/TwoMinutePapers/ Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
How can we simulate the motion of fluids and smoke? If we had a block of plastic in our computer program and we added the laws of physics that control the motion of fluids, it would immediately start behaving like water. In these simulations we're mostly interested in the velocity and the pressure of the fluid, and how these quantities exactly change in time. We would need to compute them at every point in space, which would take an infinite amount of resources. What we usually do instead is compute them not everywhere, but at many different sample points, and guess these quantities between the points. By discretizing like this, a lot of information is lost. And it still takes a lot of resources: for a really detailed simulation, it is not uncommon that one has to wait for days to get only a few seconds of video footage. And this is where wavelet turbulence comes into play. We know exactly which frequencies are lost and where they are lost, and this technique enables us to synthesize this information and add it back very cheaply. This way one can get really detailed simulations at a very reasonable cost. Here are some examples of smoke simulations with and without wavelet turbulence. It really makes a great difference. It is no accident that the technique won a technical Oscar award. Among many other systems, it is implemented in Blender, so anyone can give it a try. Make sure to do so because it's lots of fun. The paper and the supplementary video are also available in the description box. This is an amazing paper. Easily one of my favorites. So if you know some math, make sure to take a look, and if you don't, just enjoy the footage. Thank you for watching and see you next time.
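The core idea, synthesize the missing small-scale detail instead of simulating it, can be caricatured in a few lines. This toy uses plain band-limited noise in place of the paper's wavelet machinery, and every parameter here is an illustrative assumption.

```python
import numpy as np

def add_turbulent_detail(coarse, factor=4, strength=0.2, seed=0):
    """Toy version of the wavelet-turbulence idea: upsample a coarse 2D
    smoke field, then synthesize the missing high frequencies with noise
    instead of simulating them. Not the paper's actual wavelet method.
    """
    rng = np.random.default_rng(seed)
    fine = np.kron(coarse, np.ones((factor, factor)))   # nearest upsampling
    # Band-limited noise standing in for the lost small-scale turbulence:
    # subtract the blockwise mean so only high frequencies remain.
    noise = rng.standard_normal(fine.shape)
    noise -= np.kron(
        noise.reshape(coarse.shape[0], factor, coarse.shape[1], factor)
             .mean(axis=(1, 3)),
        np.ones((factor, factor)))
    # Inject detail only where the coarse field actually carries smoke.
    return fine + strength * fine * noise

# Example: turn a cheap 32x32 field into a detailed 128x128 one.
detailed = add_turbulent_detail(np.random.default_rng(1).random((32, 32)))
```

The real technique is far more careful, it tracks where which frequency bands were lost and advects the synthesized detail with the flow, but the cheap coarse-simulate-then-enrich structure is the same.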
[{"start": 0.0, "end": 3.7600000000000002, "text": " How can we simulate the motion of fluids and smoke?"}, {"start": 3.7600000000000002, "end": 8.64, "text": " If we had a block of plastic in our computer program and we would add the laws of physics"}, {"start": 8.64, "end": 13.200000000000001, "text": " that control the motion of fluids, it would immediately start behaving like water."}, {"start": 14.64, "end": 19.6, "text": " In these simulations we're mostly interested in the velocity and the pressure of the fluid."}, {"start": 19.6, "end": 25.04, "text": " How these quantities exactly change in time? This we need to compute in every point in space,"}, {"start": 25.04, "end": 30.4, "text": " which would take an infinite amount of resources. What we usually do is we try to compute them"}, {"start": 30.4, "end": 35.6, "text": " not everywhere, but in many different places and we try to guess these quantities between these"}, {"start": 35.6, "end": 42.16, "text": " points. By discussing a lot of information is lost. And it still takes a lot of resources."}, {"start": 42.8, "end": 48.480000000000004, "text": " For a really detailed simulation it is not uncommon that one has to wait for days to get only a"}, {"start": 48.480000000000004, "end": 53.6, "text": " few seconds of video footage. And this is where wavelet turbulence comes into play."}, {"start": 53.6, "end": 59.6, "text": " We know exactly what frequencies are lost and where they are lost. And this technique enables us"}, {"start": 59.6, "end": 65.28, "text": " to synthesize this information and that did back very cheaply. This way one can get really"}, {"start": 65.28, "end": 71.76, "text": " detailed simulations at the very reasonable cost. Here are some examples of smoke simulations with"}, {"start": 71.76, "end": 78.32, "text": " and with out wavelet turbulence. It really makes a great difference. It is no accident that the"}, {"start": 78.32, "end": 84.8, "text": " technique won a technical Oscar award. Among many other systems it is implemented in Blender"}, {"start": 84.8, "end": 89.28, "text": " so anyone can give it a try. Make sure to do so because it's lots of fun."}, {"start": 90.16, "end": 94.39999999999999, "text": " The paper and the supplementary video is also available in the description box."}, {"start": 94.39999999999999, "end": 99.83999999999999, "text": " This is an amazing paper. Easily one of my favorites. So if you know some math,"}, {"start": 99.83999999999999, "end": 106.08, "text": " make sure to take a look and if you don't just enjoy the footage. Thank you for watching and see"}, {"start": 106.08, "end": 116.08, "text": " you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=bYGL3fLYudM
TU Wien Rendering #39 - Assignment 4, Farewell
The last assignment is handed out in this segment. Hope you have enjoyed the journey at least as much as I did! If you would like to know more, make sure to check out the course website below and download the slides. There are plenty of links to materials for those who would like to expand their knowledge! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Okay, so the very last assignment: please go to this unofficial LuxRender scene repository. There is also a link below it that shows you what the individual scenes look like. Please choose a scene and render it with an unbiased method several times and merge the results together. I hope that you remember from the previous lecture how you can merge together individual runs of unbiased algorithms and hopefully get something better than the individual images. Do it with other algorithms too, both biased and unbiased. I've also uploaded a settings file to help you with these different algorithms; try them and see what happens. I don't want to spoil the fun, but obviously we expect a given class of algorithms to perform well in this regard, and some of them... hmm, not so much. Also try to experiment with photon mapping-type algorithms. Place your observations in the observations.txt file. Tell me what kind of algorithm worked where, what the failure cases are and why, and whether this is what you expected or you got something different. Remember, when we were doing mathematics in the very first lectures, we always listed our expectations first, and then, after we got the results, we discussed whether reality was in line with our expectations or not. This is a really good methodology, so please do it all the time. There will be a rendering competition afterwards, where a really prestigious international committee will judge your work, and there are lots of valuable prizes. We will give away three tickets to next year's CEGC conference and three to Pixel Vienna, so that's a total of six free conference tickets for you. I'm also holding a talk at this CEGC, so I will be more than excited to meet you there. The CEGC is the Central European Games Conference. This is Pixel Vienna; this is their flyer from last year. So after you hand in your work, you may get one of the three prizes. The third prize is plus half a grade on the exam, provided that you would already pass the course. The second prize is plus one grade on the exam, and the first prize is, and this is the official description, "perhaps an even greater influence on the exam grade". If you don't really have an artistic vein, or you would like to do a programming assignment instead of the LuxRender scene contest, you're free to do that. Please contact me and let's cook up a realistic and exciting problem for you that you can solve. So don't just start pounding away at your keyboard and doing something; please write to me so we can discuss what exactly you're going to do. And if you do that, you are going to be eligible for the very same prizes. Okay, so what about the rendering contest? The contest theme this year is going to be fluids. It's great because we have a great fluid simulator in Blender. You have to create a scene and hand in converged images. Not noisy ones, converged images of this scene. Okay, so what is the list of things that you need to hand in? We would like to get the LuxRender scene. Please copy every asset, every texture, every mesh, everything that you have in the scene into this LuxRender scene directory that you give to us, so we can run it ourselves. As I said, we also need a completely noise-free rendered image. We would also need the blend file, or if you're using a different modeling program, you're absolutely free to use that, please be my guest; just send us the project file. And also send us one text file with a few lines on what you tried to accomplish and why you think that your work is the greatest work ever created by humanity.
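To make the merging step concrete, here is a minimal sketch of how averaging independent unbiased runs could look, assuming each run has been loaded as a linear-color float array of identical shape; the NumPy usage, the `load_exr` helper, and the file names are illustrative assumptions, not part of the assignment hand-in.

```python
# A minimal sketch of merging independent unbiased runs by averaging.
# Assumes each render was saved in linear color and loaded as a float
# array of identical shape; `load_exr` and the file names are hypothetical.
import numpy as np

def merge_unbiased_runs(images):
    """Average N independent unbiased renders of the same scene.

    Every run is an unbiased estimate of the same pixel radiances, so the
    mean stays unbiased while the noise standard deviation drops by a
    factor of sqrt(N).
    """
    stack = np.stack(images, axis=0).astype(np.float64)
    return stack.mean(axis=0)

# Usage sketch (hypothetical loader and file names):
# runs = [load_exr(f"run_{i:02d}.exr") for i in range(8)]
# merged = merge_unbiased_runs(runs)  # roughly 2.8x less noise than one run
```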
Third-party sources for meshes are fine, but you have to give credit to the people who created them. Important: we also ask for a halfway progress report. What does this mean? There will be a final deadline for the assignment, and halfway to that there will be another deadline, where I expect you to send me an email with the very same subject as the assignment itself. You send me one rendered image, which is just a rough, really rough draft of what you're going to do, so I'd like to see your current progress, and at least one line of text with your plans: what exactly are you trying to accomplish? This will do, because we would like to discourage people from trying to put together some scene in the last two days of the assignment and not having enough time to render it correctly or to develop it correctly. We would just like to make sure that you are on time. And please check the course website to see what exactly the deadlines are for this. Okay, so who will be on the committee? First, Jean-Philippe Grimaldi: he is the head developer of LuxRender, the kindest, kindest person, who has been on this committee for the third year now, and he's always very excited to see your work. Wojciech Jarosz, you hopefully remember the name from before; he's the head of the rendering group of Disney Research Zürich and an excellent, a truly excellent researcher. And Michael Wimmer, our beloved professor, who is the head of the rendering group at our university. What about the programming people? If you don't want to participate in the rendering contest, that's fine, that's perfectly fine. There are two different things that you can do. One: do something with LuxRender. We have, for instance, a bug tracker where people are asking for features and people are also asking for bugs to be fixed. So if you're interested in this, then please take a look, and if you commit something that is useful, then you will be eligible for a first prize. Note that the first prize can be won by multiple people: if you cross a given threshold with the quality of your work, then you will be eligible for the first prize, and there may be many of you who do. And there's also the smallpaint line, where you can improve smallpaint by practically anything. You can add bidirectional path tracing, multiple importance sampling, photon mapping, whatever you have in mind, but before you decide on anything, contact me. Last year's theme was volumetric caustics. I think that's an amazing theme, but this year this is not what we're going to be interested in. What we're going to be interested in is fluid simulations. This scene was created in Blender, so you can do sophisticated simulations like this, and even much more sophisticated than this. I have prepared some Blender fluid simulation tutorials for you, so please take a look, and please make sure that your simulation is at the very least 300 cubed. And also an example video to set the tone: this is taken from the RealFlow reel from last year. It is absolutely amazing; make sure to take a look. And the subject of the email that we're looking for is the very same, you only need to increment the number of the assignment. And that's it. It's been a wonderful journey for me, so thanks for tuning in. I was trying my very best to teach you the intricacies of light transport, and I hope that now you indeed see the world differently. I got some student feedback from many of you, and I got the kindest of words, so thank you very much. I'm really grateful. And if you're watching this through the internet, then we have a comment section.
Let us know if you liked the course. So thank you very much, and despite the fact that it seems that the course ends here, we have a lecture from before that we haven't published yet, and there will be some more videos with Thomas, who teaches you how to compute subsurface scattering. So one more time, thank you very much; it's been a wonderful journey. Thanks. I'll see you later.
[{"start": 0.0, "end": 6.24, "text": " Okay, so the very last assignment, please go to this unofficial"}, {"start": 6.24, "end": 10.32, "text": " extruder scene repository. There is also a link below it that shows you how the"}, {"start": 10.32, "end": 14.48, "text": " individual scenes look like. Please choose a scene and render it with an"}, {"start": 14.48, "end": 20.16, "text": " unbiased method several times and merge the results together. I hope that you"}, {"start": 20.16, "end": 25.04, "text": " remember from the previous lecture how you can merge together individual runs"}, {"start": 25.04, "end": 29.28, "text": " of unbiased algorithms and hopefully get something better than the individual"}, {"start": 29.28, "end": 34.160000000000004, "text": " images. Do it with other algorithms both biased and unbiased algorithms. I've"}, {"start": 34.160000000000004, "end": 38.8, "text": " also uploaded a settings file to help you with these different algorithms and"}, {"start": 38.8, "end": 43.84, "text": " see what happens. I don't want to spoil the fun, but obviously we expect a"}, {"start": 43.84, "end": 47.84, "text": " given class of algorithms to perform well in this regard and some of them."}, {"start": 47.84, "end": 53.2, "text": " Hmm, not so much. Also try to experiment with photo mapping type algorithms."}, {"start": 53.2, "end": 58.96, "text": " Place your observations in the observations.txt file. Tell me what kind of algorithm"}, {"start": 58.96, "end": 63.52, "text": " worked where, what are the failure cases and why, and is this what you have"}, {"start": 63.52, "end": 67.52, "text": " expected or did you get something different. Remember when we were doing"}, {"start": 67.52, "end": 71.36, "text": " mathematics in the very first lectures we first always listed our"}, {"start": 71.36, "end": 75.84, "text": " expectations and then after we got the results we discussed whether"}, {"start": 75.84, "end": 80.16, "text": " reality was in line with our expectations or not. This is a really good"}, {"start": 80.16, "end": 85.44, "text": " methodology, so please do it all the time. There will be a rendering competition"}, {"start": 85.44, "end": 89.52, "text": " afterwards where a really prestigious international committee will judge your"}, {"start": 89.52, "end": 95.2, "text": " work and there are lots of valuable prizes. We will get three tickets to the"}, {"start": 95.2, "end": 101.84, "text": " next year's CEGC conference and Pixel Vienna. So that's a total of six free"}, {"start": 101.84, "end": 106.56, "text": " conference tickets for you. I'm also holding a talk at this CEGC so I will be"}, {"start": 106.56, "end": 112.88, "text": " more than excited to meet you there. So the CEGC is the Center European Games"}, {"start": 112.88, "end": 119.03999999999999, "text": " Conference. This is Pixel Vienna. They're flyer from last year."}, {"start": 119.03999999999999, "end": 123.19999999999999, "text": " So after you hand it in your work you may be getting one of the three prizes."}, {"start": 123.19999999999999, "end": 128.0, "text": " The third prize is plus half of a grade on the exam and this provided that you"}, {"start": 128.0, "end": 133.6, "text": " would already pass the course. The second prize is plus one grade on the exam"}, {"start": 133.6, "end": 139.35999999999999, "text": " and the first prize is perhaps this is the official description perhaps in"}, {"start": 139.36, "end": 144.88000000000002, "text": " even greater influence on the exam grade. 
If you don't really have artistic"}, {"start": 144.88000000000002, "end": 149.28, "text": " veins or you would like to do some programming assignment instead of the"}, {"start": 149.28, "end": 154.32000000000002, "text": " Luxrender Sim contest you're free to do that. Please contact me. Let's cook up"}, {"start": 154.32000000000002, "end": 158.96, "text": " a realistic and exciting problem for you that you can solve. So don't just"}, {"start": 158.96, "end": 163.04000000000002, "text": " start pounding away at your keyboard and doing something. Please write to me so"}, {"start": 163.04000000000002, "end": 166.8, "text": " we can discuss what you're exactly going to do. And if you do that you are"}, {"start": 166.8, "end": 170.32000000000002, "text": " going to be subject to the very same prize."}, {"start": 170.32000000000002, "end": 174.4, "text": " Okay so what about the rendering contest? The contest theme this year is going to"}, {"start": 174.4, "end": 179.60000000000002, "text": " be fluids. It's great because we have a great fluid simulator in blender. You"}, {"start": 179.60000000000002, "end": 184.08, "text": " have to create a scene and hand in converged images. Not noisy,"}, {"start": 184.08, "end": 187.92000000000002, "text": " converged images of this scene. Okay so what is the list of things that you need"}, {"start": 187.92000000000002, "end": 192.8, "text": " to hand in? We would like to get the Luxrender scene. Please copy every asset,"}, {"start": 192.8, "end": 197.76000000000002, "text": " every texture, every mesh, everything that you have in the scene. In this"}, {"start": 197.76000000000002, "end": 202.4, "text": " Luxrender scene directory that you give to us so we can run it ourselves."}, {"start": 202.4, "end": 206.4, "text": " We need also like I said a completely noise free render image."}, {"start": 206.4, "end": 210.0, "text": " We would also need the blend file or if you're using a different model or"}, {"start": 210.0, "end": 214.8, "text": " program you're absolutely free to use that please be my guest. Just please send"}, {"start": 214.8, "end": 220.24, "text": " us the project file. And also send us one text file with a few lines on what you"}, {"start": 220.24, "end": 225.04000000000002, "text": " try to accomplish and why you think that your work is the greatest work ever"}, {"start": 225.04000000000002, "end": 230.48000000000002, "text": " created by humanity. Third party sources for meshes are fine but you have to"}, {"start": 230.48000000000002, "end": 235.60000000000002, "text": " give credit to the people who created it. Important. We also ask for a"}, {"start": 235.60000000000002, "end": 240.8, "text": " halfway progress report. What does it mean? There will be a final deadline for"}, {"start": 240.8, "end": 245.44, "text": " the assignment and halfway through that there will be another deadline where"}, {"start": 245.44, "end": 250.24, "text": " I expect you to send me an email with the very same subject as the assignment"}, {"start": 250.24, "end": 255.28, "text": " itself. You send me one render image which is just a rough, really rough"}, {"start": 255.28, "end": 260.72, "text": " draft of what you're going to do. So I'd like to see your current progress"}, {"start": 260.72, "end": 265.2, "text": " and at least one line of text with your plans. What are you exactly trying to"}, {"start": 265.2, "end": 269.68, "text": " accomplish? 
This will do because we would like to discourage people from"}, {"start": 269.68, "end": 274.96, "text": " trying to put together some scene in the last two days of the assignment and not"}, {"start": 274.96, "end": 279.35999999999996, "text": " have enough time to render it correctly or to develop it correctly."}, {"start": 279.35999999999996, "end": 283.76, "text": " We would just like to make sure that you are on time. And please check the course"}, {"start": 283.76, "end": 288.0, "text": " website to see what exactly are the deadlines for this."}, {"start": 288.0, "end": 292.88, "text": " Okay so we'll be on the committee. First Jean-Philippe Grimaldi he is the head"}, {"start": 292.88, "end": 297.52, "text": " developer of LuxRender, the kindest, kindest person who has been on this"}, {"start": 297.52, "end": 301.91999999999996, "text": " committee for the third year now and he's always very excited to see your work."}, {"start": 301.92, "end": 305.68, "text": " Vojta Kiaros, you hopefully remember the name from before, he's the head of the"}, {"start": 305.68, "end": 310.0, "text": " rendering group of Disney Research Zurich and an excellent, a truly"}, {"start": 310.0, "end": 314.32, "text": " excellent researcher. Michal Vimer, our beloved professor who is the head of the"}, {"start": 314.32, "end": 318.32, "text": " rendering group at our university."}, {"start": 318.32, "end": 322.40000000000003, "text": " What about the programming guys? If you don't want to participate in the rendering"}, {"start": 322.40000000000003, "end": 326.32, "text": " contest that's fine, that's perfectly fine. There's two different things that"}, {"start": 326.32, "end": 331.28000000000003, "text": " you can do. One, do something with LuxRender. We have for instance a"}, {"start": 331.28, "end": 336.23999999999995, "text": " bug tracker where people are asking for features and people are also asking for"}, {"start": 336.23999999999995, "end": 340.32, "text": " bugs to be fixed. So if you're interested in this then please take a look and if"}, {"start": 340.32, "end": 344.71999999999997, "text": " you commit something that is useful then you will be subject to a first prize."}, {"start": 344.71999999999997, "end": 349.28, "text": " Now note that the first prize can be won by multiple people. If you cross a given"}, {"start": 349.28, "end": 354.32, "text": " threshold with the quality of your work then you will be subjected to the first"}, {"start": 354.32, "end": 358.79999999999995, "text": " prize and there may be many of you who do. And there's also the small paint line"}, {"start": 358.8, "end": 363.12, "text": " where you can improve small paint by practically anything. You can add by"}, {"start": 363.12, "end": 367.28000000000003, "text": " directional past tracing, multiple important sampling, photo mapping,"}, {"start": 367.28000000000003, "end": 373.04, "text": " whatever you have in mind, but before you decide on anything, contact me."}, {"start": 373.04, "end": 379.76, "text": " Last year's theme was volumetric caustics. I think that's an amazing theme"}, {"start": 379.76, "end": 383.2, "text": " but this year this is not what we're going to be interested in. What we're going"}, {"start": 383.2, "end": 387.92, "text": " to be interested in is fluid simulations. 
This scene was created in blender"}, {"start": 387.92, "end": 393.04, "text": " so you can do sophisticated simulations like this and even much more sophisticated than this."}, {"start": 393.04, "end": 397.52000000000004, "text": " I have prepared some blender fluid simulation tutorials for you so please take a look"}, {"start": 399.52000000000004, "end": 404.24, "text": " and please make sure that your simulation is the very least 300 cube."}, {"start": 405.76, "end": 412.32, "text": " And also an example video to set the tone. This is taken from the real flow real from last year."}, {"start": 412.32, "end": 418.4, "text": " It is absolutely amazing. Make sure to take a look. And the subject of the email that we're"}, {"start": 418.4, "end": 423.52, "text": " looking for is the very same. You only need to increment the number of the assignment."}, {"start": 425.2, "end": 430.8, "text": " And that's it. It's been a wonderful journey for me so thanks for tuning in."}, {"start": 430.8, "end": 437.28, "text": " I was trying my very best to teach you the intricacies of light transport and I hope that now you"}, {"start": 437.28, "end": 442.55999999999995, "text": " indeed see the world differently. I got some student feedbacks from many of you and I got the"}, {"start": 442.55999999999995, "end": 447.44, "text": " kindest of words so thank you very much. I'm really grateful. And if you're watching this"}, {"start": 447.44, "end": 451.67999999999995, "text": " through the internet then we have a comment section. Let us know if you like the course."}, {"start": 451.67999999999995, "end": 457.35999999999996, "text": " So thank you very much and despite the fact that it seems that the course ends here we have a"}, {"start": 457.35999999999996, "end": 462.47999999999996, "text": " lecture from before that we haven't published yet there will be some more videos with Thomas"}, {"start": 462.47999999999996, "end": 466.96, "text": " who teaches you how to compute subsurface scattering. So one more time thank you very"}, {"start": 466.96, "end": 474.56, "text": " much and it's been a wonderful journey. Thanks. I'll see you later."}]
Two Minute Papers
https://www.youtube.com/watch?v=C3DtGTr0jX8
TU Wien Rendering #38 - Awesome Rendering Papers from 2013-2015
There are tons of really inspiring research works from the last two years, many of which were presented at the SIGGRAPH conference. Path space manipulation, more accurate spectral rendering with hero wavelength spectral sampling, rapid rendering of heterogeneous participating media with residual ratio tracking, sampling light paths in the gradient domain, rendering granular materials like billions of sand grains, you name it! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Path space manipulation, this is again from the KIT guys, and it is a wonderful tool. Now, what happens if the artist creates a scene that he really likes, but there are some artifacts or some small effects that he would like to get rid of? What do you have to do? Well, obviously you have to change the physical parameters, because this is what physical reality would look like. But still, you could say that if you take a look at the left image, you don't really like the reflections on the wall behind, or you don't like the incoming direction of the sunlight, or maybe you don't like the reflection of the car in the mirror. So, for instance, for the mirror: what if we could pretend that the normal of the mirror wasn't what it is, but something different? And this is exactly what this work gives you. On the right you can see a side-by-side comparison of the original image and the manipulated scene. It looks much better, and you don't have to change a single thing in your scene; you just manipulate, you just bend some of the light paths in the scene. Imagine that you don't like the caustics on the bunny, and you can just basically grab and pull the thing onto the face of the bunny. It is really, really amazing. This work may be one of the reasons, one of many if I may add, for Pixar to change from their REYES renderer, which has a long history of more than 25 years of movies and wonderful works; now they have changed to path tracing. They use global illumination in their newest movies, and imagine how powerful this tool can be in the hands of a professional artist, let alone a team of professional artists. We have some amazing times ahead in global illumination research. Residual ratio tracking, this is a Disney paper. This is basically about how to render heterogeneous participating media. What does that mean? Heterogeneous means that either the density or the scattering or absorption properties of the medium are changing in space; they are not uniform. This technique helps you to render this kind of light transport much more quickly than previous methods. It builds on Woodcock tracking and improves it; Woodcock tracking is basically the industry standard way of rendering heterogeneous materials, and what it does, essentially, is a mathematically really appealing way of probabilistically treating the medium as if it was homogeneous instead. So it tries to reduce the problem to a simpler problem that we can solve easily, and with some probabilistic tricks on top of this it gives you an unbiased estimator for the heterogeneous participating medium. And this piece of work is an improvement even over that. This was done by Jan Novák and colleagues at Disney. Now, this work is by Alexander Wilkie, who, by the way, used to be a PhD student here at our university, and he graduated here. Now he has moved to the Czech Republic and is doing wonderful work. We discussed earlier that if you would like to do fully spectral rendering, then you take a random sample in the spectrum of visible wavelengths. And he came up with a trick: if you do this in a way that is just a bit smarter than what we do naively, then you can get results like this using the same number of samples. You can see that the noise is much more representative of the actual image that we're rendering. Let's take a look at another example. How about this: this is the naive spectral rendering, and this is his technique, called hero wavelength spectral sampling. An amazing piece of work.
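For intuition, here is a minimal sketch of the Woodcock (delta) tracking baseline that the paper improves upon: free-flight distances are sampled as if the medium were homogeneous with a majorant density, and real collisions are accepted probabilistically. The density function and majorant below are made-up examples, not taken from the paper.

```python
# A minimal 1D sketch of Woodcock (delta) tracking, assuming we know a
# majorant SIGMA_MAX that bounds the true extinction sigma_t(x) everywhere.
# This is the baseline that residual ratio tracking builds upon and improves.
import math
import random

def sigma_t(x):
    # Hypothetical heterogeneous extinction coefficient along the ray.
    return 0.5 + 0.4 * math.sin(3.0 * x) ** 2

SIGMA_MAX = 0.9  # majorant: sigma_t(x) <= SIGMA_MAX for all x

def free_flight_distance(max_dist):
    """Sample an unbiased collision distance; None means the ray escaped."""
    x = 0.0
    while True:
        # Step as if the medium were homogeneous with density SIGMA_MAX.
        x -= math.log(1.0 - random.random()) / SIGMA_MAX
        if x >= max_dist:
            return None  # no real collision before leaving the medium
        # Accept as a real collision with probability sigma_t/SIGMA_MAX;
        # otherwise it was a fictitious "null" collision, so keep walking.
        if random.random() < sigma_t(x) / SIGMA_MAX:
            return x

print(free_flight_distance(10.0))
```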
You should definitely, definitely check it out. I promised you that we would cover algorithms from 1986 up until last week. So, this literally appeared last week: this is the gradient-domain path tracing algorithm. But I will also use a figure from the gradient-domain Metropolis paper for better understandability. So the key idea is that we are not seeking the light anymore; we're seeking changes. Now, what does this mean? Take a look at the image on the upper left. It says that we're basically interested in this small region that is a hard shadow boundary. And below it, the image says that, let's say, whatever function we're computing is zero in the shadow boundary and one outside. You can intuitively imagine that this means that, yes, we have no radiance in the shadow boundary and we have a really bright region outside. What would the regular Metropolis sampler do? Well, it is a Markov chain that in its stationary distribution would like to do optimal importance sampling. What does that mean? It means that the brighter regions would be sampled more. So you can see the red dots in there: we would sample this region that is one all the time, and we would never put any sample in the zero region. But we are not seeking the light, we are seeking changes. So imagine that we are interested in putting samples at the shadow boundary, because we know that there is some change happening there, but to the right and to the left of it there is absolutely no change. So if I get enough information only about the shadow boundary, then I can reconstruct the whole image with a technique that is called Poisson image reconstruction. Intuitively, this means something like reconstructing a function from its gradients. You can imagine it in 1D as something like this: you have a 1D function. You are interested in the function, but the only thing you have is how the function changes; you have derivatives. And from these derivatives, you would like to reconstruct the function. This is exactly what the algorithm does, and it's an amazing idea. Love it. You can see that it significantly outperforms path tracing with a much smaller number of samples. Now, note that because of the Poisson reconstruction step, the 5K SPP result is compared to the 2K SPP one. This is probably because it is more expensive to draw samples with this gradient-domain path tracing. You can see that this smart algorithm is really worth the additional time. Another great paper from last week, from our friends at Disney. What if we had a scene where we build a castle out of sand? And what if we are crazy enough to want to render every small grain of sand that is in the castle? That would mean billions upon billions of objects. That's a lot of intersections; that's a lot of problems, even if you have some kind of spatial acceleration structure. So this would take forever and a day. And they came up with a really cool solution that can give you these beautiful, beautiful results at least an order of magnitude faster. I also promised that I would refer you to implementations of many of the discussed algorithms. So this is a huge list. Some of them are implemented only on the CPU; some of them also have GPU implementations. So take a look and play with them, it's lots of fun. And if you're watching this lecture on the internet, don't worry about the links: in the video description box I provide a link to these slides, and you can just click away at them. And there are some must-see videos.
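To make the 1D intuition concrete, here is a small sketch of a screened-Poisson-style reconstruction: we combine a noisy primal estimate with less noisy finite-difference (gradient) estimates by solving a least-squares problem. The weight alpha and the toy data are illustrative assumptions; real gradient-domain renderers solve the analogous 2D problem over the whole image.

```python
# A 1D toy version of the Poisson reconstruction step in gradient-domain
# path tracing: given a noisy "primal" estimate of a function and cleaner
# estimates of its finite differences, solve
#     min_f  alpha * ||f - primal||^2 + ||D f - grad||^2
# as one least-squares problem. Numbers are purely illustrative.
import numpy as np

def screened_poisson_1d(primal, grad, alpha=0.2):
    n = len(primal)
    D = np.diff(np.eye(n), axis=0)           # (n-1) x n finite-difference matrix
    A = np.vstack([np.sqrt(alpha) * np.eye(n), D])
    b = np.concatenate([np.sqrt(alpha) * primal, grad])
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Toy data: a step function (a hard shadow boundary), noisy primal samples,
# and nearly clean gradients that are zero everywhere except at the step.
true = np.where(np.arange(64) < 32, 0.0, 1.0)
primal = true + np.random.normal(0.0, 0.3, 64)    # noisy "path-traced" image
grad = np.diff(true) + np.random.normal(0.0, 0.02, 63)
recon = screened_poisson_1d(primal, grad)          # much closer to `true`
```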
Some awesome slow-motion fracture tests with SLG. Well, smashing virtual objects is a lot of fun, and slow-motion videos are a lot of fun, so this absolutely has to be a crazy good video. And it really is. And the remarkable thing about this is that the whole thing took 25 seconds to render per HD frame. I've also uploaded a bonus unit on how to use Blender and LuxRender together. Basically, this means that you model something in Blender and you would like to export this scene and render it in LuxRender. This is going to be useful in the next assignment.
[{"start": 0.0, "end": 5.68, "text": " Path space manipulation, this is again from the kit guys, this is a wonderful tool."}, {"start": 5.68, "end": 12.88, "text": " Now, what happens if the artist would create a scene that he really likes but there are"}, {"start": 12.88, "end": 18.400000000000002, "text": " some artifacts or some small effects that he would like to get rid of?"}, {"start": 18.400000000000002, "end": 19.400000000000002, "text": " What do you have to do?"}, {"start": 19.400000000000002, "end": 23.36, "text": " Well obviously you have to change the physical parameters because this is what physical"}, {"start": 23.36, "end": 25.52, "text": " reality would look like."}, {"start": 25.52, "end": 29.92, "text": " But still you could say that if you take a look at the left image you don't really like"}, {"start": 29.92, "end": 36.92, "text": " the reflections on the wall behind or you don't like the incoming direction of the sunlight"}, {"start": 36.92, "end": 40.44, "text": " or maybe you don't like the reflection of the car in the mirror."}, {"start": 40.44, "end": 45.4, "text": " So for instance for the mirror what if we could pretend that the normal of the mirror wasn't"}, {"start": 45.4, "end": 50.2, "text": " what it is but it would have been something different and this is exactly what this work"}, {"start": 50.2, "end": 51.72, "text": " gives you."}, {"start": 51.72, "end": 56.92, "text": " On the right you can see a side by side comparison of the original image and the manipulated scene."}, {"start": 56.92, "end": 61.239999999999995, "text": " It looks much better and you don't have to change a single thing in your scene."}, {"start": 61.239999999999995, "end": 67.28, "text": " You just manipulate and you just bend the some of the light paths there are in the scene."}, {"start": 67.28, "end": 71.44, "text": " Imagine that you don't like the caustics on the bunny and you can just basically grab"}, {"start": 71.44, "end": 74.48, "text": " and pull the thing onto the face of the bunny."}, {"start": 74.48, "end": 77.6, "text": " It is really really amazing."}, {"start": 77.6, "end": 83.52, "text": " This work may be one of the reasons one of many if I may add for Pixar to change from"}, {"start": 83.52, "end": 90.32, "text": " their race render that has a long history of more than 25 years of movies and wonderful"}, {"start": 90.32, "end": 93.39999999999999, "text": " works and now they have changed to path tracing."}, {"start": 93.39999999999999, "end": 98.28, "text": " They use global illumination in their newest movies and imagine how powerful this tool"}, {"start": 98.28, "end": 104.56, "text": " can be in the hands of a professional artist let alone a team of professional artists."}, {"start": 104.56, "end": 110.84, "text": " We have some amazing times ahead in global illumination research."}, {"start": 110.84, "end": 114.36, "text": " Residue or ratio tracking this is a Disney paper."}, {"start": 114.36, "end": 120.12, "text": " This is basically about how to render heterogeneous participating media."}, {"start": 120.12, "end": 121.12, "text": " What does it mean?"}, {"start": 121.12, "end": 125.72, "text": " heterogeneous means that either the density or the scattering or absorption properties"}, {"start": 125.72, "end": 128.24, "text": " of the medium are changing in space."}, {"start": 128.24, "end": 129.92000000000002, "text": " They are not uniform."}, {"start": 129.92, "end": 134.88, "text": " This technique helps you to render this kind of light transport much 
more quickly than"}, {"start": 134.88, "end": 136.07999999999998, "text": " previous methods."}, {"start": 136.07999999999998, "end": 137.83999999999997, "text": " It builds on woodcock tracking."}, {"start": 137.83999999999997, "end": 143.11999999999998, "text": " It improves woodcock tracking but it is basically the industry standard way of rendering"}, {"start": 143.11999999999998, "end": 149.95999999999998, "text": " heterogeneous materials and what it does essentially is a mathematically really appealing way of"}, {"start": 149.95999999999998, "end": 154.27999999999997, "text": " probabilistically treating the scene as if it was homogenous instead."}, {"start": 154.27999999999997, "end": 159.2, "text": " So trying to reduce the problem to a simpler problem that we can solve easily and doing"}, {"start": 159.2, "end": 165.28, "text": " some probabilistic tricks over this classification and this gives you an unbiased estimator for"}, {"start": 165.28, "end": 167.76, "text": " the heterogeneous participating medium."}, {"start": 167.76, "end": 171.35999999999999, "text": " And this piece of work is an improvement even over that."}, {"start": 171.35999999999999, "end": 176.2, "text": " This was done by Jan Novak and colleagues at Disney."}, {"start": 176.2, "end": 181.79999999999998, "text": " In this work by Alexander Wilkie who by the way used to be a PhD student here at our"}, {"start": 181.79999999999998, "end": 183.76, "text": " university and he graduated here."}, {"start": 183.76, "end": 188.04, "text": " Now he moved to the Czech Republic and is doing wonderful work."}, {"start": 188.04, "end": 192.95999999999998, "text": " We discussed earlier that if you would like to do fully spectral rendering then you take"}, {"start": 192.95999999999998, "end": 196.68, "text": " a random sample in the spectrum of visible wavelengths."}, {"start": 196.68, "end": 201.12, "text": " And he came up with a trick that if you do this in a way that is just a bit smarter than"}, {"start": 201.12, "end": 206.51999999999998, "text": " what we do naively then you can get results like this using the same number of samples."}, {"start": 206.51999999999998, "end": 210.92, "text": " You can see that the noise is much more representative to the actual image that we're"}, {"start": 210.92, "end": 211.92, "text": " rendering."}, {"start": 211.92, "end": 215.12, "text": " Let's take a look at another example."}, {"start": 215.12, "end": 220.84, "text": " How about this, this is the naive spectral rendering and his technique called the hero"}, {"start": 220.84, "end": 223.56, "text": " wavelength spectral sampling."}, {"start": 223.56, "end": 224.56, "text": " Amazing piece of work."}, {"start": 224.56, "end": 228.28, "text": " You should definitely, definitely check it out."}, {"start": 228.28, "end": 235.52, "text": " I promise to you that we would start out with algorithms from 1986 up to until last week."}, {"start": 235.52, "end": 238.28, "text": " So this literally appeared last week."}, {"start": 238.28, "end": 241.28, "text": " This is the gradient domain part tracing algorithm."}, {"start": 241.28, "end": 247.2, "text": " But I will also use a figure from the gradient domain metropolis paper for better understandability."}, {"start": 247.2, "end": 251.08, "text": " So the key idea is that we are not seeking the light anymore."}, {"start": 251.08, "end": 253.48, "text": " We're seeking changes."}, {"start": 253.48, "end": 254.76, "text": " Now what does it mean?"}, {"start": 254.76, "end": 257.16, "text": 
" Take a look at the image on the upper left."}, {"start": 257.16, "end": 262.92, "text": " It says that we're basically interested in this small region that is a hard shadow boundary."}, {"start": 262.92, "end": 267.76, "text": " And below it the image says that let's say that this whatever function that we're computing"}, {"start": 267.76, "end": 271.26, "text": " is zero in the shadow boundary and one outside."}, {"start": 271.26, "end": 275.96, "text": " You can intuitively imagine that this means that yes, we have no radiance in the shadow"}, {"start": 275.96, "end": 279.56, "text": " boundary and we have a really bright region outside."}, {"start": 279.56, "end": 282.28, "text": " What would the regular metropolis sampler do?"}, {"start": 282.28, "end": 288.32, "text": " Well, it is a mark of chain that in its stationary distribution would like to do optimal important"}, {"start": 288.32, "end": 289.48, "text": " sampling."}, {"start": 289.48, "end": 290.48, "text": " What does it mean?"}, {"start": 290.48, "end": 293.32, "text": " It means that the brighter regions would be sampled more."}, {"start": 293.32, "end": 295.15999999999997, "text": " So you can see the red dots in there."}, {"start": 295.15999999999997, "end": 299.92, "text": " We would sample this region that is one all the time and we would never put any sample"}, {"start": 299.92, "end": 300.92, "text": " in the zero."}, {"start": 300.92, "end": 304.64000000000004, "text": " But if we are not seeking for the light, we are seeking for changes."}, {"start": 304.64000000000004, "end": 310.0, "text": " So imagine that we are interested in putting samples at the shadow boundary because we know"}, {"start": 310.0, "end": 315.0, "text": " that there is some change happening in there, but right and from the left to it, there is"}, {"start": 315.0, "end": 316.64000000000004, "text": " absolutely no change."}, {"start": 316.64000000000004, "end": 321.16, "text": " So if I get enough information only of the shadow boundary, then I can reconstruct the"}, {"start": 321.16, "end": 325.6, "text": " whole image with a technique that is called Poisson image reconstruction."}, {"start": 325.6, "end": 331.02000000000004, "text": " This means intuitively something like reconstructing a function from its gradients."}, {"start": 331.02000000000004, "end": 335.26000000000005, "text": " You can imagine it in 1D as something like you have a 1D function."}, {"start": 335.26000000000005, "end": 339.86, "text": " You are interested in the function, but the only thing you have is how the function changes."}, {"start": 339.86, "end": 341.16, "text": " You have derivatives."}, {"start": 341.16, "end": 344.48, "text": " And from these derivatives, you would like to reconstruct the function."}, {"start": 344.48, "end": 348.32000000000005, "text": " This is exactly what the algorithm does and it's an amazing idea."}, {"start": 348.32000000000005, "end": 350.40000000000003, "text": " Love it."}, {"start": 350.4, "end": 357.28, "text": " You can see that it significantly outperforms past racing with a much lesser number of samples."}, {"start": 357.28, "end": 363.15999999999997, "text": " Now let's note that because of the Poisson reconstruction step, the 5K SPP is compared"}, {"start": 363.15999999999997, "end": 364.64, "text": " to the 2K SPP."}, {"start": 364.64, "end": 369.44, "text": " This is probably because it is more expensive to draw samples with this gradient domain"}, {"start": 369.44, "end": 370.44, "text": " past racing."}, {"start": 
370.44, "end": 382.32, "text": " You can see that this smart algorithm is really worth the additional time."}, {"start": 382.32, "end": 386.2, "text": " Another great paper from last week from our friends at Disney."}, {"start": 386.2, "end": 390.56, "text": " What if we would have a scene where we build a castle out of sand?"}, {"start": 390.56, "end": 396.4, "text": " And what if we are crazy enough that we would like to render every small grain of sand"}, {"start": 396.4, "end": 397.76, "text": " that is in the castle?"}, {"start": 397.76, "end": 401.03999999999996, "text": " That would mean billions upon billions of objects."}, {"start": 401.03999999999996, "end": 402.64, "text": " That's a lot of intersections."}, {"start": 402.64, "end": 404.32, "text": " That's a lot of problems."}, {"start": 404.32, "end": 408.88, "text": " Even if you have some kind of spatial exploration structure."}, {"start": 408.88, "end": 411.56, "text": " So this would take forever and a day."}, {"start": 411.56, "end": 416.12, "text": " And they came up with a really cool solution that can give you these beautiful, beautiful"}, {"start": 416.12, "end": 421.28, "text": " results, at least an order of magnitude faster."}, {"start": 421.28, "end": 426.68, "text": " I also promised to you that I would refer you to implementations of many of the discussed"}, {"start": 426.68, "end": 427.68, "text": " algorithms."}, {"start": 427.68, "end": 429.0, "text": " So this is a huge list."}, {"start": 429.0, "end": 431.16, "text": " Some of them are implemented only on the CPU."}, {"start": 431.16, "end": 434.48, "text": " Some of them have also GPU implementations."}, {"start": 434.48, "end": 436.32, "text": " So take a look and play with them."}, {"start": 436.32, "end": 437.32, "text": " It's lots of fun."}, {"start": 437.32, "end": 440.96000000000004, "text": " And if you're watching this lecture on the internet and don't worry about the links,"}, {"start": 440.96000000000004, "end": 445.44, "text": " in the video description box, I provide a link to these slides and you can just click"}, {"start": 445.44, "end": 448.96000000000004, "text": " away at them."}, {"start": 448.96000000000004, "end": 451.28000000000003, "text": " And there are some must see videos."}, {"start": 451.28000000000003, "end": 455.6, "text": " Some awesome, slow motion fracture tests with SLG."}, {"start": 455.6, "end": 460.08000000000004, "text": " Well, smashing virtual objects is a lot of fun."}, {"start": 460.08000000000004, "end": 462.64000000000004, "text": " Slow motion videos are a lot of fun."}, {"start": 462.64000000000004, "end": 465.24, "text": " So this absolutely has to be a crazy good video."}, {"start": 465.24, "end": 466.24, "text": " And it really is."}, {"start": 466.24, "end": 471.52000000000004, "text": " And the remarkable thing about this is that the whole thing took 25 seconds to render per"}, {"start": 471.52000000000004, "end": 472.52000000000004, "text": " HD frame."}, {"start": 472.52000000000004, "end": 477.48, "text": " I've also uploaded a bonus unit on how to use together blender and lux render."}, {"start": 477.48, "end": 481.52000000000004, "text": " Basically this means that you model something in blender and you would like to export this"}, {"start": 481.52000000000004, "end": 483.56, "text": " scene and render it in lux render."}, {"start": 483.56, "end": 486.16, "text": " This is going to be useful in the next assignment."}]
Two Minute Papers
https://www.youtube.com/watch?v=-WQu7cLuniM
TU Wien Rendering #37 - Manifold Exploration
That pesky torus enclosed in the glass cube again! Since it contains lots of SDS light paths, it is the bane of many rendering algorithms. However, it is no match for Wenzel Jakob's Manifold Exploration technique that explicitly looks for these paths and runs an equation solving system to find and render these light paths that are difficult or impossible to render with traditional techniques. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
This is the work of Wenzel Jakob; he's a super smart, really brilliant guy. He extended the Veach Metropolis algorithm to handle SDS transport better. Now, how is this possible? What I have written here is the very scientific way of stating what is really happening: the most difficult paths form a manifold in path space, and you can grab this manifold and sample it exhaustively with an equation-solving system. Let's take a look at the intuition, because this is super useful but very challenging to understand and implement for ordinary people. What exactly is happening here? So we have a diffuse bounce, this is xb, and we hit the light source after that, which is xc, on the upper right. And between the b and the c we have two specular bounces. Imagine that I am fixing xb and xc; these are two fixed vertices. And if I have this glass egg in between that is perfectly specular, then I can write an algorithm that computes what the exact outgoing direction from this diffuse vertex should be in order to exactly hit that xc point. There is only one possible path, because we have perfectly specular interreflections in between. So what should this outgoing direction from xb be? This is the equation-solving system that we are interested in. What do the results look like? Well, you can compare it to Metropolis light transport, which is either very noisy or misses some of the light paths completely in these very difficult test cases. The manifold exploration path tracer outperforms all of the existing algorithms. PSSMLT is the Kelemen-style MLT; MLT is the original Veach Metropolis. One more example: A, Veach Metropolis; B, ERPT, I will tell you in a second what that is; C, the Kelemen-style Metropolis light transport algorithm; and D, manifold exploration path tracing. Wenzel was kind enough to put, I think, a 20-minute talk about this work on his website, so make sure to check it out. It is really well illustrated, it is really well explained. Make sure to check it out. Let's take a look at how the algorithm converges in time. Take a look at this beauty: lots of SDS light paths, and in the first 10 minutes you already have some degree of convergence that would take days or possibly forever with other algorithms. Pretty amazing. Pretty amazing. One of my favorites out there. Here you can see, side by side, Kelemen-style Metropolis light transport versus manifold exploration path tracing. It's difficult not to get excited about this, right?
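As a rough illustration of the equation-solving idea, here is a heavily simplified 2D sketch with a single specular bounce instead of a chain: given fixed endpoints B and C and a specular curve, we Newton-iterate on the condition that the surface normal is parallel to the half vector of the directions toward B and C. The curve, the endpoints, and the lack of safeguards (damping, step control) are my own simplifications, not Jakob's actual formulation.

```python
# Toy 2D version of the specular constraint solve: find the point on a
# mirror curve y = s(x) where a single specular bounce connects fixed
# endpoints B and C. At a valid mirror reflection the surface normal is
# parallel to the half vector of the unit directions toward B and C, so we
# root-find on their 2D cross product with Newton's method.
import math

def s(x):                        # hypothetical specular curve (the "glass egg")
    return 0.1 * math.sin(2.0 * x)

def constraint(x, B, C):
    eps = 1e-5
    tangent = (1.0, (s(x + eps) - s(x - eps)) / (2 * eps))
    normal = (-tangent[1], tangent[0])
    p = (x, s(x))
    def unit(q):
        d = (q[0] - p[0], q[1] - p[1])
        L = math.hypot(*d)
        return (d[0] / L, d[1] / L)
    wb, wc = unit(B), unit(C)
    h = (wb[0] + wc[0], wb[1] + wc[1])             # unnormalized half vector
    return normal[0] * h[1] - normal[1] * h[0]     # zero when parallel

def solve_reflection(B, C, x0=0.0, iters=20):
    x = x0
    for _ in range(iters):
        g = constraint(x, B, C)
        dg = (constraint(x + 1e-4, B, C) - g) / 1e-4  # numeric derivative
        x -= g / dg                                   # plain Newton step
    return (x, s(x))

print(solve_reflection(B=(-1.0, 1.0), C=(1.2, 0.9)))
```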
[{"start": 0.0, "end": 7.6000000000000005, "text": " This is the work of Vensal Jakob, he's a super smart, really brilliant guy."}, {"start": 7.6000000000000005, "end": 12.16, "text": " He extended the beach metropolis algorithm to handle SDS transport better."}, {"start": 12.16, "end": 14.88, "text": " Now how is this possible?"}, {"start": 14.88, "end": 19.8, "text": " What I have written here is the very scientific way of stating what is really happening."}, {"start": 19.8, "end": 24.560000000000002, "text": " The most difficult parts form a manifold in path space and you can grab this manifold"}, {"start": 24.560000000000002, "end": 29.6, "text": " and sample this exhaustively with an equation solving system."}, {"start": 29.6, "end": 31.28, "text": " Let's take a look at the intuition."}, {"start": 31.28, "end": 39.52, "text": " This is super useful but very challenging to understand and implement for ordinary people."}, {"start": 39.52, "end": 41.24, "text": " What is exactly happening here?"}, {"start": 41.24, "end": 43.480000000000004, "text": " So we have a diffuse bounce."}, {"start": 43.480000000000004, "end": 50.16, "text": " This is xb and we hit the light source after that which is xc on the upper right."}, {"start": 50.16, "end": 54.400000000000006, "text": " And between the b and the c we have two specular bounces."}, {"start": 54.400000000000006, "end": 58.72, "text": " And imagine that I am fixing the xb and xc."}, {"start": 58.72, "end": 60.68, "text": " These are two fixed vertices."}, {"start": 60.68, "end": 66.4, "text": " And if I have this glass egg in between that is perfectly specular, then I can write an"}, {"start": 66.4, "end": 72.2, "text": " algorithm that computes what should be the exact outgoing direction from this diffuse vertex"}, {"start": 72.2, "end": 75.88, "text": " in order to exactly hit that xc point."}, {"start": 75.88, "end": 81.36, "text": " There is only one possible path because we have perfectly specular inter-reflections in"}, {"start": 81.36, "end": 82.36, "text": " between."}, {"start": 82.36, "end": 85.32, "text": " So what should be this outgoing direction from xb?"}, {"start": 85.32, "end": 89.03999999999999, "text": " This is the equation solving system that we are interested in."}, {"start": 89.03999999999999, "end": 90.67999999999999, "text": " How do the results look like?"}, {"start": 90.67999999999999, "end": 95.75999999999999, "text": " Well you can compare it to Metropolis light transport that is either very noisy or it"}, {"start": 95.75999999999999, "end": 100.75999999999999, "text": " misses some of the light paths completely in these very difficult test cases."}, {"start": 100.75999999999999, "end": 106.24, "text": " The manifold exploration path eraser outperforms all of the existing algorithms."}, {"start": 106.24, "end": 116.56, "text": " PSSMLT is the Kalamaz time MLT, MLT is the original feature Metropolis."}, {"start": 116.56, "end": 117.56, "text": " One more example."}, {"start": 117.56, "end": 121.47999999999999, "text": " A, Vitch Metropolis B, ERPT."}, {"start": 121.47999999999999, "end": 123.52, "text": " I will tell you in a second what that is."}, {"start": 123.52, "end": 131.35999999999999, "text": " C, Kalamaz Metropolis light transport algorithm, and D, manifold exploration path tracing."}, {"start": 131.36, "end": 147.48000000000002, "text": " Vancell was kind enough to put I think 20 minutes talk about this work on his website so make"}, {"start": 147.48000000000002, "end": 148.96, "text": " sure to check 
it out."}, {"start": 148.96, "end": 151.16000000000003, "text": " It is really well illustrated."}, {"start": 151.16000000000003, "end": 152.68, "text": " It is really well explained."}, {"start": 152.68, "end": 154.76000000000002, "text": " Make sure to check it out."}, {"start": 154.76000000000002, "end": 158.8, "text": " Let's take a look how the algorithm converges in time."}, {"start": 158.8, "end": 161.16000000000003, "text": " Take a look at this beauty."}, {"start": 161.16, "end": 167.12, "text": " Lots of SDS light paths and in the first 10 minutes you already have some degree of"}, {"start": 167.12, "end": 175.0, "text": " convergence that would take days or possibly forever with other algorithms."}, {"start": 175.0, "end": 176.4, "text": " Pretty amazing."}, {"start": 176.4, "end": 177.4, "text": " Pretty amazing."}, {"start": 177.4, "end": 185.0, "text": " One of my favorites out there."}, {"start": 185.0, "end": 187.12, "text": " Here you can see side by side."}, {"start": 187.12, "end": 194.12, "text": " Elements Metropolis light transport versus manifold exploration path tracing."}, {"start": 194.12, "end": 223.12, "text": " It's difficult not to get excited about this right?"}]
Two Minute Papers
https://www.youtube.com/watch?v=Hc9zu5-O7Eo
TU Wien Rendering #36 - Vertex Connection and Merging, Path Space Regularization
The two main branches of global illumination algorithms were biased and unbiased techniques. Iliyan Georgiev came up with a method that finally, after so long, unifies these two worlds. The algorithm starts out by cutting some corners, while progressively decreasing the bias as time goes by. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Now, let's proceed to vertex connection and merging by Iliyan Georgiev and colleagues. So what he proposes is that we conditionally accept this path, the vertex next to xs, and we pretend that we indeed have the hit. What this basically means is that we have a biased connection, something that didn't really happen, but we pretend that it did, and we have this r, the merging radius. So what this means is that on the left side, this xs star would be merged into xs instead, if it is close by. And by close by, we mean that it is within a circle of radius r. Okay, but what does this give me? Because this is a biased technique. Well, you add one more trick, and this trick is making r decay over time. So this radius would shrink and shrink and shrink, and eventually it would get to an epsilon value, something very close to zero, in an infinite amount of time. So the bias would disappear from the renderer in time. That's quite remarkable; I'll tell you in a second why. Some results with the vertex connection and merging technique: you can see that it can render this difficult, difficult SDS light transport situation. So this is indeed a historical moment in global illumination. Why? Because this kind of unifies biased and unbiased photorealistic rendering. And that's huge. That's huge because biased and unbiased rendering were the two biggest schools in photorealistic image synthesis. There were the unbiased guys, who were the rigorous, scientific, let's-sample-all-the-light-paths-and-not-cut-corners type of people. And there were the biased guys, who said that, okay, let's cut corners, because this thing takes forever, so let's use some optimization techniques. And what vertex merging gives you is essentially an algorithm that starts out biased, but has less and less bias as time goes by, eventually ending up as an unbiased technique. So this is a historical moment that unifies unbiased and biased photorealistic rendering. A wonderful piece of work. Now a comparison: first bidirectional path tracing, then progressive photon mapping, and then vertex connection and merging. Make sure to check out the paper here. Onwards to path space regularization. This is a work of Anton Kaplanyan and colleagues; he's a super smart guy at KIT. And this is essentially a generalization of vertex connection and merging. What is happening is essentially not spatial but angular regularization. What does this mean? What we're looking for is connecting the diffuse vertex to the specular one. With VCM, what you would do is continue the light path from the light source, and you would hit a point that is near this next specular vertex, and you would set this tolerance, this merging radius, and if it's inside, then you accept the light path. Now, this you can call spatial regularization. What Anton is proposing is angular regularization. So you would say that you take a tolerance value in terms of outgoing directions. And this intuition is slightly different, because what this essentially means is that we have delta distributions for specular reflections, but we start out with a large angular tolerance. And this means that the specular interreflections will be treated as if they were diffuse. So the mirror will show up as if it were a completely white or some colored wall, and then it will slowly, slowly converge to being a mirror. We can imagine this distribution as what you see on the right side: you have the blue, a diffuse-ish BRDF.
And you put your two fingers on the sides of this and you start pushing them together. And this push happens over time. So as time goes by, we go from the blue to the orange to the green. And we would squeeze this green more and more and more until it becomes a delta distribution. So over time, mirrors are going to be mirrors. But in the meantime, we will be able to render SDS light paths. A brilliant piece of work. And the comparison to other algorithms: what you should be looking out for is path tracing with regularization on the right. This is the only technique that can render this EG, the Eurographics logo, reflected in the mirror.
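Both ideas share the same progressive trick: a tolerance parameter (VCM's merging radius, or the angular tolerance here) starts out generous and decays toward zero, so the bias it introduces vanishes in the limit. Below is a minimal Python sketch of such a decay schedule; the decay exponent alpha and the starting value are illustrative assumptions, not the constants derived in either paper.

```python
# A minimal sketch (not the authors' code) of the progressive shrinking
# schedule shared by VCM's merging radius and Kaplanyan's angular tolerance.
# The asymptotic rate tolerance ~ i^(-(1 - alpha) / 2) is the usual
# progressive-photon-mapping-style choice; alpha here is an assumption.

def shrinking_tolerance(t0: float, iteration: int, alpha: float = 2.0 / 3.0) -> float:
    """Tolerance after `iteration` passes: starts at t0 and tends to 0,
    so the bias it introduces vanishes in the limit of infinite passes."""
    assert iteration >= 1
    return t0 * iteration ** (-(1.0 - alpha) / 2.0)

if __name__ == "__main__":
    t0 = 0.1  # initial merging radius (or angular tolerance, in radians)
    for i in (1, 10, 100, 10_000):
        print(i, shrinking_tolerance(t0, i))  # decays slowly toward zero
```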
[{"start": 0.0, "end": 7.0, "text": " Now, let's proceed to vertex connection and merging by Iliangar GF and colleagues."}, {"start": 7.0, "end": 13.0, "text": " So what he proposes to do is that we conditionally accept this path, the vertex next to XS,"}, {"start": 13.0, "end": 16.0, "text": " but we pretend that we indeed have the hit."}, {"start": 16.0, "end": 21.0, "text": " What this basically means is that we have a biased connection, something that didn't really happen,"}, {"start": 21.0, "end": 26.0, "text": " but we pretend that it did, and we have this R, that's the merging radius."}, {"start": 26.0, "end": 35.0, "text": " So what this means is that on the left side, this XS star, I would put on XS instead if it is close by."}, {"start": 35.0, "end": 39.0, "text": " And by close by, we mean that it is in a circle that is of radius R."}, {"start": 39.0, "end": 41.0, "text": " Okay, but what does this give to me?"}, {"start": 41.0, "end": 43.0, "text": " Because this is a biased technique."}, {"start": 43.0, "end": 48.0, "text": " If you add one more trick and this trick would be making R decay over time."}, {"start": 48.0, "end": 54.0, "text": " So this would shrink and shrink and shrink, and eventually it would get to an epsilon value"}, {"start": 54.0, "end": 58.0, "text": " and that's something that's very close to zero in an infinite amount of time."}, {"start": 58.0, "end": 62.0, "text": " So the bias would disappear from the renderer in time."}, {"start": 62.0, "end": 67.0, "text": " That's quite remarkable. I'll tell you in a second why."}, {"start": 67.0, "end": 70.0, "text": " Some results with the vertex connection and merging technique."}, {"start": 70.0, "end": 80.0, "text": " You can see that it can render this difficult, difficult SDS light transport situation."}, {"start": 80.0, "end": 84.0, "text": " So this is indeed a historical moment in global illumination."}, {"start": 84.0, "end": 91.0, "text": " Why? Because this kind of unifies biased and unbiased photorealistic rendering."}, {"start": 91.0, "end": 100.0, "text": " And that's huge. That's huge because biased and unbiased rendering was the two biggest schools in photorealistic image synthesis."}, {"start": 100.0, "end": 105.0, "text": " There were the unbiased guys who were the rigorous scientific,"}, {"start": 105.0, "end": 110.0, "text": " let's sample all the light paths and let's not cut corners type of people."}, {"start": 110.0, "end": 115.0, "text": " And there were the biased guys who said that, okay, let's cut corners because this thing takes forever."}, {"start": 115.0, "end": 117.0, "text": " So let's use some optimization techniques."}, {"start": 117.0, "end": 123.0, "text": " And what vertex merging gives you is essentially an algorithm that starts out biased,"}, {"start": 123.0, "end": 130.0, "text": " but it has less and less bias as time goes by eventually ending up as an unbiased technique."}, {"start": 130.0, "end": 137.0, "text": " So this is a historical moment that unifies unbiased and biased photorealistic rendering."}, {"start": 137.0, "end": 141.0, "text": " Wonderful piece of work."}, {"start": 141.0, "end": 147.0, "text": " Now comparison first by directional path tracing, then progressive photo mapping,"}, {"start": 147.0, "end": 151.0, "text": " and vertex connection and merging."}, {"start": 151.0, "end": 155.0, "text": " Make sure to check out the paper here."}, {"start": 155.0, "end": 161.0, "text": " Onwards to path space regularization. 
This is a work of Anton Kaplanyan and colleagues."}, {"start": 161.0, "end": 164.0, "text": " He's a super smart guy at the kit."}, {"start": 164.0, "end": 169.0, "text": " And this is essentially a generalization of vertex connection and merging."}, {"start": 169.0, "end": 173.0, "text": " What is happening is essentially not spatial but angular regularization."}, {"start": 173.0, "end": 177.0, "text": " What does this mean?"}, {"start": 177.0, "end": 181.0, "text": " What we're looking for is connecting the diffuse vertex to the specular."}, {"start": 181.0, "end": 186.0, "text": " With Vcm, what you would do is you would continue the light path from the light source."}, {"start": 186.0, "end": 191.0, "text": " And you would hit a point that is nearby this next specular vertex."}, {"start": 191.0, "end": 196.0, "text": " And you would set this tolerance, this radius, this merging radius, or"}, {"start": 196.0, "end": 200.0, "text": " and if it's inside, then you accept the light path."}, {"start": 200.0, "end": 204.0, "text": " Now this you can call spatial regularization."}, {"start": 204.0, "end": 207.0, "text": " What Anton is proposing is angular regularization."}, {"start": 207.0, "end": 214.0, "text": " So you would say that you will take a tolerance value in terms of outgoing values."}, {"start": 214.0, "end": 219.0, "text": " And this intuition is slightly different because what this essentially means is that"}, {"start": 219.0, "end": 228.0, "text": " we have delta distributions for specular reflections, but we start out with a large angular tolerance."}, {"start": 228.0, "end": 234.0, "text": " And this means that the specular interreflections will be treated as if they were diffuse."}, {"start": 234.0, "end": 239.0, "text": " So the mirror will show up as if it were a completely white or some colored wall."}, {"start": 239.0, "end": 243.0, "text": " And then it will slowly, slowly converge to being a mirror."}, {"start": 243.0, "end": 249.0, "text": " We can imagine this distribution as what you see in the right side that you have the blue."}, {"start": 249.0, "end": 251.0, "text": " A diffuse ish, vrdf."}, {"start": 251.0, "end": 257.0, "text": " And you put your two fingers on the sides of this and you start pushing them together."}, {"start": 257.0, "end": 259.0, "text": " And this push happens in time."}, {"start": 259.0, "end": 265.0, "text": " So as time goes by, we go from the blue to the orange to the green."}, {"start": 265.0, "end": 271.0, "text": " And we would even squeeze this green more and more and more until it gets to be a delta distribution."}, {"start": 271.0, "end": 274.0, "text": " So over time mirrors are going to be mirrors."}, {"start": 274.0, "end": 282.0, "text": " But in the meantime, we will be able to render SDS light paths, brilliant piece of work."}, {"start": 282.0, "end": 284.0, "text": " And the comparison to other algorithms."}, {"start": 284.0, "end": 289.0, "text": " What you should be looking out for is path tracing with regularization on the right."}, {"start": 289.0, "end": 316.0, "text": " And this is the only technique that can render this eG, the urographics logo reflected in the mirror."}]
Two Minute Papers
https://www.youtube.com/watch?v=lc93pVlewGM
TU Wien Rendering #35 - Stochastic Progressive Photon Mapping
Photon mapping is working great for a variety of scenes. Ideally, we would like to have a large number of photons for caustics, indirect illumination, etc., but having only a finite amount of photons in our photon maps introduces problems. To remedy this, Toshiya Hachisuka came up with Stochastic Progressive Photon Mapping, a technique where we progressively discard and re-generate the photon maps with fresh samples. This way we are not stuck with the one photon map we have, and we get more and more information about the scene as time goes by. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
So, stochastic progressive photon mapping. What is this thing about? Well, you would need an infinite amount of photons to ensure consistency. You cannot do that. But what you could do is, from time to time, generate a new photon map and use that. And this means discarding previous samples and creating new ones. So we start out with a regular ray tracing pass that we call an eye pass, and we use this photon map that we have. And then we generate a new photon map, and we are going to use that from the next pass on. There's also an addition: you start out with bigger photons, so to say, and the size, or the radius, of these photons shrinks in time. Why is this useful? Well, because you have practically an infinite number of photons. And you can see how the rendered image evolves over time with progressive photon mapping. So this method is consistent. This is a big deal, because you can make photon mapping consistent in practical cases. So this is our previous scene with heavy SDS transport, and you can see how it converges in the first 10 minutes of the rendering process with SPPM. Another set of results with the classical algorithms that we all know and love. You can see that photon mapping kind of works, you don't have high frequency noise, but you can see that it overblurs many of the important features of the image. And this is the result with SPPM: much sharper images, slightly more noise, but it is practically consistent. What about this difficult previous scene with lots of SDS transport? Well, photon mapping kind of worked, but it again overblurred many of the important features. Progressive photon mapping takes care of this. You can read the papers here. So SPPM doesn't just render SDS light paths, it does it efficiently. It is a wonderful previewing algorithm. So you can just fire it up, and in a matter of seconds you can get a good idea of how your scene is actually going to look. However, if you set this starting radius to a setting that's too high, then you're going to have large photons for the longest time, and this means that the image will again be overblurred for a very long time in the rendering process. However, if you set it too low, you will get a very sharp image, but it will take a very long time to fill in. So as you can see, this is a more complex technique that can possibly outperform the algorithms that you have previously seen, but this comes at a cost. This is a more complex algorithm, it is slightly more difficult to implement, and it has more parameters than previous methods. You can see that this is not like the large mutation probability with Metropolis light transport. If you set up one of the parameters incorrectly, you may have to wait for way too long. And if you set up a simple photon map, not SPPM, a simple photon map incorrectly, you may even get an incorrect image, because you don't have enough photons at the most important regions of the image. This work was created by Toshiya Hachisuka and his colleagues, and it's a brilliant piece of work.
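The radius-shrinking idea mentioned above is usually written as the update r_(i+1)^2 = r_i^2 * (i + alpha) / (i + 1) between photon passes. Here is a minimal Python sketch of that rule; alpha = 0.7 and the starting radius are illustrative choices, and a real SPPM implementation would track these statistics per pixel rather than globally.

```python
# A minimal sketch of the progressive radius reduction used in
# (stochastic) progressive photon mapping. Values are illustrative,
# not taken from the paper's scenes.
import math

def update_radius(r: float, i: int, alpha: float = 0.7) -> float:
    """Shrink the gather radius after the i-th photon pass (i >= 1).
    alpha in (0, 1) trades early blur against convergence speed."""
    return r * math.sqrt((i + alpha) / (i + 1))

if __name__ == "__main__":
    r = 0.05  # starting radius: too large -> long-lived blur, too small -> slow fill-in
    for i in range(1, 5001):
        r = update_radius(r, i)
    print(f"radius after 5000 passes: {r:.5f}")  # shrinks toward zero, never reaches it
```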
[{"start": 0.0, "end": 4.5200000000000005, "text": " So, hectic progressive photo mapping."}, {"start": 4.5200000000000005, "end": 5.88, "text": " What is this thing about?"}, {"start": 5.88, "end": 9.52, "text": " Well, you would need an infinite amount of photons to ensure consistency."}, {"start": 9.52, "end": 10.68, "text": " You cannot do that."}, {"start": 10.68, "end": 16.8, "text": " But what you could do is that you could, from time to time, generate a new photo map and"}, {"start": 16.8, "end": 17.88, "text": " use that."}, {"start": 17.88, "end": 23.36, "text": " And this means discarding previous symbols and creating new ones."}, {"start": 23.36, "end": 27.68, "text": " So we start out with a regular ray tracing pass that we call Ipass."}, {"start": 27.68, "end": 29.96, "text": " And we use this photo map that we have."}, {"start": 29.96, "end": 35.04, "text": " And then we generate a new photo map and then we are going to use that from the next pass."}, {"start": 35.04, "end": 41.519999999999996, "text": " There's also an addition and you start out with bigger photons, so to say, and the size"}, {"start": 41.519999999999996, "end": 47.28, "text": " or the radius of these photons would shrink in time."}, {"start": 47.28, "end": 48.480000000000004, "text": " Why is this useful?"}, {"start": 48.480000000000004, "end": 52.28, "text": " Well, because you have practically an infinite number of photons."}, {"start": 52.28, "end": 57.84, "text": " And you can see how the rendered image evolves over time with progressive photo mapping."}, {"start": 57.84, "end": 60.120000000000005, "text": " So this method is consistent."}, {"start": 60.120000000000005, "end": 66.68, "text": " This is a big deal because you can make photo mapping consistent in practical cases."}, {"start": 66.68, "end": 70.68, "text": " So this is our previous scene with heavy SDS transport."}, {"start": 70.68, "end": 92.44000000000001, "text": " And you can see how it converges in the first 10 minutes of the rendering process with SVPM."}, {"start": 92.44000000000001, "end": 96.64000000000001, "text": " Another set of results with the classical algorithms that we all know and love."}, {"start": 96.64, "end": 100.76, "text": " And you can see that photo mapping kind of works, you don't have higher frequency noise,"}, {"start": 100.76, "end": 107.72, "text": " but you can see that it overblers many of the important features of the image."}, {"start": 107.72, "end": 114.08, "text": " And this is the result with BPM, much sharper images, slightly more noise, but it is practically"}, {"start": 114.08, "end": 116.72, "text": " consistent."}, {"start": 116.72, "end": 120.96000000000001, "text": " What about this difficult previous scene with lots of SDS transport?"}, {"start": 120.96, "end": 127.11999999999999, "text": " Well, photo mapping kind of worked, but it again overblurred many of the important features."}, {"start": 127.11999999999999, "end": 130.6, "text": " Progressive photo mapping takes care of this."}, {"start": 130.6, "end": 138.4, "text": " You can read the papers here."}, {"start": 138.4, "end": 143.24, "text": " So SVPM doesn't just render SDS light paths, but it does it efficiently."}, {"start": 143.24, "end": 146.16, "text": " It is a wonderful previewing algorithm."}, {"start": 146.16, "end": 151.92, "text": " So you can just fire it up and in a matter of seconds you can get a good idea on how your"}, {"start": 151.92, "end": 154.76, "text": " scene actually is going to look like."}, {"start": 154.76, 
"end": 159.51999999999998, "text": " However, if you set this starting radius to a setting that's too high, then you're going"}, {"start": 159.51999999999998, "end": 162.68, "text": " to have large photons for the longest time."}, {"start": 162.68, "end": 167.16, "text": " And this means that the image will be again overblurred for a very long time in the rendering"}, {"start": 167.16, "end": 168.16, "text": " process."}, {"start": 168.16, "end": 173.0, "text": " However, if you set it for too low, it will be a very sharp image, but it will take a"}, {"start": 173.0, "end": 175.8, "text": " very long time to fill the image."}, {"start": 175.8, "end": 181.8, "text": " So as you can see, this is a more complex technique that can possibly outperform the algorithms"}, {"start": 181.8, "end": 185.12, "text": " that you have previously seen, but this comes at a cost."}, {"start": 185.12, "end": 186.84, "text": " This is a more complex algorithm."}, {"start": 186.84, "end": 190.24, "text": " This is slightly more difficult to implement."}, {"start": 190.24, "end": 193.44, "text": " And it has more parameters than previous methods."}, {"start": 193.44, "end": 197.44, "text": " You can see that this is not like the large mutation probability with Metropolis Light"}, {"start": 197.44, "end": 198.44, "text": " Transport."}, {"start": 198.44, "end": 203.52, "text": " If you set up one of the parameters incorrectly, you may have to wait for way too long."}, {"start": 203.52, "end": 208.56, "text": " And if you set up a simple photo map, not SPPM, a simple photo map incorrectly, you"}, {"start": 208.56, "end": 213.84, "text": " may even get an incorrect image because you don't have enough photons at the most important"}, {"start": 213.84, "end": 215.52, "text": " regions of the image."}, {"start": 215.52, "end": 221.04000000000002, "text": " This work was created by Toshiyah Hachiska and his colleagues, and it's a brilliant piece"}, {"start": 221.04, "end": 236.04, "text": " of work."}]
Two Minute Papers
https://www.youtube.com/watch?v=-1C2kL5pTbs
TU Wien Rendering #34 - SDS Transport, Photon Mapping
We have learned quite a few powerful algorithms for global illumination, but there still seems to be a peculiar scene with a torus inside a block of glass that just doesn't want to give in and render. It contains specular-diffuse-specular interactions that are particularly difficult, or often impossible, to sample with traditional random sampling. We also talk about the first biased algorithm, Henrik Wann Jensen's masterpiece called photon mapping. This algorithm relies on years of accumulated knowledge - for instance, we know that indirect illumination is usually a low-frequency signal that lends itself to the idea of using interpolation for missing samples instead of complete exhaustive sampling. This introduces bias, so the algorithm cuts corners - see for yourself if it's worth it! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Welcome to the last lecture, where we are going to be talking about the state of the art in global illumination. Now, if you remember this scene, you hopefully also remember that we were able to render caustics insanely fast with Metropolis light transport, and the rest of the scene is also cleaning up pretty quickly. But what about this? This was a not-so-difficult-looking scene, and you can see that the Kelemen-style Metropolis is doing pretty awfully here. I'm not even sure it would converge if we waited for a very long time. The problem here is called SDS, or specular-diffuse-specular transport. Let's talk about this for a bit. So imagine that I have a light path that starts out from a light source, hits this glass object, that is a specular bounce, and then it has another specular bounce after the refraction. Then we hit the diffuse object, then the mirror, and then we hit the eye. Let's put up Heckbert's notation and rip out the middle part of the light path. Now this says SDS. What is the intuition of this? It is reflected caustics, because one S and one D give you caustics, like we discussed before, and then if you have another specular bounce, then this says that I am seeing the caustics through the mirror. The intuition for SDS light paths is reflected caustics. So what exactly is the problem here? Imagine that we start out from a diffuse surface, we sample the BRDF, and therefore we arrive at this other specular surface, and off of this specular surface we are supposed to hit the pinhole camera. Also note that this diffuse point on the surface was chosen by the specular interaction before, and it was chosen explicitly. Now you can hopefully see that this means that, depending on the material models, if we have perfect specular interreflections, then sampling such a light path is impossible. And this is a problem that you can encounter very often, because imagine that you have a light source that is covered by a specular surface, for instance a glass light bulb. Then even if you have a regular DS path, so one diffuse and one specular bounce, you add one more S, because all the light that is exiting the light source is going to hit the cover, the glass part of the light bulb. And therefore every DS is going to be SDS. Another image for the intuition and a better understanding of what exactly is going on. You can also imagine that you are starting out a light path from the light source and from the eye, and you have the SD from the light source and the SS from the eye. Now what you would like to do is connect this diffuse vertex to the specular vertex. Now, this is impossible. The specular vertex would have to be chosen by the diffuse one, and the diffuse BRDF would be choosing one outgoing direction on the hemisphere, uniformly sampled, and there is only one possible direction that we would be happy with. The probability of sampling this one possible direction is exactly the same as the probability of sampling one point, and that is zero. So this is the SDS problem, and we are going to look at biased algorithms that try to address this problem. So this looks like SDS to me, because we hit the glass cube, that's a specular bounce, then we hit the donut inside, and then we hit the glass cube again. So this is SDS. This is why it is so difficult to sample with Metropolis light transport. Photon mapping: the key idea is that we don't want to evaluate all possible light paths.
What we would like to do is send photons out of the light sources, and we are going to store all of these photons in a photon map. And when we are computing actual light paths, we are going to rely on interpolation; we are going to use this knowledge that we have in the photon map. Some visualization images to get an idea of how it exactly looks. Let's take a look at the first bounce. This is an image with only the very first bounces in the path tracer; this is the direct light map. Now let's take a look at the indirect light map. This is the second and higher order bounces. This is basically indirect illumination, color bleeding. And you can see that this is actually low frequency information: the colors don't really seem to change quickly. Indirect illumination is a mostly low frequency signal, which lends itself to the idea of interpolation. This is an example of how to use all this information in the photon map. So I would be interested in the incident radiance at the red dot. And what I can do is use the information from the nearby photons, and I would average all this information to get an estimate for the red dot. And you can see that the brighter regions of the image seem to have more photons in the photon map. Why is that? Well, it's simple: it's because we are shooting photons out of the light sources, and these are the regions that are very visible from the light source. Let's take a look at some results. You can see a difficult scene rendered with path tracing. Bidirectional path tracing is much, much better, but you can still see the firefly noise. We also have some results with Metropolis light transport; it also doesn't help a lot with SDS transport. And with photon mapping, you can see that all this high frequency noise is gone, but the result is slightly more blurry than it should be because of the interpolation. We are averaging samples, and therefore this smoothing behavior is inherent to the interpolation. What are the upsides of photon mapping? Well, caustics and indirect illumination converge really quickly. Caustics, why? Because you have a lot of samples, because they are well visible from the light source. Indirect illumination, why? Because it's mostly a low frequency signal that you can interpolate very easily. Note that it also helps with the SDS problem because of the interpolation; you don't really get high frequency noise for most cases. However, don't forget that you may need to shoot and store a lot of photons, depending on how complex your scene is, and this can be very computationally intensive and also memory intensive. And the interpolation can cause artifacts to appear. This actually happens quite often for more complex scenes, because you are looking up nearby photons in the photon map. If these nearby photons are on the same object that you are querying, then, depending on textures and many other things, this is usually usable information. But you are looking up nearby photons, and you may see many examples in the room you're sitting in where you have discontinuities nearby. So there may be a wall that is one color, and there may be a nearby wall at an intersection that is a different color. It may be that during the interpolation you use samples from the other wall, because it is nearby, and the interpolation doesn't really take this property into consideration. So artifacts may appear. What about this algorithm? Well, we are cutting corners. We are using interpolation. We are not computing all the possible light paths there are.
Therefore, this algorithm has to be biased. What about the consistency? Well, it is consistent provided that you have an infinite number of photons in the photon map, and therefore you always get perfect information. However, this is only of theoretical value, because obviously having an infinite number of photons may make sense in a mathematical way, but in a practical implementation you cannot shoot, and you cannot store, an infinite number of photons. Some literature: this is the place where you can look up the original photon mapping paper by Henrik Wann Jensen. Some delightful news: this is an image I shot at the Eurographics Symposium on Rendering, EGSR, last year. If you take a look at these people, you can see for instance Wojciech Jarosz, lead of the rendering group at Disney Research. And he and the EGSR organizer crew gave out the Test of Time Award to Henrik Wann Jensen for the photon mapping algorithm. It's been around for a while, it has seen a lot of use, it's a fantastic piece of work, and he got recognized for that.
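As a rough illustration of the interpolation step described above, here is a minimal Python sketch of a photon-map density estimate: gather the k nearest photons around a query point and divide their summed flux by the area of the enclosing disc. The flat list and linear search are simplifications; a real photon map would use a kd-tree, and the estimate would also involve the BRDF at the query point.

```python
# A minimal sketch (hypothetical data layout) of the photon-map radiance
# estimate: average the flux of nearby photons over the gathering disc.
import math
import random

def density_estimate(photons, x, k=50):
    """photons: list of (position, flux) pairs, positions as tuples.
    Returns a crude scalar irradiance estimate at point x."""
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, x))
    nearest = sorted(photons, key=lambda ph: dist2(ph[0]))[:k]
    r2 = dist2(nearest[-1][0])            # squared radius of the gathering disc
    total_flux = sum(flux for _, flux in nearest)
    # The averaging is the interpolation step -- this is where the blur
    # (bias) comes from.
    return total_flux / (math.pi * r2) if r2 > 0.0 else 0.0

if __name__ == "__main__":
    random.seed(0)
    pts = [((random.random(), random.random()), 1e-3) for _ in range(10_000)]
    print(density_estimate(pts, (0.5, 0.5)))
```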
[{"start": 0.0, "end": 6.0, "text": " Welcome to the last lecture where we are going to be talking about the state of the arting global illumination."}, {"start": 6.0, "end": 16.0, "text": " Now, if you remember this scene, you hopefully also remember that we were able to render caustics insanely fast with Metropolis Light Transport,"}, {"start": 16.0, "end": 20.0, "text": " and the rest of the scene is also cleaning up pretty quickly."}, {"start": 20.0, "end": 22.0, "text": " But what about this?"}, {"start": 22.0, "end": 31.0, "text": " This was a not so difficult looking scene, and you can see that the Kalaman style Metropolis is doing pretty awful here."}, {"start": 31.0, "end": 36.0, "text": " I'm not even sure if it would converge if we would wait for a very long time."}, {"start": 38.0, "end": 45.0, "text": " What the problem is here is called SDS or Specular Diffuse Specular Transport. Let's talk about this for a bit."}, {"start": 45.0, "end": 54.0, "text": " So imagine that I have a light path that starts out from a light source, hits this glass object that is a Specular Bounce,"}, {"start": 54.0, "end": 65.0, "text": " and now it has another Specular Bounce after the refraction. Then we hit the diffuse object, then the mirror, and then we hit the eye."}, {"start": 65.0, "end": 72.0, "text": " Let's put up there Hacbert's notation, and let's rip out the middle part of the light path."}, {"start": 72.0, "end": 82.0, "text": " Now this says SDS. What is the intuition of this? It is reflected caustics because 1s and 1d gives you caustics like we discussed before,"}, {"start": 82.0, "end": 90.0, "text": " and then if you have another Specular Bounce, then this says that I am seeing the caustics through the mirror."}, {"start": 90.0, "end": 94.0, "text": " The intuition for SDS light paths is reflected caustics."}, {"start": 94.0, "end": 97.0, "text": " So what is exactly the problem here?"}, {"start": 97.0, "end": 105.0, "text": " Imagine that we start out from a diffuse surface, we sample the BRDF, and therefore we arrive to this other Specular surface,"}, {"start": 105.0, "end": 111.0, "text": " and off of this Specular surface we are supposed to hit the pinhole camera."}, {"start": 111.0, "end": 121.0, "text": " Also note that this diffuse point on the surface was chosen by the Specular Interaction before, and it was chosen explicitly."}, {"start": 121.0, "end": 129.0, "text": " Now you can hopefully see that this means that depending on the material models, if we have perfect Specular Inter reflections,"}, {"start": 129.0, "end": 133.0, "text": " then sampling such a light path is impossible."}, {"start": 133.0, "end": 142.0, "text": " And this is a problem that you can encounter very often, because imagine that if you have a light source that is covered by a Specular surface,"}, {"start": 142.0, "end": 150.0, "text": " so for instance a glass light bulb, then even if you have a regular DS path, so 1d fuse and 1 Specular Bounce,"}, {"start": 150.0, "end": 159.0, "text": " then you add 1 more S because all the light that is exiting the light source is going to hit the cover, the last part of the light bulb."}, {"start": 159.0, "end": 163.0, "text": " And therefore every DS is going to be SDS."}, {"start": 163.0, "end": 169.0, "text": " Another image for the intuition and better understanding of what is exactly going on."}, {"start": 169.0, "end": 175.0, "text": " You can also imagine that you are starting out a light path from the light source and from the eye,"}, {"start": 175.0, 
"end": 179.0, "text": " and you have the SD from the light source and you have the SS from the eye."}, {"start": 179.0, "end": 186.0, "text": " Now what you would like to do is you would like to connect this diffused to the Specular vertex."}, {"start": 186.0, "end": 188.0, "text": " Now this is impossible."}, {"start": 188.0, "end": 198.0, "text": " The Specular vertex would have to be chosen by the diffuse, and the diffuse BRDF would be choosing one outgoing direction on the hemisphere, uniformly sampled,"}, {"start": 198.0, "end": 203.0, "text": " and it is only one possible direction that we would be happy with."}, {"start": 203.0, "end": 211.0, "text": " The probability of sampling this one possible direction is exactly the same as the probability of sampling one point, and that is zero."}, {"start": 211.0, "end": 221.0, "text": " So this is the SDS problem, and we are going to look at bias diagrams that try to address this problem."}, {"start": 221.0, "end": 231.0, "text": " So this looks like SDS to me because we hit the glass cube, that's a Specular Bounce, then we hit the donut inside, and then we hit the glass cube again."}, {"start": 231.0, "end": 238.0, "text": " So this is SDS. This is why it is so difficult to sample with Metropolis Light Transport."}, {"start": 238.0, "end": 245.0, "text": " Photo mapping, the key idea is that we don't want to evaluate all possible light paths."}, {"start": 245.0, "end": 253.0, "text": " What we would like to do is sending photons out of light sources, and we are going to store all of these photons in the map."}, {"start": 253.0, "end": 263.0, "text": " And when we are computing actual light paths, we are going to rely on interpolation, we are going to use this knowledge that we have in the photo map."}, {"start": 263.0, "end": 272.0, "text": " Some visualization images to get an idea how it exactly looks."}, {"start": 272.0, "end": 282.0, "text": " Let's take a look at the first bounce. This is an image with only the very first bounces in the path-tracer, this is the direct light map."}, {"start": 282.0, "end": 290.0, "text": " Now let's take a look at the indirect light map. This is the second and higher order bounces. This is basically indirect illumination, color bleeding."}, {"start": 290.0, "end": 300.0, "text": " And you can see that this is actually low frequency information. You can see that the colors don't really seem to change so quickly."}, {"start": 300.0, "end": 310.0, "text": " If we have indirect illumination, this is a mostly low frequency signal which lends itself to the idea of interpolation."}, {"start": 310.0, "end": 318.0, "text": " This is an example on how to use all this information in the photo map. So I would be interested in the incident radians at the red dot."}, {"start": 318.0, "end": 328.0, "text": " And what I can do is use the information from the nearby photons and I would average all this information to get an estimation for the red dot."}, {"start": 328.0, "end": 336.0, "text": " And you can see that the brighter regions of the image seem to have more photons in the photo map. Why is that?"}, {"start": 336.0, "end": 346.0, "text": " Well, it's simple. It's because we are shooting photons out of the light sources. And these are the regions that are very visible from the light source."}, {"start": 346.0, "end": 355.0, "text": " Let's take a look at some results. You can see a difficult scene rendered with path tracing. 
By direction path tracing is much, much better."}, {"start": 355.0, "end": 361.0, "text": " But you can still see the firefly noise. We also have some results with metropolis light transport."}, {"start": 361.0, "end": 369.0, "text": " Also doesn't help a lot with SDS transport and photo mapping. You can see that all this high frequency noise is gone."}, {"start": 369.0, "end": 380.0, "text": " But the result is slightly more blurry than it should be because of the interpolation. We are averaging samples. And therefore, this smoothing behavior is inherent to that interpolation."}, {"start": 380.0, "end": 395.0, "text": " What are the upsides of photo mapping? Well, caustics indirect illumination they converge really quickly. caustics why? Because you have a lot of samples because you see it from the light source indirect illumination why?"}, {"start": 395.0, "end": 404.0, "text": " Because it's mostly a low frequency signal that you can interpolate very easily. Note that it also helps with the SDS problem because of the interpolation."}, {"start": 404.0, "end": 417.0, "text": " You don't really get high frequency noise for most cases. However, don't forget that you may need to shoot and store a lot of photons depending on how complex your scene is."}, {"start": 417.0, "end": 422.0, "text": " And this can be very computationally intensive and also memory intensive."}, {"start": 422.0, "end": 434.0, "text": " And interpolation can cause artifacts to appear. And this actually happens quite often for more complex scenes because you are looking up photons in the photon map nearby."}, {"start": 434.0, "end": 445.0, "text": " If these nearby photons are on the same object that you would like to query, then these are usually depending on textures and many other things. This is usually usable information."}, {"start": 445.0, "end": 453.0, "text": " But you are looking up nearby photons and you may see many examples in the room you're sitting in where you have discontinuities nearby."}, {"start": 453.0, "end": 460.0, "text": " So there may be a wall that is one color and there may be a wall nearby at an intersection that is a different color."}, {"start": 460.0, "end": 469.0, "text": " It may be that during the interpolation you use samples from the other wall because it is nearby and it doesn't really take this property into consideration."}, {"start": 469.0, "end": 472.0, "text": " So therefore artifacts may appear."}, {"start": 472.0, "end": 480.0, "text": " What about this algorithm? Well, we are cutting corners. We are using interpolation. We are not computing all the possible light paths there are."}, {"start": 480.0, "end": 494.0, "text": " Therefore, this algorithm has to be biased. What about the consistency? Well, it is consistent provided that you have an infinite number of photons in the photon map and therefore you always get perfect information."}, {"start": 494.0, "end": 502.0, "text": " However, this is only of theoretical value because obviously having an infinite number of photons may make sense in a mathematical way."}, {"start": 502.0, "end": 510.0, "text": " But in a practical implementation, you cannot even shoot and you cannot even store an infinite number of photons."}, {"start": 510.0, "end": 518.0, "text": " Some literature. This is the place where you can look up the original photon mapping paper from Henry Kvanyansen."}, {"start": 518.0, "end": 526.0, "text": " Some delightful news. 
This is an image I shot at the geographic symposium of rendering EGSR last year."}, {"start": 526.0, "end": 532.0, "text": " If you take a look at these people, you can see for instance Vojta Kiaros, lead of the rendering group at Disney Research."}, {"start": 532.0, "end": 541.0, "text": " And he and the EGSR organizer crew gave out the test of time award to Henry Kvanyansen because of the photon mapping algorithm."}, {"start": 541.0, "end": 549.0, "text": " It's been around for a while and it had seen a lot of use and it's a fantastic piece of work and he got recognized for that."}]
Two Minute Papers
https://www.youtube.com/watch?v=Zl36H9pwsHE
TU Wien Rendering #33 - Metropolis Light Transport
Metropolis Light Transport is a powerful technique that can outperform the convergence speed of Bidirectional Path Tracing on most difficult scenes (what makes a scene difficult is a story on its own). It promises optimal importance sampling "along multiple steps" in the stationary distribution of the Markov chain. This means that it gets better and better over time at seeking the brighter parts of the path space, therefore caustics pop up converged in almost every case. The two earliest and most well-known variants are the Veach and Kelemen-style Metropolis Light Transport (also called Primary Sample Space MLT). I will add that despite MLT being considered an unbiased algorithm, it suffers from an effect that we call start-up bias. About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Okay, let's continue with even more good stuff: Metropolis light transport, straight from '97. The key idea is to seek the light. This is the thing that I always hear from the artists who use the algorithm. What we are trying to do is sample brighter light paths more often than darker light paths. That's it. That's the basic principle. That's what we're trying to do. And educated people would immediately say: hey, but isn't this what we have been talking about with importance sampling? Isn't this conflicting with importance sampling? What is importance sampling? Well, it means that if I have, for instance, a glossy reflection, a glossy BRDF, which has a really high probability of sending rays out in the perfect reflection direction — so it behaves almost like a mirror — then I would like to have a high probability of actually sampling that light path, proportional to the shape of the BRDF. And this we can do through importance sampling, okay? But imagine a case where you have a glossy reflection covered from almost every direction by black bodies. It doesn't matter if I importance sample the BRDF correctly, because after I importance sample the BRDF and the light is coming to the next bounce, it's always going to hit the black body and be absorbed. I'm never going to continue my light path afterwards. So even though I importance sampled this one bounce correctly, I am not importance sampling along the whole path, because I didn't know that globally I'm just heading into a region that's really dark. And what Metropolis light transport gives you is something that is not really referred to this way, but I like to call it multi-bounce importance sampling. So it may take some seemingly suboptimal decisions, and it may send out rays in a direction that is not so likely for your BRDF, if it knows that this is going to end up being a bright light path. So for instance, if you have a glossy interreflection that would mostly be sending rays out in this direction, but there is complete darkness in there, then what it would do is actually send more rays towards the light source, which looks like a suboptimal decision for that BRDF, but over the whole light path it is actually going to be something bright. So this is the key idea behind Metropolis light transport, and I'd like to give you an intuitive example of it. Imagine that you have the camera in this room in the scene, and you have a light source only in the adjacent room, the next room. And this next room is separated by a wall, and the door is slightly ajar, so it is opened just a bit. And all the light that you see is coming through that door. Now imagine naive path tracing: what am I doing? I am sending the ray through the first pixel, and I'm going to bounce it around the scene, and it is very likely that I will never find the light source. And I cannot even connect to the light source — it's in the other room; I'm going to hit the wall or the door. And imagine that I'm computing thousands and thousands of samples, and I finally get to hit the light path that is actually connectable to a light source. If we are doing path tracing, you can imagine that I'm starting from here. If you take a look at the arrow in there, it gives you the intuition that maybe we are doing light tracing: we are shooting rays of light from the light source, and we finally get into this room and hit the camera.
This is finally a good connection. After thousands and thousands of samples, I finally have one contribution. Before that, zero, zero, zero, zero, and my CPU is basically dying on 100% load. Nothing comes out of it. Now imagine that I finally found a light path that makes sense, that has a contribution, and then I would suddenly forget about the whole thing, and I would again start sampling completely random light paths and get the zeros again. Now it would be a crime to do that, wouldn't it? What Metropolis is doing is essentially trying to remember which paths make sense. And if it finds something like that, it is going to explore nearby. So it is not going to shoot out a completely random sample. For the next sample, it's going to take this one sample that made sense — finally, a connection — and then it's going to add very small perturbations to this light path. What if I shoot this at an angle that's changed just a bit? And what you can expect is that most of the time it will again give you some amount of contribution, and you don't have to start from scratch. So basically you can use all of this knowledge to your advantage. How does the difference look? Well, this is the scene with bidirectional path tracing after 40 samples per pixel. And now, if you look closely, you will see Metropolis after the very same number of samples per pixel. So this is bidirectional path tracing, and now Metropolis with the same number of samples. So if you take this knowledge into account, most of your samples are going to be useful. Just another look: bidirectional path tracing, Metropolis. And bidirectional path tracing was already a good algorithm — it's not a naive path tracer, it's a good algorithm. Naive path tracing would be even worse. Now, another example: some nice volumetric caustics with naive path tracing, and an equal-time comparison with Metropolis light transport. How does it work exactly? Mathematical details, but just enough to understand the intuition. What we're trying to do is importance sampling. What does it mean? It means that I am computing discrete samples of f over p. f is the function that I would like to integrate, p is a sampling distribution. What I'm looking for is to match the blue lines with the green bars, if you remember. So it means that if the function is large somewhere — the image is bright somewhere, or the path space is bright somewhere — then I would like to put more samples in there. So if f is large, then p should also be large. This is what I'm trying to achieve. Now, how do I actually do this? I have some high-dimensional function, or, if I'm doing local importance sampling, a BRDF function. How do I importance sample this? The trivial solution is called rejection sampling. Basically, what it means is that I would like to compute samples from a sampling distribution. So here you see something that is almost a Gaussian, but imagine that I cannot generate samples out of this function. Because what do I have in my C++ code? Well, I can generate uniform random numbers, but this function is not uniform. So what I can do is sample an arbitrary distribution function if I enclose it in a box and throw completely uniform random samples at this box. It is almost like drawing your function on a sheet of paper and throwing random samples at it. Now, I cannot generate random samples out of this function, but I can generate uniformly distributed random samples. And the scheme is very simple.
If a sample is under the function, I'm going to take it and pretend that I've generated that sample in the first place. And if it's outside, I'm just going to kick it out. So if I do this, I will have samples according to this almost-Gaussian. This works, but this is not what we do in practice — it is very inefficient, and hopefully you can see from the image why. Someone please raise your hand and help me out: why is this not efficient? You reject many samples — okay, that's true, thanks. So there are tons of rejected samples. Most of my uniformly generated random numbers are completely wasted again. So there must be some technique that's better than this. Well, there is, but I guarantee that it's not going to make you any happier when you see how it is done. This problem can almost always be solved analytically by a technique called inverse transform sampling, or the Smirnov transform. This takes a bit of work, but I'll just briefly show you how it works, and if you are really interested in the details, then please take a look at this document. So I'll show you what you have to do: you have to do all of these calculations, and then you will have your sampling distribution. Okay, what do we have at the end? Let's start with the intuition. We have uniformly generated random numbers — these are the ξ1 and ξ2 at the end — and I want to apply some transform to these numbers in order to get an arbitrary sampling distribution. What you are essentially doing is this: you have a probability density function that you want to sample from. It can be a Poisson distribution, an exponential distribution, or some custom BRDF. And if you integrate the PDF, you are going to have a CDF. So you integrate the probability density function, you get a cumulative distribution function, and this is what helps you in the transformation from uniformly generated random numbers to the actual function. Now, this is very intimidating, isn't it? Imagine that whenever you come up with a new BRDF, or any kind of function that you would like to sample, you would have to compute all this. And not only that — we were doing this for BRDFs, so I can importance sample one bounce. Again, I emphasize that this means that if I hit the table, I locally know what the good outgoing directions are because of the material model. But it doesn't mean that it's globally a good idea, because there may be a completely black curtain next to it, which I'm going to hit, and all of the energy is going to be absorbed. What does Metropolis give us? A solution to this. It is importance sampling not only for one BRDF, not only for all possible BRDFs, but optimal importance sampling along the whole path. So this means that it will know that if there is a path that is 15 bounces long, but it hits something that is really bright and transfers a lot of energy, then I will need to sample this light path and nearby ones, and it is not going to trace many rays towards the shadowed regions. But how does it work? Again, intuitively: it runs a Markov chain process, and for Markov chains there is a steady state distribution. This means that we have been running the Markov chain for a while. And if you do that, then it promises you optimal importance sampling for any kind of function, without doing anything extra.
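To make the two classical sampling schemes from above concrete, here is a minimal Python sketch: rejection sampling throws uniform points at a bounding box and discards everything above the curve, while inverse transform sampling pushes a uniform random number through the inverted CDF. The exponential distribution is just a convenient stand-in for which the inverse CDF is known in closed form — it is not a BRDF from the lecture.

```python
# A minimal sketch of rejection sampling vs. inverse transform sampling.
import math
import random

def rejection_sample(pdf, x_max, pdf_max):
    """Draw uniform points in the box [0, x_max] x [0, pdf_max] and keep
    only those under the curve -- everything above it is wasted work."""
    while True:
        x = random.uniform(0.0, x_max)
        y = random.uniform(0.0, pdf_max)
        if y <= pdf(x):
            return x

def inverse_transform_exponential(lam):
    """Exponential distribution via the inverted CDF:
    F(x) = 1 - exp(-lam * x)  =>  F^{-1}(u) = -ln(1 - u) / lam."""
    u = random.random()
    return -math.log(1.0 - u) / lam

if __name__ == "__main__":
    lam = 2.0
    pdf = lambda x: lam * math.exp(-lam * x)   # peaks at pdf(0) = lam
    xs_rej = [rejection_sample(pdf, 5.0, lam) for _ in range(10_000)]
    xs_inv = [inverse_transform_exponential(lam) for _ in range(10_000)]
    # Both sample means should be close to 1 / lam = 0.5.
    print(sum(xs_rej) / len(xs_rej), sum(xs_inv) / len(xs_inv))
```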
I hope it is understandable how amazing this really is, because it is actually a simple sampling scheme — you can write down the pseudocode in five or six lines — and it gives you optimal importance sampling. So this is really amazing. And I emphasize again that this is over multiple bounces: not importance sampling one BRDF, but whole light paths. There are different variants of Metropolis light transport. The original is the Veach-style Metropolis. This is the one that was published in '97. It is a great algorithm. It has different mutation strategies, meaning it has different strategies for changing the current light path into a new one in a smart way, not randomly. The problem is that almost no one in the world can implement it correctly. So it was published in '97, and the first viable implementation that came out was in the Mitsuba renderer, implemented by Wenzel Jakob. And that was around, I think, 2010 — so just a few years ago. The original Metropolis light transport is attributed to Eric Veach. No one in the world could implement it. I honestly don't know what was going on, because he published it in '97, and it took at the very least 13 years for the first super smart guy to implement it correctly. I don't know what he was doing in the meantime. Maybe he was laughing at humanity, that no one is smart enough to deal with this, and maybe we don't deserve this algorithm. I don't know. It's not for the faint of heart — it's a really difficult algorithm. Yes, he's working for Google. That's true. After the PhD, did he go immediately to Google? He's basically working on AdWords: how to get more money out of advertisements. It pays. It definitely pays well. And who knows — if Eric Veach is working on it, there's going to be some good stuff in there, I guarantee you. But I have to say that his face looked quite delighted when he got the Academy Award just recently for his work from at the very least 15 years ago. It's still used all over the industry: multiple importance sampling, path tracing, Metropolis — they are all over the industry. The Veach-style Metropolis is really difficult. Fortunately, there are also smart people at my former university, namely Csaba Kelemen and László Szirmay-Kalos. They came up with a simplified version of the algorithm, which is almost as robust, but is actually quite simple to implement. It is called primary sample space Metropolis. It is implemented in smallpaint by one of my students from a previous year's rendering course, so you can give it a try. Basically, it does a complicated-sounding but otherwise simple mapping from an infinite dimensional cube — infinite dimensional cube means arbitrarily long vectors of independent, uniformly generated random numbers — and these random numbers are transformed into light paths. So what the algorithm does is this: there is a probability that I compute a completely new light path, and otherwise I am going to stay around the current light path and explore nearby. What does it mean practically? If I find this super difficult light path from the other room to here, then I have found a really bright light path, and the algorithm will know: okay, I'm just going to add slight perturbations to this light path; I'm going to stay around here. And sometimes it will start to look around with fresh random samples. There's also a visualization video on YouTube.
If you take a look, you will immediately understand what is going on. And some literature about these algorithms. Now, Metropolis is also a sampling scheme, so you can implement it together with path tracing or bidirectional path tracing, and therefore it is also an unbiased and consistent algorithm. And it is very robust. It is tailored for really difficult scenes. So if you have a scene with a lot of occlusions and difficult-to-reach light sources, use Metropolis. But if you have an easy scene, this is not going to give you much, because Metropolis is a smart algorithm: it takes longer to compute one sample than a path tracer. And if this smart behavior of the algorithm does not pay off, then there may be scenes where Metropolis is actually worse than a path tracer. So if you have an outdoor scene with large light sources and environment maps that you hit all the time, don't use Metropolis. It doesn't give you anything. Path tracing would give you better results, because it can dish out more samples per pixel because it's dumb, and it parallelizes even better, and only the number of samples matters in this case. And there may be algorithms that take this into consideration. So what if we had an algorithm that could determine whether we have an easy scene or a difficult scene, and it would use, for easy scenes, naive or bidirectional path tracing, and for difficult scenes, Metropolis light transport? Now, this would need an algorithm that can somehow determine whether the scene is easy or hard, and that's not trivial at all to do. But behind this link there is a work that deals with it. And I would also like to note that Metropolis light transport is unbiased, but it starts out biased. What this means is that I'm running a Markov chain that will give me optimal importance sampling, but this Markov chain also evolves in time. So I have to wait and wait, and it will get better and better estimates of where the bright paths and the dark paths are. And this takes time. This effect is what we call start-up bias. Now, what do we get for it? We'll see plenty of examples. For instance, on caustics, it's even better than bidirectional path tracing — for caustics, you get almost immediate convergence. Now, what about this scene? This scene was rendered with LuxRender. Here you have not glass spheres, but spheres of some kind of prism material, because you can see a pronounced effect of dispersion. And you can see volumetric caustics — there is a participating medium that we are in, and these caustics are reflected and refracted multiple times. Let's say that this is a disgustingly difficult scene. The only light source there is, is actually this laser that comes in from the upper left. Let's try to render such a scene with the different algorithms that we have learned about. Now, if I start path tracing, this is what I will get after 10 minutes. The high-scoring light paths, the bright light paths, are not the highest-probability light paths, and most of the connections towards the light source will also be obstructed. Therefore it is very difficult to sample with path tracing. Bidirectional path tracing is better, but I mean, if I get this after 10 minutes, I don't know how long it would take to render the actual scene. And if you run Metropolis, it will find the light paths that matter — the ones that actually need to be sampled.
And this is the simplified version, PSSMLT. The number next to each image is the ratio of these small perturbations to large steps; sorry, the opposite. So a large number means that most of my light paths are going to be random: with 75% probability I do bidirectional path tracing, with 25% Metropolis. And if I pull this probability down to 0.25, then most of the time I do Metropolis sampling and explore nearby. You can see that this renders the scene much, much faster, so this is definitely a very useful technique to have. Now, I've made this animation just for fun. This is the primary sample space Metropolis light transport algorithm with only small mutations, so just very small adjustments to the light paths, and this is how an image converges with these small steps. You can see that the caustics converge ridiculously quickly. Now let's take a look at one more example. Take a look at this: most of the scene is still noisy, but the caustics are completely converged as we start out. Why? Because they are really bright, and this is exactly what Metropolis is going to focus on. So it is even better on caustics: something that takes a brutal amount of samples with a normal path tracer is going to be immediately converged with Metropolis. This is the first, I think, 10 minutes of rendering with Metropolis on a not so powerful machine. So it seems that we have solved everything. We're looking good. We got this. But I will show you a failure case, because we actually still have problems that we couldn't solve. This is a sophisticated scene that is, for some reason, even harder in some sense than the previous scenes, and it just doesn't want to converge with primary sample space Metropolis. I'm just rendering and rendering, and still fireflies. If I have really large, really bright noisy spots, it means that I have light paths with a ridiculously low probability of being visited, and that means that my sampling strategies are not smart enough (a short numeric sketch of this effect follows below). This is a classical, longstanding problem in global illumination. Metropolis is not a solution for this; it is still not good enough. But there are techniques that can give you really smooth results on ridiculously difficult scenes like this, and I will explain during the next lecture why this is essentially difficult, because it doesn't seem too intuitive, does it? Thank you very much.
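The fireflies described at the end can be reproduced with a tiny thought experiment (hypothetical numbers): a Monte Carlo estimator f(x)/p(x) explodes whenever a high-contribution region is sampled with a tiny probability, which is exactly what a not-smart-enough sampling strategy does on such scenes.

    import random

    def estimate(n, p_bright=1e-4):
        # A "path" is bright with tiny probability p_bright; its Monte Carlo
        # weight 1 / p_bright is then huge. The expected value is exactly 1.
        total = 0.0
        for _ in range(n):
            if random.random() < p_bright:
                total += 1.0 / p_bright
        return total / n

    # Mostly 0.0, occasionally 10.0 or 20.0: the numeric shape of a firefly.
    print([round(estimate(1000), 1) for _ in range(8)])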
[{"start": 0.0, "end": 3.0, "text": " Okay, let's continue with even more good stuff."}, {"start": 3.0, "end": 6.0, "text": " Metropolis light transports, straight from 97."}, {"start": 6.0, "end": 8.0, "text": " The key idea is to seek the light."}, {"start": 8.0, "end": 13.0, "text": " This is the thing that I always hear from the artists who use the algorithm."}, {"start": 13.0, "end": 19.0, "text": " What we are trying to do is sampling brighter light paths, more often, than darker light paths."}, {"start": 19.0, "end": 24.0, "text": " That's it. That's the basic principle. That's what we're trying to do."}, {"start": 24.0, "end": 28.0, "text": " And educated people would immediately say that,"}, {"start": 28.0, "end": 33.0, "text": " hey, but isn't this what we have been talking about at important sampling?"}, {"start": 33.0, "end": 36.0, "text": " Isn't this conflicting with important sampling?"}, {"start": 36.0, "end": 38.0, "text": " What is important sampling?"}, {"start": 38.0, "end": 44.0, "text": " Well, it means that if I have, for instance, a glossy reflection, a glossy BRDF,"}, {"start": 44.0, "end": 52.0, "text": " which has a really high probability of sending rays out in the perfect reflection direction."}, {"start": 52.0, "end": 54.0, "text": " So almost like a mirror."}, {"start": 54.0, "end": 57.0, "text": " With a higher probability, it would behave like a mirror."}, {"start": 57.0, "end": 64.0, "text": " Then I would like to have a high probability of actually sampling that light path."}, {"start": 64.0, "end": 68.0, "text": " Proportional to the shape of the BRDF."}, {"start": 68.0, "end": 71.0, "text": " And this we can do through important sampling, okay?"}, {"start": 71.0, "end": 80.0, "text": " But imagine a case where you would have a glossary reflection covered from almost every direction by black bodies."}, {"start": 80.0, "end": 85.0, "text": " So it doesn't matter if I important sample the BRDF correctly,"}, {"start": 85.0, "end": 90.0, "text": " because after I important sample the BRDF and the light is coming to the next bounce,"}, {"start": 90.0, "end": 94.0, "text": " it's always going to hit the black body and it's going to be absorbed."}, {"start": 94.0, "end": 98.0, "text": " I'm never going to continue my light path afterwards."}, {"start": 98.0, "end": 103.0, "text": " So even though I would important sample this one bounce,"}, {"start": 103.0, "end": 107.0, "text": " I am not important sampling along the whole path,"}, {"start": 107.0, "end": 111.0, "text": " because I have important sample this one bounce correctly."}, {"start": 111.0, "end": 116.0, "text": " But I didn't know that globally I'm just heading to a region that's really dark."}, {"start": 116.0, "end": 122.0, "text": " And what Metropolis Light Transport gives you is something that is not really referred to,"}, {"start": 122.0, "end": 126.0, "text": " but I like to call it multibounds important sampling."}, {"start": 126.0, "end": 130.0, "text": " So it may take some optimal decisions,"}, {"start": 130.0, "end": 136.0, "text": " and it may send out rays in a direction that is not so likely for your BRDF."}, {"start": 136.0, "end": 141.0, "text": " If it knows that it's going to end up being a bright light path."}, {"start": 141.0, "end": 149.0, "text": " So for instance, if you have a glossy inter-reflection that would be mostly sending rays out in this direction,"}, {"start": 149.0, "end": 157.0, "text": " but there is complete darkness in there, then it would do is it would 
actually send more rays towards the light source,"}, {"start": 157.0, "end": 161.0, "text": " which looks like a suboptimal decision in there in that BRDF,"}, {"start": 161.0, "end": 166.0, "text": " but over the whole light path that is actually going to be something bright."}, {"start": 166.0, "end": 170.0, "text": " So this is the key idea behind Metropolis Light Transport,"}, {"start": 170.0, "end": 175.0, "text": " and I'd like to give you an intuitive example of that."}, {"start": 175.0, "end": 183.0, "text": " So imagine that you have the camera in this room in the scene,"}, {"start": 183.0, "end": 189.0, "text": " and you have a light source only in the adjacent room in the next room."}, {"start": 189.0, "end": 194.0, "text": " And this next room is separated by a wall and the door that is slightly redjar,"}, {"start": 194.0, "end": 197.0, "text": " so it is just opened just a bit."}, {"start": 197.0, "end": 201.0, "text": " And all the light that you see is coming through that door."}, {"start": 201.0, "end": 206.0, "text": " And if you imagine, for now naive path tracing, what am I doing?"}, {"start": 206.0, "end": 211.0, "text": " I am sending the ray through the first pixel,"}, {"start": 211.0, "end": 214.0, "text": " and I'm going to bounce it around the scene,"}, {"start": 214.0, "end": 217.0, "text": " and it is very likely that I will never find the light source."}, {"start": 217.0, "end": 219.0, "text": " And I cannot even connect to the light source."}, {"start": 219.0, "end": 223.0, "text": " It's in the other room. I'm going to hit the wall or the door."}, {"start": 223.0, "end": 227.0, "text": " And imagine that I'm computing thousands and thousands of samples,"}, {"start": 227.0, "end": 235.0, "text": " and I finally get to hit the light path that is actually connectable to a light source."}, {"start": 235.0, "end": 239.0, "text": " If we are doing path tracing, you can imagine that I'm starting from here."}, {"start": 239.0, "end": 242.0, "text": " If you take a look at the arrow in there,"}, {"start": 242.0, "end": 246.0, "text": " it gives you the intuition that maybe we are doing light tracing."}, {"start": 246.0, "end": 250.0, "text": " We are shooting light rays of light from the light source."}, {"start": 250.0, "end": 254.0, "text": " And we finally get into this room and hit the camera."}, {"start": 254.0, "end": 256.0, "text": " This is finally a good connection."}, {"start": 256.0, "end": 260.0, "text": " After thousands and thousands of samples, I finally have one contribution."}, {"start": 260.0, "end": 266.0, "text": " Before that, 0000, and my CPU is basically dying on 100% load."}, {"start": 266.0, "end": 268.0, "text": " Nothing gets out of there."}, {"start": 268.0, "end": 274.0, "text": " Now imagine that I finally found a light path that makes sense that has a contribution,"}, {"start": 274.0, "end": 277.0, "text": " and then I would suddenly forget about the whole thing,"}, {"start": 277.0, "end": 286.0, "text": " and I would again start sampling completely random light paths and get the 0000 again."}, {"start": 286.0, "end": 289.0, "text": " Now it would be a crime to do that, wouldn't it?"}, {"start": 289.0, "end": 296.0, "text": " What Metropolis is doing is essentially trying to remember which are the paths that make sense."}, {"start": 296.0, "end": 301.0, "text": " And if they find something like that, they are going to explore nearby."}, {"start": 301.0, "end": 305.0, "text": " So they are not going to shoot out a completely random 
sample."}, {"start": 305.0, "end": 310.0, "text": " For the next sample, it's going to take this one sample that made sense,"}, {"start": 310.0, "end": 316.0, "text": " finally a connection, and then it's going to add very small perturbations to this light path."}, {"start": 316.0, "end": 321.0, "text": " What if I shoot this in an angle that's just a bit changed?"}, {"start": 321.0, "end": 328.0, "text": " And what you can expect is that most of the time it will give you again some amount of contribution"}, {"start": 328.0, "end": 332.0, "text": " and you don't have to start from scratch."}, {"start": 332.0, "end": 337.0, "text": " So basically you can use all of these knowledge into your advantage."}, {"start": 337.0, "end": 339.0, "text": " How does the difference look like?"}, {"start": 339.0, "end": 346.0, "text": " Well, this is the scene with bi-direction of path tracing after 40 samples to pixel."}, {"start": 346.0, "end": 351.0, "text": " And now if you look closely, you will see Metropolis after the very same number of samples per pixel."}, {"start": 351.0, "end": 358.0, "text": " So this is bi-direction of path tracing and now Metropolis with the same number of samples."}, {"start": 358.0, "end": 365.0, "text": " So if you take this knowledge into account, most of your samples are going to be useful."}, {"start": 365.0, "end": 366.0, "text": " Just another look."}, {"start": 366.0, "end": 369.0, "text": " Bi-direction of path tracing, Metropolis."}, {"start": 369.0, "end": 374.0, "text": " And bi-direction was already a good algorithm. It's not a naive path tracer."}, {"start": 374.0, "end": 378.0, "text": " It's a good algorithm. A naive path tracing would be even worse."}, {"start": 378.0, "end": 384.0, "text": " Not an example. Some nice volumetric caustics with naive path tracing"}, {"start": 384.0, "end": 390.0, "text": " and an equal time comparison with Metropolis life transport."}, {"start": 390.0, "end": 392.0, "text": " How does it work exactly?"}, {"start": 392.0, "end": 397.0, "text": " Mathematical details, but just enough to understand the intuition."}, {"start": 397.0, "end": 402.0, "text": " What we're trying to do is important sampling. What does it mean?"}, {"start": 402.0, "end": 408.0, "text": " It means that I am computing discrete samples of f over p."}, {"start": 408.0, "end": 412.0, "text": " F is the function that I would like to integrate p is a sampling distribution."}, {"start": 412.0, "end": 418.0, "text": " What I'm looking for is to match the blue lines with the green bars if you remember."}, {"start": 418.0, "end": 425.0, "text": " So it means that if the function is large somewhere, it means that the image is bright somewhere,"}, {"start": 425.0, "end": 430.0, "text": " or the path space is bright somewhere, then I would like to put more samples in there."}, {"start": 430.0, "end": 436.0, "text": " So if f is large, then p should also be large. This is what I'm trying to achieve."}, {"start": 436.0, "end": 444.0, "text": " Now, how do I actually do this? 
I have some high-dimensional function."}, {"start": 444.0, "end": 448.0, "text": " Or if I'm doing local important sampling, then I have a BRDF function."}, {"start": 448.0, "end": 451.0, "text": " How do I important sample this?"}, {"start": 451.0, "end": 457.0, "text": " The trivial solution is called rejection sampling."}, {"start": 457.0, "end": 467.0, "text": " Basically, what it means is that I would like to compute samples from a sampling distribution."}, {"start": 467.0, "end": 477.0, "text": " So here you see something that is almost a Gaussian, but imagine that I cannot generate samples out of this function."}, {"start": 477.0, "end": 480.0, "text": " Because what do I have in my C++ code?"}, {"start": 480.0, "end": 487.0, "text": " Well, I can generate uniform random numbers, but this function is not uniform random numbers."}, {"start": 487.0, "end": 495.0, "text": " So what I can do is I can sample an arbitrary distribution function."}, {"start": 495.0, "end": 502.0, "text": " If I enclose it in a box, and I throw completely uniform random samples on this box."}, {"start": 502.0, "end": 508.0, "text": " So it is almost like drawing your function on a sheet of paper and throwing random samples at it."}, {"start": 508.0, "end": 516.0, "text": " Now, I cannot generate random samples out of this function, but I can generate random uniformly distributed samples."}, {"start": 516.0, "end": 525.0, "text": " And the scheme is very simple. If it is under the function, I'm going to take this sample and pretend that I've generated that sample in the first place."}, {"start": 525.0, "end": 528.0, "text": " And if it's out there, I'm just going to kick it out."}, {"start": 528.0, "end": 536.0, "text": " So if I do this, I would have samples according to this almost Gaussian."}, {"start": 536.0, "end": 539.0, "text": " This works well, but this is not what we're doing in practice."}, {"start": 539.0, "end": 546.0, "text": " This is very inefficient, and hopefully you can see from the image why this is an inefficient technique to do so."}, {"start": 546.0, "end": 551.0, "text": " Someone please raise your hand and help me out. Why this is not efficient?"}, {"start": 551.0, "end": 554.0, "text": " Do you reject me?"}, {"start": 554.0, "end": 559.0, "text": " Okay, that's true. Thanks."}, {"start": 559.0, "end": 567.0, "text": " So there's tons of rejected samples. Most of my uniformly generated random numbers are completely wasted again."}, {"start": 567.0, "end": 572.0, "text": " So there must be some technique that's better than this."}, {"start": 572.0, "end": 579.0, "text": " Well, there is, but I guarantee that it's not going to make you any happier when you see how it is done."}, {"start": 579.0, "end": 583.0, "text": " So there's lots of rejection. 
There's lots of rejections."}, {"start": 583.0, "end": 594.0, "text": " This can be analytically, this problem can almost always be analytically solved by a technique called the inverse transform sampling or the smear of transform."}, {"start": 594.0, "end": 599.0, "text": " And this takes a bit of work, but I'll just briefly show you how it works."}, {"start": 599.0, "end": 607.0, "text": " And if you are really interested in the details, then please take a look at this document."}, {"start": 607.0, "end": 609.0, "text": " So I'll show you what you have to do."}, {"start": 609.0, "end": 615.0, "text": " You have to do all of these calculations and then you will have your sampling distribution."}, {"start": 615.0, "end": 620.0, "text": " Okay, what do we have at the end? Let's start with the intuition."}, {"start": 620.0, "end": 626.0, "text": " We have uniformly generated random numbers. This is the xe1 and xe2 at the end."}, {"start": 626.0, "end": 632.0, "text": " And I want to do some transform to these numbers in order to get an arbitrary sampling distribution."}, {"start": 632.0, "end": 639.0, "text": " And what they are essentially doing is you have a probability distribution function."}, {"start": 639.0, "end": 646.0, "text": " You want to sample from that. It can be like a Poisson distribution, an exponential distribution, or some custom BRDF."}, {"start": 646.0, "end": 651.0, "text": " And if you integrate the PDF, you are going to have a CDF."}, {"start": 651.0, "end": 657.0, "text": " So you integrate the probability distribution function. You will get a cumulative distribution function."}, {"start": 657.0, "end": 665.0, "text": " And this can help you in this transformation from uniformly generated random numbers to the actual function."}, {"start": 665.0, "end": 670.0, "text": " Now this is very intimidating, isn't it?"}, {"start": 670.0, "end": 678.0, "text": " Imagine that whenever you come up with a new BRDF or any kind of function that you would like to sample, you would have to compute all this."}, {"start": 678.0, "end": 683.0, "text": " And not only that, we were doing this for BRDFs."}, {"start": 683.0, "end": 695.0, "text": " So I can import an example of one bounds. Again, I emphasize that it means that if I hit the table, I locally know what are the good outgoing directions because of the material model."}, {"start": 695.0, "end": 703.0, "text": " But it doesn't mean that it's globally a good idea because there may be this completely black curtain next to it, which I'm going to hit."}, {"start": 703.0, "end": 707.0, "text": " And all of the energy is going to be absorbed."}, {"start": 707.0, "end": 724.0, "text": " What does Metropolis give us? A solution to this. So it's important sampling not only for one BRDF, not only for all possible BRDFs, but an optimal important sampling along the whole path."}, {"start": 724.0, "end": 740.0, "text": " So this means that it will know that if there is a path that is 15 bounds long, but it hits something that is really bright and it transfers a lot of energy, it will know that I will need to sample this light path and nearby."}, {"start": 740.0, "end": 746.0, "text": " And it is not going to trace many rays towards the shadow regions."}, {"start": 746.0, "end": 758.0, "text": " But what does it work? Again, intuitively. It wants a Markov chain process. And there is for Markov chains. 
There is a steady state distribution."}, {"start": 758.0, "end": 770.0, "text": " This means that we have been running the Markov chain for a while. And if you do that, then it promises you optimal important sampling for any kind of function without doing anything."}, {"start": 770.0, "end": 784.0, "text": " I hope that it is understandable how really amazing this is because it is actually a simple sampling scheme that you can write down the pseudocode in five or six lines."}, {"start": 784.0, "end": 797.0, "text": " And it gives you optimal important sampling. So this is really amazing. And I emphasize again that this is over multiple bounces, not important sampling one BRDF, but over whole light paths."}, {"start": 797.0, "end": 808.0, "text": " There are different variants of metropolis light transport. The original is the each type metropolis. This is the one that was published in 97."}, {"start": 808.0, "end": 821.0, "text": " It is a great algorithm. It has different mutation strategies. It means that it has different strategies of changing the current light path into a new one in a smart way and not randomly."}, {"start": 821.0, "end": 835.0, "text": " The problem is that almost no one in the world can implement it correctly. So it was published in 97. And the first viable implementation that came out was in the Mitsubar under implemented by Vensal Yakov."}, {"start": 835.0, "end": 849.0, "text": " And it was around I think 2010. So just a few years ago. The original metropolis light transport also attributed to Eric Vich. No one in the world could implement it."}, {"start": 849.0, "end": 860.0, "text": " I honestly don't know what was going on because he published it in 97 and it took the very least 13 years for the first super smart guy to implement correctly."}, {"start": 860.0, "end": 871.0, "text": " I don't know what he was doing in the meantime. Maybe he was laughing on humanity that no one is smart enough to deal with this. And maybe we don't deserve this algorithm."}, {"start": 871.0, "end": 881.0, "text": " I don't know. It's not for the fent of the heart. It's a really difficult algorithm. Yes. He's working for Google. He's working for Google. That's true."}, {"start": 881.0, "end": 888.0, "text": " He was. After the PhD, did he go immediately to Google?"}, {"start": 888.0, "end": 902.0, "text": " So he's basically working on Edwards. How to get more money out of advertisements. It pays."}, {"start": 902.0, "end": 914.0, "text": " It definitely pays well. And who knows. I mean that if Eric Vich is working on it, there's going to be some good stuff in there. I guarantee you."}, {"start": 914.0, "end": 927.0, "text": " But I have to say that that his face looked actually quite delighted when he got the Academy Award just recently for his work that was the very least 15 years ago."}, {"start": 927.0, "end": 935.0, "text": " It's still used all over the industry, multiple important sampling, but actually pop tracing, metropolis is all over the industry."}, {"start": 935.0, "end": 946.0, "text": " The rich style metropolis is really difficult. Fortunately, there are also smart people at my former university, namely, Chobok element and last little see my colors."}, {"start": 946.0, "end": 954.0, "text": " They came up with a simplified version of the algorithm, which is almost as robust, but is actually quite revealed to implement."}, {"start": 954.0, "end": 972.0, "text": " It is also implemented in small paint. It is called the primary sample space metropolis. 
It is now implemented by one of my students from a previous year rendering course, and it is in small paint, so you can give it a try."}, {"start": 972.0, "end": 992.0, "text": " Basically, it does complicated sounding, but otherwise simple mapping from an infinite dimensional cube, where I can generate infinite dimensional cube means arbitrarily long vectors of independent randomly uniform random generated numbers."}, {"start": 992.0, "end": 1003.0, "text": " And these random numbers are somehow transformed into light paths. So what the algorithm does is there's a probability that I am computing a completely new light path."}, {"start": 1003.0, "end": 1010.0, "text": " And if I don't have this probability, then I'm going to stay around this light path and explore nearby."}, {"start": 1010.0, "end": 1027.0, "text": " What does it mean practically? If I find this super difficult light path from the other room to here, then I find a really bright light path. The algorithm will know that, okay, I'm just going to add slight perturbations to this light path. I'm going to stay around here."}, {"start": 1027.0, "end": 1032.0, "text": " And sometimes it will start to look around for random samples."}, {"start": 1032.0, "end": 1040.0, "text": " There's also a visualization video on YouTube. If you take a look, you will immediately understand what is going on."}, {"start": 1040.0, "end": 1052.0, "text": " And some literature about these algorithms. Now, it is also a sampling scheme. So metropolis, you can implement together with path tracing or bi-direction path tracing."}, {"start": 1052.0, "end": 1063.0, "text": " And therefore, this is also an unbiased and consistent algorithm. And it is very robust. It is tailored for really difficult scenes."}, {"start": 1063.0, "end": 1073.0, "text": " So if you have a scene with a lot of occlusions difficult to sample light sources, difficult to reach light sources, use the metropolis."}, {"start": 1073.0, "end": 1084.0, "text": " But if you have an easy scene, this is not going to give you much because the metropolis is a smart algorithm. It takes longer to compute one sample than a path tracer."}, {"start": 1084.0, "end": 1093.0, "text": " And if this smart behavior of the algorithm, it does not pay off, then there may be scenes where the metropolis is actually worse than a path tracer."}, {"start": 1093.0, "end": 1103.0, "text": " So if you have an outdoor scene with large light sources and environment maps that you hit all the time, don't use metropolis. It doesn't give you anything."}, {"start": 1103.0, "end": 1113.0, "text": " Path tracing would give you better results because it can dish out more samples per pixel because it's dumb. And it parallelizes even better."}, {"start": 1113.0, "end": 1119.0, "text": " And only the number of samples matter in this case."}, {"start": 1119.0, "end": 1126.0, "text": " And there may be algorithms that take this into consideration."}, {"start": 1126.0, "end": 1142.0, "text": " So what if we had an algorithm that could determine if we have an easy scene or a difficult scene, and it would use for easy scenes, easy naive path tracing by direction of path tracing, or if there is a difficult scene, then it would use metropolis light transport."}, {"start": 1142.0, "end": 1149.0, "text": " Now this would need an algorithm that can somehow determine whether the scene is easy or hard."}, {"start": 1149.0, "end": 1155.0, "text": " And that's not trivial at all to do. 
But behind this link there is a work that deals with it."}, {"start": 1155.0, "end": 1165.0, "text": " And I would also like to note that metropolis light transport is unbiased, but it starts out biased."}, {"start": 1165.0, "end": 1173.0, "text": " So what it means is that I'm running a mark of chain that will give me optimal importance sampling, but this mark of chain also evolves in time."}, {"start": 1173.0, "end": 1182.0, "text": " So I have to wait and wait and wait and it will get better and better estimations on where the bright paths are and where the dark paths are."}, {"start": 1182.0, "end": 1188.0, "text": " And this takes time. This effect is what we call start-up bias."}, {"start": 1188.0, "end": 1202.0, "text": " Now what do we get for it? We'll see plenty of examples. So for instance, on caustics, it's even better than by direction of path tracing. For caustics, you will get almost immediate convergence."}, {"start": 1202.0, "end": 1208.0, "text": " Now what about this scene? This scene was rendered with Lux render."}, {"start": 1208.0, "end": 1220.0, "text": " Here you have not glass spheres, but some kind of prism material spheres because you can see a pronounced effect of dispersion. And you can see volumetric caustics."}, {"start": 1220.0, "end": 1228.0, "text": " So there is a participating media that we are in. And these caustics are going to be reflected multiple times and refracted multiple times."}, {"start": 1228.0, "end": 1233.0, "text": " Let's say that this is a disgustingly difficult scene."}, {"start": 1233.0, "end": 1240.0, "text": " The only light source there is is actually this laser that comes in from the upper left."}, {"start": 1240.0, "end": 1246.0, "text": " Let's try to render such a scene with the different algorithms that we have learned about."}, {"start": 1246.0, "end": 1253.0, "text": " Now if I start a path tracing, this is what I will get after 10 minutes."}, {"start": 1253.0, "end": 1267.0, "text": " So the high scoring light paths, the bright light paths are not the greatest probability light paths. And therefore most of the connections will be also obstructed towards the light source."}, {"start": 1267.0, "end": 1279.0, "text": " So it is very difficult to sample with path tracing. By direction of path tracing, it's better, but I mean if I get this after 10 minutes, I don't know how long it would take to render the actual scene."}, {"start": 1279.0, "end": 1286.0, "text": " And if you are on a tropolis, it will find the light paths that matter and find the ones that are actually needed to be sampled."}, {"start": 1286.0, "end": 1300.0, "text": " And this is the simplified version, PSS and LT. And the number next to it is just a ratio of these small perturbations to large perturbations."}, {"start": 1300.0, "end": 1314.0, "text": " Sorry, the opposite. So a large number means that most of my light paths are going to be random. So most of the 75% probability I'm going to do by direction of path tracing, 25% metropolis."}, {"start": 1314.0, "end": 1330.0, "text": " And if I pull down this probability, 0.25, then most of the time I'm going to do metropolis sampling, I'm going to explore nearby. And you can see that this renders the scene much, much faster."}, {"start": 1330.0, "end": 1335.0, "text": " So this is definitely a very useful technique to have."}, {"start": 1335.0, "end": 1346.0, "text": " Now, I've done this animation just for fun. 
This is a primary sample space metropolis light transport algorithm, only with small mutations."}, {"start": 1346.0, "end": 1360.0, "text": " So just very small adjustments to the light paths. And this is how an image would converge with these small steps. And you can see that the caustics converge ridiculously quickly."}, {"start": 1360.0, "end": 1367.0, "text": " Now let's take a look at one more example."}, {"start": 1367.0, "end": 1378.0, "text": " Take a look at this. Most of the scene is still noisy, but the caustics are completely converged as we start out. Why? Because it is really bright."}, {"start": 1378.0, "end": 1383.0, "text": " And this is exactly what the metropolis is going to focus on. So it is even better on caustics."}, {"start": 1383.0, "end": 1392.0, "text": " Something that takes a brutal amount of samples with a normal path tracer is going to be immediately converged with the metropolis."}, {"start": 1392.0, "end": 1400.0, "text": " So this is the first, I think, 10 minutes of rendering with the metropolis on a not so powerful machine."}, {"start": 1400.0, "end": 1405.0, "text": " So it seems that we have solved everything. We're looking good. We got this."}, {"start": 1405.0, "end": 1420.0, "text": " But I will show you a failure case that we actually still have problems that we couldn't solve. This is sophisticated scene that is for some reason even harder in some sense than the previous scenes."}, {"start": 1420.0, "end": 1428.0, "text": " And it just doesn't want to converge with the primary sample space metropolis. I'm just rendering and rendering and still fireflies."}, {"start": 1428.0, "end": 1440.0, "text": " If I have really large, really bright noisy spots, then this means that I have light paths that have a ridiculously low probability to be visited."}, {"start": 1440.0, "end": 1448.0, "text": " And that means that my sampling strategies are not smart enough."}, {"start": 1448.0, "end": 1466.0, "text": " And this is a classical longstanding problem in global illumination. Metropolis is not a solution for this. It is still not good enough, but there are techniques that can give you really smooth results on ridiculously difficult scenes like this."}, {"start": 1466.0, "end": 1478.0, "text": " And I will also explain you during the next lecture why is this essentially difficult? Because it doesn't seem to intuitive, does it? But I will explain to you during the next lecture. Thank you very much."}]
Two Minute Papers
https://www.youtube.com/watch?v=RuBjYa4Q3dA
TU Wien Rendering #32 - Bidirectional Path Tracing, Multiple Importance Sampling
With a classical unidirectional path tracer, we'll have some scenes where it is difficult to connect to the light source, and therefore many of our computed samples will be wasted. What if we started not only one light path from the camera, but one also from the light source, and connected the two together? It turns out that we get a much more robust technique that can render a variety of "packed" scenes with lots of occlusions with ease. This way, one light path can now be obtained with different probabilities - as if we were running multiple Monte Carlo integration processes. These samples can be weighted by Multiple Importance Sampling, arguably one of the most powerful techniques in all photorealistic rendering research. We'll take a look at the implementation of this amazing noise suppression technique in Wenzel Jakob's Mitsuba renderer. Amazing results ahead! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. The apparatus of Monte Carlo methods is introduced, which is heavily used in several algorithms, and its refinements in the form of stratified sampling and the Metropolis-Hastings method are explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Now, before we start the algorithms, one more time, a disclaimer: these results are coming from scientific papers. And if you come up with a new method, you want to show that this method outperforms existing methods in the scenes or in the setups that you have tried. Some people are very open about the limitations of their techniques, because if I have a technique that's better than the best technique out there on this scene, that's great, but it doesn't mean that it will be better on all possible scenes. Some people are very candid about the limitations of their algorithms and some of them are not so candid about this. But with time, as people start to use the algorithm, these possible corner cases, or just simply difficult cases, come up. So what do I mean by this? What I mean is that if you see great results for an algorithm, wonderful results, the best thing ever: okay, but always have a slight doubt about whether this algorithm is robust enough. Would it always work? When would it not work? Don't just extrapolate from one case; there may be drawbacks that are not so clear when you first see the algorithm. Now, mathematical details will again mostly be omitted. What we are interested in: the motivation for each algorithm, what the key idea is, what the advantages and the disadvantages are, what the results look like, where you can access implementations and try these, and for most of them some additional literature. If you think "wow, this is a really great algorithm, I would like to know more", then there will be links; you click them and you can read either the paper or some writing about it. So let's get started. Path tracing, from 1986. Super old stuff, but this is the very first and the easiest way to wrap your head around global illumination. You start your rays from the eye, or the camera. You bounce them around the scene and, if you would like to earn some style points, after every bounce you also trace shadow rays towards the light source. This is next event estimation; this usually lowers your variance. And then you end up somewhere, you compute all these light paths, and jolly good. You don't do any simplifications to the integrand; you exhaustively sample all possible light paths. There's no interpolation, no tricks, no magic. So this should be an unbiased and consistent algorithm. Unbiased: the error is predictable. I know that if I add more samples, there's going to be less error. And I know that sooner or later the image is going to converge, because I am sampling all possible light paths there are; it is impossible that I would miss something. Now, there may be corner cases, really difficult but fortunately well-understood corner cases, where there are contributions that you may miss. I will discuss this during the next lecture. What are the advantages? It's simple. It's also very easy to implement. I didn't write it there, but it also parallelizes well. Why? Because it's a dumb algorithm. It doesn't do anything tricky. It doesn't build super difficult and super complicated data structures. You just put it on the GPU and dish out as many light paths per second as possible. What is a common problem that people encounter with this?
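The "unbiased: the error is predictable" claim can be demonstrated in a few lines (a toy integral, not a renderer): the Monte Carlo estimate of a known integral drifts toward the true value, with the error shrinking at the familiar 1/sqrt(N) rate.

    import math
    import random

    f = lambda x: math.sin(x) ** 2      # integral over [0, pi] is pi / 2

    for n in (10**2, 10**4, 10**6):
        est = sum(f(random.uniform(0.0, math.pi)) for _ in range(n)) * math.pi / n
        print(f"{n:>8} samples: estimate {est:.5f}, error {abs(est - math.pi / 2):.5f}")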
For instance, caustics converge very slowly, because caustics are usually light paths that are extremely improbable to be sampled, and you would need to compute many, many samples in order to hit these caustics many times and clean them up. Onwards: 1993, bidirectional path tracing. What is the motivation behind this guy? Well, imagine a scene where this is your camera on the left, and you have a light source enclosed in this object, which is, for the sake of the experiment, a black body. It's not a glass light bulb or anything like that, it's a black body, so whichever part of the container you hit, you won't continue your light path. Now, you would start a path tracer; what do you do? You start tracing rays from the camera, and it is not too likely to hit the light source. It's not a point light source, it's an area light source, so it is possible to hit it, but not very likely. Now, after the previous lecture you would say: no problem, next event estimation. What do I do? I don't wait until I hit the light source; I send out shadow rays after every bounce, and I get some of the energy of the light source, the direct contribution of the light source. Great, but the problem is that this also doesn't work, because most of the connections would be obstructed: if I hit this very first bounce, I cannot hit the light source, because there is the black body that contains it. After the second bounce I also cannot connect to the light source. So again, even with next event estimation, most of my samples are wasted. We are tracing random rays, it is very unlikely to hit the light source, and even if I connect to the light source, it is very unlikely that I will get an unobstructed connection. What is the solution? Bidirectional path tracing. What happens here is that I am not starting only one light path from the eye. I start two light paths: one from the eye, as with regular path tracing, and I also start light paths from the light sources; this is called light tracing. And I try to combine these two techniques into one framework. What this means is that I take one or a given number of bounces from the eye, a given number of bounces from the light source, and then I connect these light paths together and pretend that I just built this light path instead. And with this I have a much better chance of sampling these light sources, because I have the opportunity to get out of that small zone that is otherwise difficult to hit from the eye. Now let's see the difference between the two techniques. These images are taken after 10 seconds on the very same scene, and you can see that there is a huge difference between the two for this indoor scene. So it's definitely worth looking into. Now, what is actually difficult about bidirectional path tracing? Theoretically it's very simple: there is not one light path, there are two, and I connect them in all possible different ways. What you should take into consideration is that this is actually two Monte Carlo processes. One Monte Carlo process is when you start out from the eye, and when you hit a diffuse or a glossy object, you importance sample the BRDF. This means that I take the likely paths more often.
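A purely structural sketch of the connection step just described (no geometry, no radiance; the vertex names are placeholders): given a subpath traced from the eye and a subpath traced from the light, bidirectional path tracing considers every way of joining a prefix of one to a prefix of the other.

    def connections(eye_path, light_path):
        # t vertices from the eye plus s vertices from the light; a
        # deterministic connection edge joins the two endpoints (in a real
        # renderer this is where the visibility shadow ray is traced).
        full_paths = []
        for t in range(1, len(eye_path) + 1):
            for s in range(1, len(light_path) + 1):
                full_paths.append(eye_path[:t] + light_path[:s][::-1])
        return full_paths

    for path in connections(["eye", "e1"], ["light", "l1"]):
        print(" -- ".join(path))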
Now, if you start a light path from the light source, then what you are sampling is actually the distribution of the light source itself, because regions that are visible from the light source are sampled extensively with light tracing: you're always hitting them, they are in front of you. And that's a completely different sampling distribution. So you can imagine it as if you had two different Monte Carlo processes that sample the very same integrand. One Monte Carlo process would have some variance, and the other would have some other variance. So different regions of the path space, and also different regions of the image, converge quicker with light tracing, and other regions converge quicker with standard path tracing. And I would like to combine these two techniques together, and this is entirely non-trivial. Variance (I've written "noise" there to be more intuitive, but we're talking about variance; noise comes from variance) is an additive quantity. This means that if I have two Monte Carlo estimators of given variance, and I just add their samples together and average them, then I also average the error of the two. And that doesn't give me a great result, because there are some regions that are sampled well by light tracing and there are regions that are sampled well by path tracing, and I cannot just cut out the good parts from each sampling technique, because the error would be averaged. This can be solved in a meaningful way, in a way that is actually proven to be optimal in some sense, and this technique is called multiple importance sampling. Multiple importance sampling was brought to us by a person called Eric Veach in his landmark thesis of beautiful, beautiful works; bidirectional path tracing is one of them. And if I remember correctly, last year he got an Academy Award for his work; this is basically the technical Oscar, if you will. His acceptance speech was really funny, because he has a daughter, and his daughter had taken a look at his thesis, which is hundreds of pages of heavy integral calculus, and she asked whether people actually read this huge tome of knowledge. And he can finally say that yes, people actually do read it. We read it like the Holy Bible. Multiple importance sampling is among his discoveries, and it is maybe (it's a bit subjective) the most powerful technique in all of rendering. I will show you plenty of examples to convince you that this is so. So, on the left: let's forget about the middle example for now and just compare the left and the right. You can see that there are many artifacts, many of these fireflies, that can be suppressed by this technique. So I can unify multiple sampling techniques in a way that wherever one does really badly, I can just forget about it and take only the best samples for each region. Let's take another look, which is maybe even better. This is called, at least this is what we call, a Veach pyramid. It is created with bidirectional path tracing, and the code below each image means that we have taken a different number of steps from the light source and from the eye. So in every image you see one given number of bounces. If you had path tracing, you would get 10 or so images, not in a pyramid: one image would be the first bounce, the second image would be the second bounce, the third image the third bounce.
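The "averaging averages the error" point is worth seeing numerically (stand-in estimators with made-up variances): plainly averaging a clean and a noisy unbiased estimator of the same quantity leaves a still-noisy result, since Var((A + B) / 2) = (Var A + Var B) / 4 for independent A and B.

    import random

    good = lambda: random.gauss(1.0, 0.01)   # unbiased, variance 1e-4
    bad  = lambda: random.gauss(1.0, 10.0)   # unbiased, variance 100
    avg  = lambda: 0.5 * (good() + bad())    # variance ~ (1e-4 + 100) / 4 = 25

    print([round(avg(), 2) for _ in range(5)])   # still scattered around 1.0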
For bidirectional path tracing you have a pyramid like this, because you subdivide the paths by the number of bounces from the eye and the number of bounces from the light source; so this is now a two-dimensional thing. And you can see that some of the effects are captured really well in some of these images, and there are other images which are absolutely, absolutely terrible and really noisy. For instance, if you take a look at the two sides: these two sides mean that I am hitting either the camera or the light source by accident. And if you have a small light source, which we actually do (look here), then this is a relatively low-probability event, and if this is a low-probability event, most of my samples are going to be wasted and I'm going to have a noisy image, not a well-converged image. So on the sides I have really low-probability events, and these are samples that I really don't want to use. Imagine that I added all of these images together and averaged them: I would get plenty of noise from the noisy ones. Now, what if I could say the following. If you take a look at s equals 1, t equals 5, you can see that we have caustics in there, and the caustics are almost immediately converged. For caustics, I definitely want to use these samples, and not the ones from, for instance, s equals 0, t equals 6, where there are also caustics, but really noisy ones: that technique is not systematically looking for caustics, it just happens to hit them, but it is not good at sampling them. And I don't want to average these guys together. What I want is to give a large weight to s equals 1, t equals 5 on caustics and just grab it into my image, and forget about the other contributions. Doing this in a mathematically sound way is not easy, but Eric has a provably good and super simple technique for how to do it. Now look closely at the image. This is naive bidirectional path tracing, without multiple importance sampling. And now you will see what happens if we add multiple importance sampling. So look closely. See the difference? Many noisy images were completely shut down, because they were not really good at sampling their parts of the space of light paths. Some images are not good at anything at all; take a look at the two sides. And there are images where I can take, for instance, caustics from. Like s equals 5, t equals 1, which seems to have been even better at sampling caustics; this s equals 1, t equals 5 was also pretty good, but it was shut down by the other technique that was even better. So this is an amazingly powerful technique for creating even more converged images when you have multiple sampling strategies. You can also play with it: it is implemented on Shadertoy, the nice classical Veach scene with light source sampling and BSDF/BRDF sampling. (It doesn't matter whether you say BSDF or BRDF in this case, by the way. But you remember.) So you can play with it, and I encourage you to do so. It is lots of fun, and you will see which light transport situations are captured well with which sampling technique, and how to unify them in a way that everything looks converged almost immediately. And what does a good engineer do? Well, a good engineer is obviously interested in the problem.
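The "super simple technique" referred to here is, presumably, Veach's balance heuristic, which weights each technique's sample by its own pdf's share of the combined pdf. A minimal sketch:

    def balance_heuristic(i, x, pdfs, counts):
        # Weight for a sample x drawn by technique i, where pdfs[j] evaluates
        # technique j's pdf at x and counts[j] is its number of samples.
        num = counts[i] * pdfs[i](x)
        den = sum(n * p(x) for n, p in zip(counts, pdfs))
        return num / den

With these weights every sample is still used, but a technique that samples a region badly gets a vanishing share there, which is the "shutting down" visible in the pyramid.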
So I just sat down and also implemented the same thing in a simple 1D example, to make sure that everyone really understands what is going on. This is a simple Monte Carlo sampling problem in 1D. I have a function that I want to integrate; if I remember correctly, I am integrating a Gaussian. And I would like to sample it with two different techniques, so this is two different Monte Carlo sampling processes. And I want to take only the best samples in order to get an approximation with the least variance. There are multiple ways of combining them together, and there's also naive averaging, which just averages the error; it would give you back all of those images from the sides. I also write out the exact Monte Carlo estimators for the different multiple importance sampling estimators. So take a look. It is now part of smallpaint, and you can run it; it is super simple and hopefully super understandable. I think it is less than 100 lines of code. So, what we now know about bidirectional path tracing: definitely better convergence speed, especially in scenes where you are not that likely to hit light sources, so especially in indoor scenes. You will also get quicker convergence for caustics, because you will have sampling strategies that are very efficient at that. Caustics are usually visible from light sources, and you will sample them very often, so there is going to be at least one estimator that captures them well. So if you use MIS, multiple importance sampling, you're going to have caustics covered very quickly. Now, it is definitely not easy to grasp, and it is definitely not easy to implement, so it requires quite a bit of effort, even if it sounds very intuitive. It is, but it is not easy. This is also a brute-force method: it also samples all possible light paths, and therefore it is also unbiased and consistent. There is some more literature on bidirectional path tracing, and even better, there is a nice comparison coded up on Shadertoy as well. So when you are at home, just fire it up, and you will see the difference evolving in real time on your GPU on an indoor scene.
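The smallpaint 1D demo itself is not reproduced here; the following is a minimal Python analogue of the same idea (the integrand, the pdfs, and the interval are my own choices): two sampling techniques, one good for the broad base and one for the narrow bump, combined with balance-heuristic weights into a single low-variance estimate.

    import math
    import random

    def f(x):
        # Narrow bright bump on a dim base over [0, 1];
        # the true integral is 0.1 + 0.01 * sqrt(2 * pi) ~ 0.12507.
        return math.exp(-((x - 0.5) ** 2) / (2 * 0.01 ** 2)) + 0.1

    LO, HI = 0.48, 0.52
    pdf_a = lambda x: 1.0                                        # uniform on [0, 1]
    pdf_b = lambda x: 1.0 / (HI - LO) if LO <= x <= HI else 0.0  # bump region only

    def mis_estimate(n):
        total = 0.0
        for _ in range(n):
            xa = random.random()                 # technique A: good for the base
            wa = pdf_a(xa) / (pdf_a(xa) + pdf_b(xa))
            total += wa * f(xa) / pdf_a(xa)
            xb = random.uniform(LO, HI)          # technique B: good for the bump
            wb = pdf_b(xb) / (pdf_a(xb) + pdf_b(xb))
            total += wb * f(xb) / pdf_b(xb)
        return total / n

    print(mis_estimate(100_000))   # ~0.125, with neither technique's fireflies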
[{"start": 0.0, "end": 8.2, "text": " Now, before we start the algorithms one more time, a disclaimer, these results are coming"}, {"start": 8.2, "end": 10.16, "text": " from scientific papers."}, {"start": 10.16, "end": 15.88, "text": " And if you come up with a new method, you want to show that this method outperforms existing"}, {"start": 15.88, "end": 21.44, "text": " methods in the scenes or in the setups that you have tried."}, {"start": 21.44, "end": 27.28, "text": " And some people are very open about the limitations of the techniques because if I have a technique"}, {"start": 27.28, "end": 32.68, "text": " that's better than the best technique out there on this scene, that's great."}, {"start": 32.68, "end": 37.88, "text": " But it doesn't mean that it will be better on all possible scenes."}, {"start": 37.88, "end": 44.88, "text": " And some people are very candid about the limitations of the algorithms and some of them are"}, {"start": 44.88, "end": 46.8, "text": " not so candid about this."}, {"start": 46.8, "end": 53.52, "text": " But with time, as people start to use the algorithm, these possible corner cases or just"}, {"start": 53.52, "end": 58.040000000000006, "text": " simply difficult cases come up."}, {"start": 58.040000000000006, "end": 59.92, "text": " So what do I mean by this?"}, {"start": 59.92, "end": 64.52000000000001, "text": " But I mean is that if you see great results that there's an algorithm, wonderful results,"}, {"start": 64.52000000000001, "end": 66.12, "text": " it's the best thing ever."}, {"start": 66.12, "end": 73.28, "text": " Okay, but always have a slight doubt whether this algorithm would be robust enough."}, {"start": 73.28, "end": 74.92, "text": " Would it always work?"}, {"start": 74.92, "end": 76.76, "text": " When would it not work?"}, {"start": 76.76, "end": 81.24000000000001, "text": " Because don't just extrapolate from one case."}, {"start": 81.24, "end": 88.56, "text": " There may be drawbacks that are maybe not so clear where you first see the algorithm."}, {"start": 88.56, "end": 92.72, "text": " Now mathematical details again will be omitted mostly."}, {"start": 92.72, "end": 98.08, "text": " But what we are interested in, the motivation for each algorithm, what is the key idea?"}, {"start": 98.08, "end": 99.19999999999999, "text": " What are the advantages?"}, {"start": 99.19999999999999, "end": 100.44, "text": " The disadvantages?"}, {"start": 100.44, "end": 102.75999999999999, "text": " How do the results look like?"}, {"start": 102.75999999999999, "end": 104.56, "text": " Where can you access implementations?"}, {"start": 104.56, "end": 106.08, "text": " Where can you try these?"}, {"start": 106.08, "end": 110.64, "text": " And for most of them some additional literature, if you think that wow, this is a really great"}, {"start": 110.64, "end": 114.24, "text": " algorithm, I would like to know more than there will be links."}, {"start": 114.24, "end": 119.48, "text": " You click them and then you can read either the paper or some writing about them."}, {"start": 119.48, "end": 120.48, "text": " So let's get started."}, {"start": 120.48, "end": 123.4, "text": " Park tracing from 1986."}, {"start": 123.4, "end": 129.64, "text": " Super old stuff, but this is the very first and the easiest way to wrap your head around"}, {"start": 129.64, "end": 132.32, "text": " global illumination."}, {"start": 132.32, "end": 135.92000000000002, "text": " You start your race from the eye or the camera."}, {"start": 135.92000000000002, "end": 
139.88, "text": " You bounce them around the scene if you would like to earn some style points."}, {"start": 139.88, "end": 144.88, "text": " And after every bounce you would also trace shadow rays towards the light source."}, {"start": 144.88, "end": 146.72, "text": " This is next event estimation."}, {"start": 146.72, "end": 148.88, "text": " This usually lowers your variance."}, {"start": 148.88, "end": 154.24, "text": " And then you end up somewhere, you compute all these light paths and jolly good."}, {"start": 154.24, "end": 157.16, "text": " You don't do any simplifications to the integrand."}, {"start": 157.16, "end": 160.72, "text": " You exhaustively sample all possible light paths."}, {"start": 160.72, "end": 164.28, "text": " There's no interpolation, no tricks, no magic."}, {"start": 164.28, "end": 169.44, "text": " So this should be an unbiased and consistent algorithm."}, {"start": 169.44, "end": 171.52, "text": " Unbiased the error is predictable."}, {"start": 171.52, "end": 175.48, "text": " I know that if I add more samples, there's going to be less error."}, {"start": 175.48, "end": 179.64, "text": " And I know that sooner or later the image is going to converge because I am sampling"}, {"start": 179.64, "end": 181.2, "text": " all possible light paths."}, {"start": 181.2, "end": 182.2, "text": " There are."}, {"start": 182.2, "end": 184.4, "text": " It is impossible that I would miss something."}, {"start": 184.4, "end": 190.96, "text": " Now there may be corner cases, but they are really difficult, but fortunately well understood"}, {"start": 190.96, "end": 194.84, "text": " corner cases where there are contributions that you may miss."}, {"start": 194.84, "end": 200.92000000000002, "text": " I will discuss this during the next lecture."}, {"start": 200.92000000000002, "end": 202.20000000000002, "text": " What are the advantages?"}, {"start": 202.20000000000002, "end": 203.2, "text": " It's simple."}, {"start": 203.2, "end": 205.04, "text": " It's also very easy to implement."}, {"start": 205.04, "end": 208.36, "text": " I didn't write it there, but it also parallelizes well."}, {"start": 208.36, "end": 209.36, "text": " Why?"}, {"start": 209.36, "end": 211.44, "text": " Because it's a dumb algorithm."}, {"start": 211.44, "end": 213.08, "text": " It doesn't do anything tricky."}, {"start": 213.08, "end": 218.6, "text": " It doesn't build super difficult and super complicated data structures."}, {"start": 218.6, "end": 225.04, "text": " You just put it on the GPU and you just cram out as many and you just dishought as many"}, {"start": 225.04, "end": 228.64, "text": " light paths per second as possible."}, {"start": 228.64, "end": 231.35999999999999, "text": " What is a common problem that people encounter with this?"}, {"start": 231.35999999999999, "end": 236.92, "text": " For instance, caustics converge very slowly because caustics are usually light paths that"}, {"start": 236.92, "end": 242.92, "text": " are extremely improbable to be sampled and you would need to compute many, many samples"}, {"start": 242.92, "end": 249.92, "text": " in order to hit these caustics many times in order to clean them up."}, {"start": 249.92, "end": 253.32, "text": " Onwards 1993 by directional path tracing."}, {"start": 253.32, "end": 255.56, "text": " What is the motivation behind this guy?"}, {"start": 255.56, "end": 260.91999999999996, "text": " Well imagine a scene that this is your camera on the left and you have a light source for"}, {"start": 260.91999999999996, "end": 266.96, 
"text": " instance enclosed in this object which is for now for the sake of experiment a black"}, {"start": 266.96, "end": 267.96, "text": " body."}, {"start": 267.96, "end": 273.35999999999996, "text": " You hit it from anywhere, it's not a glass, light bulb or anything like that, it's a black"}, {"start": 273.35999999999996, "end": 274.35999999999996, "text": " body."}, {"start": 274.35999999999996, "end": 278.64, "text": " So whichever part of the container you hit you won't continue your light path."}, {"start": 278.64, "end": 282.35999999999996, "text": " Now you would start a path tracer, what do you do?"}, {"start": 282.35999999999996, "end": 289.44, "text": " You start tracing the rays from the camera and it is not too likely to hit the light source."}, {"start": 289.44, "end": 293.56, "text": " So it's not a point light source, it's an aerial light source, it is possible to hit it,"}, {"start": 293.56, "end": 295.64, "text": " but it's not very likely."}, {"start": 295.64, "end": 302.08, "text": " Now after the previous lecture you would say no problem, next event estimation, what do"}, {"start": 302.08, "end": 303.08, "text": " I do?"}, {"start": 303.08, "end": 308.32, "text": " I don't wait until I hit the light source, I would send out shadow rays after every bounce"}, {"start": 308.32, "end": 312.12, "text": " and I would get some of the energy of the light source, the direct contribution of the"}, {"start": 312.12, "end": 313.12, "text": " light source."}, {"start": 313.12, "end": 318.24, "text": " Great, but the problem is that this also doesn't work because most of the connections would"}, {"start": 318.24, "end": 323.44, "text": " be obstructed because if I hit this very first bounce I cannot hit the light source because"}, {"start": 323.44, "end": 325.8, "text": " there is the black body that contains it."}, {"start": 325.8, "end": 329.24, "text": " After the second bounce I also cannot connect to the light source."}, {"start": 329.24, "end": 338.04, "text": " It's again, even with next event estimation most of my samples are wasted."}, {"start": 338.04, "end": 342.6, "text": " We are tracing random rays, it is very unlikely to hit the light source and even if I connect"}, {"start": 342.6, "end": 348.28, "text": " to the light source it is very unlikely that I will see an obstructed connection."}, {"start": 348.28, "end": 349.52, "text": " What is the solution?"}, {"start": 349.52, "end": 355.28, "text": " By direction of path tracing what happens here is that I am not starting only one light"}, {"start": 355.28, "end": 361.79999999999995, "text": " path from the eye, I start two light paths, one from the eye as with regular path tracing"}, {"start": 361.79999999999995, "end": 367.52, "text": " and I also start light paths starting out from the light sources, this is called light"}, {"start": 367.52, "end": 373.44, "text": " tracing and I try to combine these two techniques into one framework."}, {"start": 373.44, "end": 380.56, "text": " So what it means is that I start one or a given number of bounces from the eye, I start"}, {"start": 380.56, "end": 387.64, "text": " a given number of bounces from the light source and then I connect these light paths together"}, {"start": 387.64, "end": 391.92, "text": " and I pretend that I just built this light path instead."}, {"start": 391.92, "end": 398.0, "text": " And now with this I have a much better chance to sample these light sources because I would"}, {"start": 398.0, "end": 404.28, "text": " have the opportunity to get 
out of that small zone that is otherwise difficult to hit"}, {"start": 404.28, "end": 407.32, "text": " from the eye."}, {"start": 407.32, "end": 412.36, "text": " Now let's see the difference between the two techniques, these are taken after 10 seconds"}, {"start": 412.36, "end": 417.76, "text": " for the very same scene and you could say that there is a huge difference for this indoor"}, {"start": 417.76, "end": 419.4, "text": " scene between the two."}, {"start": 419.4, "end": 426.4, "text": " So it's definitely worth looking into."}, {"start": 426.4, "end": 433.71999999999997, "text": " Now what is actually difficult about bidirectional path tracing is that theoretically it's"}, {"start": 433.71999999999997, "end": 438.76, "text": " very simple, there is not one light path, there are two and I connect them in all possible"}, {"start": 438.76, "end": 442.15999999999997, "text": " different ways."}, {"start": 442.15999999999997, "end": 447.56, "text": " Now what you should take into consideration is that this is actually two Monte Carlo"}, {"start": 447.56, "end": 449.15999999999997, "text": " processes."}, {"start": 449.15999999999997, "end": 455.47999999999996, "text": " One Monte Carlo process is when you start out from the eye and you hit a diffuse or a glossy"}, {"start": 455.48, "end": 461.24, "text": " object then you would start to importance sample it, importance sample the BRDF."}, {"start": 461.24, "end": 466.36, "text": " This means that I would take the likely paths more often."}, {"start": 466.36, "end": 472.36, "text": " Now if you start a light path from the light source then what you would be sampling is"}, {"start": 472.36, "end": 479.52000000000004, "text": " actually the distribution of the light source itself because regions that are visible from"}, {"start": 479.52000000000004, "end": 484.16, "text": " the light source would be sampled extensively with light tracing because you're always"}, {"start": 484.16, "end": 486.48, "text": " hitting them, they are in front of you."}, {"start": 486.48, "end": 489.16, "text": " And that's a completely different sampling distribution."}, {"start": 489.16, "end": 494.88000000000005, "text": " So you can imagine as if you had two different Monte Carlo processes that sampled the very"}, {"start": 494.88000000000005, "end": 496.76000000000005, "text": " same integrand."}, {"start": 496.76000000000005, "end": 502.44000000000005, "text": " And one Monte Carlo process would have some variance and the other would have some other variance."}, {"start": 502.44000000000005, "end": 509.40000000000003, "text": " So different regions of the path space, and also different regions of the image would"}, {"start": 509.4, "end": 515.4, "text": " converge quicker with light tracing and different regions would converge quicker with standard"}, {"start": 515.4, "end": 516.8, "text": " path tracing."}, {"start": 516.8, "end": 520.36, "text": " And I would like to combine these two techniques together."}, {"start": 520.36, "end": 523.16, "text": " And this is not entirely trivial."}, {"start": 523.16, "end": 527.64, "text": " Variance, I've written noise in there to be more intuitive but we're talking about"}, {"start": 527.64, "end": 533.6, "text": " variance, noise comes from variance, variance is an additive quantity."}, {"start": 533.6, "end": 537.72, "text": " So this means that if I have two Monte Carlo estimators of given variance and if I would"}, {"start": 537.72, "end": 545.1600000000001, "text": " just add them together and average these
samples, then I would also average the error of the"}, {"start": 545.1600000000001, "end": 546.32, "text": " two."}, {"start": 546.32, "end": 553.6800000000001, "text": " And that doesn't give me a great result because there are some regions that are sampled"}, {"start": 553.6800000000001, "end": 558.5600000000001, "text": " by light tracing well and there are regions that are sampled by path tracing well."}, {"start": 558.5600000000001, "end": 563.84, "text": " And I cannot just cut out the good parts from each sampling technique because the error"}, {"start": 563.84, "end": 565.52, "text": " would be averaged."}, {"start": 565.52, "end": 573.36, "text": " And this can be solved in a meaningful way, in a way that is actually proven to be optimal"}, {"start": 573.36, "end": 574.4399999999999, "text": " in some sense."}, {"start": 574.4399999999999, "end": 577.84, "text": " And this technique is called multiple importance sampling."}, {"start": 577.84, "end": 586.3199999999999, "text": " Now multiple importance sampling was brought to us by a person called Eric Veach in his"}, {"start": 586.3199999999999, "end": 593.56, "text": " landmark thesis, full of beautiful, beautiful works; bidirectional path tracing is one of them."}, {"start": 593.56, "end": 600.56, "text": " And if I remember correctly last year he got an Academy Award for his work, this is basically"}, {"start": 600.56, "end": 602.8399999999999, "text": " an Oscar."}, {"start": 602.8399999999999, "end": 607.3199999999999, "text": " This is basically the technical Oscar award if you will."}, {"start": 607.3199999999999, "end": 613.92, "text": " And in his acceptance speech, it was really funny because he has a daughter and his daughter"}, {"start": 613.92, "end": 620.56, "text": " had taken a look at his thesis which is hundreds of pages of heavy integral calculus."}, {"start": 620.56, "end": 628.3199999999999, "text": " And she asked him whether people actually read this huge tome of knowledge and he finally"}, {"start": 628.3199999999999, "end": 631.16, "text": " could say that yes, people actually do read that."}, {"start": 631.16, "end": 633.3599999999999, "text": " We read it like the Holy Bible."}, {"start": 633.3599999999999, "end": 641.8399999999999, "text": " Multiple importance sampling is one of his discoveries and it is maybe, it's a bit"}, {"start": 641.8399999999999, "end": 646.28, "text": " subjective, maybe the most powerful technique in all of rendering."}, {"start": 646.28, "end": 651.24, "text": " And I will show you plenty of examples to convince you that this is so."}, {"start": 651.24, "end": 655.64, "text": " So on the left, let's forget about the middle example for now."}, {"start": 655.64, "end": 659.04, "text": " Let's just compare the left and the right."}, {"start": 659.04, "end": 664.8, "text": " You can see that there are many artifacts and many of these fireflies that can be suppressed"}, {"start": 664.8, "end": 666.16, "text": " by this technique."}, {"start": 666.16, "end": 673.8, "text": " So I can unify multiple sampling techniques in a way that wherever they do really badly"}, {"start": 673.8, "end": 680.4, "text": " I can just forget that and I will take only the best samples for each region."}, {"start": 680.4, "end": 683.4799999999999, "text": " Let's take another look which is maybe even better."}, {"start": 683.4799999999999, "end": 687.1999999999999, "text": " This is called, at least this is what we call, a Veach pyramid."}, {"start": 687.1999999999999, "end": 693.24, "text": " This is
created with bidirectional path tracing and the code below each image means that we"}, {"start": 693.24, "end": 698.92, "text": " have taken a different number of steps from the light source and from the eye."}, {"start": 698.92, "end": 703.7199999999999, "text": " So in every image you see one given number of bounces."}, {"start": 703.7199999999999, "end": 709.52, "text": " So if you would have path tracing, you would get like 10 or something images, not in a pyramid."}, {"start": 709.52, "end": 713.88, "text": " One image would be the first bounce, second image would be the second bounce, third image"}, {"start": 713.88, "end": 715.36, "text": " would be the third bounce."}, {"start": 715.36, "end": 720.8399999999999, "text": " For bidirectional path tracing you have a pyramid like that because you subdivide them into the"}, {"start": 720.8399999999999, "end": 727.4399999999999, "text": " first bounce from the eye and a given bounce from the light source."}, {"start": 727.44, "end": 731.4000000000001, "text": " So this is now a two dimensional thing."}, {"start": 731.4000000000001, "end": 737.9200000000001, "text": " And you can see that some of the effects are captured really well in some of these images"}, {"start": 737.9200000000001, "end": 744.6, "text": " and there are some other images which are absolutely, absolutely terrible and really noisy."}, {"start": 744.6, "end": 752.5600000000001, "text": " So for instance if you take a look at the two sides, these two sides mean that I am hitting"}, {"start": 752.56, "end": 758.04, "text": " either the camera or the light source by accident."}, {"start": 758.04, "end": 763.8399999999999, "text": " And if you have a small light source, which we actually do, look here, then this is a relatively"}, {"start": 763.8399999999999, "end": 769.56, "text": " low probability event and if this is a low probability event then most of my samples"}, {"start": 769.56, "end": 774.3199999999999, "text": " are going to be wasted and I'm going to have a noisy image, not a well-converged"}, {"start": 774.3199999999999, "end": 775.3199999999999, "text": " image."}, {"start": 775.3199999999999, "end": 782.0, "text": " So on the sides I have really low probability events and these are samples that I really"}, {"start": 782.0, "end": 783.28, "text": " don't want to use."}, {"start": 783.28, "end": 787.0, "text": " Imagine that I would add all of these images together, average them."}, {"start": 787.0, "end": 790.52, "text": " I would have plenty of noise from the noisy ones."}, {"start": 790.52, "end": 797.48, "text": " Now what if I could say that if you take a look at s equals 1, t equals 5, you can see"}, {"start": 797.48, "end": 800.6, "text": " that we have caustics in there."}, {"start": 800.6, "end": 804.96, "text": " And the caustics are almost, almost immediately converged in there."}, {"start": 804.96, "end": 810.64, "text": " It is definitely good in a sense that I would, for caustics, I definitely would want to"}, {"start": 810.64, "end": 817.12, "text": " use these samples and not the ones for instance in s equals 0, t equals 6 because there is"}, {"start": 817.12, "end": 819.48, "text": " also caustics but it is really noisy."}, {"start": 819.48, "end": 824.72, "text": " It is not systematically looking for caustics, it just happened to hit it but it is not"}, {"start": 824.72, "end": 826.4, "text": " good at sampling it."}, {"start": 826.4, "end": 828.52, "text": " And I don't want to average these guys together."}, {"start": 828.52, "end": 835.12,
"text": " What I would want to do is I would want to give a large weight to s equals 1, t equals 5"}, {"start": 835.12, "end": 839.2, "text": " on caustics and I would just grab it in there in my image."}, {"start": 839.2, "end": 842.5200000000001, "text": " And I would just forget about the other contributions."}, {"start": 842.5200000000001, "end": 848.72, "text": " And this is mathematically, doing this in a mathematically sound way is not easy but Eric"}, {"start": 848.72, "end": 855.0, "text": " has proven really good and super simple technique on how to do that."}, {"start": 855.0, "end": 857.08, "text": " And now look closely to the image."}, {"start": 857.08, "end": 862.36, "text": " This is without naive biorectional power pressing, without multiple important sampling."}, {"start": 862.36, "end": 866.48, "text": " And now what you will see is if we add multiple important sampling."}, {"start": 866.48, "end": 870.32, "text": " So look closely."}, {"start": 870.32, "end": 871.5600000000001, "text": " See the difference?"}, {"start": 871.5600000000001, "end": 877.9200000000001, "text": " There are many noisy images that were completely shut down because they were not really good"}, {"start": 877.9200000000001, "end": 882.64, "text": " at sampling different parts of the space of light paths."}, {"start": 882.64, "end": 886.2, "text": " Some images are not good at anything at all."}, {"start": 886.2, "end": 888.28, "text": " Take a look at the two sides."}, {"start": 888.28, "end": 892.6, "text": " And there are images where I can take caustics from for instance."}, {"start": 892.6, "end": 898.76, "text": " Like the s equals 5, t equals 1, it seems to have been even better at sampling caustics"}, {"start": 898.76, "end": 903.16, "text": " because this s equals 1, t equals 5 was also pretty good."}, {"start": 903.16, "end": 907.0, "text": " But it was shut down by the other technique that was even better."}, {"start": 907.0, "end": 914.76, "text": " So this is an amazingly powerful technique in order to create even more converged images"}, {"start": 914.76, "end": 918.52, "text": " if you have multiple sampling strategies."}, {"start": 918.52, "end": 925.96, "text": " Now you can also play with it, it is implemented in shader toy, the nice classical V-Sync where"}, {"start": 925.96, "end": 930.0, "text": " there is light source sampling and BSDF, DRDF sampling."}, {"start": 930.0, "end": 934.52, "text": " And it doesn't matter if you sell BSDF or DRDF in this case, by the way."}, {"start": 934.52, "end": 936.28, "text": " But you remember."}, {"start": 936.28, "end": 937.56, "text": " So you can play with it."}, {"start": 937.56, "end": 940.28, "text": " And I encourage you to do so."}, {"start": 940.28, "end": 941.68, "text": " It is lots of fun."}, {"start": 941.68, "end": 948.1999999999999, "text": " And you will see what kind of light transport situations are captured well with which"}, {"start": 948.2, "end": 952.32, "text": " sampling technique and how to unify them in a way that everything looks converged almost"}, {"start": 952.32, "end": 954.4000000000001, "text": " immediately."}, {"start": 954.4000000000001, "end": 956.76, "text": " And also what does a good engineer do?"}, {"start": 956.76, "end": 961.2, "text": " Well, a good engineer obviously is interested in the problem."}, {"start": 961.2, "end": 969.84, "text": " So I just set down and also implemented the same thing in a simple example in 1D to make"}, {"start": 969.84, "end": 973.12, "text": " sure that everyone 
really understands what is going on."}, {"start": 973.12, "end": 978.4, "text": " So this is a simple Monte Carlo sampling problem in 1D."}, {"start": 978.4, "end": 980.88, "text": " I have a function that I would want to integrate."}, {"start": 980.88, "end": 985.0, "text": " If I remember correctly, I am integrating a Gaussian."}, {"start": 985.0, "end": 992.32, "text": " And I would like to sample it with two different techniques."}, {"start": 992.32, "end": 995.72, "text": " So this is two different Monte Carlo sampling processes."}, {"start": 995.72, "end": 1000.88, "text": " And I would want to take only the best samples in order to get an approximation which has"}, {"start": 1000.88, "end": 1002.4, "text": " the least variance."}, {"start": 1002.4, "end": 1005.56, "text": " And there are multiple ways of combining them together."}, {"start": 1005.56, "end": 1009.6, "text": " And there's also naive averaging, which just averages the error."}, {"start": 1009.6, "end": 1014.84, "text": " So it would give you back all of these images from the side."}, {"start": 1014.84, "end": 1020.4, "text": " And I write out what are the exact Monte Carlo estimators for different multiple importance"}, {"start": 1020.4, "end": 1022.4, "text": " sampling estimators as well."}, {"start": 1022.4, "end": 1023.4, "text": " So take a look."}, {"start": 1023.4, "end": 1025.16, "text": " It is now part of smallpaint."}, {"start": 1025.16, "end": 1029.0, "text": " And you can run it, it is super simple and hopefully super understandable."}, {"start": 1029.0, "end": 1034.32, "text": " I think it is less than 100 lines of code."}, {"start": 1034.32, "end": 1041.0, "text": " So what we now know: bidirectional path tracing, definitely better convergence speed, especially"}, {"start": 1041.0, "end": 1045.12, "text": " in scenes where you are not that likely to hit light sources."}, {"start": 1045.12, "end": 1047.72, "text": " So especially in indoor scenes."}, {"start": 1047.72, "end": 1052.36, "text": " And you will also get quicker convergence for caustics because you will have sampling strategies"}, {"start": 1052.36, "end": 1054.32, "text": " that are very efficient in that."}, {"start": 1054.32, "end": 1060.24, "text": " So caustics are usually visible from light sources and you will sample them very often."}, {"start": 1060.24, "end": 1063.6399999999999, "text": " So there's going to be at least one estimator that captures it well."}, {"start": 1063.6399999999999, "end": 1069.08, "text": " So if you use MIS, multiple importance sampling, you're going to have caustics covered very"}, {"start": 1069.08, "end": 1071.6, "text": " quickly."}, {"start": 1071.6, "end": 1077.8, "text": " Now it is definitely not easy to grasp and it is definitely not easy to implement."}, {"start": 1077.8, "end": 1080.9199999999998, "text": " So it requires quite a bit of an effort."}, {"start": 1080.9199999999998, "end": 1083.48, "text": " Even if it sounds very intuitive."}, {"start": 1083.48, "end": 1088.08, "text": " It is, but it is not easy."}, {"start": 1088.08, "end": 1090.08, "text": " This is also a brute force method."}, {"start": 1090.08, "end": 1097.24, "text": " This also samples all possible light paths and therefore this is also unbiased and consistent."}, {"start": 1097.24, "end": 1102.84, "text": " Some more literature on bidirectional path tracing and even better, there is a nice"}, {"start": 1102.84, "end": 1105.68, "text": " comparison coded up also on Shadertoy."}, {"start": 1105.68, "end": 1110.4, "text": " So
when you are at home just fire it up and you will see the difference evolving in real"}, {"start": 1110.4, "end": 1113.5600000000002, "text": " time on your GPU on an indoor scene."}]
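The 1D multiple importance sampling demo mentioned at the end of the lecture above is easy to re-create. Below is a minimal Python sketch of the same idea, not the actual smallpaint code: the integrand (a Gaussian bump) and the two sampling strategies with their pdfs are made-up stand-ins, and the combination uses Veach's balance heuristic, w_i(x) = p_i(x) / (p_1(x) + p_2(x)).

```python
import math
import random

# Integrand: an (unnormalized) Gaussian bump, standing in for the function
# the lecture integrates. The exact shape is an assumption for illustration.
def f(x):
    return math.exp(-(x - 0.5) ** 2 / 0.02)

# Technique 1: uniform sampling on [0, 1] (think "BRDF sampling": broad, blind).
def sample_p1():
    return random.random()

def pdf_p1(x):
    return 1.0

# Technique 2: pdf p2(x) = 2x on [0, 1], sampled by inverting its CDF x^2
# (think "light source sampling": concentrated, but poor in some regions).
def sample_p2():
    return math.sqrt(random.random())

def pdf_p2(x):
    return 2.0 * x

def mis_estimate(n):
    """One sample from each technique per iteration, combined with the
    balance heuristic w_i(x) = p_i(x) / (p1(x) + p2(x))."""
    total = 0.0
    for _ in range(n):
        x1 = sample_p1()
        w1 = pdf_p1(x1) / (pdf_p1(x1) + pdf_p2(x1))
        total += w1 * f(x1) / pdf_p1(x1)
        x2 = sample_p2()
        w2 = pdf_p2(x2) / (pdf_p1(x2) + pdf_p2(x2))
        total += w2 * f(x2) / pdf_p2(x2)
    return total / n

def naive_average(n):
    """Naive combination: average the two estimators, which also averages
    their variance, so the noisier technique drags the result down."""
    e1 = sum(f(sample_p1()) / 1.0 for _ in range(n)) / n
    xs = [sample_p2() for _ in range(n)]
    e2 = sum(f(x) / pdf_p2(x) for x in xs) / n
    return 0.5 * (e1 + e2)

print("MIS:  ", mis_estimate(100000))
print("naive:", naive_average(100000))
```

Both estimators are unbiased; the difference is variance. The balance heuristic down-weights each sample exactly where its own technique is unreliable, which is why the noisy images of the Veach pyramid get "shut down" instead of being averaged in.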
Two Minute Papers
https://www.youtube.com/watch?v=LB6NGEHtD7Y
TU Wien Rendering #31 - Unbiased, Consistent Algorithm Classes
We consider photorealistic rendering a mature subfield of computer graphics, and as many global illumination algorithms exist, it'd be great to classify them according to what behavior we can expect from them. Such an algorithm can be biased/unbiased and consistent/inconsistent. Choose your poison! About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light, and camera models are outlined. The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai
Let's talk just briefly about the PBRT architecture. PBRT is not exactly the renderer that we are going to use. We're going to use LuxRender, but LuxRender was built upon PBRT and therefore the basic structure remained completely intact, and this is a really good architecture that you would see many of the renderer engines out there, global illumination rendering engines, use. Most of them use the very same architecture. So we have a main sampler renderer task that asks the sampler to provide random samples. So the sampler you can imagine as a random number generator. We need a lot of different random numbers, because some techniques choose the pixel that we are sampling deterministically, going from pixel to pixel, and some techniques take pixels randomly. I mean, which pixel we choose to be sampled is usually deterministic, but the displacement within the pixel is random, because we would be sampling the pixels not only in the midpoint like recursive ray tracing, but we would take completely random samples from nearby and use filtering to sum them up in a meaningful way. Now this requires random numbers, and they come from the sampler. You would also send outgoing rays in the hemisphere of different objects, and you also need random numbers for this. So these random numbers arrive in the sample, and this sample you would send to the camera, and the camera would give back to you a ray. So you tell the camera: please give me a ray that points to this pixel, and the camera would give you back a ray which starts from the camera's starting point and points exactly there. Now all you need to do is give this ray to the integrator, and the integrator will tell you how much radiance is coming along this ray. And what you can do after that is write it to a film, and this is not necessarily trivial, because for instance you could just simply write it to a ppm or a png file and be done with it. In contrast, what LuxRender does is it has a film class, and what you can do is save different contributions in different buffers. So what you could do is, for instance, separate direct and indirect illumination into different films, different images. And then you can in the end sum them up, but maybe you could say that I don't need caustics on this image, and then you would just drop that image. So you can do tricky things if you have a correctly implemented film class. Okay, so LuxRender, as I have been saying, is built upon PBRT and uses the very same architecture. This is how it looks: it has a graphical user interface and you can also manipulate different tone mapping algorithms in there, different image denoising algorithms in there, and you can even manipulate light groups. This is another tricky thing with the film class. Basically what this means is that you save the contributions of different light sources into different films; by films you can imagine image files. So every single light source has a different png file, if you will, and they are saved into there, and the final image would come up as a sum of these individual films. But you could say that one of the light sources is a bit too bright and I would like to tone it down; normally, if you want to tone it down, then you would have to re-render your image, because you changed the physical properties of what's going on.
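The control flow just described (sampler, camera, integrator, film) and the per-light-group buffers can be condensed into a short sketch. This is a schematic illustration only; every class and method name below is a hypothetical stand-in, not the real PBRT or LuxRender API.

```python
import random

class Film:
    """One buffer per light group, so contributions can be re-weighted
    after rendering without touching the physics of the scene."""
    def __init__(self, groups):
        self.buffers = {g: {} for g in groups}   # group -> {pixel: radiance sum}

    def add_sample(self, group, pixel, radiance):
        buf = self.buffers[group]
        buf[pixel] = buf.get(pixel, 0.0) + radiance

    def develop(self, gains):
        """Final image = weighted sum of the per-group buffers; dimming or
        switching off a light is just a change of gain, no re-render."""
        image = {}
        for group, buf in self.buffers.items():
            for pixel, value in buf.items():
                image[pixel] = image.get(pixel, 0.0) + gains[group] * value
        return image

def render(num_samples, width, height):
    film = Film(groups=["sun", "lamp"])
    for _ in range(num_samples):
        # Sampler: random numbers decide the jittered pixel position.
        px = (random.randrange(width), random.randrange(height))
        # The camera would turn this sample into a ray, and the integrator
        # would return the radiance along it; both are faked with a dummy value.
        for group in ("sun", "lamp"):
            radiance = random.random()          # stand-in for integrator.Li(ray)
            film.add_sample(group, px, radiance)
    # Dim the sun to half, keep the lamp: no re-rendering needed.
    return film.develop(gains={"sun": 0.5, "lamp": 1.0})

image = render(num_samples=1000, width=4, height=4)
```

With such a film, re-weighting one light after the fact is just a call like develop(gains={"sun": 0.5, "lamp": 1.0}), which is exactly the light-groups trick the lecture continues with.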
Now, back to the light groups: you can do this without re-rendering if you have the light groups option, because the contributions are stored in individual buffers, so you can just dim one of these images and add them up together, and then you would have the effect of that light source being a bit dimmer. You can, for instance, completely turn off the sunlight, or a television that you don't want to use in the scene: it sounded like a good idea but it wasn't, so you can just turn it off without re-rendering the scene. You can operate all of these things through the LuxRender GUI. Now before we go into algorithms, let's talk about algorithm classes: what kinds of algorithms are we interested in? First, what we are interested in is consistent algorithms. Consistent means that if I use an infinite number of Monte Carlo samples, then I would converge exactly to the right answer; I would get back the exact integral of the function. Intuitively it says: if I run this algorithm, sooner or later it will converge. It is also important to note that no one said anything about when this sooner or later happens. So if an algorithm is consistent, it doesn't mean that it is fast, it doesn't mean that it's slow; it can be anything, absolutely anything. It may be that there is an algorithm that's theoretically consistent, so after an infinite amount of samples you would get the right answer, but it really feels like infinity. So it may be that after two weeks you still don't get the correct image. There are algorithms like that, and theoretically that's consistent, that's fine, because you can prove that it's going to converge sooner or later. The more difficult class that many people seem to mess up is unbiased algorithms. Now what does it mean? If you just read the formula then you can see that the expected error of the estimation is zero, and we have to note that this is completely independent of n, where n is the number of samples that we have taken. The expected error of the estimation is zero. It doesn't mean that the error is zero, because it's independent of the number of samples; it doesn't mean that after one sample per pixel I get the right result. It says that the expected error is zero. I will give you many intuitions for this, because it is very easy to misunderstand and misinterpret: in statistics there is a difference between expected value and variance, and this doesn't say anything about the variance. This only tells you about the expected values. So for instance, if you are a mathematician and think a bit about this, you could say that if I have an unbiased algorithm and I have two noisy images (you render something on your machine, I render something on my machine, that's two noisy images), I could merge them together. I could average them, because they are unbiased samples; it doesn't matter where they come from. I would add these samples together at every step and I would get a better solution. We will see an example for that. My favorite intuition is that the algorithm has the very same chance of over- and underestimating the integrand. So it means that if I would try to estimate the outcome of a dice roll: you can roll from 1 to 6 with equal probabilities, and the expected value is 3.5. So this means that I would have the very same probability of saying 4 as I would have of saying 3. So it's the very same chance to under- and overestimate the integrand. And I'll give you my other favorite intuition, the one that journalists tend to like the best: it means that there is no systematic error in the algorithm.
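In formulas, writing F_N for the estimate after N samples and I for the true value of the integral, the two notions referred to above can be stated as follows (a standard textbook formulation, not quoted from the slides):

```latex
% Consistent: the estimate converges to the true integral as N grows.
\lim_{N \to \infty} F_N = I \quad \text{(with probability 1)}

% Unbiased: the expected error is zero for every N, even N = 1.
\mathbb{E}\left[ F_N - I \right] = 0 \qquad \text{for all } N
```

Note that the unbiasedness condition says nothing about the variance of F_N, which is exactly the misunderstanding the lecture warns about.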
In other words, the algorithm doesn't cut corners. And if there are errors in the image, then this can only be noise, and this noise comes because you don't have enough samples; if you add more, you're guaranteed to get better. Now let's take another look at this really good intuition. So I can combine together two noisy images. So this means that I should be able to do network rendering without actually using a network, which sounds a bit mind-boggling. I really like the parallel to this, which is a famous saying of Einstein from long ago, where they talked about sending electromagnetic waves out, and they talked about the telephone, and people could not grasp the idea of a telephone. And he said that we would have a super, super long cat. The tail of the cat would be in Manhattan, and if you would just pull the tail of the cat in Manhattan, then the front of the cat would be in New York, and if you pull the tail in Manhattan then she would say meow in New York. And he asked the people: is this understandable? Yes, this is understandable. Okay, perfect, we're almost there. Now imagine that there's no cat. And this is the exact same thing. So this is network rendering without an actual network. Well, okay, mathematical theories, okay, but let's actually give it a try. So what I did here is I rendered this interior scene, and this is how it looks after two minutes. It's really noisy, right? Now what I did is I ran 10 of these rendering processes and saved the images 10 times. So I didn't run one rendering process for long; I ran many completely independent rendering processes for two minutes each, and what I did is I merged the images together. What it means is that I averaged the images: I added them together and averaged them. Now basically this means that you could do this on completely independent computers that have never heard of each other. And now let's take a look. This is the noisy image that we had, and now let's merge 10 of these together. This is what we will get. Look closely. Look at that. Now one more time. This is the noisy one after two minutes, and this is merging some of these noisy images together. So it is unbelievable that this actually works. So if you have unbiased algorithms you can expect this kind of behavior, and you don't need sophisticated networking to use your path tracer, for instance, in a network, because you don't need the network at all, and this is really awesome. No, because if you don't add any kind of seed to your computations, then you're computing completely independent samples, and it doesn't matter if the sample is computed on the same machine or on a different machine. If you have some kind of determinism, then it may be possible that the same paths are computed by multiple machines, and that's indeed wasted time. But otherwise it works just fine. Now let's practice a bit. But first, there's a question. Yeah. So what's the difference between one picture rendered for 20 minutes and 10 pictures rendered for two minutes each and then combined? Nothing. In terms of samples, nothing. The only difference is that you actually need to fire up that scene on multiple machines. So if there are like 10 gigabytes of textures, then it takes longer to load it up on multiple machines and maybe transfer the data together, but if you think only in terms of samples, it doesn't matter where they come from. Okay, let's practice a bit. We have different techniques, and this is how the error is evolving in time.
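Before looking at those error curves, the merging experiment from a moment ago is easy to reproduce numerically. A hedged sketch: the "renders" below are unbiased Monte Carlo estimates of a known 1D integral (a stand-in for a real scene), and averaging ten independent ones divides the variance by ten, just like averaging the ten noisy images.

```python
import random

def noisy_render(num_samples):
    """Stand-in for one two-minute render: an unbiased Monte Carlo estimate
    of the integral of x^2 over [0, 1] (true value: 1/3)."""
    return sum(random.random() ** 2 for _ in range(num_samples)) / num_samples

# Ten independent "machines", each rendering briefly with independent samples...
images = [noisy_render(1000) for _ in range(10)]

# ...merged by plain averaging, exactly like averaging the ten noisy images.
# The variance drops by a factor of 10, as if one machine had run 10x longer.
merged = sum(images) / len(images)
print("single:", images[0], " merged:", merged, " true:", 1 / 3)
```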
Now, the intuition of consistent means that the error tends to zero over time, so if I render for long enough, then the error is going to be zero. Is this black one a consistent algorithm? Nope, because it converges here to the dashed line and not to zero. Now what about the other two guys? Are they consistent or not? Okay. The error seems to converge to zero. Okay, now what about these techniques? Are they biased or unbiased? Which is which? What about this one, the darker gray? Is this biased or unbiased? Now, if we have this intuition that if we render for longer the image is guaranteed to get better, or at least not worse, then this darker one is definitely not unbiased, because it is possible that I'm rendering for 10 minutes, that's this point for instance, and I say: okay, I almost have a good enough image, and I render for another five minutes and expect it to be better, and then I get maybe a completely garbled up image, full of artifacts and errors, and that is entirely possible with biased algorithms. No one said that it's likely, but it is possible, so you cannot really predict how the error would evolve in time. And if you take a look at the other two lines, you can see that they are unbiased algorithms, so as you render for longer, you are guaranteed to get a better image.
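The quiz about the error curves can be mimicked with toy estimators of a known quantity. The three functions below are invented for illustration and show, respectively, unbiased-and-consistent, biased-but-consistent, and inconsistent behavior.

```python
import random

TRUE_VALUE = 0.5  # true mean of uniform samples on [0, 1]

def unbiased_consistent(n):
    # Sample mean: the expected error is zero for every n, and the error
    # tends to zero as n grows (the two gray curves).
    return sum(random.random() for _ in range(n)) / n

def biased_consistent(n):
    # Deliberately shrunk estimate: it systematically underestimates for any
    # finite n, but the shrink factor n/(n+10) vanishes as n goes to infinity,
    # so the estimator is still consistent.
    return (n / (n + 10)) * unbiased_consistent(n)

def inconsistent(n):
    # A constant offset that never goes away: this converges to the dashed
    # line (TRUE_VALUE + 0.1) instead of the right answer (the black curve).
    return unbiased_consistent(n) + 0.1

for n in (10, 1000, 100000):
    print(n, unbiased_consistent(n), biased_consistent(n), inconsistent(n))
```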
[{"start": 0.0, "end": 10.8, "text": " Let's talk about just briefly about the PBRT architecture. PBRT is not exactly the"}, {"start": 10.8, "end": 14.72, "text": " renderer that we are going to use. We're going to use LuxRender, but LuxRender was"}, {"start": 14.72, "end": 19.400000000000002, "text": " built upon PBRT and therefore the basic structure remained completely"}, {"start": 19.400000000000002, "end": 26.8, "text": " intact and this is a really good architecture that you would see that many of"}, {"start": 26.8, "end": 30.48, "text": " the renderer engines out there, globally illumination rendering engines out"}, {"start": 30.48, "end": 34.6, "text": " their use. Most of them use the very same architecture. So we have a main"}, {"start": 34.6, "end": 41.480000000000004, "text": " sampler render test that asks the sampler to provide random samples. So the"}, {"start": 41.480000000000004, "end": 45.16, "text": " sampler you can imagine as a random number generator. We need a lot of"}, {"start": 45.16, "end": 49.2, "text": " different random numbers because the pixel that we are sampling, some techniques"}, {"start": 49.2, "end": 53.72, "text": " choose it deterministically going from pixel to pixel. Some techniques take"}, {"start": 53.72, "end": 61.8, "text": " pixels randomly. I mean which pixel we choose to be sampled is usually"}, {"start": 61.8, "end": 68.12, "text": " deterministic but the displacement because we would be sampling the pixels not"}, {"start": 68.12, "end": 72.48, "text": " in not only in the midpoint like recursive ray tracing but you you would"}, {"start": 72.48, "end": 77.12, "text": " take completely random samples from nearby and use filtering to send them up in a"}, {"start": 77.12, "end": 80.72, "text": " meaningful way. Now this requires random numbers they come from the"}, {"start": 80.72, "end": 85.52, "text": " sampler. You would also send outgoing rays in the hemisphere of different objects."}, {"start": 85.52, "end": 90.52, "text": " You also need random numbers for this. So in this sample these random numbers"}, {"start": 90.52, "end": 97.8, "text": " arrive and this sample you would send to the camera and the camera would give"}, {"start": 97.8, "end": 105.48, "text": " back to you array. So you tell the camera please give me a array that points to"}, {"start": 105.48, "end": 109.8, "text": " this pixel and this camera would give you back array which starts from the"}, {"start": 109.8, "end": 114.32, "text": " camera starting point and points exactly there. Now all you need to do is give"}, {"start": 114.32, "end": 119.36, "text": " this ray to the integrator and the integrator will tell you how much radiance is"}, {"start": 119.36, "end": 127.8, "text": " coming along this ray. And what you can do after that is write it to a film and"}, {"start": 127.8, "end": 134.12, "text": " this is not necessarily trivial because for instance you could just simply write"}, {"start": 134.12, "end": 141.44, "text": " it to a ppm or a png file and be done with it. In contrast what what LuxRender"}, {"start": 141.44, "end": 147.68, "text": " does is it has a film class and what you can do is that you can save different"}, {"start": 147.68, "end": 152.52, "text": " for instance different contributions in different buffers. So what you could do"}, {"start": 152.52, "end": 156.36, "text": " is for instance separate direct and the indirect illumination into different"}, {"start": 156.36, "end": 161.12, "text": " films, different images. 
And then you can in the end sum them up but maybe you"}, {"start": 161.12, "end": 164.6, "text": " could say that I don't need caustics on this image and then you would just drop"}, {"start": 164.6, "end": 169.12, "text": " that image. So you can do tricky things if you have a correctly implemented"}, {"start": 169.12, "end": 174.62, "text": " film class. Okay so LuxRender just what I have been talking about is built upon"}, {"start": 174.62, "end": 179.84, "text": " PBRT and uses the very same architecture. This is how it looks so it has"}, {"start": 179.84, "end": 187.04000000000002, "text": " graphical user interface and you can also manipulate different tone mapping"}, {"start": 187.04000000000002, "end": 190.84, "text": " algorithms in there, different image denoising algorithms in there"}, {"start": 190.84, "end": 196.20000000000002, "text": " and you can even manipulate light groups. This is another tricky thing with the"}, {"start": 196.20000000000002, "end": 204.08, "text": " film class. Basically what this means is that you save the contributions of"}, {"start": 204.08, "end": 210.88, "text": " different light sources into different films; by films you can imagine image files."}, {"start": 210.88, "end": 217.24, "text": " So every single light source has a different png file if you will and they are"}, {"start": 217.24, "end": 223.48000000000002, "text": " saved into there and the final image would come up as a sum, a sum of these"}, {"start": 223.48000000000002, "end": 229.04000000000002, "text": " individual films but you could say that one of the light sources is a bit too"}, {"start": 229.04000000000002, "end": 234.16000000000003, "text": " bright. I would like to tone it down but if you want to tone it down then you"}, {"start": 234.16000000000003, "end": 237.44, "text": " would have to re-render your image because you changed the physical properties"}, {"start": 237.44, "end": 242.68, "text": " of what's going on. Now you can do this if you have this light groups option"}, {"start": 242.68, "end": 247.6, "text": " because they are stored into individual buffers so you can just dim one of these"}, {"start": 247.6, "end": 253.08, "text": " images and just add them up together and then you would have the effect of that"}, {"start": 253.08, "end": 260.12, "text": " light source a bit dimmer. You can for instance completely turn off sunlight or"}, {"start": 260.12, "end": 265.08, "text": " television that you don't want to use in the scene. It sounded like"}, {"start": 265.08, "end": 269.76, "text": " a good idea but it wasn't. You can just turn it off without re-rendering the scene."}, {"start": 269.76, "end": 274.44, "text": " You can operate all of these things through the LuxRender GUI."}, {"start": 274.44, "end": 279.8, "text": " Now before we go into algorithms let's talk about algorithm classes what kinds"}, {"start": 279.8, "end": 285.44, "text": " of algorithms are we interested in. First what we are interested in is consistent"}, {"start": 285.44, "end": 290.52, "text": " algorithms. Consistent means that if I use an infinite number of Monte Carlo"}, {"start": 290.52, "end": 295.59999999999997, "text": " samples then I would converge exactly to the right answer. I would get back the"}, {"start": 295.6, "end": 303.04, "text": " exact integral of the function. Intuitively it says if I run this algorithm sooner or"}, {"start": 303.04, "end": 310.28000000000003, "text": " 
It also is important to note that no one said anything"}, {"start": 310.28000000000003, "end": 315.08000000000004, "text": " about when this sooner or later happens. So if an algorithm is consistent it"}, {"start": 315.08000000000004, "end": 319.24, "text": " doesn't mean that it is fast it doesn't mean that it's slow it can be anything"}, {"start": 319.24, "end": 326.56, "text": " absolutely anything. It may be that there is an algorithm that's theoretically consistent."}, {"start": 326.56, "end": 331.84000000000003, "text": " So after an infinite amount of samples you would get the right answer but it"}, {"start": 331.84000000000003, "end": 336.56, "text": " really feels like infinity. So it may be that after two weeks you still don't"}, {"start": 336.56, "end": 340.40000000000003, "text": " get the correct image. There are algorithms like that and theoretically that's"}, {"start": 340.40000000000003, "end": 344.88, "text": " consistent that's fine because you can prove that it's gonna converge sooner or"}, {"start": 344.88, "end": 353.76, "text": " later. The more difficult class that many people seem to mess up is unbiased"}, {"start": 353.76, "end": 358.04, "text": " algorithms. Now what does it mean? If you just read the formula then you can see"}, {"start": 358.04, "end": 364.36, "text": " that the expected error of the estimation is zero and we have to note that this"}, {"start": 364.36, "end": 370.08, "text": " is completely independent of n. n is the number of samples that we have taken."}, {"start": 370.08, "end": 376.15999999999997, "text": " The expected error of the estimation is zero. It doesn't mean that the error is"}, {"start": 376.15999999999997, "end": 380.8, "text": " zero because it's independent of the number of samples. It doesn't mean that"}, {"start": 380.8, "end": 386.0, "text": " after one sample per pixel I get the right result. It says that the expected"}, {"start": 386.0, "end": 390.56, "text": " error is zero. I will give you many intuitions of this for this because it is"}, {"start": 390.56, "end": 395.64, "text": " very easy to misunderstand and misinterpret because in statistics there is a"}, {"start": 395.64, "end": 401.68, "text": " difference between expected value and variance and this doesn't say anything"}, {"start": 401.68, "end": 406.4, "text": " about the variance. This only tells you about the expected values. So for instance"}, {"start": 406.4, "end": 410.4, "text": " if you are a mathematician and think a bit about this and you could say that if"}, {"start": 410.4, "end": 415.88, "text": " I have an unbiased algorithm and I have two noisy images you render something"}, {"start": 415.88, "end": 419.71999999999997, "text": " on your machine. I render something on my machine that's two noisy images. I"}, {"start": 419.71999999999997, "end": 424.64, "text": " could merge them together. I could average them because they are unbiased"}, {"start": 424.64, "end": 428.15999999999997, "text": " samples. It doesn't matter where they come from. I would add these samples"}, {"start": 428.15999999999997, "end": 432.8, "text": " together every step and I would get a better solution. We will see an example"}, {"start": 432.8, "end": 439.0, "text": " for that. My favorite intuition is that the algorithm has the very same chance of"}, {"start": 439.0, "end": 444.84, "text": " over and underestimating the integrand. 
So it means that if I would try to"}, {"start": 444.84, "end": 451.12, "text": " estimate the outcome of a dice roll, you can roll"}, {"start": 451.12, "end": 456.88, "text": " from 1 to 6 with equal probabilities. The expected value is 3.5. So this means"}, {"start": 456.88, "end": 464.2, "text": " that I would have the very same probability of saying 4 as I would have the"}, {"start": 464.2, "end": 469.08, "text": " probability for saying 3. So it's the very same chance to under and overestimate"}, {"start": 469.08, "end": 473.64, "text": " the integrand. And I'll give you my other favorite intuition. This is what"}, {"start": 473.64, "end": 478.16, "text": " journalists tend to like the best. It means that there is no systematic error in"}, {"start": 478.16, "end": 484.08000000000004, "text": " the algorithm. The algorithm doesn't cut corners. And if there are errors in the"}, {"start": 484.08000000000004, "end": 489.04, "text": " image then this can be only noise and this noise comes because you don't have"}, {"start": 489.04, "end": 494.88, "text": " enough samples and if you add more you're guaranteed to get better. Now let's"}, {"start": 494.88, "end": 500.04, "text": " take another look at this really good intuition. So I can combine together two"}, {"start": 500.04, "end": 507.16, "text": " noisy images. So this means that I should be able to do network rendering"}, {"start": 507.16, "end": 515.28, "text": " without actually using a network which sounds a bit mind-boggling. I really"}, {"start": 515.28, "end": 521.9200000000001, "text": " like the parallel to this which is a famous saying of Einstein from long ago"}, {"start": 521.9200000000001, "end": 527.48, "text": " where they talked about sending electromagnetic waves out and they talked"}, {"start": 527.48, "end": 533.28, "text": " about the telephone and people could not grasp the idea of a telephone. And he"}, {"start": 533.28, "end": 539.92, "text": " said that we would have a super super long cat. The tail of the cat would be"}, {"start": 539.92, "end": 545.4399999999999, "text": " in Manhattan and if you would just pull the tail of the cat in Manhattan then"}, {"start": 545.4399999999999, "end": 550.04, "text": " the front of the cat would be in New York and if you pull the tail in Manhattan"}, {"start": 550.04, "end": 554.88, "text": " then she would say meow in New York. And he asked the people is this"}, {"start": 554.88, "end": 559.56, "text": " understandable? Yes this is understandable. Okay perfect we're almost there. Now"}, {"start": 559.56, "end": 565.3599999999999, "text": " imagine that there's no cat. And this is the exact same thing. So this is"}, {"start": 565.3599999999999, "end": 570.5999999999999, "text": " network rendering without an actual network. Well okay mathematical theories"}, {"start": 570.5999999999999, "end": 575.2399999999999, "text": " okay but let's actually let's give it a try. So what I did here is I rendered"}, {"start": 575.2399999999999, "end": 580.5999999999999, "text": " this interior scene and this is how it looks after two minutes. It's really"}, {"start": 580.5999999999999, "end": 587.76, "text": " noisy right? Now what I did is I ran 10 of these rendering processes and saved"}, {"start": 587.76, "end": 594.96, "text": " the images 10 times. So I didn't run one rendering process for long. 
I ran"}, {"start": 594.96, "end": 600.28, "text": " many completely independent rendering processes for two two minutes and what"}, {"start": 600.28, "end": 604.6, "text": " I did is I merged the images together. What it means is that I averaged the"}, {"start": 604.6, "end": 611.04, "text": " images. I added them together and averaged them. Now basically this means that"}, {"start": 611.04, "end": 614.96, "text": " you could do this on completely independent computers that have never heard of"}, {"start": 614.96, "end": 620.0400000000001, "text": " each other. And now let's take a look. This is the noisy image that we had and"}, {"start": 620.0400000000001, "end": 625.0400000000001, "text": " now let's merge 10 of these together. This is what we will get. Look closely."}, {"start": 625.0400000000001, "end": 630.72, "text": " Look at that. Now one more time. This is the noisy after two minutes and this is"}, {"start": 630.72, "end": 636.9200000000001, "text": " merging some of these noisy images together. So this is unbelievable that this"}, {"start": 636.9200000000001, "end": 642.6800000000001, "text": " actually works. So if you have unbiased algorithms you can expect this kind of"}, {"start": 642.68, "end": 650.0, "text": " behavior and you don't need to sophisticated networking to use your"}, {"start": 650.0, "end": 654.04, "text": " pathfacer for instance in a network because you don't need the network at all"}, {"start": 654.04, "end": 655.88, "text": " and this is really awesome."}, {"start": 655.88, "end": 669.4799999999999, "text": " No because if you don't add any kind of seed to your computations then you're"}, {"start": 669.48, "end": 673.84, "text": " computing completely independent samples and it doesn't matter if the sample is"}, {"start": 673.84, "end": 678.5600000000001, "text": " computed on the same machine or in a different machine. If you have some kind of"}, {"start": 678.5600000000001, "end": 683.96, "text": " determinism then it may be possible that the same paths are computed by"}, {"start": 683.96, "end": 689.32, "text": " multiple machine and that's indeed wasted time. But otherwise it works just fine."}, {"start": 689.32, "end": 694.32, "text": " Now let's practice a bit. Instead there's a question."}, {"start": 694.32, "end": 700.96, "text": " Yeah. Just how's the biggest difference between one picture of renders 20 minutes"}, {"start": 700.96, "end": 705.72, "text": " and 10 pictures rendered two minutes each and then combined? Nothing. In terms of"}, {"start": 705.72, "end": 710.2800000000001, "text": " samples nothing. The only difference is that you actually need to fire up that"}, {"start": 710.2800000000001, "end": 715.6800000000001, "text": " scene on multiple machines. So if there is like 10 gigabytes of textures then it"}, {"start": 715.6800000000001, "end": 721.2800000000001, "text": " takes longer to load it up on multiple machines and and maybe transfer the data"}, {"start": 721.28, "end": 725.72, "text": " together but if you think only in terms of sample it doesn't matter where it"}, {"start": 725.72, "end": 731.1999999999999, "text": " comes from. Okay let's practice a bit. We have different techniques and this is"}, {"start": 731.1999999999999, "end": 739.24, "text": " how the error is evolving in time. Now the intuition of consistent means that"}, {"start": 739.24, "end": 743.24, "text": " the error tends to zero over time so if I render for long enough then the error"}, {"start": 743.24, "end": 751.24, "text": " is going to be zero. 
Is this black one a consistent algorithm?"}, {"start": 757.6800000000001, "end": 765.0, "text": " Nope because it converges here to the dashed line and not to zero. Now what"}, {"start": 765.0, "end": 770.6, "text": " about the other two guys? Are they consistent or not?"}, {"start": 770.6, "end": 783.6800000000001, "text": " Okay. The error seems to converge to zero. Okay now what about these techniques?"}, {"start": 783.6800000000001, "end": 792.46, "text": " Are they biased or unbiased? Which is which? What about this one? This is the"}, {"start": 792.46, "end": 800.0, "text": " darker gray. Is this biased or unbiased? Now if we have this intuition that if"}, {"start": 800.0, "end": 805.04, "text": " we render for longer the image is guaranteed to get better or at least not worse"}, {"start": 805.04, "end": 811.04, "text": " then this darker one is definitely not unbiased because it is possible that I'm"}, {"start": 811.04, "end": 816.84, "text": " rendering for 10 minutes that's this point for instance and I say okay I"}, {"start": 816.84, "end": 820.96, "text": " almost have a good enough image and I render for another five minutes and"}, {"start": 820.96, "end": 825.84, "text": " expect it to be better and then I get this maybe a completely garbled up"}, {"start": 825.84, "end": 832.36, "text": " image full of artifacts and errors and that is entirely possible with biased"}, {"start": 832.36, "end": 838.64, "text": " algorithms. No one said that it's likely but it is possible so you cannot really"}, {"start": 838.64, "end": 842.6, "text": " predict how the error would evolve in time and if you take a look at the other"}, {"start": 842.6, "end": 847.88, "text": " two lines you can see that they are unbiased algorithms so as you render for"}, {"start": 847.88, "end": 858.16, "text": " longer you are guaranteed to get a better image."}]